In our last blog entry, we discussed the hidden advantages of using a distributed IDE for enterprise development, but that is just the tip of the iceberg. To fully realize a Continuous Delivery (CD) environment for legacy systems, you need to be able to deploy new applications just as quickly as you develop them. Moreover, you need the ability to seamlessly move a running application from development to production without extensive delays. We can all agree that it doesn’t make sense to improve the processes around developing new versions of your application if it takes months to test, QA, and deploy the application to your users. A number of obstacles can cause delays in deployment, but let’s focus on the ones within your control: ones you can readily improve, or avoid if they aren’t integral to the Continuous Delivery process. To limit the scope of this discussion, let’s assume your company has a robust testing environment and an adequate testing strategy. Let’s also assume the following:
- The development process makes significant changes to the infrastructure that are not all documented.
- The systems to which the new applications are deployed are not similar to the development environment.
- The documentation of the application infrastructure does not provide enough information to resolve issues.
The Common Challenge
Given all these assumptions, how do you make the deployment process for a new application easy and as streamlined as possible? How do you avoid environmental and load issues when deploying a new application?
If you believe that the majority of problems in deploying a new application stem from resolving differences between development and production or QA, then the solution we are about to discuss will be of interest to you. Our proposal for speeding up the deployment process for new applications is to capture not only the development environment, but the developer’s knowledge of the application as well. By that we mean: capture the developer’s knowledge of dependencies and subtle timing behavior in the new application. Often a newly integrated module requires more time to start or depends on configuration subtleties. By capturing this information you can reduce potential errors and ensure that testing and operations can concentrate on their goals instead of trying to debug the infrastructure.

In the traditional model, where the developer starts and stops the application with scripts, the majority of the changes involved in moving from development to operations are centered in these scripts. Maintaining these scripts is the single most difficult task in migrating the application; moving the application’s components and structure to the target system is trivial in comparison to modifying the scripts for the new system. Given the complexity of some applications, this is where changes are typically needed, for a variety of reasons, and the developer is usually the only one who knows what the issue is. What if there were a better way to capture this infrastructure so that it is portable, more adaptable to new systems, and easier to modify?
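As a sketch of the kind of knowledge such a developer start script encodes, consider the following. The service names and settle times here are invented for illustration; the point is that the start order and timing delays are exactly the developer knowledge that needs capturing:

```shell
#!/usr/bin/env bash
# Hypothetical start script: the ordering and settle delays below encode
# the developer's knowledge of dependencies and timing. Service names
# and delay values are invented for illustration.
set -euo pipefail

# Components must start in this order: the database before the app
# server, the app server before the web tier.
START_ORDER=(database message-queue app-server web-frontend)

# Seconds each component needs to initialize before its dependents start.
declare -A SETTLE=([database]=5 [message-queue]=2 [app-server]=10 [web-frontend]=0)

start_stack() {
  local svc
  for svc in "${START_ORDER[@]}"; do
    echo "starting ${svc} (settle ${SETTLE[$svc]}s)"
    # sleep "${SETTLE[$svc]}"   # a real script would wait here
  done
}

start_stack
```

When this knowledge lives only in a script like this, operations inherits the burden of reverse-engineering it for each new target system, which is the maintenance problem described above.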
On more modern systems like CentOS and RHEL (and other Linux variants), and with Java applications, Docker is an excellent example of how an exact image of the OS can be created and the application captured in containers. A set of docker run commands in a bash script provides the install and run commands necessary to start individual apps, and Docker’s restart policies can restart a container automatically when the Docker daemon restarts or when the container exits. More complex applications can be started with a bootstrap, itself run in Docker. Even with these systems, the scripts sometimes need changes. Scripts of this type are often complex, with individual commands that usually require a programmer or systems engineer to modify and maintain. While this model might work well with Java, other languages like 3GLs present problems, since they are not container based. One promising solution can be found in an Application Performance Management (APM) hybrid known as application orchestration. This hybrid is designed specifically for managing the application infrastructure: application startup and shutdown, using the developer’s knowledge of dependencies and start order, all captured in a configuration that can be replicated. Application orchestration is best described as a variant of the Application Performance Management software model, since it has no classification of its own. The APM model specifies several criteria useful for managing multiple processes in an application.
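A minimal sketch of that Docker approach might look like the following. The image and container names (legacy-db, legacy-app, postgres:15) are hypothetical; the function prints the commands by default as a dry run, and on a real Docker host you would pass an empty first argument to execute them:

```shell
#!/usr/bin/env bash
# Sketch of capturing an app's install-and-run knowledge as docker run
# commands. All names are placeholders for illustration.
set -euo pipefail

deploy_containers() {
  local run=${1:-echo}   # "echo" = dry run: print the commands instead
  $run docker network create legacy-net
  # --restart=always: the Docker daemon brings the container back after
  # a crash or daemon restart, so no custom watchdog script is needed.
  $run docker run -d --name legacy-db --network legacy-net \
      --restart=always postgres:15
  # --restart=on-failure:5: retry up to five times on non-zero exit.
  $run docker run -d --name legacy-app --network legacy-net \
      --restart=on-failure:5 -p 8080:8080 legacy-app:latest
}

deploy_containers echo   # dry run: show what would be executed
```

Even in this compact form, notice that the dependency between the two containers (the database must exist before the app) is still implicit in the command order, which is precisely the gap application orchestration aims to close.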
The Application Performance Management approach
Application Performance Management tools are a class of software that facilitates monitoring applications. A few commercial tools provide some, but not all, of the features that APMs are intended to provide. In this discussion, we are looking at key services that afford a DevOps environment significant advantages: ease of use and speed of deployment. While commercial APMs focus on monitoring software to ensure performance, we are targeting features that have some applicability for solving DevOps problems, and specifically for speeding up the move to production. If we look closely at these features, we can see how they can provide the base services needed for the application orchestration that is essential to DevOps systems:
- End User Experience – in the world of application performance, this is probably the most important feature. In the DevOps world, we would argue that our end users are the development and operations staff. The APM active monitoring feature would provide a great benefit after deployment, and would likely provide the foundation for defining and documenting the variables in play during deployment. The EUE feature not only benefits the application’s end user once the deployment is complete; it also improves the experience of the operations staff and developers who need DevOps tools during the deployment process.
- Runtime application architecture – this feature is essential to managing application performance, and it greatly benefits a DevOps intelligent infrastructure because it is the best way to capture the components and put them in a configuration that can be managed. We believe, however, that the best way to manage and visualize an application architecture is through a GUI console rather than through scripts. A GUI can visualize the underlying application infrastructure by exposing each component, its sub-components, and their dependencies. The key capability here is application discovery and dependency mapping, which is useful in managing the application and assisting its orchestration during startup.
- Business transaction – this is a more obvious feature when it comes to tools for monitoring and performance. In a DevOps tool, the APM business transaction feature is important because it reflects the application’s overall health as delivered by its various discrete components. This information needs to be captured so that it can be monitored actively with health scripts, and so that action can be taken in the event of failures or degraded performance. Important application-dependent transactions need to be identified so that test cases or health scripts can be used to validate them.
- Deep dive component monitoring – another key feature, and quite important to the DevOps world, which relies on an intelligent infrastructure capable of reacting to external forces and of detecting and fixing problems with the application. To successfully accomplish this level of automated management, a deep dive into each component and its dependencies is needed to determine the scope of modules affected and to handle any contingencies. It follows that the ability to capture this infrastructure intelligence makes replicating the environment for operations even simpler.
- Analytics and reporting – analytics and reporting are a secondary APM feature used to provide performance data; the application orchestration DevOps requires doesn’t depend on that level of metrics to be successful. Analytics used in conjunction with application dependencies, however, are useful in determining application health and in solving system-dependent resource issues that may affect the application. In the DevOps world, the goal is to get the application functioning and running without issues. Analyzing resource usage is good for spotting trends and making adjustments, but application dependency issues often affect application performance too. If you want to focus on the internals, application log files are the primary source for reporting application health in DevOps.
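The health scripts and log-based reporting mentioned in the list above can be sketched as a minimal check. The log path and error threshold here are hypothetical; a real health script would also probe key business transactions directly, for example with an HTTP smoke test after each deployment:

```shell
#!/usr/bin/env bash
# Minimal health-script sketch: treat the application log as the primary
# health signal. The "ERROR" pattern and zero-error threshold are
# assumptions for illustration.
set -euo pipefail

check_log_health() {
  local logfile=$1 threshold=${2:-0}
  local errors
  errors=$(grep -c 'ERROR' "$logfile" || true)  # count error lines
  if [ "$errors" -gt "$threshold" ]; then
    echo "UNHEALTHY: $errors error lines in $logfile"
    return 1
  fi
  echo "HEALTHY: $errors error lines in $logfile"
}
```

A script like this, wired to the captured dependency information, is what lets orchestration take action on failures instead of merely reporting metrics.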
If an APM addresses each of these needs, it would probably be an effective DevOps tool. However, one very important issue is left unresolved: how does the knowledge of the application infrastructure get transferred to operations for a new environment? Is the documentation provided by development sufficient? Are you expecting the DevOps engineers to understand the intricacies of a developer’s scripts, or do you expect your developer to be assigned to operations for an indefinite period to shepherd them through the process? If you are looking to incorporate commercial APM software into this process, then you should look at maximizing the impact of each APM feature in speeding up Continuous Delivery. Do your homework, figure out what you need, and incorporate those requirements into a hybrid solution for CD. Most likely you will come up with a set of requirements that needs a specific solution: Application Orchestration. Stay tuned for our next post, which explores this idea.