DevOps is a key component in any organization’s effort to provide Continuous Delivery for its users. Historically, Continuous Delivery required sophisticated development and deployment tools to ensure the smooth delivery of software components across the enterprise. The addition of private and public clouds as deployment destinations has placed even greater integration demands on DevOps tools, which must now perform the seamless deployment of applications from development to operations. Consequently, DevOps tools need hybrid integration capabilities to perform this function.
eCube Systems’ many years of experience in enterprise application development and delivery give the company a unique perspective on the evolution of DevOps tools: it has provided flexible tools that adapt to changing development and testing requirements and effectively capture the runtime environment for easy deployment. In this blog we will discuss how eCube’s products extend Continuous Delivery: NXTware, a hybrid integration platform for distributed development; NXTmonitor, a DevOps infrastructure that facilitates the migration of complex applications between development, test and production; and NXTera, a high-performance middleware for the Cloud.
While NXTera provides much of the hybrid integration platform for legacy systems, with components for 3GL and 4GL languages, operating systems and databases, we will start with NXTware Remote, which provides a hybrid integration environment for agile developers, then discuss NXTmonitor, a hybrid infrastructure platform that manages the configuration, deployment, and automated execution of hybrid applications on the grid, in the cloud, or on in-house systems.
NXTware Remote: Hybrid distributed development infrastructure
Having one development platform across all operating systems can simplify developer training and save an organization a great deal of money. NXTware Remote makes distributed agile development possible for a variety of server platforms, including mainframes, Unix/Linux, and Windows. It simplifies distributed server development for C, C++, COBOL, BASIC, Pascal, FORTRAN and Java developers from a preferred desktop, making them more productive in an environment they like.
NXTware Remote is a distributed integrated development environment composed of two hybrid integration components and a distributed middleware platform.
NXTware Remote Server can be deployed in the cloud or in house, and NXTware Remote Studio, an Eclipse-based cross-platform IDE, is installed on developer workstations. Together, they enable developers to use modern IDEs to analyze and edit code on local workstations, then compile, debug and deploy it on any remote or cloud server with greater productivity, better code quality and the agility needed to meet future needs. NXTware Remote can integrate with automated build platforms like Jenkins, or be combined with in-house or third-party testing utilities to verify proper functionality.
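The remote build-and-deploy pattern described above can be sketched in a few lines. This is a minimal illustration only: the host name, paths, and build commands are hypothetical placeholders, not actual NXTware Remote or Jenkins APIs.

```python
# Sketch of driving a remote build from a CI job, in the spirit of the
# Jenkins integration described above. All names here are assumptions.
import subprocess

def remote_build_command(host, build_dir, target):
    """Compose an ssh invocation that runs a build on a remote server."""
    remote_cmd = "cd {} && make {}".format(build_dir, target)
    return ["ssh", host, remote_cmd]

def run_build(host, build_dir, target="all"):
    """Run the remote build and return its exit status."""
    return subprocess.call(remote_build_command(host, build_dir, target))

if __name__ == "__main__":
    # A CI job could shell out to this script after each commit.
    print(remote_build_command("build-server.example.com", "/src/app", "all"))
```

A real setup would add authentication, log capture, and failure notification, but the core idea is simply composing the right command for the remote platform.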
NXTmonitor: Hybrid Infrastructure Platform for DevOps
Organizations looking to manage their existing cross-platform applications are choosing hybrid infrastructure platform tools to handle the pervasive infrastructure requirements of digital transformation initiatives. A key component of this infrastructure is the ability to manage and scale applications across the enterprise and the Cloud. NXTmonitor application orchestration provides a GUI-based tool that enables non-developers to deploy hybrid applications easily by handling the configuration changes between Development and Operations. NXTmonitor, a modern DevOps tool, enables the capture and export of portable configuration files for the deployment and automated execution of distributed applications on the grid or in the Cloud. Whenever you deploy new versions of applications into your production environment, you need to make sure they can be quickly deployed and tested so users can quickly use the new features. Most importantly, you need to be able to monitor and manage these applications in enterprise and hybrid environments. To effectively manage distributed applications, you need a real-time management console that monitors them on each platform. You also need the ability to start and stop these applications automatically, replicate services, and scale them up during peak load times. NXTmonitor provides this console interface and simplifies the process for operations staff with easy-to-recognize indicators of application status. A separate interface is also provided for maintenance activities and the development of configurations.
NXTmonitor also interfaces with agile development environments like Rational and NXTware IME for a seamless Eclipse-based software engineering environment designed to make software easier to manage and more cost effective to maintain and integrate.
NXTera: Hybrid Integration Platform
The biggest challenge most organizations face is integrating their existing application development with the latest technology. This migration is known as digital transformation, and its goal is to move some applications to the Cloud by extending current integration platform software, such as ESBs or RPC-based applications, with new components. Proven hybrid integration platforms like NXTera are based on existing middleware technologies like Entera, which has been around for over 25 years. NXTera has evolved to support new languages, new databases and new networking paradigms alongside the older technology. NXTera provides the legacy-integration tools that enable agile applications to be extended to the Cloud. In concert with NXTware, newer languages and technologies can be integrated with legacy systems. To effectively integrate distributed applications with Cloud-based applications, you need the ability to integrate each of these applications on each platform.
Providing contemporary and legacy applications with a scalable, modern infrastructure platform
In response to the demands of evolving legacy applications, Cloud computing and cross-platform operations, a new paradigm is emerging to handle the infrastructure needs of this hybrid environment. Gartner has labeled this new infrastructure the Hybrid Integration Platform (November 2018). While this need has existed for many years, there has not been a focus on how much this infrastructure helps applications achieve total platform independence. Many challenges bring this focus to bear on legacy applications, but they usually fall into one of three categories:
Internal legacy skills tend to diminish as resources move on to new projects. Application service levels can decline without knowledgeable application maintenance.
Vendor support of various underlying platforms is discontinued.
Digital Transformation is driving a transformation of the software technology upon which most applications are built.
In this discussion, we will investigate the advantages of using a hybrid infrastructure platform (HIP) for hosting applications, rebuilding legacy applications and integrating them with the Cloud and contemporary architectures. Managers of legacy applications are looking for ways to improve productivity, enhance future versions of their in-house applications, and leverage the Cloud where it makes sense. Unfortunately, many legacy applications are written in 3GL languages; they don’t integrate well with contemporary architectures and can’t easily be moved to the Cloud. Many companies have developed their own infrastructures, but these tend to be written for a specific purpose and are not easily adapted to other platforms. To accommodate management of Cloud applications and cross-platform integration, a number of vendors claim to have highly scalable infrastructures that can handle hybrid application management. The first step is to identify the platform dependencies that prevent multi-platform production.
Remediating Platform Dependence
Application dependencies can be language, library or OS dependencies that are unique to each platform. There are subtle differences between language implementations, in spite of ANSI standards, that make portability difficult. If you have a legacy application written in C, C++, FORTRAN, COBOL, Pascal, Basic, PL/1 or C# that needs to run on another platform, we will skip to the end of that effort to simplify our discussion. Once the application is ported to another OS, there will be issues to address on the operations side. The infrastructure needed to start and stop it differs depending on the OS. If you want one tool to do this across all platforms, that is what a Hybrid Infrastructure Platform (HIP) does. A HIP can do much more, but this is the basic need: start and stop your applications in a timely and performant manner.
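The "one tool to start and stop applications on any OS" idea can be illustrated with a small dispatch table. This is a toy sketch under stated assumptions: the per-OS commands below are illustrative placeholders, not the mechanism any real HIP product uses.

```python
# Minimal sketch of a cross-platform start command, hiding per-OS differences
# behind one interface. The command table is a hypothetical example.
import platform
import subprocess

START_COMMANDS = {
    "Linux":   ["systemctl", "start"],
    "Windows": ["sc", "start"],
    "Darwin":  ["launchctl", "start"],
}

def start_command(service, system=""):
    """Return the platform-appropriate command line to start a service."""
    system = system or platform.system()
    if system not in START_COMMANDS:
        raise ValueError("unsupported platform: " + system)
    return START_COMMANDS[system] + [service]

def start_service(service):
    """Start a service on the current platform; returns the exit code."""
    return subprocess.call(start_command(service))
```

The operator calls `start_service("billing")` everywhere; only the table knows that Linux wants `systemctl` while Windows wants `sc`.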
Suppose you have an existing application that runs in a hybrid environment of on-site systems such as Windows, Linux, Unix and mainframe, and you are moving selected applications to the Cloud. The on-site application servers or services could provide data to clients in a federated SOA environment. Starting all of the components across Windows, Unix, Linux and mainframe would be a complex task even with customized scripts on each platform. Complex diagnostic tools are often needed to discern the status of the overall system by testing each node in the network for valid connections and data.
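The diagnostic step described above, testing each node in the network for a valid connection, can be sketched as a simple reachability probe. This is an assumption-laden toy: a real system would also verify application-level responses, not just TCP connectivity.

```python
# Toy health check: probe each (host, port) node for a valid TCP connection.
import socket

def check_port(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def system_status(nodes):
    """Given an iterable of (host, port) pairs, report each node's reachability."""
    return {(host, port): check_port(host, port) for host, port in nodes}
```

A monitoring console could run `system_status` on a schedule and flag any node that flips from reachable to unreachable.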
Challenges of a Hybrid Environment
Gartner defines a hybrid environment as "on-premises and cloud-based integration and governance capabilities that enables differently skilled personas (integration specialists and nonspecialists) to support a wide range of integration use cases." Hybrid environments can also include IoT, mobile and SaaS components. A truly hybrid environment poses challenges in several areas, such as development, testing and deployment, but most organizations focus on the operations aspect. Companies that don’t know how to manage a hybrid environment try to mimic the existing operations built around their hardware and software silos. In lieu of an infrastructure that supports multiple platforms, they look to leverage existing knowledge and expertise to extend the management model to the Cloud.
Most companies don’t think that extending their deployment to the Cloud will affect their current development environment, but it inevitably does. Because the hybrid environment is dynamic and constantly changing, companies must account for a diverse development and testing environment. The development infrastructure must also support automated builds, whether with custom-built scripts or with vendor-supplied utilities like Jenkins. The ability to build and test applications and deploy them anywhere in the hybrid environment becomes an advantage. Having an infrastructure that gives you the flexibility to develop and test on any platform is a focal point for maintaining a competitive edge.
The next challenge is Deployment. Having an automated build process that can be distributed is one thing, but being able to deploy new versions across a hybrid application environment on demand assures the most critical aspect of Continuous Delivery. This means you can build and QA new versions, and the intelligent infrastructure can coordinate the criteria needed to deploy across all the required platforms.
The biggest challenge is Operations. Being able to manage a hybrid application environment assures your company the continuity of performance necessary for a successful enterprise. This means you need an infrastructure that can coordinate across all the platforms to ensure harmony during execution. This is known as application orchestration: application systems need to start and re-start on any node in the network in a repeatable, concise manner that minimizes startup time and provides systematic, serial and parallel processes to ensure proper operation.
Application systems must stop and shut down in a systematic fashion, with services shutting down in sequence and with a minimum of failures.
A HIP contains intelligence on each managed service in a configuration, so that its health and proper operation are checked and assured.
A really good HIP can do application health checking – ensure services start in order, prevent failures and resolve issues before the client knows there is a problem.
A really good HIP is platform independent: its presentation remains consistent and the operations commands are the same on every platform.
A HIP needs to record the details of the application infrastructure in configuration files so that it can be replicated on every system.
In addition, a HIP should orchestrate automated builds and analyze their success, deploy an application environment to any node, create a framework for seamless DevOps functions that ensures a Continuous Delivery model, and integrate with Eclipse to provide a distributed development environment.
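The orchestration requirements above, services starting in a prescribed order with serial and parallel steps, can be modeled as a dependency graph. This sketch computes a safe startup order with a topological sort; the service names are hypothetical examples, not any product's configuration format.

```python
# Model service startup ordering as a dependency graph and compute a safe
# start order with a topological sort (Python 3.9+ standard library).
from graphlib import TopologicalSorter

def startup_order(dependencies):
    """Given {service: set of services it depends on}, return a safe start order."""
    return list(TopologicalSorter(dependencies).static_order())

# Example: the database must be up before the app server, which must be up
# before the web front end.
deps = {
    "web": {"appserver"},
    "appserver": {"database"},
    "database": set(),
}
print(startup_order(deps))  # prints ['database', 'appserver', 'web']
```

An orchestrator can start services in this order (and shut them down in reverse), and services with no mutual dependencies can be started in parallel within each stage.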
In summary, the features of hybrid integration platforms have to be ubiquitous across all platforms to be effective. HIPs must provide the cross-platform development, deployment and operational functions required to operate a hybrid environment. Having this infrastructure embedded in your environment provides a complete solution that monitors and manages these discrete components and allows everything to work in harmony.
Is there one software package that does all this?
In reviewing the challenges and requirements of a hybrid environment, a question arises: is there anything out there now that can do all of this? There is one product that meets each of these requirements and does so on any platform: NXTware.
The NXTware platform is composed of three discrete components: NXTera, NXTware Remote and NXTmonitor. For the development challenge, NXTera and NXTware Remote combine to provide a modern, distributed environment for 3GL and 4GL applications. NXTware Deploy meets the deployment challenges across a hybrid environment. NXTmonitor meets the operations challenge in a distributed environment and, with the addition of web services connectors, meets all the requirements of a hybrid infrastructure platform today. The NXTware platform can provide the infrastructure applications need to run on any platform or in the Cloud.
This is the second half of eCube’s interviews with VMS Software Inc.’s John Reagan, who heads the compiler group. We have condensed the interview into two blogs; this part expounds on the history of LLVM and the reasons the OpenVMS compilers use it on x86-64. We will continue from the point where the last blog left off.
John Reagan, VMS Software, Inc.
History of LLVM and compiler technology
Q: We had some discussions earlier on the history of LLVM, and I have done a little research on LLVM, and it was started around 2000. You had some earlier information on this technology and I was hoping you could share that with us.
A: I think your timeframe is correct. It was a project by Chris Lattner at the University of Illinois at Urbana-Champaign. Digital invented GEM and other backend projects, but we never shared them; this was before the days of open sourcing. I looked at what Chris did with LLVM, and I looked at his original Master’s thesis, and while he did great work, and I don’t think he cheated or stole it from anyone else, some of what he invented is conceptually similar to GEM. We had it in GEM already, and had had it for 10 years, except we were never able to tell anyone about it. Chris, however, did create a framework that the GEM backend did not have.
So to back up and answer your earlier question: I started with Digital in 1983 working on compilers, and I worked on VMS compilers all the way up to the point when HP transitioned VMS engineering to India. I moved to another area inside HP, working on compilers for HP-UX for a little while but also compilers for the NonStop group, and one of the things we did was port NonStop to x86, several years before what we’re doing now for VMS. We had to decide which backend technology to use to generate x86 instructions. At the time the choices were: go to Intel and license their backend for a hefty sum of money and probably some royalty structure; or use GCC, the de facto large gorilla in the room in the open source community, which comes with the GPL license for better or worse and a very vocal and active user community. There was another one called Open64, which was what NonStop chose, but it has since withered because I think everybody who was there has switched over to LLVM. At the time LLVM was very new, and I think it only had its x86 backend, so NonStop didn’t choose it, but we had a chance to look at it. Even then I liked it, because it had a really strong flavor of what I knew from GEM, so it fit me from a personality point of view.
The other thing is that LLVM and the LLVM Foundation have done a really good job of promoting LLVM, and it is actively used by Apple, Google, NVIDIA, Sony, Qualcomm, and many others. Major players in the industry are now using LLVM. The version of Android on my Pixel 2 phone was built with LLVM. It’s putting pressure on the GCC space; they now have a real competitor. LLVM has great online documentation, and the community is very helpful and reasonable. I’ve been to several of the LLVM developers’ conferences and have been well received; it has been helpful in getting VMS additional visibility. I gave a presentation at the 2017 Fall LLVM conference on using LLVM for our compilers. They record all the sessions at the conference; just search YouTube for "John Reagan LLVM" and you will find my five-minute lightning talk where I squeeze everything into 5 minutes. It’s cringeworthy watching me do it, but it gives a good high-level survey of what we’re doing, and I’m trying to keep VMS in their minds; some of the older guys like us say "Oh, yeah, I remember VMS," but many of the new people don’t.
Q: Thanks, that is useful. Well I noticed when I was researching LLVM that COBOL was not listed as a language.
A: That I don’t know. If no one has put a COBOL frontend on LLVM, we might be the first. I know of no one who has put a BASIC frontend on it; we would be the first for that. There have been Pascal frontends, or Pascal-like languages, and I think there have been Fortrans on it before. As a matter of fact, there is now a new Fortran compiler for LLVM, and I love their naming scheme for it. So if "clang" is the name of the C frontend for LLVM, what do you think the Fortran one should be named?
Q: It should be “flang”!
A: It is exactly "flang", and it’s an open source compiler from NVIDIA. It has a lot of DEC Fortran extensions in it, so it is one of the things we may provide in the future in addition to our own Fortran 90/95 compiler for VMS, the one you’ve known for years that generates GEM and works with the GEM-to-LLVM translation. If you want things from the newer Fortran standards, you’ll have a newer Fortran compiler. You might have to give up some of your ancient DEC Fortran features: if you have RAD50 constants or something weird like that still in your Fortran program, you’ll have to get those out.
Also, by using LLVM and clang (we talked about all the other languages, but I haven’t talked about what we’re doing for C++): the C++ compiler on the Alpha was a Digital-written C++ compiler that conformed to whatever C++ was at the time; there wasn’t even a standard then, this being the early 90s. On Itanium the C++ compiler is actually derived from the Intel C++ compiler, and it is only C++03 standard compliant. People want more standards support: since C++03 there have been C++11, C++14, and C++17, and the committee is already working on C++20, so it’s a very active language and people want all the new features in those standards. So we need a better C++ compiler for VMS on x86. I can’t use the ancient Alpha one; it doesn’t have anything in it that I want. The one on Itanium is licensed from Intel, and I don’t even have the ability to take that forward to x86 if I want to. So I said we need a C++ compiler, and clang is a C++ compiler: it’s highly compatible, it’s portable, it’s standards compliant. For the C++ compiler, we will take clang, which is a Linux-y frontend, add a DCL interface to it, and add a handful of new VMS features. That’s exactly what we did on Itanium: we took the reference Intel C++ compiler, added a DCL interface, added dual-size pointer support and a bunch of other stuff, some of it easy, some of it taking a couple of months to chew through. We will do exactly the same again for clang, and when we’re done you’ll have a C++ compiler on VMS that supports the current language standards, which makes it easy for people to bring over open source code written in C or C++. And clang, besides being a C++ compiler, is also a C compiler.
So you can see, if you have code coming from some open source project or some Linux box, by definition it has no VMS in it; no one is calling SYS$QIO on a Linux system today. You bring all that code over to VMS x86, you have our C++ compiler based on clang, and it will compile it just fine. So the obstacle that kept you from moving things to VMS, the lack of a modern C++ compiler, goes away.
In addition, we will be bringing over a better standard template library. LLVM, besides the clang compiler, has another project called libcxx, a standards-compliant, open-sourced library with the very flexible LLVM license. So I get to take that, and we will have a standards-compliant C++17 standard template library and a C++17 compiler. We’ll have to update some of the system headers, and probably put a little wax and polish on the CRTL that we have today.
Q: That’s a fascinating approach. So, changing gears a little, how hard do you think it would be to have a COBLANG or a COBOL compiler?
A: Yeah, well, I certainly don’t think hooking up our COBOL compiler is going to be a problem for OpenVMS on x86, but I don’t know if I’m ever going to be able to open source the stuff that we inherited through HP.
Q: I am sure a good open source COBOL compiler would have a lot of traction. I mean, there are a few out there; Fujitsu has one, right?
A: There is OpenCOBOL; some people like it and some don’t. It is the same in the Pascal space; there is GNU Pascal, Free Pascal, etc. Just from a fun perspective, I’d like to open source the Bliss compiler and have Bliss everywhere, because of course there’s really no monetary value in what’s left of the Bliss frontend. There’s nothing in there that’s patentable; it has no market value; there are no corporate secrets in there. It’s a boring compiler that parses tokens, moves bits around and generates an intermediate language, just like every other compiler you’ll find on the internet. But as I was just saying, we technically don’t own it, so I technically can’t open source it.
Q: Bliss? I thought it was open source.
A: No, it still has Digital or HP copyrights on it. I would like us to make some agreement with HP to let us do some of these things; I think it would help some of our customers. That would be something for VSI management to take up with HP management.
Q: I just installed Bliss on our VMS systems back in our office to test our new Eclipse editor for NXTware Remote, and there was no product license, so I wasn’t sure.
A: That’s right, it doesn’t require a license pack, but it’s not open source.
Q: OK, let’s move on. I am curious about how the compilers handle the VAX architecture; we talked about this earlier. I have always known the VAX to be a stack machine. My question is that MACRO-11 has a lot of instructions for the stack; is this emulated for new architectures like x86?
A: So there is a MACRO-32 compiler for the Alpha and Itanium machines. At one point in the history of VAX VMS, the whole operating system was written in MACRO-32; there was some Bliss, but the majority was MACRO-32. Over the years that has changed. Right now it is mostly Bliss and C, but I think about a third is still MACRO-32. The MACRO compiler on Alpha reads the VAX instructions from the source file and turns them into Alpha instructions. It is relatively straightforward: you have a VAX ADDL3 instruction, which adds two operands and stores the result in a third, whether they are memory locations, registers or whatever, and the compiler magically does the right thing on all those different architectures. We are writing another MACRO-32 compiler for x86; it is the same thing generally, but it goes in through a slightly different interface than the rest of the compilers. We still use LLVM to generate our object files, but we have to do some of the code generation ourselves. MACRO-32 has the peculiarity that a lot of MACRO-32 code knows how to jump from the middle of one routine into the middle of another routine and talk about the same registers. You can’t reasonably represent that in a compiler intermediate representation, and you can’t represent it in LLVM. You can’t write a C program that does a goto from routine A to routine B; you can do setjmp and longjmp and work it that way, but that is high overhead and doesn’t get you what you want. But MACRO-32 does this all the time: come into the entry point of A, save some registers, jump into B, jump into C, optionally jump into D and go out the bottom of some other routine. The epilogs of those routines have to know which registers to put back the way they should be. So that is something we have to take to a lower level of LLVM: they have an assembler interface in LLVM, as opposed to the higher-level language interface, and we use that.
GEM had the same thing. It had the GEM Intermediate Representation that everybody used except for MACRO-32 – there was a back door interface that GEM used on Alpha and Itanium. LLVM not surprisingly has a similar kind of interface.
Q: So the LLVM IR is not used for MACRO-32?
A: No. Unlike the other frontends, which don’t care about their target, a portion of the MACRO-32 compiler does care about the target and a portion doesn’t. The parser for MACRO-32 is a parser. The optimizer for MACRO-32 is an optimizer. But MACRO-32 has to emulate a VAX, and the VAX has condition codes, while Alpha and Itanium don’t. X86 has condition codes, but they aren’t quite the same as the VAX condition codes: on VAX just about every instruction sets the condition codes; on x86 only a subset of instructions do, but we are leveraging that to get reasonable-quality code for x86. Every architecture is a little different, and MACRO-32 is a little different for each target. So it looks like a VAX to you: you do a PUSHL and it pushes things on the stack, you do a CALLS and the magic happens. We do the right thing on each target, but it is different on every target.
Editor: This concludes the interview with John Reagan on the subject of LLVM for OpenVMS X86.
The following blog documents a series of interviews with VMS Software Inc.’s John Reagan, who heads the compiler group. John was giving a series of presentations on leveraging LLVM for the new OpenVMS x86 port. After listening to his presentation in Malmo, Sweden, I decided to contact him and ask him to expound on his fascinating view of modern compiler technology and how compilers work. John has worked on various compilers at DEC, Compaq and HP and has a very deep background in many 3GL and 4GL languages and their tools.
John spent 31 years working for Hewlett Packard as a Compiler engineer for OpenVMS, specializing on Pascal, COBOL, Macro-32, the GEM backend, and assorted other compiler-related projects. He is currently working for VMS Software, Inc. as a Compiler architect/developer for compilers and compilation toolchain for OpenVMS on Itanium and future platforms.
Q: I saw your presentation at the Bootcamp and at the Connect Conference in Malmo earlier this year, and I had a number of questions on it, so I will start there. In your presentation, you explained that you worked on the development of GEM, which was created as a common code generator for the Alpha 64-bit RISC architecture, with a target-independent IR used by all the languages. You mentioned GEM and GEM IR; those terms were unfamiliar to me, so I had several questions:
What is the difference between the two?
Does IR stand for Intermediate Representation?
Can you describe how this was done in GEM, how LLVM does it, and why this step is important for generating object code?
A: Those are all good questions. To be clear, I wasn’t one of the original GEM developers. As a frontend developer, I used GEM, provided feedback to GEM, and even worked with the GEM team to solve particular problems. I only became a real GEM developer when most of the original GEM developers transitioned from Compaq to Intel. You asked about GEM; GEM is not an acronym. Back in the Digital days, there was a Prism project, and Prism was going to be the follow-on architecture to VAX. It wasn’t done; Alpha was done instead, and it was almost Prism. At the time, everyone was enamored with things that looked like jewels, so there was GEM and Opal, and other names were involved. So all of our VAX compilers were a mix of modules: different code generators, different approaches, different optimizers. Some had great optimizers, like the VAX Fortran compiler, and some had almost none: the VAX COBOL compiler had no optimizer at all. Drawing on everything we learned on VAX, going to Alpha they decided: let’s write a common backend that everyone uses. We had some experience on VAX; you may have heard of the VCG, the VAX Common Code Generator, which the VAX PL/I, VAX Ada and VAX C compilers shared. It wasn’t really common; it was cloned from one compiler to another and shared resources. But we knew as a compiler group we were going to have to write compilers not only for VMS, but also for MIPS Ultrix. That group came to us and said: we want your Fortran compiler; it is a great Fortran compiler. We asked: how do we get our VAX Fortran compiler onto MIPS Ultrix? You switch architecture and operating system.
GEM was invented as a multi-OS, multi-target backend. It provides interfaces that deal with command line processing, file reading, file writing, etc. So a frontend could say "I need to read my source file," and on VMS it knew how to call RMS, and on Tru64 or MIPS Ultrix it knew how to call those file systems. That part of GEM deals with environmental issues. The other part of GEM is the intermediate representation, where you take the source code and turn it into, think of it as a high-level assembler: I want to create some variables and add them together or multiply them or do a test and a branch. If you dumped it out you would say it has an assembler feel to it, but it is a little more abstract. It knows about descriptors, it knows about loops, but it has almost no target architecture information in it at all. For some things like weird BLISS linkages, we had to have register numbers encoded, but 99% of the generated code is target independent. GEM took that intermediate representation and internally, depending on whether it was an Alpha or MIPS target version of GEM, turned it into a tree and did optimizations: code hoisting, common sub-expressions, loop unrolling, the things you needed for Fortran. We needed to make sure that all the tricks we played for VAX Fortran were also performed by GEM. GEM also had to learn new tricks for Alpha; it would have been horribly disappointing if we came up with this really fast Alpha chip and then found out the Fortran compiler gave you worse performance than your older VAX did.
Having fast chips is one thing, but if you don’t have a good compiler, you are in trouble. So GEM does all that transformation, and it knows how to write out the object information, because it knows the object file is a different format on different platforms. I look at GEM sometimes as Mr. Potato Head; there are different pieces inside of GEM: there’s the optimizer, which is mostly target independent, but there are several code generators – there is one for Alpha, there is one for MIPS, there’s one for 32-bit x86 for Visual Fortran. It knows how to write COFF files for the Tru64 object format, it knows the Windows object format for Windows systems, it knows how to write the VMS object format, so when we build GEM for different targets, we just pick and choose the pieces we want and end up with a GEM that gets linked with the front end. The front end produces a symbol table and a generic sequence of the intermediate representation and tells GEM: here, go have fun and generate code. What that enabled is that the Fortran front end could now be the same front end for VMS Alpha as it was for MIPS Ultrix, as it was for Tru64 Alpha, as it was for Linux on Itanium, as it was for all those different targets and flavors that we had. It allowed, for the first time, COBOL to be optimized. COBOL now runs through the same optimizer that Fortran runs through. The Alpha COBOL could now do strength reduction and loop unrolling. Our Alpha BASIC compiler knows how to do pointer analysis. We had never done that before because it wasn’t worth the effort – no one was really looking for that type of performance in those languages – but you get it all for free, right? That’s the advantage of that backend.
Going to something like LLVM, we have the same advantage again. We were presented with the issue of how to keep the existing VMS customers happy as they move their code forward, while also modernizing VMS and giving them newer language standards for things like C, C++ and Fortran, and good quality code for all the different types of x86 chips, whether they come from AMD or Intel. LLVM has support for all the different types, not just one or two types of chips. On Alpha, there were just two or three different flavors of Alpha chips, and the differences between them were pretty minor in most cases. Internally some were faster, but from an outside view they behaved pretty much the same. Go look at x86, and over the years there are dozens of subsets of different instructions on different chips. Some chips like one sequence of instructions versus another, so the optimizations almost need to know which chip they are optimizing for. To try to keep track of that is a difficult job; it takes a lot of people. With something like LLVM, there are hundreds of people around the planet watching that stuff, constantly tweaking it and keeping track of it so I don’t have to, right?
We said: we have all these front ends that generate this GEM intermediate language; how do we get those same front ends hooked to LLVM? We need those front ends, because we have customers out there with millions of lines of BASIC, Pascal, COBOL and Fortran – they expect our front ends; there’s no other place to go for those things. We could have made massive changes to each front end to have it directly generate LLVM intermediate language, but why bother? They all generate a nice, pretty understandable GEM intermediate language, so we have written a GEM intermediate language converter. It takes the GEM intermediate language and generates the LLVM intermediate language. LLVM being a compiler tool, this is not really rocket science. At this level, for people in this industry, it is the same thing: the LLVM intermediate language lets you declare variables, add two of them together and then talk about the result.
There are 200 GEM intermediate representation node types: add, subtract, multiply, shift left, shift right, bit fetch, bit store, all the various flavors. 75% of them are one-to-one mappings onto something equivalent in LLVM. Add is add, and so on. There is not really much else you need to do. There are a few places that are a little different, since GEM has some built-in knowledge that made it easy for every front end to build VMS descriptors. LLVM doesn’t know about VMS descriptors, so for the one GEM intermediate node that talks about building a VMS descriptor, the converter has to generate a sequence of fifteen or twenty LLVM intermediates, but it is pretty much: fill in the datatype in the first byte, fill in the class in the second byte, fill in the length in the next word. It is relatively straightforward and very mechanical. All we had to do was go through those couple hundred GEM nodes and convert them all: just brute force. Just pound your way through it. For the most part that gets you 90% of the way there.”
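The brute-force mapping described here can be pictured as a table-driven walk over the GEM nodes: most map one-to-one onto an LLVM counterpart, and a few, like the descriptor builder, expand into a fixed sequence. Below is a minimal sketch of that idea in Python for illustration only; the real converter is written in C++, and every node name and field here is invented, not the actual GEM or LLVM node set:

```python
# Illustrative sketch of a table-driven IR converter: most nodes map
# one-to-one; a few expand into a short fixed sequence. All names are
# hypothetical, not the real GEM or LLVM node types.

ONE_TO_ONE = {
    "GEM_ADD": "add", "GEM_SUB": "sub", "GEM_MUL": "mul",
    "GEM_SHL": "shl", "GEM_SHR": "lshr",
}

def convert(node):
    """Return the list of LLVM-style ops produced for one GEM node."""
    if node["op"] in ONE_TO_ONE:
        return [(ONE_TO_ONE[node["op"]], node["args"])]
    if node["op"] == "GEM_BUILD_DESCRIPTOR":
        # LLVM has no notion of a VMS descriptor, so this one GEM node
        # becomes a sequence of stores into the descriptor block.
        return [
            ("store_i8", ["dtype", "desc+0"]),    # datatype in first byte
            ("store_i8", ["class", "desc+1"]),    # class in second byte
            ("store_i16", ["length", "desc+2"]),  # length in the next word
            # ...pointer and remaining fields follow the same pattern
        ]
    raise NotImplementedError(node["op"])

print(convert({"op": "GEM_ADD", "args": ["a", "b"]}))  # → [('add', ['a', 'b'])]
```

The point of the table is exactly what the interview describes: the 75% of one-to-one nodes cost one dictionary entry each, and only the special cases need hand-written expansion code.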
Q: So what about the language of GEM? Is it written in C?
A: “No, GEM is a mixture. It started life as all BLISS. Over the years, it morphed into different languages. There was a switchover when all new stuff was written in C, but there is a lot still written in BLISS. Of course, if you work on GEM, you have to learn how to do it in BLISS. The converter that we just wrote is in C++; there’s no reason to do it in BLISS. But of course all of these front ends – the BASIC front end, the COBOL front end, the Pascal front end – are all in BLISS. The Fortran front end is in C. Of course the C front end is in C, and the BLISS front end is in BLISS. So being in our group, you still need to know how to read and manipulate BLISS.”
Q: So LLVM – is it written in C?
A: “Yes, all in C – well, 99% in C++, and it uses extensive C++ features. There are a few lower level utility routines that are straightforward C99. It is very object oriented in its interface for building the LLVM IR. You do have to go and learn to read classes and inheritance and all those things; a good working knowledge of C++ helps when reading the LLVM sources. Whether it is C++, Swift, Rust or other things in the industry on Linux platforms, the LLVM IR has already been shown to work for a lot of different languages.
I downloaded the LLVM source code and built it on OpenVMS Itanium to generate code for x86. That is what we are using for our cross compilers. There are hundreds of files that are part of LLVM that we compile. We had to make a few changes for VMS; I think right now it is under 500 lines across the entire code base. We’ve had a couple of extensions – for example, in the Unix world, routines don’t take an argument count; in the VMS world, every routine expects an argument count passed in. So we’ve added some support to make sure every routine call has an argument count. That had to be done inside the code generator, not in the converter; the converter is too high level for that. There are some things we had to add for debug information; some of our languages like BASIC, Pascal, COBOL and BLISS need debug information that still isn’t in the current DWARF standard.
So we have added some extensions to make sure those additional types that our debugger expects get produced. It is all very mechanical and high level. We haven’t made any fundamental changes to LLVM because we want to pick up new versions of LLVM going forward. I don’t want to make the mistake of snapshotting LLVM today and sticking with it for the next ten years; it would be out of date in six months. LLVM churns out new versions quite often to add additional optimizations, support for new chipsets, and code mitigations for security issues like Spectre and Meltdown. So staying current with that code generator technology is very important.”
Editor’s note: This seems to be a good place to stop the first blog article. Coming up in the next blog of our interview: the history of LLVM and future plans.
Companies that depend on applications built on technology that has been abandoned by the vendor face a serious problem that needs to be addressed. As we discussed in last year’s blog, many companies are already struggling just to keep the developer skills necessary for keeping these kinds of applications going. In that blog posting, we used this picture to describe the progression of risk and return on investment (which may not accurately convey the risk or problems companies face):
In some cases, the problem may be more acute, since some of these legacy applications may be built on obsolete third-party technology which has been abandoned by the vendor for one reason or another. This is what is known as “abandonware”. The vendor’s first step in this process is usually the publication of a support matrix indicating the number of years the vendor’s software will be supported. In this way, the vendor can announce the successor product and the “extended support” fees that kick in when the software reaches end of life.
Many companies see this notification as a decision-making tipping point for keeping or unloading the application that uses the abandonware. Assuming a readily available replacement is not present, does the company continue with the existing application and increase its budget for support, or does it drop vendor support entirely and support the application internally? This depends on many factors, including the importance of the application, the reliability of the third-party software, current hardware and software upgrade issues, and compatibility with other parts of the enterprise.
Usually the vendor gives its clients a few years’ notice, but in some cases the end can be quite abrupt, as with a bankruptcy or acquisition. In either case, you can assume the vendor is preparing to unload or re-train its technical and support personnel rather quickly in anticipation of the end-of-life date. This will likely mean that problem resolution and bug fixes for the vendor software will be delayed or curtailed. If adoption of a new system is cost prohibitive, there are options to continue in the short term with the existing system. While freezing application development is an option, continuity for the application can be maintained with a balanced approach of services and careful maintenance. Depending on the application, abandonware can be sustained in the long term if you follow some helpful guidelines for legacy application support:
Extended support – if extended support for the abandonware is available at a reasonable cost, this would be the best and lowest risk option. One caveat: you must make sure the vendor still retains enough expertise to support the product and fix any potential problems, otherwise the support is not worth the additional cost.
Alternative support – if the vendor offers no extended support or is not able to provide adequate support, third-party support companies like eCube Systems, with expertise in these legacy abandonware products, can provide support until the replacement system is ready.
Enterprise Evolution/Legacy Modernization – employing an intelligent, phased approach to analyzing, replacing and modernizing the application component by component is a viable alternative to remove dependency on the abandoned software, and it should begin as soon as possible.
By extending support on the existing system, you eliminate the immediate impact of change to your application, and by implementing a phased plan to modernize the application, you retain its continuity while eliminating the dependency on abandonware over time. This will extend the return on investment while minimizing the risk, so it won’t adversely affect your users.
The future of many legacy applications is frequently determined by one or two decisive events that happen during the lifetime of a critical application, and those events are usually not related to the performance of the software itself. Normally, the software lifecycle tends to conform to two bell curves relating return on investment (ROI) and risk (of failure). Over time, the return on investment and the accompanying risk both rise. This is displayed in the graphic below:
More often than not, it is the risk associated with keeping the skills needed to keep the application healthy that is the biggest issue. Often an event related to the software maintenance or management staff occurs that puts the future of the application in question. As an integrator with many years’ experience, we have seen this dynamic play out many times before: a company’s critical application that was developed and maintained very successfully for many years suddenly reaches an impasse on its future. It can usually be traced back to one of two events:
1. A key developer/engineer with in-depth knowledge of the application retires or leaves the company. Suddenly, the company is unprepared to handle future maintenance on the application.
2. There is a radical management change due to promotion, M&A or retirement. New management comes in and decides that the old system has to go.
As a result of either of these scenarios, management decides to re-evaluate the worthiness of keeping the application and cites rising costs as the driving factor. While this can be used to drive management’s agenda to replace the application, it is usually the risk of losing technical expertise that seals an application’s fate. Replacing the skills of a long-time developer or maintenance analyst is difficult even if the application is well documented and maintained. If you add the requirements of a legacy language like COBOL, Fortran, C, Pascal or Basic, the task becomes very challenging, since these programming skills are scarce and, with very few exceptions, not taught in universities or colleges. Add to that the liability of using a legacy platform, and you have a good case for application replacement. Beyond the risk, however, the biggest problem is cost, which is sometimes ten to twenty times the cost of yearly maintenance. That makes it very hard to justify to upper management the immediate replacement of an application that is working smoothly but has only a foreseeable maintenance issue in the future. Management is averse to spending money on an application that hasn’t cost much over the years.
So you are stuck with maintaining an application with little or no programming expertise on the platform or the language the application is written in. You have the departing analyst’s salary in your budget and if you can find a consultant with the expertise, you can hire them part time, but even that is a temporary fix. So what do you do?
If a programming course for the language does not exist, you can try to get your consultant to teach your new developer the basics of the application language. How do you get the newcomer trained in the quickest possible manner? Here is a solution: use the Eclipse-based NXTware Remote for COBOL, Fortran, C, Pascal and Basic development. NXTware Remote provides the ability to develop in legacy languages with intelligent editors on Windows, Mac or Linux in a distributed development environment. Specifically, the NXTware platform contains an engine that runs on the legacy system, communicates with the desktop IDE, and interprets commands to manage the source files and to compile, debug and execute the binaries (executables) on the legacy platform remotely. The process of learning a new language becomes simpler, since the newcomer is using an environment they already know from Java development. The familiar interface breaks down the barrier to learning, and the new developer learns to program in the legacy language remotely without having to learn the legacy job control language or any of the infrastructure of the mainframe.
In this way, a neophyte can learn the structure of a legacy programming language and not worry about the mainframe infrastructure necessary to compile and test the code. Using this distributed development environment can jump start your legacy application’s entry into newer development technologies.
Many managers have a problem managing legacy software systems, and this article will discuss some of the reasons why. Almost every company that has been using computers for any length of time has developed some in-house software that meets a very specific need that cannot be met by COTS packages. Any time you deploy a software system into production, it becomes a legacy system by definition, and for good or bad it becomes part of the company’s portfolio. Some of these systems become very successful and enjoy a long deployment cycle.
Almost immediately after such a system is deployed, however, the most skilled developers on the project leave to work on other projects or to find bigger challenges with other companies. Unless the system design was well documented, most of the knowledge of the application leaves with them. As the application enters maintenance mode, fewer and fewer developers who knew the design of the original system remain. Over time, the application evolves from its original version, usually in response to business and technology needs, and it continues to provide greater value for the company. These are the kinds of applications that companies decide to keep.
Eventually, the application begins to show signs of age and the technology upon which it was based becomes obsolete and incapable of supporting the new business requirements. Patching the code for new features is no longer a viable alternative. Your legacy application has evolved to a certain point, but now the changes needed exceed the constraints of its design and it has become brittle and resistant to further change. This is a normal evolutionary process and something that every organization has to respond to.
Unfortunately, as your application has evolved, the skills of the developers that maintain it haven’t always kept up with the latest technology. With more and more new features being provided by open source, third party software and new technology, managers find themselves looking for outside help to bridge the technology skills gap. They run into problems fitting these developers into their legacy environment because of a mismatch of skills. Newer developers usually lack the background in legacy systems required to understand the design of the application and there are very few places they can pick this knowledge up. The older developers who have been maintaining the old system often don’t investigate the new technology since it is not a part of their job description.
Along with the maturation of the application, the company’s perception of the importance of the developer diminishes, so the resources to keep these skills are also reduced. Pretty soon the expertise needed for legacy development has evaporated, and the knowledge of the history of the application is reduced to one or two maintenance programmers. Their strength lies in understanding the history of the application, not necessarily its inherent design. The design knowledge is gone, yet any redesign will require exactly that kind of skill, which is usually a mismatch with the skill set of the remaining developers. To extend the design of your legacy system, you need a highly skilled developer, and to attract that type of developer, you will need a modern environment for them to develop in.
This is the problem many companies with legacy systems face: the business logic that legacy systems perform, and its value to the company, needs to be retained and extended, because any other option (i.e. a rewrite) is too risky and cost prohibitive. Many CIOs have faced this problem and have analyzed and assessed their inherited legacy systems, only to defer any changes due to the risk. That risk is growing with the passage of time, as the technology gap widens and the expertise currently maintaining the legacy system gets closer to retirement age. The talent pool of legacy developers is shrinking every year, and it is not being replenished with college graduates willing to learn this old technology.
While newer technology can replace much of the functionality of legacy systems, the business logic they perform cannot easily be replaced. So the problem is finding the right mix of developers and technology tools to implement a new design through integration. What is the best way to achieve this goal? Perhaps there is a solution already out there – check out this video:
From our previous discussions, you can see there are many tangible benefits to using Application Orchestration (AO) for DevOps. Not only do you leverage the monitoring benefits of a GUI, but you also create the intelligence you need to quickly deploy and run new applications. The most prominent benefit is for the operations staff, because they now have a tool which shows them the current health of the applications, and they can manage problems without needing the developer’s help. The DevOps engineer gains the ability to migrate the exact infrastructure intelligence from one system to another, and the developer is freed from babysitting the application for the first few months while operations learns how it works and how to resolve problems. The intelligence created in AO to pre-define and orchestrate the interactions of the discrete components that make up complex applications provides the basis for these benefits. As many DevOps engineers know, modern enterprise applications often contain inter-dependencies with other applications and components and are sometimes part of a Hybrid IT solution, which can make them difficult to manage. As many DevOps engineers can attest, just restarting a process that has died does not necessarily resolve the problem. Some complex interactions between these components can be difficult to isolate, and that can slow down or even stop the effective deployment of a new application. Since many of these interactions cannot be “discovered” and are not obvious to operations, they are a significant barrier to successful DevOps operation and Continuous Delivery.
Abstracting Complexity with AO
If we consider the overall goal for DevOps, we want a tool to deliver new software versions quickly and effectively, which means the new software needs to be deployed, started and tested as quickly as possible. Application Orchestration is the solution that can simplify the deployment and management process by abstracting infrastructure complexity with easy-to-use, GUI-enabled dependency tools which can create relationships between components. The status of each component then automatically determines which modules are affected. With an effective display, application status becomes more apparent and precise, making error detection much easier. Similarly, corrective action needs to be automated based on the infrastructure, or surfaced in the GUI so the operator can effect a rapid response. To achieve this level of infrastructure intelligence, a number of features are needed for Application Orchestration, including intelligent display and control, scalability and portability.
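The dependency relationships described here are what let an orchestration tool determine automatically which modules a failure affects. As a rough sketch of that idea (the component names and graph below are invented for illustration, not taken from any actual AO configuration):

```python
# Illustrative sketch: given recorded dependencies between components,
# propagate a failure to everything that directly or indirectly
# depends on the failed component. The graph is hypothetical.

DEPENDS_ON = {
    "web_tier": ["app_server"],
    "app_server": ["database", "message_queue"],
    "report_job": ["database"],
}

def affected_by(failed, deps=DEPENDS_ON):
    """Return every component whose dependency chain reaches `failed`."""
    affected = set()
    changed = True
    while changed:  # iterate to a fixed point over the graph
        changed = False
        for comp, needs in deps.items():
            if comp not in affected and (failed in needs or affected & set(needs)):
                affected.add(comp)
                changed = True
    return affected

print(sorted(affected_by("database")))  # → ['app_server', 'report_job', 'web_tier']
```

With the relationships in data, the display can mark every affected module the moment one component fails, instead of waiting for each dependent process to discover the problem on its own.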
Visualization of the application is important for understanding everything else in Application Orchestration. If the true status of an application is to be understood, its infrastructure and relationships need to be represented in an intelligent display with readily available controls for management. By providing a navigation frame which itemizes each component and provides a drill-down into each one, a multi-tabbed display affords the DevOps engineer a wealth of information with a click of the mouse.
Perhaps the biggest challenge for operations is scaling an application up from development to production usage. AO needs to provide the infrastructure to readily modify the throughput capability of the application. Creating objects, each with its own start/stop/dependency properties, enables AO to replicate each component in the configuration: i.e., copy and paste the objects inside the configuration to enable scalability.
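The "copy and paste" replication idea can be sketched as follows: if each component object carries its own start/stop and dependency properties, scaling up is just cloning the object under a new instance name. This is an illustrative sketch only; the `Component` shape and its fields are hypothetical, not any actual AO product's model:

```python
# Illustrative sketch: each component carries its own properties, so
# scaling is cloning the object. Field names are invented for this example.
import copy
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    start_cmd: str
    stop_cmd: str
    depends_on: list = field(default_factory=list)

def replicate(component, count):
    """Clone a component `count` times, giving each copy a unique name."""
    clones = []
    for i in range(1, count + 1):
        clone = copy.deepcopy(component)  # deep copy so properties stay independent
        clone.name = f"{component.name}_{i}"
        clones.append(clone)
    return clones

worker = Component("order_worker", "start.sh", "stop.sh", ["queue"])
print([c.name for c in replicate(worker, 3)])
# → ['order_worker_1', 'order_worker_2', 'order_worker_3']
```

Because every clone keeps its own dependency list, the propagation and display machinery works on the replicas without any extra scripting.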
The portability of this intelligent infrastructure is a necessity for DevOps, speeding up the deployment process and preserving the intelligence needed to manage it. The knowledge required to manage the application has to be consolidated into a configuration file which can be exported to other systems.
Given the features identified in this discussion, I think we have presented a good case that Application Orchestration will have a positive impact on how applications are deployed and the ease with which they can be managed on enterprise systems. For more information on this capability, check out our further discussion of Application Orchestration features in our next blog. For a free evaluation of an Application Orchestration product from eCube Systems, click on this link:
Developers in a DevOps environment are always looking for vendor solutions to help them hand off the management of their deployed applications. One type of software is particularly suited to provide this functionality: Application Performance Management (APM). APMs can not only start and stop processes in a timely manner, but can also ensure processes are restarted when they fail. While most APMs focus on managing newer languages only on contemporary platforms like Windows and Linux, these script-based solutions do provide a valuable infrastructure to help companies manage and deploy new applications faster. Many DevOps vendors provide this type of solution entirely with their own proprietary scripts, which require a significant investment of time to get right. These scripts perform a variety of detailed functions like setting up the file system structure, copying files and creating VMs to host the new application. While much of this is very useful, it does not solve the biggest DevOps problem: fixing application deployment runtime issues. That is where APMs come in, and specifically a hybrid of the Application Performance Management model known as Application Orchestration – it augments APM functionality with a unique approach:
Capture the developer knowledge with rules in a configuration file
Provide the GUI controls needed to make DevOps much easier
Visualize applications with easy-to-understand displays
Enable scalability of applications by operations with performance groups
This paper will outline these enhancements to the APM model and show how AO helps the DevOps engineer understand and modify the application infrastructure, which is essential to managing enterprise applications where the problems can be complex and timely response is critical. Commercial DevOps tools like Puppet Labs attempt to solve this issue with scripts which provide the ability to set up and deploy an application to the enterprise, but fail to manage it effectively. This is because they rely on scripts that are inherently:
Developer written and maintained
Statically written, but may require frequent changes
Designed to interpret the effects of problems with constant polling, and
Required to execute complex logic to determine the appropriate remediation actions
Using the Application Orchestration approach, the application infrastructure intelligence is captured in a configuration file, so the retry logic, timing, dependencies and relationships within the application can be used to resolve issues. In many cases this knowledge can predict where problems will occur and resolve the cause before the application user ever sees a failure. By managing the application through a configuration file which captures the infrastructure, the intelligence is built into each module, making it easier to replicate for scalability and facilitating its display in a GUI. Component dependencies are captured for each module, so the effects of problems are automatically propagated to each affected component, making it easier to display accurate status and predict problems before they surface to the application users. The GUI which displays the configuration also gives the DevOps engineer confidence that the application is running smoothly and that they will be notified proactively should potential issues occur. This approach differs from depending on scripts that look at the effect of a problem and then execute copious logic to find where it originated before they can determine how to fix it.
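To make the contrast with scripts concrete, here is a rough sketch of a manager that interprets retry limits and dependency order from a configuration rather than hard-coding them in a script. Every key, component name and function in this sketch is hypothetical, not an actual AO configuration format:

```python
# Illustrative sketch: restart rules live in data, and a small generic
# manager interprets them. Keys and the starter stub are hypothetical.

CONFIG = {
    "database":   {"max_retries": 3, "depends_on": []},
    "app_server": {"max_retries": 5, "depends_on": ["database"]},
}

def restart(name, is_running, start, config=CONFIG):
    """Restart `name`, first ensuring its dependencies are up.

    `is_running(name)` and `start(name)` are supplied by the caller;
    returns the number of attempts the final start took.
    """
    rules = config[name]
    for dep in rules["depends_on"]:       # dependencies come up first
        if not is_running(dep):
            restart(dep, is_running, start, config)
    for attempt in range(1, rules["max_retries"] + 1):
        if start(name):
            return attempt
    raise RuntimeError(f"{name} failed after {rules['max_retries']} retries")

# Simulated starter that succeeds on the second attempt for each process.
calls = []
def start(name):
    calls.append(name)
    return calls.count(name) >= 2

print(restart("app_server", is_running=lambda n: False, start=start))  # → 2
```

Note that nothing about retries or ordering is coded per application; changing the behavior means editing the configuration, which is exactly the portability argument made above.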
In our next blog, we will itemize the features and the benefits Application Orchestration can provide for DevOps.
In our last blog entry, we discussed the hidden advantages of using a distributed IDE for enterprise development, but that is just the tip of the iceberg. To fully realize a Continuous Delivery (CD) environment for legacy systems, you need to be able to deploy new applications just as quickly as you develop them. Moreover, you need the ability to seamlessly transfer a running application from development to production without extensive delays. We can all agree that it doesn’t make sense to improve the processes around the development of new versions of your application if it takes months to test, QA, and deploy the application to your users. There are a number of obstacles that can cause delays in deployment, but let’s focus on the ones within your control; ones that you can improve readily or avoid if they aren’t integral to the Continuous Delivery process. To limit the scope of this discussion, let’s assume your company has a robust testing environment and an adequate testing strategy. Let’s also say the following conditions are present:
The development process makes significant changes to the infrastructure that are not all documented.
The systems to which the new applications are deployed are not similar to the development environment.
The documentation of the application infrastructure does not provide enough information to resolve issues.
The Common Challenge:
Given all these assumptions, how do you make the deployment process for a new application as easy and streamlined as possible? How do you avoid environmental and load issues when deploying a new application?
If you believe the majority of problems in deploying a new application are the result of resolving differences between development and production or QA, then the solution we are about to discuss will be of interest to you. Our proposal for speeding up the deployment process for new applications is to capture not only the development environment, but the developer’s knowledge of the application as well. By that we mean: capture the developer’s knowledge of dependencies and subtle timing changes in the new application. Often new modules require more time to start or depend on configuration subtleties. By capturing this information, you can reduce potential errors and ensure that testing and operations can concentrate on their goals instead of trying to debug the infrastructure. In the traditional model, where the developer starts and stops the application with scripts, the majority of changes involved in moving from development to operations are centered in those scripts. Maintaining these scripts is the single most difficult task in migrating the application; migrating the application’s components and structure to the target system is trivial by comparison. Given the complexity of some applications, the scripts are where changes are typically needed, for a variety of reasons, and the developer is usually the only one who knows what the issue is. What if there was a better way to capture this infrastructure so that it is portable, more adaptable to new systems, and easier to modify?
On more modern systems like CentOS and RHEL (and other Linux variants) and with Java applications, Docker is an excellent example of how an exact image of the OS can be created and the application captured in containers. A set of docker run commands with bash provides the install and run commands necessary to start individual apps. Docker also has restart policies to restart a container automatically when Docker restarts or when a container exits. More complex applications can be started with a bootstrap, also run in Docker. Even with these systems, the scripts sometimes need changes. Scripts of this type are often complex, with individual commands that usually require a programmer or systems engineer to modify and maintain. While this model might work well with Java, other languages like 3GLs present problems since they are not container based. One promising solution can be found in leveraging an Application Performance Management (APM) hybrid known as Application Orchestration. This APM hybrid is designed specifically for managing the application infrastructure: application startup and shutdown, using the developer’s knowledge of dependencies and starting order, all captured in a configuration that can be replicated. Application Orchestration is best described as a variant of the Application Performance Management software model, since it has no classification of its own. The APM model specifies several criteria useful for managing multiple processes in an application.
The Application Performance Management approach
Application Performance Management tools are a classification of software that facilitates the monitoring of applications. A few commercial tools provide some, but not all, of the features APMs are intended to provide. In this discussion, we are looking at the key services that give a DevOps environment significant advantages in ease of use and speed of deployment. While commercial APMs focus on monitoring software to ensure performance, we are targeting the features that apply to solving DevOps problems, and specifically to speeding up the move to production. If we look closely at these features, we can see how they provide the base services needed for the application orchestration that is essential to DevOps systems:
End User Experience – in the world of application performance, this feature is probably the most important. In the DevOps world, we would argue that our end users are the development and operations staff. The APM active monitoring feature would be a great benefit after deployment and would most likely provide the foundation for defining and documenting the variables in play during deployment. The EUE feature not only benefits the application's end user once the deployment is complete; it also improves the experience of the operations staff and developers who need DevOps tools during the deployment process.
Runtime application architecture – this feature is essential to managing application performance and greatly benefits a DevOps intelligent infrastructure, because it is the best way to capture the components and put them into a configuration that can be managed. We believe the best way to visualize an application architecture is through a GUI console rather than through scripts: a GUI can expose the underlying application infrastructure, showing each component, its sub-components, and their dependencies. The key feature here is Application Dependency and Discovery, which is useful both in managing the application and in orchestrating its startup.
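The dependency information that Application Dependency and Discovery exposes is exactly what orchestration needs to compute a safe startup order. A sketch, assuming a hypothetical four-component application whose dependency graph has already been captured (`graphlib` requires Python 3.9+):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical dependency map: each component lists what must start first.
deps = {
    "db": [],
    "queue": ["db"],
    "orders": ["db", "queue"],
    "web": ["orders"],
}

# A topological sort yields an order in which every component starts
# only after everything it depends on is already up.
start_order = list(TopologicalSorter(deps).static_order())
```

The same graph, walked in reverse, gives a safe shutdown order, and it is the kind of structure a GUI console can render directly.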
Business transactions – this is a more obvious feature when it comes to tools for monitoring and performance. In a DevOps tool, the APM business transactions feature is important because it reflects the application's overall health as delivered by its various discrete components. This information needs to be captured so that it can be monitored actively with health scripts and so that actions can be taken in the event of failures or degraded performance. Important application-dependent transactions need to be identified so that test cases or health scripts can validate them.
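Capturing key transactions as health probes can be as simple as a table of named checks with an action to take on failure. A minimal sketch; the probe names and results here are made up, and real probes would issue a test transaction against the live component:

```python
def check_db():
    """Placeholder probe; a real one would run a test query."""
    return True

def check_orders():
    """Placeholder probe; here it simulates a degraded component."""
    return False

PROBES = {"db": check_db, "orders": check_orders}

def run_health_checks(probes, on_failure):
    """Run every probe; invoke on_failure for each failing component."""
    results = {name: probe() for name, probe in probes.items()}
    for name, ok in results.items():
        if not ok:
            on_failure(name)        # e.g. restart the component or alert
    return results
```

The `on_failure` hook is where an orchestration layer would plug in its restart or escalation logic.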
Deep dive component monitoring – another key feature, and one quite important to the DevOps world, because DevOps relies on an intelligent infrastructure that can react to external forces and can detect and fix problems with the application. To accomplish this level of automated management, a deep dive into each component and its dependencies is needed to determine the scope of the modules affected and handle any contingencies. It follows that the ability to capture this infrastructure intelligence makes replicating the environment for operations even simpler.
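Given a captured dependency graph, determining the scope of modules affected by a failure is a graph walk: everything that transitively depends on the failed component is in the blast radius. A sketch, reusing the same hypothetical dependency-map shape as above:

```python
def affected_by(failed, deps):
    """Return every component that (transitively) depends on `failed`.

    `deps` maps each component to the list of components it depends on."""
    # Invert the graph: for each component, who depends on it.
    dependents = {}
    for comp, needs in deps.items():
        for need in needs:
            dependents.setdefault(need, set()).add(comp)
    # Walk outward from the failed component.
    scope, frontier = set(), [failed]
    while frontier:
        comp = frontier.pop()
        for dep in dependents.get(comp, ()):
            if dep not in scope:
                scope.add(dep)
                frontier.append(dep)
    return scope
```

An intelligent infrastructure can then restart or re-verify exactly that set instead of the whole application.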
Analytics and reporting – this is a secondary feature here: APMs rely on analytics and reporting to provide performance data, but the application orchestration DevOps requires doesn't depend on that level of metrics to succeed. Analytics used in conjunction with application dependencies, however, are useful in determining application health and in solving system-dependent resource issues that may affect the application. In the DevOps world, the goal is to get the application functioning and running without issues. Analyzing resources is good for spotting trends and making adjustments, but application dependency issues often affect application performance as well. For the internals, application log files are the primary source of application health reporting for DevOps.
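Treating log files as the primary health signal can start as a simple scan for error-level markers. A sketch, assuming conventional log levels in the line text (real log formats vary):

```python
import re

# Assumes conventional severity markers appear in each log line.
ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL)\b")

def log_health(lines, threshold=0):
    """Count error-level lines; the app is 'healthy' at or below threshold.

    Returns (healthy, error_count)."""
    errors = sum(1 for line in lines if ERROR_PATTERN.search(line))
    return errors <= threshold, errors
```

Combined with the dependency graph, even this crude signal tells operations not just that something failed, but which downstream components to worry about.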
If an APM addressed each of these needs, it would probably be an effective DevOps tool. However, one very important issue is left unresolved: how does knowledge of the application infrastructure get transferred to operations for a new environment? Is the documentation provided by development sufficient? Are you expecting the DevOps engineers to understand the intricacies of a developer's scripts, or do you expect your developer to be assigned to operations for an indefinite period to shepherd them through the process? If you are looking to incorporate commercial APM software into this process, then you should look at maximizing the impact of each APM feature in speeding up Continuous Delivery. Do your homework, figure out what you need, and incorporate those requirements into a hybrid solution for CD. Most likely you will come up with a set of requirements that needs a specific solution: Application Orchestration. Stay tuned for our next post, which explores this idea.