John Reagan Interview on LLVM, part 2.

This is the second half of eCube’s interview with VMS Software Inc.’s John Reagan, who heads the compiler group. We have condensed the interview into two blog posts, and this part expounds on the history of LLVM and the reasons the OpenVMS compilers use it on X86/64. We pick up the interview where the last blog left off.

John Reagan, VMS Software, Inc.

History of LLVM and compiler technology

Q: We had some discussions earlier on the history of LLVM. I have done a little research, and it was started around 2000. You had some earlier information on this technology, and I was hoping you could share that with us.

A: I think your timeframe is correct. It was a project by Chris Lattner at the University of Illinois at Urbana-Champaign. Digital invented GEM and other backend projects, but we never shared them; this was before the days of open sourcing. I looked at what Chris did with LLVM, and I looked at his original master’s thesis. He did great work, and I don’t think he took it from anyone else, but some of what he invented is conceptually similar to GEM. We had it in GEM already, and had had it for roughly ten years, except we were never able to tell anyone about it. Chris, however, did create a framework that the GEM backend did not have.

So, to back up and answer your earlier question: I started with Digital in 1983 working on compilers, and I worked on VMS compilers all the way up to the point when HP transitioned VMS engineering to India and that work went over there. I moved to another area inside HP, where I worked on compilers for HP-UX for a little bit, but also on compilers for the NonStop group. One of the things we did was port NonStop to X86, several years before what we’re doing now for VMS. We had a decision then on which backend technology to use to generate X86 instructions. At the time the choices were: go to Intel and license their backend for a hefty sum of money and probably some royalty structure; or GCC, the de facto large gorilla in the room in the open source community, which comes with the GPL license for better or worse and a very vocal, active user community. There was another one called Open64.net, which is what NonStop chose, but it has since withered away because I think everybody who was there has switched over to LLVM. LLVM was very new at the time, and I think it only had its X86 target, so NonStop didn’t choose it, but we had a chance to look at it. Even then I said I liked it, because it has a really strong flavor of what I knew from GEM, so it fit me from a personality point of view.

The other thing is that LLVM and the LLVM Foundation have done a really good job of promoting LLVM, and it is actively used by Apple, Google, NVIDIA, Sony, Qualcomm, and many others. Major players in the industry are now using LLVM; the version of Android on my Pixel 2 phone was built with it. It is putting pressure on the GCC space, which now has a real competitor. LLVM has great online documentation, and the community is very helpful and reasonable. I’ve been to several of the LLVM developers’ conferences and have been well received, and it has been helpful in getting VMS additional visibility. I gave a presentation at the 2017 Fall LLVM conference on using LLVM for our compilers. They record all of the sessions at the conference and post them on a YouTube channel. Just Google “youtube John Reagan LLVM” and you will find my lightning talk, where I squeeze everything into five minutes. I apologize, it’s cringeworthy watching me do it, but it gives a good high-level survey of what we’re doing, and I’m trying to keep VMS in their minds; some of the older guys like us say, “Oh yeah, I remember VMS,” but many of the new people don’t.

Q: Thanks, that is useful. Well I noticed when I was researching LLVM that COBOL was not listed as a language.

A: That I don’t know – whether someone has put a COBOL frontend on LLVM; if not, we might be the first. I know of no one who has put a BASIC frontend on it, so we will be the first for that. There have been Pascal frontends, or Pascal-like languages, and I think there have been Fortrans on it before. As a matter of fact, there is now a new Fortran compiler for LLVM, and I love their naming scheme for it. If “clang” is the name of the C frontend for LLVM, what do you think the Fortran one should be named?

Q: It should be “flang”!

A: It is exactly “flang”, and it’s an open source compiler from NVIDIA. It has a lot of DEC Fortran extensions in it, so it is something we may provide in the future in addition to our own Fortran 90/95 compiler for VMS – the one you’ve known for years that generates GEM and works through the GEM-to-LLVM converter. So if you want things from the newer Fortran standards, you’ll have a newer Fortran compiler. You might have to give up some of your ancient DEC Fortran features, though; if you still have RAD50 constants or something weird like that in your Fortran program, you’ll have to get those out.

Also, by using LLVM and clang – we talked about all the other languages, but what I haven’t talked about is what we’re doing for C++ – the C++ compiler on Alpha was a Digital-written compiler that conformed to whatever C++ was at the time; there wasn’t even a standard then, this was early in the 90s. On Itanium, the C++ compiler is actually derived from the Intel C++ compiler, and it is only C++03 compliant. People want more standards support: since C++03 there have been C++11, C++14, and C++17, and the committee is already working on C++20, so it is a very active language and people want all the new features in those standards. So we need a better C++ compiler for VMS on X86. I can’t use the ancient Alpha one; it doesn’t have anything in it that I want. The one on Itanium is licensed from Intel, and I don’t even have the ability to take that forward to X86 if I wanted to. So I said we need a new C++ compiler – hey, clang is a C++ compiler: it’s highly compatible, it’s portable, it’s standards compliant. So for the C++ compiler we will take clang, which is a Linux-y front end, add a DCL interface to it, and add a handful of new VMS features. That’s exactly what we did on Itanium: we took the reference Intel C++ compiler, added a DCL interface, added dual-size pointer support and a bunch of other stuff – some of it easy, some of it may take a couple of months to chew through – but we will do exactly the same again for clang. When we’re done you’ll have a C++ compiler on VMS that supports the current language standards, which makes it easy for people to bring over open source code written in C or C++. And clang, besides being a C++ compiler, is also a C compiler.

So you can see, if you have code coming from some open source project or some Linux box, by definition it has no VMS in it; no one is calling SYS$QIO on a Linux system today. You bring all that code over to VMS on X86, you have our C++ compiler based on clang, and it will compile it just fine. So the thing that kept you from moving things to VMS – “I don’t have a modern C++ compiler” – now we will have one.

In addition, we will be bringing over a better standard template library. LLVM, besides the clang compiler, has another project called libcxx, a standards-compliant library that is open sourced under the very flexible LLVM license. I get to take that, so we will have a standards-compliant C++17 standard template library and a C++17 compiler. We’ll have to update some of the system headers and probably put a little wax and polish on the CRTL that we have today.

Q: That’s a fascinating approach. So, changing gears a little, how hard do you think it would be to have a COBLANG or a COBOL compiler?

A: Well, I certainly don’t think hooking our COBOL compiler up to LLVM is going to be a problem for OpenVMS on X86, but I don’t know if I’m ever going to be able to open source the stuff we inherited through HP.

Q: I am sure a good open source COBOL compiler would have a lot of traction. I mean, there are a few out there; Fujitsu has one, right?

A: There is OpenCOBOL; some people like it and some don’t. It is the same thing in the Pascal space; there is GPascal, Free Pascal, etc. Just from a fun perspective, I’d like to open source the Bliss compiler and have Bliss everywhere, because of course there’s really no monetary value left in the Bliss frontend. There’s nothing in there that’s patentable; it has no market value; there are no corporate secrets in there. It’s a boring compiler that parses tokens, moves bits around, and generates an intermediate language, just like every other compiler you’ll find on the internet. But as I was just saying, we technically don’t own it, so I technically can’t open source it.

Q: Bliss?  I thought it was open source.

A: No, it still has Digital or HP copyrights on it. I would like us to make some agreement with HP to let us do some of these things; I think it would help some of our customers. That would be something for VSI management to take up with HP management.

Q: I just installed Bliss on our VMS systems back in our office to test our new Eclipse editor for NXTware Remote, and there was no product license, so I wasn’t sure.

A: That’s right, it doesn’t require a license pack, but it’s not open source.

Q: OK. Let’s move on. I am curious about how the compilers handle the VAX architecture; we talked about this earlier. I have always known the VAX to be a stack machine. My question is that MACRO-11 has a lot of instructions for the stack – is this emulated for new architectures like X86?

A: So there is the MACRO-32 compiler for the Alpha and Itanium machines. At one point in the history of VAX VMS, the whole operating system was written in MACRO-32; there was some Bliss, but the majority was MACRO-32. Over the years that has changed a little. Right now it is mostly Bliss and C, but I think about a third is still MACRO-32. The MACRO compiler on Alpha reads the VAX instructions from the source file and turns them into Alpha instructions. It is relatively straightforward: you have a VAX ADDL3 instruction – which adds two operands and stores the result in a third, whether they are memory locations, registers or whatever – and the compiler magically does the right thing on each of those different architectures. We are writing another MACRO-32 compiler for X86; same thing generally, but it goes in through a slightly different interface than the rest of the compilers. We still use LLVM to generate our object files, but we have to do some of the code generation ourselves. MACRO-32 has the peculiarity that a lot of MACRO-32 code knows how to jump from the middle of one routine into the middle of another routine and talk about the same registers. You can’t reasonably represent that in a compiler intermediate representation, and you can’t represent it in LLVM; it is hard to even talk about. You can’t write a C program that does a goto from routine A to routine B; you can do setjmp and longjmp and work it that way, but that is high overhead and doesn’t get you what you want. But MACRO-32 does that all the time: come into the entry point of A, save some registers, jump into B, jump into C, optionally jump into D and go out the bottom of some other routine. The epilogs of those routines have to know which registers to put back the way they should be. So that is something we have to do at a lower level of LLVM; they have an assembler-level interface in LLVM, as opposed to the higher-level language interface, and we use that.

GEM had the same thing. It had the GEM Intermediate Representation that everybody used except for MACRO-32; there was a back-door interface into GEM that MACRO-32 used on Alpha and Itanium. LLVM, not surprisingly, has a similar kind of interface.
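Editor’s note: for readers unfamiliar with the setjmp/longjmp workaround John mentions above, here is a minimal C++ sketch (illustrative only, not VSI code). It shows a non-local transfer of control from one routine back into another; as John notes, this is heavyweight and does not give you the register-level behavior that MACRO-32 code relies on.

    // Minimal illustration of setjmp/longjmp: control saved in routineA is
    // resumed from routineB. This is the workaround mentioned in the interview,
    // not how the MACRO-32 compiler actually works.
    #include <csetjmp>
    #include <cstdio>

    static std::jmp_buf back_to_a;

    void routineB() {
        std::puts("in B, jumping back into A");
        std::longjmp(back_to_a, 1);   // non-local transfer of control
    }

    void routineA() {
        if (setjmp(back_to_a) == 0) { // save a resume point
            routineB();               // never returns normally
        } else {
            std::puts("resumed in A after longjmp");
        }
    }

    int main() { routineA(); }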

Q: So LLVM IR is not used on MACRO-32?

A: No. Unlike the other front ends, which don’t care about their target, a portion of the MACRO-32 compiler cares about the target and a portion of it doesn’t. The parser for MACRO-32 is just a parser; the optimizer for MACRO-32 is just an optimizer. But MACRO-32 has to emulate a VAX, and the VAX has condition codes while Alpha and Itanium don’t. X86 does have condition codes, but they aren’t quite the same as the VAX condition codes. On VAX just about every instruction sets the condition codes; on X86 only a subset of instructions sets them, but we are leveraging that to get reasonable quality code for X86. Every architecture is a little different, and MACRO-32 is a little different for each target. So it looks like a VAX to you: you do a PUSHL and it pushes things on the stack, you do a CALLS and the magic happens. We do the right thing on each target, but it is different on every target.

Editor: This concludes the interview with John Reagan on the subject of LLVM for OpenVMS X86.  

VSI’s John Reagan Interview on GEM vs. LLVM for X86/64

The following blog documents a series of interviews with VMS Software Inc.’s John Reagan, who heads the compiler group. John has been giving a series of presentations on leveraging LLVM for the new OpenVMS X86 port. After listening to his presentation in Malmo, Sweden, I decided to contact him and ask him to expound on his fascinating view of modern compiler technology and how compilers work. John has worked on various compilers at DEC, Compaq and HP and has a very deep background in many 3GL and 4GL languages and their tools.


Background

John spent 31 years working for Hewlett Packard as a compiler engineer for OpenVMS, specializing in Pascal, COBOL, MACRO-32, the GEM backend, and assorted other compiler-related projects. He is currently working for VMS Software, Inc. as a compiler architect and developer for the compilers and compilation toolchain for OpenVMS on Itanium and future platforms.

Q: I saw your presentation at the Bootcamp and at the Connect Conference in Malmo earlier this year, and I had a number of questions about it, so I will start there. In your presentation, you explained that you worked on the development of GEM, which was created as a common code generator for the Alpha 64-bit RISC architecture, with a target-independent IR used by all the languages. You mentioned GEM and GEMIR – those terms were unfamiliar to me, so I had several questions:

  1. What is the difference between the two?
  2. Does IR stand for Intermediate Representation?

Can you describe how this was done in GEM, how LLVM does it, and why this step is important for generating object code?

A: “Those are all good questions. To be clear, I wasn’t one of the original GEM developers. As a front-end developer, I used GEM, provided feedback to the GEM team, and even worked with them to solve particular problems. I only became a real GEM developer when most of the original GEM developers transitioned from Compaq to Intel. You asked about GEM; GEM is not an acronym. Back in the Digital days there was the Prism project, and Prism was going to be the follow-on architecture to VAX. It wasn’t done; Alpha was done instead, and it was almost Prism. At the time everyone was enamored with names that sounded like jewels, so there was GEM and Opal and other names involved; GEM was just one of those jewel names, not an acronym having to do with Prism. So all of our VAX compilers were this mishmash of modules: different code generators, different approaches, different optimizers. Some had great optimizers, like the VAX Fortran compiler, and some had almost none; the VAX COBOL compiler had no optimizer at all. So, taking everything we learned on VAX, going to Alpha they decided: let’s write a common back end that everyone uses. We had some experience on VAX; you may have heard of the VCG, the VAX Common Code Generator, which the VAX PL/I, VAX Ada and VAX C compilers shared. It wasn’t really common – it was cloned from one compiler to the next and shared resources – but we knew as a compiler group that we were going to have to write compilers not only for VMS, but also for MIPS Ultrix. That group came to us and said, “We want your Fortran compiler; it is a great Fortran compiler.” So how do we get our VAX Fortran compiler onto MIPS Ultrix? You are switching both the architecture and the operating system.

GEM was invented as a multi-OS, multi-target backend. It provides interfaces that deal with command line processing, file reading, file writing, etc. So a front end could say “I need to read my source file,” and on VMS it knew how to call RMS, while on Tru64 or MIPS Ultrix it knew how to call those file systems. That part of GEM deals with environmental issues. The other part of GEM is the intermediate representation, where you take the source code and turn it into – think of it as a high level assembler: I want to create some variables and I want to add them together, or multiply them, or do a test and a branch. If you dumped it out you would say it has an assembler feel to it, but it is a little more abstract. It knows about descriptors, it knows about loops, but it has almost no target architecture information in it at all. For some things, like weird BLISS linkages, we had to have register numbers encoded in the linkage, but for 99% of the generated code it is target independent. GEM took that intermediate representation and internally, depending on whether it was the Alpha or MIPS target version of GEM, turned it into a tree and did optimizations – code hoisting, common sub-expressions, loop unrolling – the stuff you need to do for Fortran, and we needed to make sure that all the tricks we played for VAX Fortran were also performed by GEM. GEM also had to learn new tricks for Alpha; it would have been horribly disappointing to come up with this really fast Alpha chip and then find out the Fortran compiler was terrible and gave you worse performance than your older VAX did.

Having fast chips is one thing, but if you don’t have a good compiler, you are in trouble. So GEM does all that transformation, and it knows how to write out the object information, because the object file is a different format on different platforms. I look at GEM sometimes as Mr. Potato Head; there are different pieces inside of GEM. There is the optimizer, which is mostly target independent, but there are several code generators: one for Alpha, one for MIPS, one for 32-bit x86 for Visual Fortran. It knows how to write COFF files for Tru64 object files, it knows the Windows object format for Windows systems, it knows how to write the VMS object format. So when we build GEM for different targets, we just pick and choose the pieces we want and end up with a GEM that gets linked with the front end. The front end produces a symbol table and a generic sequence of the intermediate representation and tells GEM: here, go have fun and generate code. What that enabled is that the Fortran front end could now be the same front end for VMS Alpha as it was for MIPS Ultrix, for Tru64 Alpha, for Linux on Itanium, and for all the other targets and flavors that we had. It also allowed COBOL to be optimized for the first time; COBOL now runs through the same optimizer that Fortran does. Alpha COBOL could now do strength reduction and loop unrolling, and our Alpha BASIC compiler knows how to do pointer analysis. We had never done that before because it wasn’t worth the effort – no one was really looking for that type of performance in those languages – but you get it all for free, right? That’s the advantage of that backend.

Going to something like LLVM, we have the same advantage again. We were presented with the issue of how to keep existing VMS customers happy as they move their code forward, while also modernizing VMS: newer language standards for things like C, C++ and Fortran, and good quality code for all the different types of X86 chips, whether they come from AMD or Intel. LLVM has support for all of them, not just one or two types of chip. On Alpha there were just two or three flavors of Alpha chips, and the differences between them were pretty minor in most cases; internally some were faster, but from an outside view they behaved pretty much the same. Go look at X86, and over the years there are dozens of subsets of different instructions on different chips. Some chips prefer one sequence of instructions over another, so the optimizations almost need to know which chip they are optimizing for. Trying to keep track of that is a difficult job; it takes a lot of people. With something like LLVM, there are hundreds of people around the planet watching that stuff, constantly tweaking it and keeping track of it, so I don’t have to, right?

We said: we have all these front ends that generate the GEM intermediate language – how do we get those same front ends hooked to LLVM? We need those front ends, because we have customers out there with millions of lines of BASIC, Pascal, COBOL and Fortran, all those things; they expect our frontends, and there’s no other place to go for them. We could have made massive changes to each frontend to have it generate LLVM intermediate language directly, but why bother? They all generate a nice, pretty understandable GEM intermediate language, so we have written a GEM intermediate language converter. It takes the GEM intermediate language and generates the LLVM intermediate language. LLVM being a compiler tool, this is not really rocket science; at this level, for people in this industry, it is the same thing: the LLVM intermediate language lets you declare variables, add two of them together, and then talk about the result.

There are about 200 GEM intermediate representation node types: add, subtract, multiply, shift left, shift right, bit fetch, bit store, all the various flavors. Roughly 75% of them are one-to-one mappings onto something equivalent in LLVM – add is add, and so on – and there is not really much else you need to do. There are a few places that are a little different. GEM has some built-in knowledge about how to build VMS descriptors, which made it easy for every frontend to build them. LLVM doesn’t know about VMS descriptors, so for the one GEM intermediate node that says “build a VMS descriptor,” the converter has to generate a sequence of fifteen or twenty LLVM intermediates: pretty much fill the datatype into one byte, fill the class into the next byte, fill the length into the next word. It is relatively straightforward and very mechanical. All we had to do was go through those several hundred GEM nodes and convert them all: just brute force, pound your way through it. For the most part that gets you 90% of the way there.”
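Editor’s note: to make the one-to-one mapping John describes concrete, here is a small, purely hypothetical C++ sketch. The GemOp and GemNode names are invented for illustration (the real GEM IR and VSI’s converter are not public); the LLVM side uses the standard IRBuilder API.

    // Hypothetical stand-in for a GEM IR tuple; the real GEM node layout differs.
    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Value.h"

    enum class GemOp { Add, Sub, Mul, ShiftLeft };   // invented names

    struct GemNode {
        GemOp op;
        llvm::Value *lhs;   // operands already converted to LLVM values
        llvm::Value *rhs;
    };

    // Simple arithmetic GEM operators map one-to-one onto a single LLVM instruction.
    llvm::Value *convertNode(llvm::IRBuilder<> &B, const GemNode &N) {
        switch (N.op) {
        case GemOp::Add:       return B.CreateAdd(N.lhs, N.rhs, "gem.add");
        case GemOp::Sub:       return B.CreateSub(N.lhs, N.rhs, "gem.sub");
        case GemOp::Mul:       return B.CreateMul(N.lhs, N.rhs, "gem.mul");
        case GemOp::ShiftLeft: return B.CreateShl(N.lhs, N.rhs, "gem.shl");
        }
        return nullptr;   // nodes like "build a VMS descriptor" expand to a longer sequence
    }

A node such as the VMS descriptor builder John mentions would instead expand into a sequence of stores that fill in the descriptor fields before the result is used.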

Q: So what about the language of GEM? Is it written in C?

A: “No, GEM is a mixture. It started life as all BLISS. Over the years, it morphed into different languages. There was a switchover where all new stuff was written in C, but there is still a lot written in BLISS; of course, if you work on GEM, you have to learn how to do it in BLISS. The converter we just wrote is in C++; there’s no reason to do it in BLISS. But all of these front ends – the BASIC front end, the COBOL front end, the Pascal front end – are in BLISS. The Fortran front end is in C. Of course the C frontend is in C, and the BLISS frontend is in BLISS. So, being in our group, you still need to know how to read and manipulate BLISS.”

Q: So LLVM – is it written in C?

A: “Yes – well, 99% is in C++, and it uses extensive C++ features. There are a few lower-level utility routines that are straightforward C99 C. Its interface for building the LLVM IR is very object oriented; you do have to go and learn to read classes and inheritance and all those things, so a good working knowledge of C++ helps when reading the LLVM sources. Whether it is C++, Swift, Rust or other things in the industry on Linux platforms, the LLVM IR has already been shown to work for a lot of different languages.

I downloaded the LLVM source code and built it on OpenVMS Itanium to generate code for X86; that is what we are using for our cross compilers. There are hundreds of files in LLVM that we compile, and we had to make a few changes for VMS – I think right now it is under 500 lines across the entire code base. We’ve made a couple of extensions. For example, in the Unix world a routine doesn’t get an argument count; in the VMS world, every routine expects an argument count to be passed in, so we’ve added some support to make sure every routine call has an argument count. That had to be done inside the code generator, not in the converter; the converter is too high level for that. There are also some things we have to add for debug information; some of the debug information for languages like BASIC, Pascal, COBOL and BLISS still isn’t in the current DWARF standard.

So we have added some extensions to make sure the additional types our debugger expects get produced. It is all pretty mechanical and high level. We haven’t made any fundamental changes to LLVM, because we want to pick up new versions of LLVM going forward. I don’t want to make the mistake of snapshotting LLVM today and sticking with it for the next ten years; it would be out of date in six months. LLVM churns out new versions quite often, adding optimizations, support for new chipsets, and code mitigations for security issues like Spectre and Meltdown. So staying current with that code generator technology is very important.”

Editor’s note: This seems to be a good place to stop the first blog article. Coming up in the next blog of our interview: history of LLVM and futures.  

How can you extend the value of your company’s abandonware?

Companies that depend on applications built on technology that has been abandoned by the vendor face a serious problem that needs to be addressed. As we discussed in last year’s blog, many companies are already struggling just to keep the developer skills necessary to keep these kinds of applications going. In that blog posting, we used this picture to describe the progression of risk and return on investment (though it may not fully convey the risk or problems companies face):

In some cases, the problem may be more acute, since some of these legacy applications may be built on obsolete third-party technology that has been abandoned by the vendor for one reason or another. This is what is known as “abandonware”. The vendor’s first step in this process is usually the publication of a support matrix indicating the number of years the vendor’s software will be supported. In this way, the vendor can announce the successor product and the “extended support” fees that kick in when the software reaches end of life.

Many companies see this notification as a decision-making tipping point for keeping or unloading the application that uses the abandonware. Assuming a readily available replacement is not present, does the company continue with the existing application and increase its budget for support, or does it drop vendor support entirely and support the application internally? This depends on many factors, including the importance of the application, the reliability of the third-party software, current hardware and software upgrade issues, and compatibility with other parts of the enterprise.

Usually the vendor gives its clients a few years’ notice, but in some cases the end can be quite abrupt, as with a bankruptcy or acquisition. In either case, you can assume the vendor is preparing to unload or retrain its technical and support personnel rather quickly in anticipation of the end-of-life date. This will likely mean problem resolution and bug fixes for the vendor software will be delayed or curtailed. If adoption of a new system is cost prohibitive, there are options to continue in the short term with the existing system. While freezing application development is one option, maintaining continuity for the application can be accomplished with a balanced approach of services and careful maintenance. Depending on the application, abandonware can be sustained in the long term if you follow some helpful guidelines for legacy application support:

  1. Extended support – if extended support for the abandonware is available at a reasonable cost, this is the best and lowest-risk option. One caveat: you must make sure the vendor still retains enough expertise to support the product and fix any potential problems; otherwise the support is not worth the additional cost.
  2. Alternative support – if the vendor offers no extended support or is not able to provide adequate support, third-party support companies like eCube Systems, with expertise in these legacy abandonware products, can provide support until the replacement system is ready.
  3. Enterprise Evolution/Legacy Modernization – employing an intelligent, phased approach to analyzing, replacing and modernizing the application component by component is a viable alternative that removes the dependency on the abandoned software, and it should begin as soon as possible.

By extending support on the existing system, you eliminate the immediate impact of change on your application; and by implementing a phased plan to modernize the application, you retain the continuity of the application while eliminating the dependency on abandonware over time. This will extend the return on investment while minimizing the risk, so it won’t adversely affect your users.

How do you maintain your company’s programming skills on legacy systems?

The future of many legacy applications is frequently determined by one or two decisive events that happen during the lifetime of a critical application, and these events are usually not related to the performance of the software itself. Normally, the software lifecycle tends to conform to two bell curves relating return on investment (ROI) and risk (of failure): as a significant amount of time passes, both the return on investment and the associated risk rise. This is displayed in the graphic below:

Application Return on Investment and Risk over time

More often than not, the biggest issue is the risk associated with keeping the skills needed to keep the application healthy. Often an event occurs related to the software maintenance or management staff that causes problems for the future of the application. As an integrator with many years of experience, we have seen this dynamic play out many times before: a company’s critical application that was developed and maintained very successfully for many years suddenly reaches an impasse about its future. It can usually be traced back to one of two events:

1. A key developer or engineer with in-depth knowledge of the application retires or leaves the company. Suddenly, the company is unprepared to handle future maintenance of the application.

2. There is a radical management change due to promotion, M&A or retirement. New management comes in and decides that the old system has to go.

As a result of either of these scenarios, management decides to re-evaluate the worthiness of keeping the application and cites rising costs as the driving factor. While cost can be used to drive management’s agenda to replace the application, it is usually the risk of losing technical expertise that seals an application’s fate. Replacing the skills of a long-time developer or maintenance analyst is difficult even if the application is well documented and maintained. If you add the requirement of a legacy language like COBOL, Fortran, C, Pascal or Basic, the task becomes very challenging, since these programming skills are scarce and, with very few exceptions, are not taught in universities or colleges. Add to that the liability of using a legacy platform, and you have a good case for application replacement. In addition to the risk, the biggest problem is cost, which is sometimes ten to twenty times the cost of yearly maintenance. Totally replacing an application that is working smoothly but has a foreseeable maintenance issue in the future is very hard to justify to upper management; they are averse to spending money on an application that hasn’t cost them a lot over the years.

So you are stuck maintaining an application with little or no programming expertise in the platform or the language the application is written in. You have the departing analyst’s salary in your budget, and if you can find a consultant with the expertise you can hire them part time, but even that is a temporary fix. So what do you do?

If a programming course for the language does not exist, you can try to get your consultant to teach your new developer the basics of the application language. How do you get the newcomer trained as quickly as possible? Here is a solution: use the Eclipse-based NXTware Remote for COBOL, Fortran, C, Pascal and Basic development. NXTware Remote provides the ability to develop in legacy languages with intelligent editors on Windows, Mac or Linux in a distributed development environment. Specifically, the NXTware platform contains an engine that runs on the legacy system and communicates with the desktop IDE, interpreting commands to manage the source files and to compile, debug and execute the binaries (executables) on the legacy platform remotely. The process of learning a new language becomes simpler because the newcomer is using an environment they already know from Java developer classes. The interface breaks down the barrier to learning and provides an environment with which the developer is already comfortable. The new developer learns to program in the legacy language remotely, without having to learn the legacy job control language or any of the infrastructure of the mainframe.

By modernizing with NXTware Remote you extend the ROI

In this way, a neophyte can learn the structure of a legacy programming language without worrying about the mainframe infrastructure necessary to compile and test the code. Using this distributed development environment can jump-start your legacy application’s entry into newer development technologies.

Reasons why outsourcing application maintenance and development doesn’t save you money

In response to the rising costs of maintaining their source code, many companies are making a move to outsource the development and maintenance of their legacy applications. In a climate where managers have to do more with less and find ways to save money in the IT budget, outsourcing can look like a quick fix for a lack of legacy IT management skills. It looks like a good cost-saving move initially, since eliminating the cost of salaried developers, along with their benefits and the office space required to house them, yields substantial savings. However, with a little research into the application itself, you can find similar cost reductions and address the substantial risks that may not be accounted for in outsourcing. Here is the question you need to ask before making this decision:

Are the high costs of application maintenance and development due to inadequate documentation, poorly maintained code or poorly designed code?

Poor maintenance of even the best-written code can drive up maintenance costs through the ripple effect: a change in one section of code breaks other sections of code.

Another cause of high costs may be poor documentation of the design or operation of the application. A lack of understanding of how it was designed can lead maintenance programmers to hack in quick fixes rather than try to understand the code.

If the code is poorly designed and not modular, then maintenance costs will soar whenever changes are required, because the code was not designed to handle them; sections of code may need refactoring, or code may need to be duplicated to introduce new features or fix existing ones.

Can addressing these three areas significantly lower your maintenance costs? It should. In any event, all of these potential problems should be resolved before you take the step of outsourcing application maintenance; this will make changes easier to integrate and ensure the existing system is properly documented and running within its current specs. The maintenance problem is only going to get worse if a disinterested party takes charge. If you don’t think there is a maintenance problem, here are some questions you should also consider before outsourcing:

Does the outsourcing company provide the same service to your competitors?

  • Is it possible that your provider is outsourcing support of similar applications for your competitors? The knowledge they acquire from maintaining your application might provide insight into refactoring a similar section of code for a competitor’s application.

How will your source code be maintained once it is out of your hands?

  • Does the outsourcing company provide security for your code, while it is in their possession, comparable to what your company would employ? What about proper version control? How much training do their staff have in programming? Do they do code reviews? How many people have access to the code?

What is the quality of the workmanship provided by the outsourcing company?

  • A modular design and the intelligent construction of the application are the most important factors in application maintenance cost. The question becomes whether or not your outsourcing company will use quality developers to maintain your code. Most importantly, their lack of vested interest in the application’s success carries even more risk. Having knowledgeable, expensive in-house developers may be a bargain if you consider what damage a neophyte can do to stable, running code. A lack of familiarity with the code and/or multi-tasked developers might result in hastily installed or poorly designed patches to the application, increasing maintenance costs and making the code harder to read or modify in the future. Outsourcing companies tend to have a high turnover rate, and this correspondingly increases the risk of failure.

Outsourcing stifles innovation

The biggest loss to a company from outsourcing is the loss of innovation. As many computer systems managers will attest, innovation is often a by-product of maintaining a system: seeing problems that arise and resolving them. The process of discovering, investigating and resolving a problem often leads to a better understanding of its cause and to new technology that circumvents or prevents it, making the process more efficient. Assuming a comparable skill set between the in-house developer and the outsourced developer, both might run across the same opportunity to improve the application. While your developer might inform management or simply implement the improvement to see what effect it has, the outsourced developer has no incentive to do so. For the outsourced developer, improving the quality of the code is not usually included in the terms of the outsourcing contract. The outsourcing company’s goal is usually tied to closing as many problems as they can, as soon as possible, which in some cases leads to improper handling of bugs or problems. Improving the quality of the code or increasing performance will reduce the number of bugs and lessen the maintenance requirements on the application, which might possibly reduce the scope (and monetary compensation) of the outsourcing contract. Depending on the outsourcing agreement, that might lead to a reduction in the cost of the service, and most outsourcing companies are not interested in that.

Why is Legacy Software Maintenance so difficult to manage?

Many managers have trouble managing legacy software systems, and this article will discuss some of the reasons why. Almost every company that has been using computers for any length of time has developed some in-house software that meets a very specific need that cannot be met by COTS packages. Any time you deploy a software system into production, it becomes a legacy system by definition, and for good or bad it becomes part of the company’s portfolio. Some of these systems become very successful and enjoy a long deployment cycle.

Almost immediately after a new system is deployed, however, the most skilled developers on the project leave to work on other projects or find bigger challenges with other companies. Unless the system design was well documented, most of the knowledge of the application leaves at that point. As the application enters maintenance mode, fewer and fewer developers who knew the design of the original system remain. Over time, the application evolves from its original version, usually in response to business and technology needs, and it continues to provide greater value for the company. These are the kinds of applications that companies decide to keep.

Eventually, the application begins to show signs of age and the technology upon which it was based becomes obsolete and incapable of supporting the new business requirements. Patching the code for new features is no longer a viable alternative. Your legacy application has evolved to a certain point, but now the changes needed exceed the constraints of its design and it has become brittle and resistant to further change. This is a normal evolutionary process and something that every organization has to respond to.

Unfortunately, as your application has evolved, the skills of the developers that maintain it haven’t always kept up with the latest technology. With more and more new features being provided by open source, third party software and new technology, managers find themselves looking for outside help to bridge the technology skills gap. They run into problems fitting these developers into their legacy environment because of a mismatch of skills. Newer developers usually lack the background in legacy systems required to understand the design of the application and there are very few places they can pick this knowledge up. The older developers who have been maintaining the old system often don’t investigate the new technology since it is not a part of their job description.

Along with the maturation of the application, the company’s perception of the importance of the developer diminishes, so the resources to keep these skills are also reduced. Pretty soon the expertise needed for legacy development has evaporated, and the knowledge of the history of the application is reduced to one or two maintenance programmers. Their strength lies in understanding the history of the application, not necessarily its inherent design. The design knowledge is gone, but any redesign will require that same type of skill (usually a mismatch with the skill set of the remaining developers). In order to extend the design of your legacy system, you need a highly skilled developer, and to attract that type of developer you will need a modern environment for them to develop in.

This is the problem many companies with legacy systems face: the business logic that legacy systems perform, and the value they provide to the company, needs to be retained and extended, because the cost of any other option (i.e. a re-write) is too risky and cost prohibitive. Many CIOs have faced this problem and have analyzed and assessed their inherited legacy systems, only to defer any changes due to the risk. That risk is now becoming greater with the passage of time, as the technology gap widens and the expertise currently maintaining the legacy system gets close to retirement age. The talent pool of legacy developers is shrinking every year, and it is not being replenished with college graduates willing to learn this old technology.

While newer technology can replace much of the functionality of legacy systems, the business logic they perform cannot be easily replaced. So the problem is finding the right mix of developers and technology tools to implement a new design through integration. What is the best way to achieve this goal? Perhaps there is a solution already out there – check out this video:

https://youtu.be/Vsj3BiozDfY

 

What are the DevOps Benefits of Application Orchestration?

From our previous discussions, you can see there are many tangible benefits to using Application Orchestration (AO) for DevOps. Not only do you leverage the monitoring benefits of a GUI, but you also create the intelligence you need to quickly deploy and run new applications. The most prominent benefit is for the operations staff, because they now have a tool that shows them the current health of the applications and lets them manage problems without needing the developer’s help. The DevOps engineer gains the ability to migrate the exact infrastructure intelligence from one system to another, and the developer is freed from babysitting the application for the first few months while operations learns how it works and how to resolve problems. The intelligence created in AO to pre-define and orchestrate the interactions of the discrete components that make up complex applications provides the basis for these benefits. As many DevOps engineers know, modern enterprise applications often contain interdependencies with other applications and components, and are sometimes part of a Hybrid IT solution, which can make them difficult to manage. As many DevOps engineers can also attest, just restarting a process that has died does not necessarily resolve the problem. Some complex interactions between these components can be difficult to isolate, and that can slow down or even stop the effective deployment of a new application. Since many of these interactions cannot be “discovered” and are not obvious to operations, they are a significant barrier to successful DevOps operation for Continuous Delivery.

Abstracting Complexity with AO

If we consider the overall goal of DevOps, we want a tool that delivers new software versions quickly and effectively, which means the new software needs to be deployed, started and tested as quickly as possible. Application Orchestration is a solution that can simplify the deployment and management process by abstracting infrastructure complexity with easy-to-use, GUI-enabled dependency tools that create relationships between components. Component status then automatically determines which modules are affected. With an effective display, application status becomes more apparent and precise, making error detection much easier. Similarly, corrective action needs to be automated based on the infrastructure, or surfaced in the GUI so the operator can effect a rapid response. To achieve this level of infrastructure intelligence, a number of features are needed for Application Orchestration, including intelligent display and control, scalability and portability.

Visualization

Visualization of the application is important for understanding everything else in Application Orchestration. If the true status of an application is to be understood, its infrastructure and relationships need to be represented in an intelligent display with readily available controls for management. By providing a navigation frame which itemizes each component and provides a drill down into each, a multi-tabbed display affords the DevOps engineer a wealth of information with a click of the mouse.

Scalability

Perhaps the biggest challenge for operations is scaling an application up from development to production usage. AO needs to provide the infrastructure to readily modify the throughput capability of the application. Creating objects, each with its own start/stop/dependency properties, enables AO to replicate each component in the configuration: i.e. copy and paste the objects inside the configuration to enable scalability.

Portability

The portability of this intelligent infrastructure is a necessity for DevOps, speeding up the deployment process and preserving the intelligence needed to manage it. The knowledge required to manage the application has to be consolidated into a configuration file which can be exported to other systems.

Given the features we have identified in this discussion, we think we have presented a good case that Application Orchestration will have a positive impact on how applications are deployed and on the ease with which they can be managed on enterprise systems. For more information on this capability, check out our further discussion of Application Orchestration features in our next blog. For a free evaluation of an Application Orchestration product from eCube Systems, click on this link:

http://www.ecubesystems.com/products/NXTmonitor.html

What is Application Orchestration and why does it help DevOps?

Developers in a DevOps environment are always looking for vendor solutions to help them hand off the management of their deployed applications. One type of software particularly suited to providing this functionality is known as Application Performance Management (APM). APMs can not only start and stop processes in a timely manner, but also ensure processes are restarted when they fail. While most APMs manage only newer languages on contemporary platforms like Windows and Linux, these script-based solutions do provide a valuable infrastructure to help companies manage and deploy new applications faster. Many DevOps vendors provide this type of solution entirely with their own proprietary scripts, which require a significant investment of time to get right. These scripts perform a variety of detailed functions, like setting up the file system structure, copying files and creating VMs to host the new application. While much of this is very useful, it does not solve the biggest DevOps problem: fixing application deployment runtime issues. That is where APMs come in, and specifically a hybrid of the Application Performance Management model known as Application Orchestration, which augments APM functionality with a unique approach:

  • Capture the developer knowledge with rules in a configuration file
  • Provide the GUI controls needed to make DevOps much easier
  • Visualize applications with easy-to-understand displays
  • Enable scalability of applications by operations with performance groups

This paper will outline these enhancements to the APM model and show how AO helps the DevOps engineer understand and modify the application infrastructure, which is essential to managing enterprise applications where problems can be complex and a timely response is critical. Commercial DevOps tools like Puppet Labs attempt to solve this issue with scripts that provide the ability to set up and deploy an application to the enterprise, but they fail to manage applications effectively. This is because they are script based, and those scripts are inherently:

  •    Developer written and maintained
  •    Statically written, but likely to require frequent changes
  •    Designed to interpret the effects of problems through constant polling, and
  •    Required to execute complex logic to determine the appropriate remediation actions

Using the Application Orchestration approach, the application infrastructure intelligence is captured in a configuration file, so the retry logic, timing, dependencies and relationships within the application can be used to resolve issues. In many cases this knowledge can predict where problems will occur and resolve the cause of a problem before the application user sees the failure. By managing the application through a configuration file that captures the infrastructure, the intelligence is built into each module, which makes it easier to replicate for scalability and facilitates its display in a GUI. Component dependencies are captured for each module, so the effects of problems are automatically propagated to each affected component, making it easier to display accurate status and predict problems before they surface to the application users. The GUI that displays the configuration also gives the DevOps engineer confidence that the application is running smoothly and that they will be notified proactively should potential issues occur. This approach differs from depending on scripts that look at the effect of a problem and then execute copious logic to find where it originated so they can determine how to fix it.
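As a purely illustrative sketch (the structures and names below are hypothetical, not NXTmonitor’s actual configuration format or code), the following C++ fragment shows the core idea behind this approach: each component declares its dependencies once, and a correct start order is derived from those declarations instead of being hand-coded in scripts.

    #include <iostream>
    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    // Hypothetical component description, as it might be captured in a configuration file.
    struct Component {
        std::string name;
        std::vector<std::string> dependsOn;   // components that must be running first
    };

    // Derive a start order from the declared dependencies (simple depth-first walk;
    // a real orchestrator would also handle cycles, retries and timing).
    void startInOrder(const std::map<std::string, Component> &config,
                      const std::string &name,
                      std::set<std::string> &started) {
        if (started.count(name)) return;                 // already started
        for (const auto &dep : config.at(name).dependsOn)
            startInOrder(config, dep, started);          // start dependencies first
        std::cout << "starting " << name << "\n";        // placeholder for a real launch
        started.insert(name);
    }

    int main() {
        std::map<std::string, Component> config = {
            {"database",  {"database",  {}}},
            {"appserver", {"appserver", {"database"}}},
            {"webfront",  {"webfront",  {"appserver"}}},
        };
        std::set<std::string> started;
        for (const auto &entry : config)
            startInOrder(config, entry.first, started);
    }

Because the dependency knowledge lives in data rather than in script logic, the same configuration can be replicated or exported to another system, which is the replication-for-scalability benefit described above.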

In our next blog, we will itemize the features and the benefits Application Orchestration can provide for DevOps.

 

 

How to speed up deployment with Hybrid Application Performance Managers

In our last blog entry, we discussed the hidden advantages of using a distributed IDE for enterprise development, but that is just the tip of the iceberg. To fully realize a Continuous Delivery (CD) environment for legacy systems, you need to be able to deploy new applications just as quickly as you develop them. Moreover, you need the ability to seamlessly transfer a running application from development to production without extensive delays. We can all agree that it doesn’t make sense to improve the processes around the development of new versions of your application if it takes months to test, QA, and deploy the application to your users. There are a number of obstacles that can cause delays in deployment, but let’s focus on the ones within your control: ones you can readily improve or avoid if they aren’t integral to the Continuous Delivery process. To limit the scope of this discussion, let’s assume your company has a robust testing environment and an adequate testing strategy. Let’s also say the following assumptions hold:

Assumptions:

  1. The development process makes significant changes to the infrastructure that are not all documented.
  2. The systems to which the new applications are deployed are not similar to the development environment.
  3. The documentation of the application infrastructure does not provide enough information to resolve issues.

The Common Challenge:

Given all these assumptions, how do you make the deployment process for a new application as easy and streamlined as possible? How do you avoid environmental and load issues when deploying a new application?

 

Overview:

If you believe that the majority of problems in deploying a new application result from resolving differences between development and production or QA, then the solution we are about to discuss will be of interest to you. Our proposal for speeding up the deployment process for new applications is to capture not only the development environment, but the developer’s knowledge of the application as well. By that we mean: capture the developer’s knowledge of dependencies and subtle timing changes in the new application. Often newly integrated modules require more time to start or depend on configuration subtleties. By capturing this information you can reduce potential errors and ensure that testing and operations can concentrate on their goals instead of trying to debug the infrastructure. In the traditional model, where the developer starts and stops the application with scripts, the majority of changes involved in moving from development to operations are centered in those scripts. Maintaining these scripts is the single most difficult task in migrating the application; migrating the application’s components and structure to the target system is trivial in comparison to modifying the scripts for the new system. Given the complexity of some applications, this is the place where changes are typically needed for a variety of reasons, and the developer is usually the only one who knows what the issue is. What if there were a better way to capture this infrastructure so that it is portable, more adaptable to new systems, and easier to modify?

On more modern systems like CentOS and RHEL (and other Linux variants), and with Java applications, Docker is an excellent example of how an exact image of the OS can be created and the application captured in containers. A set of docker run commands with bash provides the install and run commands necessary to start individual apps. Docker also has restart policies to restart a container automatically when Docker restarts or when a container exits. More complex applications can be started with a bootstrap, also run in Docker. Even with these systems, the scripts sometimes need changes. Scripts of this type are often complex, with individual commands that usually require a programmer or systems engineer to modify and maintain. While this type of model might work well with Java, other languages like 3GLs present problems, since they are not container based. One promising solution can be found in leveraging an Application Performance Management (APM) hybrid known as Application Orchestration. This APM hybrid is designed specifically for managing the application infrastructure – application startup and shutdown, using the developer’s knowledge of dependencies and starting order – all captured in a configuration that can be replicated. Application Orchestration is best described as a variant of the Application Performance Management software model, since it has no classification of its own. The APM model specifies several criteria useful for managing multiple processes in an application.

The Application Performance Management approach

Application Performance Management tools are a classification of software that facilitates monitoring applications. There are a few commercial tools that provide some, but not all, of the features that APMs are intended to provide. In this discussion, we are looking at the provision of key services that give a DevOps environment significant advantages in ease of use and speed of deployment. While commercial APMs are focused on monitoring software to ensure performance, we are targeting features that have some applicability to solving DevOps problems, and specifically to speeding up the move to production. If we look closely at these features, we can see how they provide the base services needed for the application orchestration that is essential to DevOps systems:

  1. End User Experience – in the world of application performance, this feature is probably the most important. In the DevOps world, we would argue that our end users are the development and operations staff. The APM active-monitoring feature would be a great benefit after the deployment and would most probably provide the foundation for defining and documenting the variables in play during deployment. The EUE feature not only benefits the application end user once the deployment is complete, it also improves the experience of the operations staff and developers who need DevOps tools during the deployment process.
  2. Runtime application architecture – this feature is essential to managing application performance and greatly benefits the DevOps intelligent infrastructure because it is the best way to capture the components and put them in a configuration that can be managed. However, we believe that managing an application architecture is best done when you can visualize it through a GUI console rather than through scripts. A GUI can visualize the underlying application infrastructure by exposing each component, its sub-components, and their dependencies. The key feature here is Application Dependency and Discovery, which is useful in managing the application and assisting the orchestration of the application during startup.
  3. Business transaction – this is a more obvious feature when it comes to describing tools for monitoring and performance purposes. In a DevOps tool, the APM business-transactions feature is important because it reflects the application’s overall health across the various discrete components that perform the work. This information needs to be captured so that it can be monitored actively with health scripts, and actions can be taken in the event of failures or degraded performance. Important application-dependent transactions need to be identified so that test cases or health scripts can be used to validate them (a minimal health-script sketch follows this list).
  4. Deep dive component monitoring – another key feature that is quite important to the DevOps world, because DevOps relies on an intelligent infrastructure that can react to external forces and can detect and fix problems with the application. To accomplish this level of automated management, a deep dive into each component and its dependencies is needed to determine the scope of the modules affected and handle any contingencies. It follows that the ability to capture this infrastructure intelligence makes replicating the environment for operations even simpler.
  5. Analytics and reporting – while APMs rely on analytics and reporting to provide performance data, this is a secondary feature for DevOps: the application orchestration that DevOps requires does not depend on this level of metrics to be successful. Analytics used in conjunction with application dependencies, however, are useful in determining application health and in solving system-dependent resource issues that may affect the application. In the DevOps world, the goal is to get the application functioning and running without issues. Analyzing resources is good for spotting trends and making adjustments for the application, but you often have application dependency issues that affect application performance. If you want to focus on the internals, application log files are the primary source of reporting on application health for DevOps.
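To make the health-script idea in item 3 concrete, here is a minimal sketch (the URL, log path, and process name are hypothetical placeholders, not taken from any particular product): it checks that a key transaction endpoint answers, that a worker process is alive, and that the log is not filling with errors, and it exits non-zero so a monitoring tool can trigger an action.

    #!/bin/sh
    # Hypothetical health script: exit 0 if healthy, non-zero otherwise.

    # 1. Validate a key business transaction end to end.
    curl -fsS --max-time 5 http://localhost:8080/orders/health > /dev/null || exit 1

    # 2. Make sure the worker process is still running.
    pgrep -f order-worker > /dev/null || exit 2

    # 3. Flag degraded health if the log shows recent errors.
    tail -n 200 /var/log/orders/app.log | grep -q "ERROR" && exit 3

    exit 0

A monitoring agent can run a script like this on a timer and restart the component or notify operations whenever it returns non-zero.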

If an APM addresses each of these needs, it would probably be an effective DevOps tool. However, one very important issue is left unresolved: how does the knowledge of the application infrastructure get transferred to operations for a new environment? Is the documentation provided by development sufficient? Are you expecting the DevOps engineers to understand the intricacies of a developer’s scripts, or do you expect your developer to be assigned to operations for an indefinite period of time to shepherd them through the process? If you are looking to incorporate commercial APM software into this process, then you should look at maximizing the impact of each of the APM features in speeding up Continuous Delivery. Do your homework, figure out what you need, and incorporate those requirements into a hybrid solution for CD. Most likely you will come up with a set of requirements that calls for a specific solution: Application Orchestration. Stay tuned for our next post, which explores this idea.

 

Intelligent Infrastructure for DevOps: Using a GUI to manage and monitor instead of scripts

Application Performance Management tools have been around in one form or another for many years, mostly operating anonymously and unnoticed. While most of these tools were home grown, a few vendors supplied generic APM tools. These tools addressed application infrastructure needs in the background and were not considered a strategic part of IT; they kept applications running with a series of scripts developed by the architects of the applications. Since this software is neither development nor operations specific, it was often not claimed by either department. Until DevOps became a buzzword, these tools operated under overhead accounts or were assigned to different departments, regardless of the job they did. Since Agile development has become mainstream, however, the need for an intelligent infrastructure and the unification of Development and Operations (DevOps) has refocused attention on APM tools. An agile infrastructure is now a necessity: newly developed applications must be deployed as rapidly and accurately as the Agile development tools that implemented the new business logic can deliver them.

For years companies developed their own script-based infrastructure, and due to internal changes it has drifted into obsolescence. This happened for a variety of reasons, but mainly because the scripts were written for one purpose and, once the author departed, no one knew exactly how they worked – so they fell into disrepair. Out of this need, many companies have turned to DevOps companies such as Chef and Puppetlabs, among others, that have developed command line, template, or script based tools to provide this intelligent infrastructure, primarily focused on the Linux platform. Some of these scripts are Ruby based, but in most cases the scripts and templates use a proprietary language. These proprietary scripting languages can perform many different intelligent functions to handle deploying new applications and even creating new systems to deploy them to. The advantage of this approach is that the scripting language is powerful and simple to learn, making intelligent infrastructure easy to implement and deploy.

The problem with using scripts is that they are:

a. written to handle a specific action, requiring constant change whenever the infrastructure changes.

b. written for specific platforms, so one change has to be propagated to the others.

c. dependent on knowledge of the infrastructure.

d. dependent on knowledge of the scripting language.

More complex scripts become difficult to read and understand beyond a few lines of trivial operations, which makes them hard for anyone other than the author to maintain. Unlike the goal of DevOps, which is to unify Development and Operations as a team with one goal, scripting languages require an analyst to maintain the scripts so that new development can be seamlessly managed or deployed. Operations typically does not have this knowledge, and it can be difficult for them to develop this skill (not being programmers as such). To facilitate a DevOps environment, the complexity of the scripts required to implement this infrastructure intelligence needs to be abstracted into simple actions that operations can use. This is usually accomplished with a Graphical User Interface.

Historically, Operations’ goal has been to keep everything in the system running smoothly, and applications are an unknown area: they perform business logic on data, consume resources indiscriminately, and typically don’t handle system problems very well. So operations rarely understands the infrastructure of each application; they simply monitor it from the system-resources side to see if there are any problems. Using scripts to automate this function is helpful, but the tools to display the results are arcane. Operations staff are more familiar with desktop tools that have menus and displays because they are easy to use, easy to understand, and easy to modify. They are familiar with how the system provides resources to applications (CPU, memory, disk, and network speed), and these are things they can inspect with commands. However, for DevOps to be successful, command line scripts and knowledge of the application infrastructure should not be required to manage applications. What is needed is a tool that manages the infrastructure from the application side and notifies operations of any potential problems before they impact the overall system.

Application management and monitoring tools have been around for many years, and a few of these existing tools can be adapted to meet this need. One such tool is a third-generation application management tool called NXTmonitor. NXTmonitor’s heritage comes from the Open Environment and Borland tools known as Netminder, Appminder and AppCenter. These tools evolved from simple monitors into a three-tier architecture (presentation, business and database layers) that manages applications with a rules-based configuration file specifying each application’s environment. By leveraging middleware to facilitate multi-tier communication between agents and master servers, these tools were built on a fast, scalable infrastructure that enabled UNIX and Windows applications to be managed by an independent middle tier through a GUI presentation layer. The best description of this tool is Application Orchestration: it enables the definition and maintenance of complex applications, with powerful monitoring capability and definitions of interdependencies across multiple platforms. This allows a complex structure to be captured in an intelligent, XML-based configuration file that is built with the help of a Graphical User Interface (GUI). By recording the structure required to start the applications across multiple nodes in a configuration file, NXTmonitor can replicate the development environment exactly and even scale it up for production use. The use of environment variables lets NXTmonitor assign the variable parts of the infrastructure to values that can be changed, making the tool easy to use for defining and replicating a complex environment; when the target environment changes, only the values of those variables need to change.
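As a rough illustration of the environment-variable idea (this is a generic shell sketch of the concept, not NXTmonitor’s actual XML configuration format, and the variable names and paths are hypothetical), the same start command can be reused unchanged across environments by sourcing a small per-environment file:

    # dev.env: hypothetical development settings (a prod.env would define the
    # same variable names with production values, e.g. DB_HOST=db01.internal)
    export APP_HOME=/home/dev/orders
    export DB_HOST=localhost
    export WORKER_COUNT=2

    # start.sh: the start command never changes between environments;
    # only the environment file it sources does.
    . "./${DEPLOY_ENV:-dev}.env"
    exec "$APP_HOME/bin/order-server" --db "$DB_HOST" --workers "$WORKER_COUNT"

This is the same kind of substitution the post describes NXTmonitor capturing in its configuration: operations changes the values, not the startup logic.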

NXTmonitor is the next evolution of DevOps infrastructure tools – managing the deployment of a multi-tier application with a GUI console that can define the intelligent infrastructure needed for agile development. Unlike other DevOps tools, NXTmonitor provides a full-featured APM that supports application orchestration across multiple platforms such as IBM i-series, z-series, OpenVMS, Windows, and UNIX in addition to Linux. Application Orchestration is a relatively new term: it is the ability to start a complex application consisting of multiple, discrete, interdependent components in an orderly way so that the application is ready to process information. In essence, Application Orchestration captures the developer’s knowledge of the infrastructure in a configuration file that can be used as a playbook for starting up applications. It is intended to be displayed and monitored by operations staff, with mouse actions that allow them to easily inspect or modify the configuration for scaling or performance purposes. Action scripts and timers enable custom execution of actions designed to resolve runtime problems and increase performance. Performance groups make scalability simple and make it easy to modify capacity, failover, and service levels.
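To show the kind of dependency knowledge such a playbook captures (the component names, ports, and commands below are hypothetical, and this is a plain-shell sketch of the idea rather than NXTmonitor’s configuration format), a developer’s startup logic typically encodes an order and a readiness check between tiers:

    #!/bin/sh
    # Hypothetical startup order for a three-tier application.

    # 1. Start the database tier first.
    "$APP_HOME/bin/start-database" &

    # 2. Wait until the database is actually accepting connections.
    until nc -z localhost 5432; do
        sleep 2
    done

    # 3. The business tier depends on the database.
    "$APP_HOME/bin/order-server" &

    # 4. Background workers depend on the business tier being reachable.
    until nc -z localhost 8080; do
        sleep 2
    done
    "$APP_HOME/bin/order-worker" &

An orchestration configuration records exactly this ordering and these readiness conditions declaratively, so operations can run and monitor it from the console instead of reading the script.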

One of the key benefits of NXTmonitor is that the NXTmonitor Console GUI is designed to look the same across all platforms and perform in the same manner regardless of the desktop software or the nodes on which the applications are being managed. Once you understand how to manage applications on Linux, you can perform the same operations on Windows or OpenVMS, for instance. With this in mind, the Console is designed for maximum efficiency and understanding, and includes these features:

  • Multiple-node login capability from one console.
  • Shortcut buttons at the top for quick reference and use.
  • Navigation panel to display the nodes and components being managed.
  • Content panel providing complete application configuration and status information.
  • Audit function providing a history of process restarts, failures, starts, and stops.
  • System statistics for each node, including CPU, a memory bar chart, devices, and errors.
  • Operator access to the process table now includes the ability to stop or kill an individual process in case of deadlocks or runaways.
  • Performance groups can now have actions associated with state changes, so that changes in status can be propagated to operations or development.
  • SNMP support for other tools such as HP OpenView and BMC Patrol.

Benefits

The benefits of using a Graphical User Interface (GUI) to monitor and maintain your infrastructure instead of writing scripts are as follows:

  • All application sub-components are easily displayed in a console window.
  • Infrastructure rules are easy to change in a GUI.
  • The Console GUI displays all application statuses in one easy-to-understand display.
  • GUIs don’t require operations to learn a scripting language.
  • A GUI console can provide management, monitoring, and configuration tools all with the click of a mouse.
  • Interaction focuses on application performance and application monitoring.
  • A mobile device can display the GUI, making management more dynamic.
  • Simple, proactive health scripts prevent outages and increase performance.
  • Mouse-click operations speed up scalability functions with copy and paste.
  • Configuration files capture the application infrastructure for easy deployment to additional machines.

 

Summary

The difference in DevOps tools today is usability. Do you want to manage scripts for infrastructure intelligence, or would you rather use a Graphical User Interface? Is it easier to teach operations how to maintain scripts or how to use an Operations console? Is it faster to capture the developer’s knowledge in a script or in a configuration file read by a GUI console?

NXTmonitor, whose application performance management capabilities have evolved over decades, has advanced the creation of this intelligence with a rich GUI console, making it easier to define, modify, and monitor application configurations in Development, Test, QA, and Production. Once the developer’s knowledge is captured in an XML configuration file, you can monitor and maintain it easily in the NXTmonitor Console.

For more information, go to: http://www.ecubesystems.com/nxtmonitor.html