Reasons why outsourcing application maintenance and development doesn't save you money

In response to the rising costs of maintaining their source code, many companies are moving to outsource the development and maintenance of their legacy applications. In a climate where managers must do more with less and find ways to trim the IT budget, outsourcing can look like a quick fix for a lack of legacy IT management skills. It appears to be a good cost-saving move initially, since eliminating the salaries, benefits and office space of in-house developers represents a substantial saving. However, with a little research into the application itself, you can often find similar reductions in cost while avoiding the substantial risks that outsourcing may not account for. Here is the question you need to ask before making this decision:

Are the high costs of application maintenance and development due to inadequate documentation, poorly maintained code or poorly designed code?

Poor maintenance of even the best-written code can drive up maintenance costs through the ripple effect: a change in one section of code breaks other sections of code.

Another cause of high costs may be poor documentation of the design or operation of the application. A lack of understanding of how it was designed can lead maintenance programmers to hack in quick fixes rather than try to understand the code.

If the code is poorly designed and not modular, maintenance costs will soar whenever changes are required because the code was not designed to handle them; sections of code may need refactoring, or code may have to be duplicated to introduce new features or fix existing ones.

Can addressing these three areas significantly lower your maintenance costs? It should. In any event, all of these potential problems should be resolved before you take the step of outsourcing application maintenance; this will make changes easier to integrate and ensure the existing system is properly documented and running within its current specs. The maintenance problem will only get worse if a disinterested party takes charge. If you don't think there is a maintenance problem, here are some questions you should also consider before outsourcing:

Does the outsourcing company provide the same service to your competitors?

  • Is it possible that your provider is supporting similar applications for your competitors? The knowledge they acquire from maintaining your application might provide insight into refactoring a similar section of code in a competitor's application.

How will your source code be maintained once it is out of your hands?

  • Does the outsourcing company secure your code in its possession as well as your own company would? What about proper version control? How much programming training do their staff have? Do they do code reviews? How many people have access to the code?

What is the quality of the workmanship provided by the outsourcing company?

  • A modular design and the intelligent construction of the application are the most important factors in application maintenance cost. The question becomes whether your outsourcing company will use quality developers to maintain your code. Most importantly, the lack of a vested interest in the application's success carries even more risk. Having knowledgeable, expensive in-house developers may be a bargain when you consider the damage a neophyte can do to stable, running code. A lack of familiarity with the code and/or multi-tasked developers might result in hastily installed or poorly designed patches, increasing maintenance costs and making the code harder to read or modify in the future. Outsourcing companies also tend to have high turnover rates, which further increases the risk of failure.

Outsourcing stifles innovation

The biggest loss to a company from outsourcing is the loss of innovation. As many computer systems managers will attest, innovation is often a by-product of maintaining a system: seeing problems as they arise and resolving them. The process of discovering, investigating and resolving a problem often leads to a better understanding of its cause and to new technology that circumvents or prevents it, making the process more efficient. Assuming a comparable skill set, both an in-house developer and an outsourced developer might run across such an opportunity to improve the application. While your developer might inform management or simply implement the improvement to see what effect it has, the outsourced developer has no incentive to do so. Improving the quality of the code is not usually included in the terms of the outsourcing contract. The outsourcing company's goal is usually tied to closing as many problems as possible, as quickly as possible, which in some cases leads to improper handling of bugs. Improving code quality or performance reduces the number of bugs and lessens the maintenance requirements of the application, which might reduce the scope (and monetary compensation) of the outsourcing contract. Most outsourcing companies are not interested in that.

Why is Legacy Software Maintenance so difficult to manage?

Many managers have a problem managing legacy software systems, and this article will discuss some of the reasons for that. Almost every company that has been using computers for any length of time has developed some in-house software that meets a very specific need which cannot be met by COTS packages. Any time you deploy a software system into production, it becomes a legacy system by definition, and for good or bad it becomes part of the company's portfolio. Some of these systems become very successful and enjoy a long deployment cycle.

Almost immediately after a new system is deployed, however, the most skilled developers on the project move on to other projects or find bigger challenges with other companies. Unless the system design was well documented, most of the knowledge of the application leaves with them. As the application enters maintenance mode, fewer and fewer developers who knew the original design remain. Over time, the application evolves from its original version, usually in response to business and technology needs, and it continues to provide greater value for the company. These are the kinds of applications companies decide to keep.

Eventually, the application begins to show signs of age and the technology upon which it was based becomes obsolete and incapable of supporting the new business requirements. Patching the code for new features is no longer a viable alternative. Your legacy application has evolved to a certain point, but now the changes needed exceed the constraints of its design and it has become brittle and resistant to further change. This is a normal evolutionary process and something that every organization has to respond to.

Unfortunately, as your application has evolved, the skills of the developers that maintain it haven’t always kept up with the latest technology. With more and more new features being provided by open source, third party software and new technology, managers find themselves looking for outside help to bridge the technology skills gap. They run into problems fitting these developers into their legacy environment because of a mismatch of skills. Newer developers usually lack the background in legacy systems required to understand the design of the application and there are very few places they can pick this knowledge up. The older developers who have been maintaining the old system often don’t investigate the new technology since it is not a part of their job description.

As the application matures, the company's perception of the importance of the developer diminishes, so the resources to retain these skills are also reduced. Pretty soon the expertise needed for legacy development has evaporated, and knowledge of the history of the application is reduced to one or two maintenance programmers. Their strength lies in understanding the history of the application, not necessarily its inherent design. The design knowledge is gone, yet any redesign will require exactly that kind of skill (usually a mismatch with the skill set of the remaining developers). To extend the design of your legacy system, you need a highly skilled developer, and to attract that type of developer you will need a modern environment for them to develop in.

This is the problem many companies with legacy systems face: the business logic that legacy systems perform, and the value it provides to the company, needs to be retained and extended because any other option (i.e., a rewrite) is too risky and cost prohibitive. Many CIOs have faced this problem and have analyzed and assessed their inherited legacy systems, only to defer any changes due to the risk. That risk grows with the passage of time as the technology gap widens and the experts currently maintaining the legacy system get closer to retirement age. The talent pool of legacy developers is shrinking every year, and it is not being replenished with college graduates willing to learn this old technology.

While newer technology can replace much of the functionality of legacy systems, the business logic they perform cannot simply be replaced. So the problem is finding the right mix of developers and technology tools to implement a new design through integration. What is the best way to achieve this goal? Perhaps there is a solution already out there – check out this video:

https://youtu.be/Vsj3BiozDfY


What are the DevOps Benefits of Application Orchestration?

From our previous discussions, you can see there are many tangible benefits to using Application Orchestration for DevOps. Not only do you leverage the monitoring benefits of a GUI, but you also create the intelligence needed to quickly deploy and run new applications. The most prominent benefit is for the operations staff, who now have a tool that shows them the current health of the applications and lets them manage problems without the developer's help. The DevOps engineer gains the ability to migrate the exact infrastructure intelligence from one system to another, and the developer is freed from babysitting the application for the first few months while operations learns how it works and how to resolve problems. The intelligence created in AO to pre-define and orchestrate the interactions of the discrete components that make up complex applications provides the basis for these benefits. As many DevOps engineers know, modern enterprise applications often contain inter-dependencies with other applications and components, and are sometimes part of a Hybrid IT solution, which can make them difficult to manage. As many of them can also attest, simply restarting a process that has died does not necessarily resolve the problem. Some complex interactions between these components can be difficult to isolate, and that can slow down or even stop the effective deployment of a new application. Since many of these interactions cannot be "discovered" and are not obvious to operations, they are a significant barrier to successful DevOps operation for Continuous Delivery.

Abstracting Complexity with AO

If we consider the overall goal for DevOps, we want a tool to deliver new software versions quickly and effectively, which means the new software needs to be deployed, started and tested as quickly as possible. Application Orchestration is the solution that can simplify the deployment and management process by abstracting infrastructure complexity with easy-to-use, GUI-enabled dependency tools that create relationships between components. Component status then automatically determines which modules are affected. With an effective display, application status becomes more apparent and precise, making error detection much easier. Similarly, corrective action needs to be automated based on the infrastructure or surfaced in the GUI so the operator can effect a rapid response. To achieve this level of infrastructure intelligence, Application Orchestration needs a number of features, including intelligent display and control, scalability and portability.
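To make the dependency idea concrete, here is a minimal, hypothetical Python sketch. The class and function names are invented for illustration (they are not the API of any particular AO product); the point is simply that once relationships are declared as data, the tool can work out which modules a failed component affects.

```python
# Hypothetical sketch: propagating component status through declared dependencies.
# Names and structure are illustrative only, not a real AO product's API.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    depends_on: list = field(default_factory=list)  # names of upstream components
    status: str = "RUNNING"

def affected_by(failed: str, components: dict) -> set:
    """Return every component that directly or transitively depends on `failed`."""
    impacted = set()
    changed = True
    while changed:
        changed = False
        for comp in components.values():
            if comp.name in impacted or comp.name == failed:
                continue
            if any(dep == failed or dep in impacted for dep in comp.depends_on):
                impacted.add(comp.name)
                changed = True
    return impacted

components = {c.name: c for c in [
    Component("database"),
    Component("app-server", depends_on=["database"]),
    Component("web-frontend", depends_on=["app-server"]),
]}

for name in affected_by("database", components):
    components[name].status = "DEGRADED"  # surfaced in the GUI rather than failing silently
    print(f"{name}: DEGRADED (upstream dependency 'database' is down)")
```

In an orchestration tool the same information would be drawn from the configuration and reflected in the console display rather than printed.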

Visualization

Visualization of the application is important for understanding everything else in Application Orchestration. If the true status of an application is to be understood, its infrastructure and relationships need to be represented in an intelligent display with readily available controls for management. By providing a navigation frame which itemizes each component and provides a drill down into each, a multi-tabbed display affords the DevOps engineer a wealth of information with a click of the mouse.

Scalability

Perhaps the biggest challenge for operations is scaling an application up from development to production usage. AO needs to provide the infrastructure to readily modify the throughput capability of the application. Creating objects, each with its own start/stop/dependency properties, enables AO to replicate each component in the configuration: i.e., copy and paste the objects inside the configuration to scale the application, as sketched below.
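As a rough sketch of that "copy and paste" idea, the following Python fragment replicates one component definition into a group of instances. The dictionary layout, field names and port scheme are assumptions made for this example, not a real AO configuration format.

```python
# Hypothetical sketch of scaling by replicating a component definition N times.
# The dictionary layout and field names are assumptions for illustration only.
import copy

base_component = {
    "name": "order-service",
    "start_cmd": "./order_service --port {port}",   # {port} is a placeholder
    "depends_on": ["database"],
}

def scale_out(component: dict, instances: int, first_port: int = 8000) -> list:
    """Clone a component definition into a performance group of N instances."""
    group = []
    for i in range(instances):
        instance = copy.deepcopy(component)
        instance["name"] = f"{component['name']}-{i + 1}"
        instance["start_cmd"] = component["start_cmd"].format(port=first_port + i)
        group.append(instance)
    return group

for inst in scale_out(base_component, instances=3):
    print(inst["name"], "→", inst["start_cmd"])
```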

Portability

The portability of this intelligent infrastructure is a necessity for DevOps, speeding up the deployment process and preserving the intelligence needed to manage it. The knowledge required to manage the application has to be consolidated into a configuration file which can be exported to other systems.

Given the features identified in this discussion, I think we have presented a good case that Application Orchestration will have a positive impact on how applications are deployed and the ease with which they can be managed on enterprise systems. For more information on this capability, check out our further discussion of Application Orchestration features in our next blog. For a free evaluation of an Application Orchestration product from eCube Systems, click on this link:

http://www.ecubesystems.com/products/NXTmonitor.html

What is Application Orchestration and why does it help DevOps?

Developers in a DevOps environment are always looking for vendor solutions to help them hand off the management of their deployed applications. One type of software particularly suited to provide this functionality is known as Application Performance Management. APMs can not only start and stop processes in a timely manner, but also ensure processes are restarted when they fail. While most APMs manage only newer languages on contemporary platforms like Windows and Linux, these script-based solutions do provide a valuable infrastructure to help companies manage and deploy new applications faster. Many DevOps vendors provide this type of solution with their own proprietary scripts, which require a significant investment of time to get right. These scripts perform a variety of detailed functions like setting up the file system structure, copying files and creating VMs to host the new application. While much of this is very useful, it does not solve the biggest DevOps problem: fixing application deployment runtime issues. That is where APMs come in, and specifically a hybrid of Application Performance Management systems known as Application Orchestration, which augments APM functionality with a unique approach:

  • Capture the developer knowledge with rules in a configuration file
  • Provide GUI controls needed to make DevOps much easier
  • Visualize applications with easy-to-understand displays
  • Enable scalability of applications by operations with performance groups

This paper will outline these enhancements to the APM model and show how AO helps the DevOps engineer understand and modify the application infrastructure, which is essential to managing enterprise applications where problems can be complex and timely response is critical. Commercial DevOps tools like Puppet Labs attempt to solve this issue with scripts that can set up and deploy an application to the enterprise, but they fail to manage applications effectively. This is because they are script-based solutions that are inherently:

  • Developer written and maintained
  • Statically written, yet likely to require frequent changes
  • Designed to interpret the effects of problems with constant polling, and
  • Dependent on complex logic to determine the appropriate remediation actions

Using the Application Orchestration approach, the application infrastructure intelligence is captured in a configuration file so that the retry logic, timing, dependencies and relationships within the application can be used to resolve issues. In many cases this knowledge can predict where problems will occur and resolve their cause before the application user sees a failure. By managing the application through a configuration file that captures the infrastructure, the intelligence is built into each module, making it easier to replicate for scalability and to display in a GUI. Component dependencies are captured for each module, so the effects of problems are automatically propagated to each affected component, making it easier to display accurate status and predict problems before they surface to application users. The GUI that displays the configuration also gives the DevOps engineer confidence that the application is running smoothly and that they will be notified proactively should potential issues occur. This approach differs from depending on scripts that look at the effect of a problem and then execute copious logic to find where it originated so they can determine how to fix it.
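As a small, hypothetical illustration of "retry logic and timing captured as configuration rather than code", the sketch below reads restart rules from a data structure instead of hard-coding them in a script. The field names and values are assumptions for this example only.

```python
# Hypothetical sketch: restart/retry rules captured as configuration data.
# Field names and values are assumptions for illustration only.
import time

CONFIG = {
    "message-broker": {"max_retries": 3, "retry_delay_sec": 5, "depends_on": []},
    "order-service":  {"max_retries": 5, "retry_delay_sec": 10, "depends_on": ["message-broker"]},
}

def restart_with_policy(name: str, start_fn) -> bool:
    """Apply the configured retry policy instead of hand-written script logic."""
    rules = CONFIG[name]
    for attempt in range(1, rules["max_retries"] + 1):
        if start_fn():
            print(f"{name}: started on attempt {attempt}")
            return True
        print(f"{name}: attempt {attempt} failed, waiting {rules['retry_delay_sec']}s")
        time.sleep(rules["retry_delay_sec"])
    print(f"{name}: escalating to operations after {rules['max_retries']} attempts")
    return False

# Example usage with a stand-in start function that always succeeds:
restart_with_policy("order-service", start_fn=lambda: True)
```

Because the rules live in data, the same file can be exported to another system or edited from a GUI without touching any script logic.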

In our next blog, we will itemize the features and the benefits Application Orchestration can provide for DevOps.


How to speed up deployment with Hybrid Application Performance Managers

In our last blog entry, we discussed the hidden advantages of using a distributed IDE for enterprise development, but that is just the tip of the iceberg. To fully realize a Continuous Delivery (CD) environment for legacy systems, you need to be able to deploy new applications just as quickly as you develop them. Moreover, you need the ability to seamlessly transfer a running application from development to production without extensive delays. We can all agree that it doesn't make sense to improve the processes around developing new versions of your application if it takes months to test, QA and deploy the application to your users. There are a number of obstacles that can delay deployment, but let's focus on the ones within your control; ones that you can readily improve or avoid if they aren't integral to the Continuous Delivery process. To limit the scope of this discussion, let's assume your company has a robust testing environment and an adequate testing strategy. Let's also say the following assumptions hold:

Assumptions:

  1. The development process makes significant changes to the infrastructure that are not all documented.
  2. The systems to which the new applications are deployed are not similar to the development environment.
  3. The documentation of the application infrastructure does not provide enough information to resolve issues.

The Common Challenge:

Given all these assumptions, how do you make the deployment process for a new application as easy and streamlined as possible? How do you avoid environmental and load issues when deploying a new application?


Overview:

If you believe the majority of problems in deploying a new application are the result of resolving differences between development and production or QA, then the solution we are about to discuss will be of interest to you. Our proposal for speeding up the deployment process for new applications is to capture not only the development environment, but the developer's knowledge of the application as well. By that we mean: capture the developer's knowledge of dependencies and subtle timing changes in the new application. Often, newly integrated modules require more time to start or depend on configuration subtleties. By capturing this information you can reduce potential errors and ensure that testing and operations can concentrate on their goals instead of trying to debug the infrastructure. In the traditional approach, where the developer starts and stops the application with scripts, the majority of changes involved in moving from development to operations are centered in these scripts. Maintaining these scripts is the single most difficult task in migrating the application; migrating the application's components and structure to the target system is trivial in comparison to modifying the scripts for the new system. Given the complexity of some applications, this is where changes are typically needed for a variety of reasons, and the developer is usually the only one who knows what the issue is. What if there was a better way to capture this infrastructure so that it is portable, more adaptable to new systems, and easier to modify?

On more modern systems like CentOS and RHEL (and other Linux variants) and with Java applications, Docker is an excellent example of how an exact image of the OS can be created and the application captured in containers. A set of docker run commands with bash provides the install and run commands necessary to start individual apps. Docker also has restart policies to restart a container automatically when Docker restarts or if the container exits. More complex applications can be started with a bootstrap, also run in Docker. Even with these systems, the scripts sometimes need changes. Scripts of this type are often complex, with individual commands that usually require a programmer or systems engineer to modify and maintain. While this model might work well with Java, other languages like 3GLs present problems since they are not container based. One promising solution can be found in leveraging an Application Performance Management (APM) hybrid known as application orchestration. This APM hybrid is designed specifically for managing the application infrastructure: application startup and shutdown, using the developer's knowledge of dependencies and starting order, all captured in a configuration that can be replicated. Application orchestration is best described as a variant of the Application Performance Management software model, since it has no classification of its own. The APM model specifies several criteria useful for managing multiple processes in an application.
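As a rough illustration of the Docker approach described above, the following Python sketch launches a container with one of Docker's built-in restart policies. The image name, container name and port mapping are placeholders, and the sketch assumes Docker is installed on the host; a real deployment script would add error handling and container-specific options.

```python
# Rough illustration of script-driven container startup with a restart policy.
# Image name, container name and port mapping are placeholders; Docker must be installed.
import subprocess

def run_container(image: str, name: str, port_map: str) -> None:
    """Start a container that Docker will restart automatically if it exits."""
    subprocess.run(
        [
            "docker", "run", "-d",
            "--name", name,
            "--restart", "unless-stopped",  # Docker's built-in restart policy
            "-p", port_map,
            image,
        ],
        check=True,
    )

run_container("example/order-service:latest", "order-service", "8080:8080")
```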

The Application Performance Management approach

Application Performance Management tools are a classification of software that facilitates monitoring applications. There are a few commercial tools that provide some, but not all, of the features that APMs are intended to provide. In this discussion, we are looking at the key services that give a DevOps environment significant advantages, ease of use and speed of deployment. While commercial APMs focus on monitoring software to ensure performance, we are targeting features that apply to solving DevOps problems and specifically to speeding up the move to production. If we look closely at these features, we can see how APM features provide the base services needed for the application orchestration that is essential to DevOps systems:

  1. End User Experience – in the world of application performance, this feature is probably the most important. In the DevOps world, we would argue our end user is the development and operations staff. Using the APM active monitoring feature would provide a great benefit after the deployment and would most probably provide the foundation for defining and documenting the variables in play during deployment. The EUE feature not only benefits the application end user once the deployment is complete, but it would improve the experience of operations and developers who need DevOps tools during the deployment process.
  2. Runtime application architecture – this feature is essential to managing application performance and greatly benefits the DevOps intelligent infrastructure because it is the best way to capture the components and put them in a configuration that can be managed. However, we believe that management is best done when you can visualize the application architecture through a GUI console rather than through scripts. A GUI can visualize the underlying application infrastructure by exposing each component, its sub-components and their dependencies. The key feature here is Application Dependency and Discovery, which is useful in managing the application and assisting the orchestration of the application during startup.
  3. Business transactions – this is a more obvious feature when it comes to describing tools for monitoring and performance purposes. In a DevOps tool, the APM business transactions feature is important because it reflects the application's overall health, which may be delivered by various discrete components. This information needs to be captured so that it can be monitored actively with health scripts, and actions can be taken in the event of failures or degraded performance. Important application-dependent transactions need to be identified so test cases or health scripts (see the sketch after this list) can be used to validate the accuracy of this feature.
  4. Deep dive component monitoring – another key feature that is quite important to the DevOps world because it relies on an intelligent infrastructure which is capable of reacting to external forces and can detect and fix problems with the application. In order to successfully accomplish this level of automated management, a deep dive into each component and its dependencies is needed to determine the scope of modules affected and handle any contingencies. It should then follow that the ability to capture this infrastructure intelligence will make replication of the environment for operations even simpler.
  5. Analytics and reporting – while APMs rely on analytics and reporting as a secondary feature to provide performance data, the application orchestration DevOps requires does not depend on this level of metrics to be successful. Analytics used in conjunction with application dependencies, however, are useful in determining application health and in solving system-dependent resource issues that may affect the application. In the DevOps world, the goal is to get the application functioning and running without issues. Analyzing resources is good for spotting trends and making adjustments to the application, but you often have application dependency issues that affect application performance. If you want to focus on the internals, application log files are the primary source for reporting application health for DevOps.
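As referenced in item 3 above, here is a minimal, hypothetical health-check sketch showing how a key business transaction can be probed and its result classified for the orchestration layer. The endpoint URL, transaction name and latency threshold are placeholders, not defaults from any product.

```python
# Minimal, hypothetical health-check sketch for a key business transaction.
# The endpoint URL and latency threshold are placeholders.
import time
import urllib.request

def check_transaction(url: str, max_latency_sec: float = 2.0) -> str:
    """Probe a business transaction endpoint and classify the result."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=max_latency_sec) as response:
            elapsed = time.monotonic() - start
            if response.status != 200:
                return "FAILED"
            return "DEGRADED" if elapsed > max_latency_sec / 2 else "HEALTHY"
    except Exception:
        return "FAILED"

status = check_transaction("http://localhost:8080/orders/health")
print("order-submission transaction:", status)
# An orchestration tool could restart a component or escalate based on this status.
```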

If an APM addresses each of these needs, it would probably be an effective DevOps tool. However, one very important issue is left unresolved: how does knowledge of the application infrastructure get transferred to operations for a new environment? Is the documentation provided by development sufficient? Are you expecting the DevOps engineers to understand the intricacies of a developer's scripts, or do you expect your developer to be assigned to operations for an indefinite period to shepherd them through the process? If you are looking to incorporate commercial APM software into this process, then you should look at maximizing the impact of each of the APM features in speeding up Continuous Delivery. Do your homework, figure out what you need and incorporate those requirements into a hybrid solution for CD. Most likely you will come up with a set of requirements that needs a specific solution: Application Orchestration. Stay tuned for our next post, which explores this idea.


The hidden advantages of using a distributed IDE for mainframe DevOps

For most organizations, the need for Continuous Delivery and Continuous Integration is the driving force behind moving to Agile development on the mainframe. The business requirement to continuously update and respond to market needs often outweighs the savings in time and resources, so most DevOps managers don't consider those savings in the equation. However, Continuous Delivery on a mainframe can be problematic, since many companies don't have the ability to do true agile development on the mainframe. A distributed agile development IDE may be the solution they are looking for. Distributed agile development offloads the GUI to a desktop environment, which is more conducive to IDE tools and interactions with newer tools and technology. In addition, the mainframe's proprietary infrastructure can be abstracted on the desktop to further assist developers in creating an agile environment for development, testing and deployment. While the advantages of using agile software are apparent, distributed software provides even more subtle enhancements. Even though the DevOps goal is unified around software delivery, managers must allocate their resources for the separate functions wisely. The DevOps manager is concerned with more than just the productivity of developers; the operations group and the systems group have their own concerns when embracing DevOps. Productivity on the operations side of DevOps has evolved from focusing on stability and reliability to how it affects the bottom line. An agile infrastructure is key to providing this productivity, but let's first focus on how a distributed agile development environment can help.

For developers, the inherent advantages of a distributed agile development environment are more apparent, but can often be overlooked:

  • A desktop-based IDE with a project-based, distributed file system environment provides quicker responses to changes affecting code on the mainframe.
  • File transfer times to the mainframe are no longer an issue, since dynamic updates happen all the time.
  • Issues with updating the source repository disappear in a distributed environment; check-ins and check-outs are a mouse click away.
  • Complex build environments can be abstracted in an IDE with one click.
  • New deployments can be abstracted into a one-click menu item in the IDE.
  • Menus can be customized to fit any environmental or functional requirements.
  • Debugging in an environment integrated with the mainframe not only provides a better view of information, but greatly speeds up the debugging process.

For the operations side of DevOps or the systems department, there are some advantages that may not be apparent with a distributed, Agile development tool. It is just a matter of identifying all the changes to your Continuous Delivery process and how a distributed environment can affect them:

  1. Fewer overall system resources are used in a distributed development environment, reducing the need for additional capacity planning.
  2. Memory resources are significantly reduced because of the reduction in the number of processes.
  3. Mainframe CPU resources are reduced for a couple of reasons: developers are using desktop resources to modify their code, and intelligent editors with syntax highlighting and code assist catch syntax errors before the developer sends the code to the compiler.
  4. Less disk space is required to maintain copies of the source code and temporary files generated by the compilers on the mainframe.
  5. IO is significantly reduced because your developers are using workstations for code development and manipulation, so that IO is not being performed on the mainframe.
  6. A lower load on the mainframe results in lower power consumption and lower maintenance costs.
  7. Less load on the mainframe could translate into a smaller hardware configuration: i.e., fewer CPUs, less memory or fewer disks on the servers.
  8. If hardware configurations are reduced, then software maintenance costs based on the machine configuration would be reduced as well.

As you can see from this list, a distributed, agile development environment can not only improve developer productivity but also significantly reduce resource consumption and enhance the performance of the mainframe. As a systems manager, you should be able to assign costs to these changes and calculate an overall savings. If the systems department is still operated separately as a profit center, then the distributed agile development savings alone can be used to justify agile infrastructure software to speed up the deployment process as well. In our next blog: How to speed up deployment with Application Performance Managers for DevOps.

How does a modern, distributed IDE help new developers use OpenVMS?

One of the most daunting tasks for new developers on mainframes is learning all of the operating-system-dependent infrastructure necessary to do compiles, run test suites, and debug the code when errors occur. Getting up to speed on that requires a large investment of time, and most companies these days don't have the robust training departments they once had. Most newbie developers don't know mainframes like OpenVMS (and don't really care to if they can avoid it), so the chances of finding top-rated talent with this knowledge from college or out in the market are very low. If by chance you manage to find one, they are usually turned off by all of the antiquated job control language they have to learn.

The Common Challenge:

How do you attract and keep new mainframe developers when there are so many more modern agile development systems out there with a more contemporary interface?

The solution:

You make your development environment more modern and provide all of the agile tools developers like: syntax highlighting, code assist, integrated compile and link, an integrated debugger and integrated source control. These are the tools new developers need to thrive in their new environment. Just as important as the interface is the ability to integrate the desktop and the mainframe in a fast, distributed development environment. Without this, you are simply developing on a desktop and sending the source code over the network to compile and verify with scripts. By using the distributed development environment NXTware Remote, based on the powerful Eclipse IDE, you gain the advantages of a standard IDE with integration to the mainframe.

Benefits

In a distributed development environment there are many advantages for both the manager AND the newbie developer:

  1. A modern interface ensures rapid assimilation of the new infrastructure by new developers. By using the Eclipse standard interface for your development environment, you can abstract the infrastructure into menus, list boxes and wizards with one click functionality. The developer can quickly learn your environment and make progress on what you are paying them for: development.
  2. Adopt agile development teams to breathe new life into aging 3GL applications, enabling scrums to make changes easily, with one-click compiles, integrated testing and automated builds just as easy as on other platforms.
  3. Integration with DevOps deploy and manage functionality with NXTware Deploy and NXTmonitor makes it easy for new developers to deploy and test in an enterprise environment. With integration of your standard compile scripts into Eclipse menus, a newbie developer can be compiling and running existing applications within hours of introduction into this environment.
  4. Ease of use: a distributed interface with familiar Eclipse screens in NXTware Remote makes mainframe based source projects on the desktop easier to develop with. Abstracting much of the infrastructure they need to use to develop and test code minimizes the knowledge required to develop on this platform.
  5. Assist the learning process for 3GLs with code assist and code completion; provide syntax highlighting editors to help newbie developers learn the language more quickly.
  6. Integration of newer technology with legacy systems is made easier in NXTware Remote, which supports both.

More control

With a modern IDE like Eclipse, you have access to all kinds of management tools like software project management plugins and reviewer projects.

  1. Increase productivity by providing leading-edge tools for developers; an environment that keeps them engaged leads to longer retention and more innovative features for the user community.
  2. In addition to superior developer tools, integrated software project management tools in Eclipse enable managers to use the same interface to control tasks, conduct project reviews and get timely feedback from developers.
  3. IDEs display more information in a compact, easily read, multi-frame environment, enabling developers to fix problems in less time and more efficiently.
  4. Allowing junior developers to quickly learn a new environment frees up your senior developers to work on blue-field projects and analyze how to improve existing software assets.
  5. Time is always the biggest issue development managers have on mainframes like OpenVMS. By improving productivity with modern tools for developers, the turn-around for new features is reduced, making Continuous Delivery a reality and your company more competitive.

Helping bring in new developers is only one of many advantages of the NXTware Remote Solution. Reducing costs on development and increasing the speed of deployment of new applications are significant too. In addition, the systems department can see tangible benefits as well, and that is the subject of the next blog: The hidden advantages of using a distributed IDE for mainframe development.

How NXTware Remote helps managers extend the life of their OpenVMS systems

OpenVMS is a very robust operating system, and many OpenVMS users have an application which runs so well on this platform that it is difficult to replace. Accordingly, most managers realize they need to extend the life of the application on this platform because it provides so much value to the company. Paramount to its long-term survival is making it easier to modify and maintain, which is one of the biggest needs managers have.

The Common Challenge:
How do you maintain and improve a legacy application when the operating system expertise upon which it is based is disappearing?

The solution:
You solve this by getting “fresh blood” developing on your application and by leveraging environments with which they are familiar.

This can be done by using an Open Source based IDE which can be modified to support your legacy OS infrastructure and the 3GL languages your applications are written in. With the right tool you can extend the life of the application and ensure it is properly maintained for the future.

  • Enterprise IDEs like NXTware Remote are developed for this need with Eclipse support on legacy systems in a distributed development environment.
  • Integrated tools like source library management, code assist and code templates make it easier to adapt to a new environment.
  • NXTware Remote leverages a distributed integrated development system for OpenVMS and Linux, and it solves the problem for managers who can't find OpenVMS developers for the projects they have.

Time is always an issue, and one of the biggest challenges development managers face on mainframes like OpenVMS is finding the expertise in the marketplace to keep development going now.

  • Market changes are happening at an increased rate, so changes in your mainframe applications need to address those changes in order to remain relevant.
  • In addition to a need to augment your staff, managers have to worry about eventually replacing their existing staff who are getting older and closer to retirement with quality, trained developers.
  • What managers need are younger, agile developers who can work in scrums and who are familiar with the latest IDEs that run on Windows, MacOS or Linux computers. That expertise is readily available and at competitive prices.
  • For OpenVMS developers, that knowledge and experience is hard to find these days since it is not taught at universities and there are not many in IT that want to learn a legacy system using green screens and line editors.

So in many cases these projects are delayed because the in-house developers just don’t have the bandwidth to keep up with the changes that are being demanded by the user community. Several dynamics tend to happen:

  • The developers you can find to join your team find it hard to learn enough to help the existing staff (infrastructure training only slows the project down – if you even have the time to train them).
  • For legacy applications written in 3GL languages like COBOL, Basic, Fortran, Pascal and C, there are a limited number of consultants or developers available who know both OpenVMS and those languages.
  • Those that do have that skill set may be very expensive, unwilling to adapt to your infrastructure or unwilling to relocate. Most established developers are already working for other companies, making it harder still to recruit the talent you need.

Your best option is to recruit college-level 4GL programmers with no OpenVMS experience and teach them how to develop on OpenVMS with NXTware Remote. By minimizing the new interfaces developers have to learn to start developing on the mainframe, you can abstract the OS dependencies so that file editing, compiling, testing and debugging on a mainframe are as familiar as doing it on a workstation.

This is the NXTware Remote solution and we can show how this reduces the time to learn development on a new OS. That is the subject of our next blog.

Intelligent Infrastructure for DevOps: Using a GUI to manage and monitor instead of scripts

Application Performance Management tools have been around in one form or another for many years, mostly operating anonymously and unnoticed. While most of these tools were home grown, there were a few vendors who supplied generic APM tools. These tools addressed application infrastructure needs in the background and were not considered a strategic part of IT; they kept applications running with a series of scripts developed by the architects of the applications. Since this software is neither development nor operations specific, it was often not claimed by either department. Until DevOps became a buzzword, these tools operated under overhead accounts or were assigned to different departments in spite of the job that they did. However, since Agile development has become mainstream, the need for an intelligent infrastructure and the unification of Development and Operations (DevOps) has refocused attention on APM tools. Agile infrastructure is now a necessity to ensure that newly developed applications are deployed as rapidly and accurately as the Agile development tools used to implement the new business logic. For years companies have developed their own script-based infrastructure, and due to internal changes it has drifted into obsolescence. This has happened for a variety of reasons, but mainly because the scripts were written for one purpose, and once the author departed, no one knew exactly how they worked – so they fell into disrepair. Out of this need, many companies have turned to DevOps companies like Chef and Puppet Labs, et al., that have developed command-line, template or script-based tools to provide this intelligent infrastructure, primarily focused on the Linux platform. Some of these scripts are Ruby based, but in most cases the scripts and templates use a proprietary language. These proprietary scripting languages can perform many different intelligent functions to handle deploying new applications and even creating new systems to deploy them to. The advantage of this approach is that the scripting language is very powerful and simple to learn, making intelligent infrastructure easy to implement and deploy.

The problems with using scripts are that they:

a. are written to handle a specific action, requiring constant change whenever the infrastructure changes;

b. are written for specific platforms, so one change has to be propagated to the others;

c. require knowledge of the infrastructure; and

d. require knowledge of the scripting language.

More complex scripts become difficult to read and understand beyond a few lines of trivial operations, which makes maintenance hard for anyone other than the author. Unlike the goal of DevOps, which is to unify Development and Operations as a team with one goal, scripting languages require an analyst to maintain the scripts so that new development can be seamlessly managed or deployed. Operations typically does not have this knowledge, and it can be difficult for them to develop this skill (not being programmers as such). In order to facilitate a DevOps environment, the complexity of the scripts required to implement this infrastructure intelligence needs to be abstracted into simple actions that operations can use. This is usually accomplished with a Graphical User Interface. Historically, Operations' goal is to keep everything in the system running smoothly, and applications are an unknown area: they perform business logic on data, consume resources indiscriminately and typically don't handle system problems very well. So operations rarely understands the infrastructure of each application; they simply monitor it from the system resources aspect to see if there are any problems. Using scripts to automate this function is helpful, but the tools to display the results are arcane. Operations staff are more familiar with desktop tools that have menus and displays because they are easy to use, easy to understand, and easy to modify. Operations is familiar with system operation and how the system provides resources to applications: CPU, memory, disk and network speed. These are things they can understand and inspect with commands. However, for DevOps to be successful, managing applications should not require command-line scripts or knowledge of the application infrastructure. What is needed is a tool that manages the infrastructure from the application side and notifies operations of any potential problems before they impact the overall system.

Application management and monitoring tools have been around for many years, and a few of these existing tools can be adapted to meet this need. One such tool is a third-generation application management tool called NXTmonitor. NXTmonitor's heritage comes from the Open Environment and Borland tools known as Netminder, Appminder and AppCenter. These tools evolved from simple monitors to a three-tier architecture (presentation, business and database layers) that manages applications with a rules-based configuration file specifying each application's environment. By leveraging middleware to facilitate multi-tier communication between agents and master servers, these tools were built on a fast, scalable infrastructure that enabled UNIX and Windows applications to be managed by an independent middle tier through a GUI presentation layer. The best description of this powerful tool is Application Orchestration: a tool that enables the definition and maintenance of complex applications and powerful monitoring capability, with definitions of interdependencies across multiple platforms. This allows a complex structure to be captured in an intelligent, XML-based configuration file that is built with the help of a Graphical User Interface (GUI). By recording in a configuration file the structure required to start the applications across multiple nodes, NXTmonitor can replicate the development environment exactly and even scale it up for production use. The use of environment variables enables NXTmonitor to assign changeable values to variable infrastructure components, making the tool easy to use in defining and replicating a complex environment. When the environment changes, only the values of these variables need to change for each environment.
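As a simplified illustration of that environment-variable idea, the sketch below substitutes environment-specific values into one shared application definition. The variable names and the configuration fragment are invented for this example; they are not NXTmonitor's actual file format.

```python
# Simplified illustration of environment-variable substitution into a shared
# application definition. Variable names and the fragment layout are invented
# for this example; they are not NXTmonitor's actual configuration format.
from string import Template

DEFINITION = Template(
    "component app-server\n"
    "  start  $APP_HOME/bin/start_server --listen $APP_PORT\n"
    "  logdir $APP_LOGDIR\n"
)

ENVIRONMENTS = {
    "development": {"APP_HOME": "/opt/app-dev", "APP_PORT": "8081", "APP_LOGDIR": "/var/log/app-dev"},
    "production":  {"APP_HOME": "/opt/app",     "APP_PORT": "8080", "APP_LOGDIR": "/var/log/app"},
}

for env_name, values in ENVIRONMENTS.items():
    print(f"--- {env_name} ---")
    print(DEFINITION.substitute(values))
```

The definition stays the same from development through production; only the per-environment values change.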

NXTmonitor is the next evolution of DevOps infrastructure tools – managing the deployment of a multi-tier application with a GUI console that can define the intelligent infrastructure needed for agile development. Unlike other DevOps tools, NXTmonitor provides a full-featured APM that supports application orchestration across multiple platforms like IBM i-series, z-series, OpenVMS, Windows, and UNIX in addition to Linux. Application Orchestration is a relatively new term: it is the ability to start a complex application consisting of multiple, discrete, interdependent components in a concise way so that the application is ready to process information in an orderly manner. In essence, Application Orchestration captures the developer's knowledge of the infrastructure in a configuration file that can be used as a playbook for starting up applications. It is intended to be displayed and monitored by operations staff, with mouse actions that allow them to easily inspect or modify the configuration for scaling or performance purposes. Action scripts and timers enable custom execution of actions designed to resolve runtime problems and increase performance. The definition of performance groups makes it simple to scale the application and to modify capacity, failover and service levels.

One of the key benefits of using NXTmonitor is that the NXTmonitor Console GUI is designed to appear the same across all platforms and perform in the same manner regardless of the desktop software or the nodes upon which the applications are being managed. Once you understand the operation of managing applications on linux, you can do the same operations on Windows or OpenVMS, for instance. With this in mind, the Console is designed for maximum efficiency and understanding and includes these features:

  • Multiple-node login capability from one console.
  • Shortcut buttons at the top for quick reference and use.
  • Navigation panel to display nodes and components being managed.
  • Content panel providing complete application configuration and status information.
  • Audit function providing a history of process restarts, failures, starts and stops.
  • System statistics for each node, including CPU, a memory bar chart, devices, and errors.
  • Operator access to the process table, now including the ability to stop or kill an individual process in case of deadlock or runaway processes.
  • Performance groups that can now have actions associated with state changes, so that changes in status can be propagated to operations or development.
  • SNMP support for other tools like HP OpenView and BMC Patrol.

Benefits

The benefits of using a Graphical User Interface (GUI) to monitor and maintain your infrastructure instead of creating scripts are as follows:

  • All Application sub-components are easily displayed in a console window.
  • Infrastructure rules are easy to change in a GUI.
  • Console GUI displays all application statuses in one, easy to understand display.
  • GUIs don’t require operations to learn a scripting language.
  • A GUI console can provide management, monitoring and configuration tools all with the click of a mouse.
  • Focuses interaction on application performance and application monitoring.
  • A mobile device can display the GUI making management more dynamic.
  • Simple, pro-active health scripts prevent outages and increase performance.
  • Mouse click operations speed up scalability functions with copy and paste.
  • Configuration files capture application infrastructure for easy deployment to additional machines.


Summary

The difference in DevOps tools today is usability. Do you want to manage scripts for infrastructure intelligence, or would you rather use a Graphical User Interface? Is it easier to teach operations how to maintain scripts or to learn an operations console? Is it faster to capture the developer knowledge in a script or in a configuration file read by a GUI console?

NXTmonitor, which has evolved application performance management for decades, has advanced the creation of this intelligence with a rich GUI console, making it easier to define, modify and monitor application configurations in Development, Test, QA and Production. Once you have the developer knowledge captured in an XML configuration file, you can monitor and maintain it easily in the NXTmonitor Console.

For more information, go to: http://www.ecubesystems.com/nxtmonitor.html

eCube Systems Interviews VSI’s Brett Cameron on OpenVMS, Open Source and Developer tools


eCube Systems is interviewing Brett Cameron, Director of Applications & Open Source Services at VMS Software, Inc. Brett holds a doctorate in Chemical Physics from the University of Canterbury and was a long-time employee of Hewlett Packard Corporation, where he held various positions over 19 years, most recently Senior Architect in the Cloud Services group. Brett is well known in the OpenVMS community and has made a hobby of porting Open Source tools to OpenVMS, including AMQP, RabbitMQ and Erlang.

eCube: Thanks for giving us some of your valuable time for this interview. You have been developing on OpenVMS for a long time and are the go-to guy when people want high performance and customized software on OpenVMS. There are so many things you have worked on in OpenVMS, so there is a lot of ground to cover. What is your favorite thing to work on?

Answer: The tough questions first! In terms of your comment about me having so many interests, this has always been a problem! I will be working on one thing and will then see something else that looks interesting, and before you know it I’m pushed for time to complete whatever it is that I should be doing in the first place. Maybe there’s some support group I can join. Probably for as long as I have been involved with software development (dating back to the late 1980’s) I have had a keen interest in Open Source software and the potential it provides, and I can recall installing early versions of Linux from some 20 3.5” floppy disks, and if you messed up something on the last disk it was back to square one. Fun times. Even around this time (early 1990’s) I can recall porting small pieces of Open Source code to OpenVMS, initially to help with aspects of my PhD research, and subsequently for customer-related projects when I started work in 1992 with Digital Equipment Corporation as a FORTRAN programmer (who also happened to know a bit of C and Pascal).

But probably my favorite thing to work on would be integration – helping customers to integrate their “legacy” OpenVMS systems with other systems. I don’t know why, but I have always enjoyed playing around with integration software and crafting novel integration solutions. Many organizations seem to think that their old OpenVMS systems are some sort of black box that is unable to communicate with the rest of the world. Possibly they have lost the skills to do this sort of work, or maybe they simply do not know what is possible; however the simple fact of the matter is that there are a myriad of good integration options available to OpenVMS users, and it is invariably possible to craft a good integration solution that will allow them to integrate their trusted OpenVMS-based application environment with the wider computing ecosystem, and more often than not it is possible to do this using Open Source technologies, particularly these days, with more high-quality Open Source solutions being available.

I would also add to this that I enjoy working closely with customers as opposed to always working away in a back room somewhere. Our business is a symbiotic relationship with our customers, and interacting directly with customers is an important part of that relationship. Aside from the work aspect, I enjoy meeting new people and making new friends, and it is often fascinating to learn about the customers’ business – you kind of learn how bits of the world work.

eCube: Yes, that is something we both share. As you may know from our previous interviews with Sue Skonetski and Eddie Orcutt of VSI, we are trying to get all the perspectives on the future of OpenVMS, now that VSI has taken charge. Can you tell me about the things you have been asked to do? What are you working on and what progress are you seeing?

Answer:  My main focus is around Open Source, figuring out what Open Source products would be good to have on OpenVMS and figuring out how to get them there, which may involve us doing the work, or possibly working in conjunction with the community. There has been a lot of good work done over the years around Open Source on OpenVMS, and we want to expand on that. From my perspective, whatever we do needs to be relevant to our customers. I suppose that is in some ways an obvious statement; however finding out exactly what is relevant and determining where we are going to expend our energy is not necessarily straightforward, and this comes back to my comments in the previous question about working closely with customers and partners.

Since taking over OpenVMS, so to speak, we have obviously had to spend a good deal of time doing somewhat tedious things such as re-branding, getting environments, processes and procedures set up, and so on; however we are largely through this now, and going forward it will definitely be all about innovation: adding new features, creating new products, making significant enhancements, and so forth. Clearly the x86 port is of highest priority; however there is plenty more going on across the board, including the new TCP/IP stack, Java 8, and various other significant projects.

Personally, since around late March this year (2016) I have been largely focused on the Java 8 port. Thankfully I have had Camiel helping me, and I am pleased to say that we are just about across the line with this project. I will not bore you with the statistics, but let’s just say that it has been a significant piece of work and while we have certainly had plenty of challenges, things are looking good. I should note that this work has been done in collaboration with HPE, and certainly it would not have been possible without their assistance.

Piggybacking off the Java 8 work, we are looking at upgrading some of the Java-based products and potentially introducing a few new ones. For example, we have beta kits for Scala (a popular functional language that runs on the JVM) and Maven (a powerful build tool for Java projects).

In addition to the Java work, I’ve been working on various other projects, including the new version of CSWS (Apache), a new Ruby port with a pile of interesting extensions, a new version of PHP, and various other such projects. We also have a partial git implementation and a reasonably functional Subversion client, although both of these items need further work. I’ve also managed to fit in a bit of consulting here and there, so all up it has been a busy year, and I don’t think things will be much different next year!

eCube: Since you are the first VSI person we have spoken to with a focus on developers, and VSI has said that the future of OpenVMS depends on the developers, what do you think it will take to get developers on OpenVMS ready for the future?

Answer: There’s no short answer to that question, I suppose in part because everyone’s needs are somewhat different. However, if I am to look at things from the perspective of getting new or younger developers onto OpenVMS, clearly we need to be able to provide the sorts of environments and tools that they are used to using on other platforms. For example, further enhancing GNV will make it easier for developers familiar with Linux to work on OpenVMS, and providing powerful IDEs such as Eclipse is also vitally important, as indeed are more Open Source solutions and the ability to hook into facilities such as GitHub and continuous integration tools such as Jenkins.

To some degree, modern developers don’t much care what the underlying operating system is, so long as they have the tools to do their job and those tools work well. While we do have some good tools available in this space (such as your Eclipse-based NXTware Remote IDE), there is still a lot that needs to be done.

I should also add that in parallel we need to continue to support and enhance existing toolsets, as these are critical to many of our customers. One big item that comes to mind here would be bringing the C++ compiler up to current standards.

eCube: We have heard a few comments that OpenVMS events like the Technical Update Days and the Boot Camp focus primarily on hardware and operating system topics. Developers say they don’t want to attend because there is no focus on developer issues. Is this a fair assessment? If not, what can be done to change this perception? If so, will VSI change its focus to a more developer-oriented event? Are there any plans to change?

Answer: I am not so sure that this is an entirely fair assessment. Certainly some of the material presented at these events would not be of much interest to developers (it’s not of much interest to me!); however I would like to think that at Boot Camp in particular there should be more than enough of interest to everybody (the committee do a great job of selecting a nice balance of presentations). But I do appreciate the problem. As to what can be done to address this matter, I am not sure. One thing is that developers will sometimes simply miss out on getting to attend these events. Another thing is that there is a lot of diversity to consider here, and what might be highly relevant to developers from one organization may be of absolutely no relevance to developers from another. I have done many talks over the years around reasonably general topics such as web services and integration, and possibly we could look to expand on this, but it is not really possible to go into much detail at a conference. If developers are interested in a particular topic, we could certainly see about facilitating custom workshops or training sessions. Another option that comes to mind would be to organize a larger number of smaller events. Such events do not necessarily need to be formal events organized by VSI; they could just be meetups arranged by OpenVMS users to share their experiences with others (standard meetup practice being to provide pizza and beer). It would be great if we (VSI) could get along to all such events, but practically this could be a challenge; however I’ve been to meetups where speakers call in via Skype, Hangouts, or whatever. Webinars are another possibility. A few years back my good friend John Apps and I did a series of OpenVMS development-related webinars that were well received, and feedback from these sorts of events can be used to better guide future activities. We did the talks at two different times to cover most time zones; it worked very well, although I didn’t much enjoy getting out of bed at 2am or 3am in the middle of winter!

eCube: Well, I just figured you never slept very much! Continuing with development topics: programming language support is one of OpenVMS’s strengths, because it supports so many different languages. You mentioned your work on Java 8 – current-version Java support is expected on OpenVMS in Q1 2017. Is that on schedule? How important do you think this is, and where do you see new Java ports in the priority list for VSI?

Answer: I’ve talked a little about the Java 8 port already. This has been (and still is) a major project, and we are generally very happy with how it has gone. As of this moment we are coming to the end of a two-month field test, and we have been very pleased with the results: bugs were found (and fixed), and testing coverage has been comprehensive. The intention is to release in Q1 2017, and this is looking achievable (there certainly should be no technical impediments).

Java is clearly an important language, and we are factoring future Java ports into our planning. We will, for example, need to repeat the Java 8 port for x86, and will also need to start looking at Java 9, I suppose. It is also important to appreciate that there are now quite a number of other languages that use the JVM (Java Virtual Machine). I mentioned Scala previously, and Clojure would be another one that comes to mind. With Java 8 we are able to better support some of these other languages, which gives developers more options on OpenVMS and makes it possible to port Open Source projects written in such languages across to OpenVMS (often without too much difficulty).
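As an illustration of why the Java 8 level matters here: the snippet below is a minimal, hypothetical Java 8 example (not code from the VSI port) showing the lambda and streams features that newer JVM languages and libraries increasingly assume the underlying runtime provides. The class name and data are invented purely for demonstration.

    import java.util.Arrays;
    import java.util.List;

    // Illustrative only: Java 8 lambdas and streams of the kind that newer
    // JVM languages and libraries assume the runtime supports.
    public class Java8Sketch {
        public static void main(String[] args) {
            List<String> platforms = Arrays.asList("Alpha", "Integrity", "x86-64");

            // Filter and transform the list without writing explicit loops.
            platforms.stream()
                     .filter(p -> p.startsWith("x"))
                     .map(String::toUpperCase)
                     .forEach(System.out::println);   // prints X86-64
        }
    }

Code in this style simply will not compile or run against an older JVM, which is one reason bringing the Java 8 runtime to OpenVMS also broadens which JVM-hosted languages and frameworks can be ported.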

However, there are a number of other new languages that we also need to think about. For example, languages such as Rust and Google’s Go language are becoming more popular and widely used, and it would be great to have these available on OpenVMS. Interestingly, these languages can leverage the Open Source LLVM compiler infrastructure, which we are porting to OpenVMS as part of the x86 work, so in theory it would be possible to do something with these other languages; however this would not be a small job and it is just an idea at this stage.

Scripting languages such as Ruby, Lua, and Python are also very important. I mentioned previously that we have a new Ruby port available, and we are actively considering exactly what else we want to do in this space. We also have a version of Lua available. Something like Node.js would also be nice; however this is another thing that we might want to hold off on until OpenVMS is up and running on x86 (for various reasons). I have also talked a lot about Erlang in the past, and we might look to formalize some of this work. As things stand we have a couple of working ports of slightly older versions of Erlang; however I would like to see us have a more current release available. The older versions are reasonably stable and functional; however there are one or two limitations that I’d like to address before I would consider them fit for use in a production environment.

eCube: What are the development features of OpenVMS that make it a good operating system for developers? How important are tools like SCA and PCA for developers, given that these tools are absent from newer OSes like Linux? What can be done to attract young developers to the virtues of OpenVMS development?

Answer:  All operating systems have their good and bad points. OpenVMS was designed by engineers for engineers, and this resulted in good 3GL language support, a very comprehensive set of library functions and system services, and various developer tools such as those you’ve mentioned that all work together seamlessly. Most OpenVMS developers are familiar with the RTL LIB$ routines; however there are a whole load of other useful routines in the RTL, many of which (such as the parallel processing PPL library) seem to have been somewhat forgotten about. Other operating systems also have such functionality, although it may be a separate library that needs to be installed, as opposed to something that comes bundled with the operating system. The key point is that OpenVMS was designed with all of these sorts of things in mind, as opposed to evolving (in a somewhat ad-hoc fashion) to accommodate them, and accordingly things seem (to me anyway) somewhat more logical on OpenVMS – it’s like it all goes together logically, because it does – it was designed that way.

But getting back to your question about what can be done to draw attention to the virtues of OpenVMS development, I am really not sure. I don’t think that you are ever going to convince a staunch Linux developer or a staunch Windows developer that OpenVMS has better facilities; it is simply an argument that (most of the time) you are not going to win; people like what they like, and you’re just not going to shift them; you start entering religious war territory. Obviously some people will be more open to the matter than others, but in general I think this is probably the wrong approach (initially at least). I suppose that ultimately it comes back to the comment I made previously about ensuring that we have the sorts of tools available on OpenVMS that “modern” developers are used to using on other platforms (irrespective of the relative merits of such tools), and ensuring that those tools work well. Once people are using the platform, it is easier to have a discussion with them about all of this other goodness.

The other side of things I suppose is giving existing OpenVMS developers more that they can use; there is a lot that we can do with the existing development stack, both ourselves and working with partners.

eCube: I agree, unless you can offer them something they can’t get elsewhere. Moving on, there are a lot of new technologies that have developed since the 70’s and 80’s, like 4GLs, client/server architecture, object-oriented languages and web services. How is OpenVMS adapting to these technologies?

Answer: I’m not sure that there’s a short or easy answer to that question. If we go back in time, I think it could be argued that for a period (maybe the mid to late 80’s and possibly into the early 90’s) OpenVMS was easily at the forefront of operating system technology, and it was one of the first platforms on which the sorts of technologies you refer to were made available. Times change and large corporations do strange things; new trendy-looking kids arrive on the block; fashion changes. However, through all of this change and in spite of going through two major acquisitions (Compaq and HP), OpenVMS has in one way or another for the most part managed to adapt to these changes in fashion. It is possibly only in the last decade that things have slipped somewhat; however this is a recoverable situation, and we are working on making that recovery happen, with help and support from the community.

If I wanted to cite a few examples relating to your original question, I have fond memories of implementing DCE-based client-server solutions for several customers back in the mid 1990’s. At that time DEC had the best DCE implementation, and for that matter OpenVMS arguably also had the best CORBA implementation. CORBA is still in fairly common usage, but to some extent neither CORBA nor DCE ever really achieved their perceived potential, and they lost out to the next wave of fashion, which for the most part centered around web-based applications and leveraging HTTP in all manner of strange ways, leading up to the advent of web services and the myriad of web services standards surrounding them. Implementing web services-based solutions on OpenVMS is not particularly problematic, and several good solutions exist. However, one common problem that I have encountered many times is that OpenVMS users will have business-critical applications written in languages other than C or Java that they do not necessarily know how to integrate with the likes of C and Java, and this will often be an impediment to progress, or will result in some quite fascinating (and unnecessarily complicated, and often very brittle) workaround solutions.

I am not sure that I’ve adequately answered the question, but I think the bottom line is that to some degree OpenVMS has managed to adapt to changes in fashion and it’s our job to accelerate this.
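To make the integration pattern described above a little more concrete, here is a minimal, hypothetical Java sketch of the simpler approach: wrapping an existing business routine behind a small HTTP endpoint so that applications written in other languages can call it as a web service instead of linking against it directly. The class name, port number, and lookupPrice routine are invented for the example, and the JDK’s built-in com.sun.net.httpserver classes are used only to keep the sketch self-contained.

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // Hypothetical sketch: exposing an existing business routine as a tiny
    // HTTP "web service" so that code written in other languages can call it
    // over the network rather than linking against it directly.
    public class LegacyFacade {

        // Stand-in for a call into an existing (e.g. COBOL or BASIC) routine.
        static String lookupPrice(String item) {
            return "{ \"item\": \"" + item + "\", \"price\": 42.0 }";
        }

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/price", exchange -> {
                String item = exchange.getRequestURI().getQuery(); // e.g. "widget"
                byte[] body = lookupPrice(item).getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();  // GET http://host:8080/price?widget returns the JSON above
        }
    }

In practice the handler would call through to the real application logic and add proper error handling, but the shape of the solution – a small facade in front of trusted existing code – is the same.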

eCube: I want to keep this focused on software development, but hardware plays an important part in the future. What do you see as the major change that will occur when OpenVMS is supported on the x86-64 platform?

Answer: I suppose the obvious answer is that we’ll be able to run OpenVMS on a much wider range of hardware! Somewhat less obvious perhaps is that it also opens up the potential to more readily port some interesting Open Source products to OpenVMS. For example, I mentioned previously how languages such as Rust and Go can leverage LLVM. The V8 JavaScript engine used by Node.js likewise makes extensive use of just-in-time (JIT) compilation in much the same way as Java does. Implementing a just-in-time compiler for V8 on Itanium would not be a trivial exercise; however x86 is supported. In short, I think it is fair to say that x86 provides more options and opens up an interesting array of opportunities to bring some new technologies to the OpenVMS platform. It will also make us start thinking about a few things. For example, people running OpenVMS on their x86 laptops are probably going to want a decent GUI, and we therefore need to look at improving the current state of play in this space.

Virtualization is probably also going to be a big one, and I can see many OpenVMS users being interested in the notion of running OpenVMS as a guest operating system in their corporate clouds. This in turn could have some interesting ramifications from a licensing perspective, and there are a few other things that we will need to consider, such as how to deal with shared storage and how OpenVMS might need to interact with core cloud services such as provisioning, and so forth.

eCube: You have been involved with Open Source tools for many years. What are the most important tools in your mind?

Answer:  It really depends on the problem that you are trying to solve. For example, we’ve talked a lot about integration and some of the Open Source integration technologies and programming languages that I have used and hold in considerable regard. If I was to look at it from a software development perspective, I would probably say that two little tools I have used successfully time and time again on multi-million dollar projects would be flex and bison (essentially lex and yacc). I have used these tools to create grammars for custom RPC-style middleware solutions, and I have used them to develop parsers for language conversion projects. I would not claim to be an expert with these things by any means, but all of the projects in question were successful (and quite a lot of fun).
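For readers who have not used these tools: what flex and bison automate is the classic split between a tokenizer and a grammar-driven parser. The hand-written Java sketch below is an illustrative stand-in, not generated flex/bison output (which would be C produced from separate grammar files); it parses simple arithmetic expressions to show the same lexer/parser division in miniature.

    // Illustrative stand-in for what flex (lexing) and bison (parsing) automate:
    // a hand-written tokenizer plus a recursive-descent parser for expressions
    // such as "2 + 3 * (4 - 1)". Real flex/bison work from grammar files and
    // generate C, but the lexer/parser split is the same idea.
    public class ExprParser {
        private final String src;
        private int pos;

        ExprParser(String src) { this.src = src; }

        // --- "flex" part: skip spaces and peek at the next character ---
        private char peek() {
            while (pos < src.length() && src.charAt(pos) == ' ') pos++;
            return pos < src.length() ? src.charAt(pos) : '\0';
        }

        // --- "bison" part: one method per grammar rule ---
        // expr   := term (('+' | '-') term)*
        // term   := factor (('*' | '/') factor)*
        // factor := NUMBER | '(' expr ')'
        private double expr() {
            double value = term();
            while (peek() == '+' || peek() == '-') {
                char op = src.charAt(pos++);
                value = (op == '+') ? value + term() : value - term();
            }
            return value;
        }

        private double term() {
            double value = factor();
            while (peek() == '*' || peek() == '/') {
                char op = src.charAt(pos++);
                value = (op == '*') ? value * factor() : value / factor();
            }
            return value;
        }

        private double factor() {
            if (peek() == '(') {
                pos++;                       // consume '('
                double value = expr();
                pos++;                       // consume ')'
                return value;
            }
            int start = pos;
            while (pos < src.length() && Character.isDigit(src.charAt(pos))) pos++;
            return Double.parseDouble(src.substring(start, pos));
        }

        public static void main(String[] args) {
            System.out.println(new ExprParser("2 + 3 * (4 - 1)").expr()); // 11.0
        }
    }

With flex and bison the grammar rules shown in the comments would instead be written declaratively in .l and .y files, and the tools would generate the equivalent tokenizing and parsing code.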

eCube: What about modernization tools? Do you use a modern IDE when you develop? Does that help OpenVMS grow in the future?

Answer: I fully appreciate the benefits of a modern IDE, and I believe that a modern IDE is essential for software development on OpenVMS, particularly if we want to attract younger developers, who have grown up with such things. Aside from just looking nice, IDEs provide many other features that are expected (taken for granted) by younger developers, such as integration with source code control systems, integrated debugging facilities, hooks into continuous integration tools, unit testing facilities, and so forth. Seamless cross-platform development is another key aspect here.

eCube: What are the obstacles that VMS Software faces down the road, and how can they be resolved?

Answer: It is always difficult to say what the future might hold, but there are certainly several things that we need to be cognizant of and put strategies in place to address. As is well known, the OpenVMS business prior to the advent of VSI had for various reasons been in steady decline, with users moving to alternative platforms and so forth. Some of the reasons for this were not necessarily related to anything that HP or Compaq or DEC had or had not done, but were down to things like corporate initiatives to standardize (or try to standardize) on a particular and more widely used operating environment. In some cases skills had been lost, effectively forcing the need for change. In some cases OpenVMS, with its reliability and stability, just ended up being its own worst enemy! From a VSI perspective, it is important not only to look after the needs of existing loyal OpenVMS users but also to put in place mechanisms that will ideally see increased use of the operating system, and there are many ways by which this may be achieved, including training, marketing, the introduction of new technologies (as discussed previously), and so on. It may also entail working with partners to develop specific solutions in which the OpenVMS system is essentially an appliance. It should be noted that looking after existing OpenVMS users also entails many of these same activities. We are providing, either directly or in conjunction with partners, a range of consulting services, which extend to include the likes of hosting and application maintenance and support services.

eCube: What do you think is the key to OpenVMS’s future?

Answer: I think that I’ve covered most of this in my answers to some of the previous questions, but in general terms I believe it comes down to a few things. Looking after existing OpenVMS users is paramount, and this does not mean preserving the status quo; it means enhancing the operating system and layered products, introducing new technologies, providing training, holding regular information-sharing events, providing a range of services, and so on. We need to listen to our customers and continue to provide them with quality products and services. We also need to make the platform more appealing and relevant to a wider audience. With the plans we have and the team that we have in place, I am sure we are in good shape on all of these fronts.

As our web site says, all we do is OpenVMS, and the bottom line is that the operating system is now receiving more attention from an engineering perspective than it has for quite some considerable time. We will continue to advance the operating system, enhancing existing features, adding new features, providing support for new software technologies, and potentially porting it to other architectures (not only x86). There’s no value in thinking small here: if you think small, you only ever achieve small. We have an opportunity to do big things with OpenVMS, and this is likely to involve taking it to places that might previously not even have been considered. We’re a crazy bunch!

You might want to talk to our CEO Duane Harris about the plans VSI has for the future. He can tell you that our goal is to return OpenVMS to prominence as a leading operating system platform.