Big Iron. Green Screen. Transaction Processing Systems. Mainframe. No matter what you call them, legacy systems are all around us in organizations of all shapes and sizes in every industry we can think of, including many high-tech, bleeding-edge technology companies. And these systems increasingly drive the need for software requirements, mostly for system retirement, legacy replacement, and legacy migration.
Considering that the first GUI-based systems appeared almost a quarter century ago, and web-based applications have been around for more than a decade, the resilience of legacy systems living long past their predicted R.I.P. dates is astonishing. Companies have struggled for years to replace these systems with more modern applications, and the failure rate of these projects has been surprisingly high. While these failures may have been embarrassing, not to mention expensive, time is pressing, and it is imperative that companies replace these systems.
Two key factors are driving the imperative to retire legacy applications and systems:
First, legacy systems process transactions very, very well but are extremely inflexible. As companies' product portfolios become increasingly complex, and their customer relationships more nuanced and dynamic, certain types of transactions are almost impossible to perform efficiently on legacy systems. This forces organizations into tough, unpalatable choices: simply give up on certain types of sales altogether, or use elaborate, inefficient manual processes to work around the limitations of their systems.
Second, and more importantly, the programmers fluent in COBOL and the other skills needed to code and maintain these old systems are themselves retiring! Most companies using legacy systems keep a skeleton crew of programmers and administrators on hand to keep the show going. They are not adding new features, simply ensuring that what is in use continues to work. No new talent fluent in these old technologies is entering the labor force, and few want to become fluent in them.
Unless these applications are replaced, the organizations using them will face very real problems in the near future. But replacement is not as easy as it seems.
I have been part of several teams that tackled legacy replacement projects. Going into these projects, a slam-dunk mentality often pervades the team: we are bringing in new technologies and slick user interfaces, enabling new business processes, and doing so many cool things that someone would have to be crazy to reject what we are offering.
The testing usually goes quite well…and then comes the pilot launch. This is when things start going off-script: “The new application is really cool, but it takes forever to load my pages.” “Am I doing something wrong, or is it really this slow?” “Why does it take me six or seven minutes to create a sales quote when I could do it in less than five with the old system?” “Why are there no shortcuts to speed things up in the new system?”
The death of a thousand cuts has started, and the dripping sound you hear is the lifeblood gradually draining out of your project. And it is happening simply because it takes people a little longer to do things with the new system. Even if you defined the legacy replacement scope and business needs well, your project may yet fail. So, can application performance and responsiveness truly trump all the other goodness that comes with a new platform? The answer, based on personal experience, is a resounding YES.
As Business Architects and Business Analysts, we must keep a lookout for these issues, since they really can derail an entire project. In my next post on this topic, I will discuss why small performance hits can have such a distressingly significant impact on project success.