How to avoid common pitfalls when migrating legacy IT systems

Tags: digital transformation, IT, legacy
Roy Wood is the Managing Director of IT Services at Advanced.
Opinions expressed by EM360 contributors are their own.


As digital transformation accelerates, many IT teams are challenged with navigating between legacy and new technologies while introducing innovative solutions to support key business processes. Legacy systems support core business processes, key workflows and services – that’s why many are still in use today – but they can also become a source of inefficiency, holding organisations back from successful change management plans. They usually consume more financial, physical and human resources than their modern counterparts.

This challenge is borne out by research from analyst firm IDC, which finds that 70% of IT executives view the burden of legacy application systems as one of their top problems. Similarly, Forrester argues that digital business cannot be built on top of old, monolithic legacy applications. Businesses are increasingly recognising the need to move their legacy technologies into the modern world, but ensuring that mission-critical applications – which typically run on old and expensive proprietary environments – evolve to meet their needs is no small task.

The solution? Migrate, re-architect, replace or decommission these technologies. The first option is often the most effective choice for organisations that wish to safeguard and future-proof their intellectual property (IP). Application migration (also commonly referred to as re-platforming or re-hosting) can be accomplished with minimal risk, and at a fraction of the cost of redeveloping or buying a new package, which means businesses can maximise the return on investment in their legacy applications while modernising to remain competitive.

With the right analysis and technical support, businesses can effectively balance their current workload while moving legacy applications to modern environments with minimal disruption. The factors governing the decision to migrate will include which open systems platform (such as Linux) is the strategic choice for the business, the costs and timescales, and how simple or difficult the migration might be. Regardless of whether the chosen platform is Windows, Linux or Unix, ensuring the project is a success requires some key elements. Avoiding common mistakes will help ensure transformation takes place effectively and within time and budget.

1. Going for a Request For Proposal (RFP) and requiring a firm price
RFPs never provide sufficient information and certainly don’t provide the sources of all the assets. An RFP is not a substitute for a deep assessment project where the sources of the application are analysed in detail, the options for migrating each asset agreed with the customer, the risks agreed and mitigated, a migration roadmap created, and the migration properly costed and scheduled.

Our experience in helping organisations migrate their business-critical applications repeatedly reaffirms that the investigative time spent before starting a comprehensive migration pays for itself many times over. It also facilitates sensible decisions and the selection of appropriate strategic alternatives. Furthermore, an ‘at distance’ or overly strict RFP process can hinder the open exchange of information and collaboration that foster the partnership approach that works best for these types of projects.

2. Trying to modernise too much
A migration should be, as far as possible, a 1:1 conversion of the functionality. Obviously, the database is changed, but IT teams should refrain from any dramatic change – for example, to the architecture of the application, the user interface or the business logic. Such changes extend the project, usually substantially, and make testing much more extensive and complicated, leading to greater long-term costs. Once the 1:1 migration is successfully achieved, businesses can then execute incremental modernisation enhancements.

3. Failure to use a specialist vendor with proven migration tools and methodology to ensure the project completes on time and within budget
To ensure consistency and quality, and to eliminate human error in the conversion process, businesses need experienced personnel with a proven ability to use specialist tooling. High levels of automation will ensure delivery keeps to its timescales and that change can be accommodated during the project itself. So choosing a company with the right people and a proven product suite is as critically important as following the right methodology.

Migration specialists will call out key elements such as starting customer testing (among other considerations) as early as possible. This is achieved by dividing the application into a series of manageable work packets.

The first work packet should be a relatively small subset of modules which cover most, or preferably all, of the different application asset types. Once this is proven, further work packets can be delivered with increasing volumes of assets. Customer testing should continue for over half of the project, in parallel with the vendor migrating each subsequent work packet. By the delivery of the last work packet, the customer should be very confident that the application has been migrated successfully, which allows the final acceptance testing period to be tightly controlled.

4. Customer undertakes too little testing
Although migration requires less testing than a rewrite, it is still important for IT teams to construct a formal test plan at the beginning of the project and adhere to it. Testing should also include performance and load testing.

5. Failure to plan the implementation
Ensure all aspects are carefully documented and tested before the live cutover. This should include data migration – is there a sufficient time window to migrate all of the data, for example? Consider parallel running. Organisations must also have a rollback plan in place in case the implementation needs to be aborted.
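The time-window question above can be sanity-checked with simple arithmetic well before cutover day. The sketch below is purely illustrative – the data volume, sustained throughput and validation overhead are hypothetical placeholders, not figures from any real migration:

```python
# Back-of-envelope check: will the data migration fit the cutover window?
# All figures below are hypothetical examples.

def migration_hours(data_gb: float, throughput_gb_per_hour: float,
                    validation_overhead: float = 0.25) -> float:
    """Estimated hours to copy the data, plus a margin for validation checks."""
    copy_hours = data_gb / throughput_gb_per_hour
    return copy_hours * (1 + validation_overhead)

# Example: 2 TB of data at a sustained 100 GB/hour, with a 24-hour weekend window.
needed = migration_hours(2000, 100)   # 20h copy + 25% validation margin = 25h
fits = needed <= 24
print(f"Estimated: {needed:.1f}h; fits 24h window: {fits}")
# → prints "Estimated: 25.0h; fits 24h window: False"
```

In this hypothetical case the estimate exceeds the window, which is exactly the situation where parallel running, a phased data load or a rollback plan – as discussed above – becomes essential.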

Reassuringly, most of these pitfalls can be avoided, especially in conjunction with an initial comprehensive assessment project – and with the support of a technology partner that fully understands the critical nature of an organisation’s applications. With this in mind, businesses must ensure their chosen partner has the proven migration tools and methodology, and they must together create and execute a fully scoped plan for the main migration activities and not simply the conversion of the legacy assets. After all, “if you fail to plan, you plan to fail.”

Springfields Fuels, which is the site of the UK’s main nuclear fuel manufacturing operations, is a great example of an organisation that accomplished a successful migration. Working closely with us as its partner, it moved its mission-critical nuclear fuels application from an ageing OpenVMS VAX platform to a flexible Windows Server environment as part of its digital transformation journey.

In a modern environment, Springfields can now reduce its operating costs and obtain greater reliability for the future. Other organisations can achieve the same benefits and free themselves from the burden of legacy application systems. It’s just about finding a partner with a proven track record in building an ecosystem of innovation, to ensure the common pitfalls are avoided.