Christopher Glynn is a principal consultant at IT change specialists ECS.
Opinions expressed by EM360 contributors are their own.
In many organisations it can take days or even weeks to stand up consistent application environments. Some large enterprises count the delay in months.
Access to replica environments is critical: it supports the development and testing of new or changed code, the resolution of production incidents, and ad‑hoc reporting and analysis. Because environment management underpins cloud adoption and other data centre consolidation projects, any friction inherent in traditional approaches directly impacts the business.
As a result, there has been a race to deploy DevOps tools and embrace containerisation technology in an effort to remove some of these roadblocks and accelerate the standing up of realistic end‑to‑end replica environments.
Unfortunately, it’s not straightforward. Enterprises wanting to harness these tools to speed up innovation are constrained by an amalgam of legacy platforms, including mainframes, and by using extracts of production data that are almost impossible to manage.
Replicating complex environments, including all dependent applications, databases and systems management tooling, is both time- and resource-intensive. Even once provisioned, a full end‑to‑end environment still needs to be maintained and kept up to date. One client we worked with was using data extracted from production fifteen years ago to support testing activity today: unsurprisingly, the lead time required to re‑provision with up‑to‑date data was deemed prohibitive.
Containerisation helps reduce the time and effort required to stand up an application stack and to ensure build consistency. However, most containerisation tools have platform constraints, which make it difficult to find one solution to handle an entire estate. This is an issue for those enterprises dependent on a mix of modern and legacy platforms.
Another weakness is that containerisation tools do not address the data layer. This means that containerising any application stack requires the data to be provisioned and managed separately, adding to the time and cost burden.
It is this failure to readily integrate dependent applications across modern and legacy platforms and multiple data sources that gives enterprises so many headaches. The scale of the problem is highlighted by LzLabs’ observation that over 70% of all transactions still run on legacy platforms, and it compounds when, according to Gartner, “At a minimum, your organisation should have three additional nonproduction environments for each major application.”
To overcome this bottleneck and accelerate the delivery of consistent, full, truly end‑to‑end agile environments, you need to approach the problem from a new perspective. By integrating tools from multiple vendors – for example, LzLabs, Docker and Delphix – it is possible to reduce the friction of traditional environment management and deliver end-to-end environments in a fraction of the time.
By using combinations of tools within an agile environments framework from a single management console, enterprises can stand up complete, end-to-end replica environments – encompassing applications, platforms (including legacy platforms) and data sources – in just minutes.
The approach is technology-agnostic and can be customised to work with many DevOps and DataOps tools. A typical deployment brings together data virtualisation, containerisation and mainframe emulation technologies. It is the way in which these different technologies come together that makes it possible to provision complete virtual environments at the push of a button, comprising exact replicas of the multiple heterogeneous components that define the master environment. Furthermore, the ability to mask data sourced from production ensures that dependent data sources in the same environment retain full referential integrity.
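Masking tools such as Delphix implement this natively, but the underlying principle can be sketched in a few lines of Python. The idea: replace each sensitive key with a deterministic pseudonym (here an HMAC, purely an illustrative choice; the key name and table shapes are hypothetical) so that the same customer ID masks to the same token in every table, and foreign-key joins still line up after masking.

```python
import hmac
import hashlib

# Secret held by the masking process; hypothetical value for illustration.
MASKING_KEY = b"rotate-me-and-keep-out-of-source-control"

def mask_id(value: str) -> str:
    """Deterministically pseudonymise a key: identical inputs always
    produce identical tokens, so referential integrity survives."""
    return hmac.new(MASKING_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

# Two "tables" sharing customer_id as the join key.
customers = [{"customer_id": "C1001", "name": "Jane Doe"}]
orders = [{"order_id": "O9", "customer_id": "C1001", "amount": 120}]

masked_customers = [
    {**row, "customer_id": mask_id(row["customer_id"]), "name": "MASKED"}
    for row in customers
]
masked_orders = [
    {**row, "customer_id": mask_id(row["customer_id"])} for row in orders
]

# The raw ID is gone, yet the masked IDs still match across tables.
assert masked_customers[0]["customer_id"] == masked_orders[0]["customer_id"]
```

Because the pseudonym is derived rather than randomly assigned, every table masked with the same key remains joinable, which is what keeps a masked end-to-end environment usable for realistic testing.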
Using the latest data virtualisation technology, it is even possible to define and stand up an environment based on a specific set of time-stamped data. Multiple copies can be provisioned separately and in parallel, and each environment can be updated, rewound or refreshed completely independently.
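Commercial data virtualisation platforms achieve this with copy-on-write block sharing; the behaviour can be sketched, in a deliberately simplified in-memory form with all names hypothetical, as time-stamped snapshots from which cheap clones are provisioned, each able to rewind on its own:

```python
import copy

class VirtualDataSource:
    """Toy model of a virtualised data source: snapshots are point-in-time
    bookmarks, and each clone evolves (or rewinds) independently."""

    def __init__(self, data):
        self.data = dict(data)
        self.snapshots = {}  # label -> captured point-in-time state

    def snapshot(self, label):
        self.snapshots[label] = copy.deepcopy(self.data)

    def clone(self, label):
        # Provision a new environment from a time-stamped snapshot.
        child = VirtualDataSource(self.snapshots[label])
        child.snapshots = dict(self.snapshots)
        return child

    def rewind(self, label):
        self.data = copy.deepcopy(self.snapshots[label])

master = VirtualDataSource({"balance": 100})
master.snapshot("friday-close")

env_a = master.clone("friday-close")  # two testers, parallel copies
env_b = master.clone("friday-close")
env_a.data["balance"] = 0             # destructive test in environment A...
env_a.rewind("friday-close")          # ...then reset it, touching nothing else

assert env_a.data["balance"] == 100 and env_b.data["balance"] == 100
```

In a real deployment the clones share unchanged storage blocks rather than holding full copies, which is why many parallel environments can be cheaper to hold than a handful of physical ones.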
The results are compelling. For one large banking customer we were able to show a reduction in the time required to stand up a full test environment with fresh data, from two weeks to under 30 minutes, while remaining compliant for data protection purposes. And the benefits don’t stop there: storage requirements fell by a factor of ten, and up to ten developers could test in parallel instead of sequentially as before. This makes it possible to boost annual deployments by a factor of ten, leading to faster releases and a stronger competitive position.
In summary, many enterprises are looking for a reliable way to accelerate innovation and boost agility in order to compete more effectively in a market bulging with smaller, more nimble challengers. By combining existing tools in innovative ways, enterprises can unblock their environment management bottlenecks and put agility right at the heart of their business.