451 Research is regarded as one of the leading suppliers of information about the data centre business, among other tech markets.
In this exclusive interview, Rhonda Ascierto, 451’s research director, data center technologies, and conference co-chair for the DC Summits, gives us an insight into how the company built its reputation, and suggests criteria for enterprises when choosing a data centre.
EM360: Can you tell us about 451 and how it researches the data centre market?
Rhonda Ascierto: 451 covers the spectrum of enterprise IT, from the core to the edge.
We come at the datacenter business from three perspectives: the commercial datacenter business, which dates back to our 2005 acquisition of Tier 1 Research, a company that specifically served colos, multi-tenant data centers and hosting providers; technology and operations, through our traditional approach of looking at disruption and new technologies; and the insight we gain through our sister company, the Uptime Institute.
One thing we’re always asking is, ‘What could disrupt today’s data centers — their designs, the technologies and how they are built, operated and managed?’ It’s a question that we constantly ask of suppliers, of end users and of ourselves.
And our close working relationship with the Uptime Institute is unique. Many people know the Uptime Institute for its datacenter Tier Certifications and its M&O [management and operations] Stamp of Approval. But there is also the global Uptime Institute Network, a membership group of operators from some of the world’s largest, best-run and most efficient data centers.
As 451 analysts, it’s our privilege to be involved with the Uptime Institute Network, and this helps inform our research focus in a very practical way.
EM360: What are the big issues facing data centres right now?
RA: One of the most pressing issues facing datacenters today is capacity management and planning: understanding which workloads are best housed on-site, in a third-party datacenter or in a public cloud. Many factors are in play, with economics, security and data governance being key.
However, many organizations struggle to get accurate information — about their IT demand trends, the full costs of their data centers, their changing IT and business requirements — to support capacity decisions.
Yet at the same time, the pressure on datacenter owners and operators to design and run their data centers to the highest standards has never been greater. They are faced with strong commercial forces, with outsourced datacenter capacity becoming more and more attractive, and with rapid technological changes.
There is so much change in the datacenter industry that any significant technical or business investment, or decision to choose a partner or outsourcing option, requires research and careful consideration.
Another major issue is keeping up with new privacy rules, particularly in the UK and the EU, where executives are grappling with the implications of Brexit.
Developments like rackscale integration and DevOps can have a significant impact, because they can be disruptive and large-scale. Rackscale integration effectively means moving from discrete servers, supplied by companies like HP and Dell, to racks of integrated low-cost and/or specialist components, sharing subsystems and linked through fibre-optic connections. This can completely change the economics and design of datacenters. Open Compute is one example of this.
DevOps seems remote from the physical datacenter, but it will probably also have an effect in the long term. DevOps breaks away from the tight integration of applications with the database, operating system and server.
Instead, it assumes applications will run on a virtualized, cloud-native platform. That means the data may be spread around, or moved around. And it assumes failures are likely. So the need for highly resilient datacenters will most likely diminish over time.
EM360: What are the key issues CIOs need to consider when choosing a data centre or the siting of one?
RA: Well, one of the first things to consider is their organization’s applications — today’s and tomorrow’s — and the likelihood that there will be many that no one has yet thought of.
Every application is different, but many will need to link to each other. Some – many – applications are best sourced or run in the public cloud, as services. Others will be best in colos, and some organizations will definitely need their own data centers, for reasons of cost, control, compliance or security.
But it’s important not to run applications in expensive data centers when there are better options. This should reduce over-provisioning and allow management to focus on the datacenter they need for the applications they need to run.
This means that the best next datacenter for most organizations will be the one that is built for its purpose. It may look entirely different from their other data centers — be it a micro-data center in the basement of a metro high-rise or a 5MW prefabricated modular facility with scant UPS [uninterruptible power supply] coverage — but its design, location and operational management will have been carefully thought out to support its business requirement.
We sometimes call this the triple-A datacenter. Data centers must offer good availability and agility, and be aligned with the customers and services they serve.
It is no longer a viable strategy to view data centers as connected, secure property, or as a generic operation providing all customers with a vanilla service. Data centers must be designed to optimize uptime with the help of the best technology and thoroughly applied processes. But even that is not enough.
They must also be agile, efficient and aligned. Agile, meaning they must be able to adapt if the workloads, competitive environment or technologies change. They need to be efficiently and appropriately designed, resourced and optimized for the customer’s needs at any point. And they must be aligned to the business environment and the appropriate workloads and applications.
This requires careful multi-disciplinary analysis of their current and future capacity requirements, and of new technologies and operational approaches, ranging from micro-data centers to lithium-ion batteries, from Open Compute and Open19 to disaster recovery.
Guiding this analysis should be the future requirements of the business, and the ability to exploit new opportunities enabled by data centers, such as edge of network compute, the Internet of Things, and even artificial intelligence.