The concept of a software-defined data centre has been around for some time, and while it’s been widely acknowledged as the future of enterprise architecture, it’s still relatively uncommon.
Only a small percentage of data centres could be described as being totally software-defined.
One of the reasons for this, according to Gartner, is that it’s still early days for the technology and it is not yet fully understood.
“Infrastructure and operations leaders need to understand the business case, best use cases and risks of an SDDC,” said Dave Russell, vice president and analyst at Gartner. “Due to its current immaturity, the SDDC is most appropriate for visionary organizations with advanced expertise in infrastructure and operations engineering and architecture.”
However, Gartner adds that the SDDC is “crucial to the long-term evolution of an agile digital business”.
At its heart, the SDDC is all about abstraction. But such is the level of abstraction in computing these days – with abstraction upon abstraction to the point of distraction – there’s a stage at which the abstraction becomes unreal to even the most imaginative of tech experts.
But virtualisation – which is sort of like the first level of abstraction – is technology that most people in the industry have got their heads round by now, and that’s the subject with which Enterprise Management 360º started a conversation with Graham Brown, managing director of Gyrocom.
Scope for Gyrocom
Gyrocom describes itself as “the software defined data centre experts”, and was established 10 years ago.
“We started our business to help organisations solve their data networking problems in large and medium-sized enterprises,” Brown says. “So, routing, switching, and so on, in one of the silos of the data centre.
“As we’ve matured as an organisation, we’ve built a practice that now specialises in what we term the software-defined data centre. So, it’s across all three silos of the data centre – network, compute, and storage. And it’s about looking at a model that allows companies to commoditise the underlying ‘tin’ element, if you like, and move all of the functionality into an abstracted software layer.
“We work with various partners, like Nutanix, VMware, IBM, and others, to deliver the software-defined data centre model to our customers. We can do that in their private environment, in their own data centres, and we can deliver that in the public cloud as well.”
The very definition of SDDC
We’re not quite at the explanation of what an SDDC is yet, but before we get there, Brown has something interesting to say about silos. “If we’re engaging with a customer that’s working to a traditional IT model – the model that’s been developed over the last 15 to 20 years – you’ve got a set of people that are specialists in the compute and, now, server virtualisation space. That’s one side of it, one specialisation in a client’s IT structure.
“There are two more main silos that we tend to get within an infrastructure model within a company. So we’ve got the compute one, and then you’ve got the storage one – which requires a completely different skillset in a traditional model – so, looking at storage area networks: big, petabytes’ worth of storage for large enterprise customers. And that requires a completely different set of people with a completely different set of skills.
“And the last silo is networks – so, connecting everything together, routing, switching, and also security tends to fit within this area.
“So you’ve got three silos, and they almost work independently of each other, although they’ve got to come together into a consolidated infrastructure delivery for IT.
“What we’re finding now is that those silos are converging,” continues Brown, and they’re converging as a consequence of the traditional “appliance-based” approach being phased out.
“The traditional approach is one where you might buy a network switch from Cisco, storage hardware from Hitachi or EMC, and compute from HP with maybe a little bit of VMware.
“That whole appliance-based approach is now changing. There are economies that can be delivered from commoditising those appliances, because ultimately they all run off the same bits and bobs, if I can use that phrase.
“So that underlying hardware element is commoditising and the functionality is abstracting up into the level of software.
“One of the advantages you get from abstracting it all into software across all three silos is that it’s much easier to automate.”
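To make Brown’s point concrete, here is a minimal sketch in Python of what “everything defined in software” makes possible. The classes and the `provision_app` function are invented for illustration – they don’t correspond to any real vendor API – but they show the idea: once compute, network and storage are all software objects, standing up a full stack for a new application becomes one repeatable, automatable step rather than three separate appliance configurations.

```python
from dataclasses import dataclass

# Hypothetical, simplified objects standing in for the three data-centre
# silos once each is defined in software rather than as an appliance.
@dataclass
class VirtualMachine:
    name: str
    vcpus: int
    ram_gb: int

@dataclass
class VirtualNetwork:
    name: str
    vlan: int

@dataclass
class VirtualDisk:
    name: str
    size_gb: int

def provision_app(name: str) -> dict:
    """Provision compute, network and storage for one application in a
    single, repeatable step -- the kind of automation Brown describes."""
    return {
        "compute": VirtualMachine(f"{name}-vm", vcpus=4, ram_gb=16),
        "network": VirtualNetwork(f"{name}-net", vlan=100),
        "storage": VirtualDisk(f"{name}-disk", size_gb=200),
    }

stack = provision_app("crm")
print(stack["compute"].name)  # crm-vm
```

Because the whole stack is created by code, the same call can sit behind a self-service portal or a deployment pipeline – which is exactly why abstraction into software makes automation so much easier.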
Automating our way into the ether
Automation, says Brown, enables organisations to deliver digital transformation initiatives quickly – whether that means rolling out a new application or responding to new customer requirements. This is far harder with the traditional enterprise architecture model.
“Actually the back-end infrastructure in the traditional model is too inflexible,” says Brown. “You want a high degree of automation. Elevating it to software allows people to achieve that.
“So if you think about the way server virtualisation works – in the sense that you can take a server and define it in software, abstract it from the underlying physical server that it sits on – you can now do the same for networks, and you can do the same for storage.
“And bringing all that together is the software-defined data centre.”
There you have it. Probably the clearest explanation you will get of what an SDDC is and what its capabilities are.
But ultimately all this data – no matter how many stages of abstraction it goes through – requires some hardware somewhere down the line, probably all the way along the line: it needs storage disks to rest on, servers in which to interact, and network cables through which to travel. You can’t abstract data into thin air and substitute Ethernet for ether… can you?
“Absolutely,” says Brown. “Of course it needs hardware.”
Software needs hardware
So if everything’s abstracted to the point where data centres themselves are figments of an automation algorithm’s imagination, is anything real any more? How does this thing actually work?
Brown gives an example of how one company, Nutanix, provides a solution. “Nutanix is an appliance vendor, but Nutanix would position themselves as a software vendor. So it’s the Nutanix software that sits on a commodity x86 server with commodity disks or flash storage.
“But it runs on HP, Dell, and other hardware. And there’s an increasing number of these organisations that come under the banner of what they call ‘white box’ vendors.
“Super Micro, Quanta, even Intel, are delivering platforms that are essentially exactly the same as what you would buy from an HP, Lenovo or Dell, but you don’t have the associated mark-up.
“It’s very much a commodity proposition. So, if you’re looking to drive costs out of delivering what is essentially the same thing, you can deliver the same functionality on the commodity hardware.”
As mentioned earlier, relatively few data centres have gone down the software-defined route, even though it’s seen as almost inevitable by experts such as Gartner. The catalyst for change, according to Brown, will be when the adoption of Open Compute standards by big companies such as Facebook, Google, and others starts to influence the wider industry.
Open Compute is a collaborative community which shares innovations in infrastructure design.
Beginning to dawn
Brown says customers are beginning to understand the benefits of moving to the new software-defined model for data centres. He says Gyrocom has a solution which is delivered in partnership with IBM that can deliver a software-defined data centre in four hours.
“One of the beautiful parts, one of the building blocks of a software-defined data centre, is that we take what’s called a hyper-converged route, which takes compute and storage and delivers them in functional units rather than in huge blocks that can cost millions of dollars.
“A customer’s requirements can be very small or very large, but as you add these functional blocks of compute and storage – delivered on commodity hardware – it scales in a linear way to, theoretically, an infinite level.”
So, rather than buy IT infrastructure in a large block at the beginning of a five-year plan, an organisation could start small and “scale forever”, says Brown. “The levels of efficiency that we’re driving with the software-defined data centre model over that of a traditional IT model are huge.”
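Brown’s “start small and scale forever” point is easy to see with some back-of-the-envelope arithmetic. The per-node figures below are invented for illustration, not real pricing, but they show what linear scaling means in practice: every capacity and cost number grows in direct proportion to the node count, so there’s no big up-front block to buy.

```python
# Rough illustration of the linear, pay-as-you-grow scaling Brown
# describes. All per-node figures are assumptions for the example.
NODE_COST = 25_000       # assumed cost of one hyper-converged node ($)
NODE_STORAGE_TB = 20     # assumed usable storage per node (TB)
NODE_VCPUS = 64          # assumed compute per node (vCPUs)

def cluster(nodes: int) -> dict:
    """Capacity and cost grow in direct proportion to node count."""
    return {
        "cost": nodes * NODE_COST,
        "storage_tb": nodes * NODE_STORAGE_TB,
        "vcpus": nodes * NODE_VCPUS,
    }

small = cluster(3)    # start small: $75,000 for 60 TB and 192 vCPUs
large = cluster(30)   # scale up later: 10x the nodes, exactly 10x everything
```

Contrast that with the traditional model, where the choice is between over-buying a multi-million-dollar block on day one or running out of headroom mid-way through the five-year plan.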
While we’ve gone to great lengths to provide what we hope is useful insight into the subject, Gartner has a more succinct explanation.
“An SDDC is a data center in which all the infrastructure is virtualized and delivered as-a-service,” says the research firm. “This enables increased levels of automation and flexibility that will underpin business agility through the increased adoption of cloud services and enable modern IT approaches such as DevOps.”
Gartner adds: “Today, most organizations are not ready to begin adoption and should proceed with caution.
“By 2020, however, Gartner predicts the programmatic capabilities of an SDDC will be considered a requirement for 75 per cent of Global 2000 enterprises that seek to implement a DevOps approach and a hybrid cloud model.”
In another article on EM360, we took a look at the data centre switch market, and how Facebook has built hardware that could change everything.
Facebook’s “Backpack” is the company’s latest open-source modular switch platform, and is designed to take the social networking giant’s data centre infrastructure from 40G to 100G.
Facebook built Backpack for itself rather than to sell to data centres, but it is contributing the hardware’s design to the Open Compute Project.
Here’s an article on TechTarget.com which might be useful if you’re deciding what type of switch you need for your data centre.
Data centre hardware can be expensive, especially if it’s from the leading brands. A top-of-rack, 10-gig switch from a big-name vendor would leave small change from $50,000. When you need one of these switches for every rack in your data centre, that leaves you with a lot of small change.
Take the commodity approach – buy hardware that does essentially the same thing and install an open-source operating system onto it – and the cost would be a little more than $5,000.
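The saving the commodity argument implies is simple to work out from the article’s own figures – roughly $50,000 per big-name top-of-rack switch versus roughly $5,000 for a white-box equivalent. The rack count below is an assumption for the sake of the example, since the gap scales with whatever size of data centre you run.

```python
# Back-of-the-envelope comparison using the article's rough figures.
BRAND_SWITCH = 50_000     # big-name top-of-rack 10-gig switch ($)
WHITEBOX_SWITCH = 5_000   # white-box switch with an open-source OS ($)

def fabric_cost(racks: int, per_switch: int) -> int:
    # One top-of-rack switch per rack, as in the article's scenario.
    return racks * per_switch

racks = 40                                        # assumed data-centre size
brand = fabric_cost(racks, BRAND_SWITCH)          # $2,000,000
commodity = fabric_cost(racks, WHITEBOX_SWITCH)   # $200,000
saving = brand - commodity                        # $1,800,000
```

At that ratio, the brand-name fabric costs ten times as much for essentially the same switching functionality – which is the whole commoditisation argument in one line of arithmetic.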