Today's IT installations have grown to massive size in datacentres, in an effort to support demand for online services, but the next generation of IT, with an “Internet of Things”, promises a level of scale and complexity that we have not yet begun to face up to.
This is the view of Mark Burgess, a physicist, professor, and IT entrepreneur, who has studied infrastructure for the past 25 years. Already, he says, we are falling into old habits to try to make it work, but he believes we need to think about things in a new way.
Our first instinct for managing IT infrastructure is the ‘command and control’ model: a central authority watches over everything and pulls levers and strings to operate it, like Cape Canaveral, or a giant brain.
“Years in front of the television with a remote control have left us hard pressed to think of any other way of making machines work for us; but, the truth is that point-and-click, imperative command, and remote execution don’t scale well when you are trying to govern the behaviour of a large number of things”, says Burgess. “It’s too slow and too fragile.”
Command and control, with remote managers, struggles to keep pace because it is an essentially manual, human-centric activity. “If we want smart responsive services, we need to limit the scope of responsibility, and decouple intent from action to make them largely self-maintaining.” Thankfully, a simple way out of this dilemma was proposed in 2005 by Burgess and collaborators, and it is acquiring a growing band of disciples in computing and networking. It is based on so-called Promise Theory.
Promise theory shifts attention away from algorithms and control to outcomes and constraints. It predicts that rapid control should be kept as close to the point of action as possible, i.e. decentralised more like DNA than a brain; but there is still room for brain models for slower coordination. The problem with thinking about real-time remote control is that it only scales by sheer concentrated effort, and that has limits. Remote parts of a system might be unable or unwilling to comply with instructions from a brain, and all the responsibility is placed on a controller, who is basically at the mercy of a reliable connection. “There is a reason why organisms with brains get slower as they get bigger, and why the blue whale is the largest organism there is. Anything bigger would just be too slow and unreliable to survive.”
In a promise-based design, each part of a system is a priori independent, more like ants than whales, and tries its best to behave according to simple promises it makes to others. Instead of instruction from without, we have behaviour promised from within. Since the promises are kept by ‘self’ (each part, whether a human self or a machine self), adapting to context can always be done with knowledge of the circumstances under which implementation takes place. Moreover, if two promises conflict with one another, the agent has complete information to resolve the conflict without having to wait for external help.
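The idea can be made concrete with a minimal sketch, assuming nothing about Burgess's actual tooling: every name and behaviour below is invented for illustration. Each agent voluntarily promises outcomes for its own state and repairs any drift using only local knowledge, with no central controller issuing commands.

```python
# A hypothetical promise-keeping agent: autonomous, self-repairing,
# acting only on its own state. All names here are illustrative.

class Agent:
    def __init__(self, name):
        self.name = name
        self.promises = {}   # property -> promised value
        self.state = {}      # property -> actual value

    def promise(self, prop, value):
        """Voluntarily declare an intended outcome for one property."""
        self.promises[prop] = value

    def converge(self):
        """Repair any drift between actual and promised state.

        Conflicts are resolved locally (here: the latest promise wins),
        without waiting for instructions from outside.
        """
        repaired = []
        for prop, value in self.promises.items():
            if self.state.get(prop) != value:
                self.state[prop] = value
                repaired.append(prop)
        return repaired

web = Agent("web-01")
web.promise("service", "nginx-running")
web.promise("port", 80)
web.state["port"] = 8080        # drift from the promised state
print(web.converge())           # the agent repairs itself locally
```

The essential point is that `converge` never consults a remote brain: intent (the promises) is decoupled from action (the repair loop), so the agent stays correct even if its network connection disappears.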
A promise-oriented view is somewhat like a service view. Instead of trying to remote control things with strings and levers, one makes use of an ecosystem of promised services that advertise intent and offer a basic level of certainty about how they will behave. Promises are about expectation management, and knowing the services and their properties that will help us to compose a working system. It doesn’t matter here how we get components in a system to make the kinds of promises we need; that is a separate problem.
Electronics are built in this way. You buy off-the-shelf components that promise properties like resistance, capacitance, and switching, and you combine them based on your expectations into a cooperative circuit that makes a new promise (like being a radio transmitter or a computer). This is also how service-oriented architecture is evolving in IT design.
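In the spirit of the electronics analogy, one might sketch composition by promises like this (a toy illustration, not any real framework): components advertise what they promise, and a composite checks its expectations against those advertised promises instead of controlling the parts' internals.

```python
# Hypothetical sketch of expectation management: a composite is built
# from parts whose advertised promises must cover its requirements.

class Component:
    def __init__(self, name, promises):
        self.name = name
        self.promises = set(promises)   # properties this part promises

def compose(name, parts, required):
    """Combine parts into a new component, provided the parts between
    them promise everything the composite depends on."""
    offered = set().union(*(p.promises for p in parts))
    missing = set(required) - offered
    if missing:
        raise ValueError(f"{name}: unmet expectations: {sorted(missing)}")
    return Component(name, [name])      # the whole makes a new promise

resistor = Component("resistor", ["resistance"])
capacitor = Component("capacitor", ["capacitance"])
oscillator = compose("oscillator", [resistor, capacitor],
                     required=["resistance", "capacitance"])
print(oscillator.promises)              # {'oscillator'}
```

Note that `compose` only inspects what the parts promise, never how they keep those promises; how components come to make the promises we need is, as the article says, a separate problem.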
The main challenge in this view is how to see the desired effect emerge from these promises. What story do we tell about the system? In imperative programming languages, the linear story is the code itself. In a promise language, however, the human story is only implicit in a set of enabling promises, and we have to tell the story of what happened differently. For some, this is a difficult transition to make, in the noble tradition of declarative languages like Prolog and Lisp. However, at sufficient scale and complexity, human stories are already so hard to tell that the promise approach becomes necessary.
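The contrast between the two kinds of story can be sketched as follows; everything here is invented for illustration. In the imperative version the code is the linear narrative; in the declarative version only outcomes are stated, and “what happened” has to be read off from which promises needed repair.

```python
# Imperative: the code IS the story -- a fixed sequence of steps.
def deploy_imperatively(server):
    server["installed"] = True      # step 1: install the package
    server["config"] = "v2"         # step 2: write the config
    server["running"] = True        # step 3: start the service

# Declarative: only desired outcomes are stated; no ordering narrative.
DESIRED = {"installed": True, "config": "v2", "running": True}

def converge(server):
    """Repair drift and return the story of what actually changed."""
    story = []
    for key, want in DESIRED.items():
        if server.get(key) != want:
            server[key] = want
            story.append(f"repaired {key} -> {want}")
    return story

server = {"installed": True, "config": "v1"}
print(converge(server))   # the story emerges from repairs, not code order
```

Run against a different starting state, `converge` tells a different story from the same promises, which is exactly why the narrative is implicit rather than written down.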
Promises turn design and control into a form of knowledge management, which Burgess believes is the main challenge of the future. They shift the attention away from what changes (or which algorithms to enact), onto what interfaces exist between components and what promises they keep and why. The service-oriented style of programming, made famous by Amazon and Netflix, essentially uses this approach for scalability: not only machine scalability, but scaling of human collaboration too.
Promise Theory reminds us that decentralisation is the route to very large scale. Applications have to be extensible by cooperation (sometimes called horizontal scaling through parallelism rather than vertical scaling through brute force).
“Interestingly, biology has selected redundant services as its model for scaling tissue-based organisms. This offers a strong clue that we are on the right track. Avoiding strong dependencies is a way to avoid bottlenecks, so this shows the route to scalability.”
Autonomy and standalone thinking seem to fly in the face of what we normally learn about programming, i.e. to separate and silo resources into classes, but this is not necessarily true. Hyperconvergence of infrastructure is already emerging in response to the scaling problems of siloed services, shifting to a more cellular view of the self-sufficient component. Security and scalability both thrive under autonomy, and complexity melts away when the dependencies between parts are removed and all control comes from within.
If Burgess is right, the near future world of ubiquitous mobile and embedded devices is going to outgrow remote-controlled micromanagement, and the sprawl of the cloud’s server backend will force us to decentralise our datacentres into embedded resources, only shepherded by service providers. This is how an Internet that includes a rapidly evolving marketplace of Things can keep its promises, and be the future platform for society.