Christian Byrnes, managing vice president at Gartner, explores some key principles for getting the most out of the IT security disciplines we see executed today.
What is catching your interest in the security industry at the moment?
The trends in information security are relatively clear. The rate of change has accelerated yet again, primarily on the attacker side. The tools we have can be mostly effective if you are very good at deploying them and at ensuring you cover all the bases. However, given today's volatile environment, eventually you will be compromised. You need to have the sensors in place, metaphorically speaking, so that you can detect the compromise. The US Government revealed that a staggering 1,000-plus organisations were compromised during a six-month period in 2014 without ever detecting it themselves; it was the FBI that discovered the intrusions and had to notify the companies that they had, in fact, been compromised. For that many organisations not to have noticed they were compromised means we have a long way to go in assuring the security of our networks and systems.
What does this leave for the Chief Information Security Officer (CISO)? How can they adapt in this new information security paradigm?
The CISO is responsible for assuring as much security as possible without significantly impairing business operations. Now, I have to admit that the security profession has given itself a bit of a black eye over the decades in that we have a history of trying to over-secure. A lot of security officers try to eliminate all risk. We can't do that anymore. Business people certainly understand that you can't do it.
Computers are general-purpose devices that will carry vulnerabilities. The mission, of course, is to manage those vulnerabilities down to the level of risk the business is willing to accept, given the constraints that risk management imposes.
However, an organisation that over-secures will become uncompetitive and an organisation that under-secures will become compromised. There is a middle point that can be reached that simply accepts the fact that there may be some security failures but that we will rapidly detect and heal them. That really is where we have to focus at this point in time. If you ask me again ten years from now, I’m very hopeful that the situation will be different — that the technologies that we have will truly be effective. However, I said the same thing ten years ago and the situation hasn’t gotten better — it’s gotten worse.
Is this also because organisations have become, more or less, entrenched in their own data?
That is certainly part of it. There are a number of factors. Business depends on data and its external value, and intellectual property is now a saleable commodity on the open markets. The internet has obviously been a boon for business, but it has a downside: it allows things like 'dark markets' to evolve. You always have to remember that in this type of environment the attackers have an asymmetric advantage, which simply means that it takes us far more time and effort to protect against them than it takes them to attack successfully.
What are some of the unique challenges that organisations face when they’re actually moving towards continuous monitoring and risk mitigation?
The question is how effective automated systems can be at continuous monitoring. A continuous monitoring environment requires a great deal of fine-tuning and adaptation, and even with that it requires human intervention. We have a problem currently in that the systems are not quite smart enough to provide accurate, consistent and immediate information to human operators, who have to wade through a relatively high volume of information to determine whether something actually represents a compromise. We've had various vendors work on that over the years, and we now see signs of real success: there are vendors delivering products that can do realistic anomaly detection in some networks. It's not universal yet, but improvement in that arena is coming fairly rapidly, and I would like to believe that in the next three to five years we will be at a point where continuous monitoring is not only cost-effective but genuinely effective.
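To illustrate the signal-versus-noise problem described here, the following is a minimal sketch (not any vendor's product) of the kind of statistical baselining that underlies simple anomaly detection: flag a reading only when it deviates several standard deviations from historical behaviour, so operators are not buried in routine fluctuations. The function name, the sample data and the three-sigma threshold are all hypothetical choices for the example.

```python
from statistics import mean, stdev

def flag_anomaly(baseline, current, threshold=3.0):
    """Return True if `current` deviates more than `threshold`
    standard deviations from the historical `baseline` readings."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        # A perfectly flat baseline: any change at all is anomalous.
        return current != mu
    return abs(current - mu) / sigma > threshold

# Hypothetical hourly login-failure counts from recent history.
history = [12, 9, 14, 11, 10, 13, 12, 9, 11, 10]

print(flag_anomaly(history, 11))  # False: an ordinary hour
print(flag_anomaly(history, 90))  # True: a burst worth a human look
```

Real products layer far more on top of this (seasonality, multivariate correlation, learned baselines), but the design point is the same one Byrnes raises: the detector's job is to hand humans a short, high-confidence list rather than the raw event stream.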
Last year Apple’s iCloud was in the headlines for all the wrong reasons as personal images of female celebrities were stolen and then released online. Will enterprises be more considerate with their cloud security programs or strategies as a consequence?
Well, first of all, I have no inside knowledge of the Apple compromise. However, I believe it will have no impact on enterprise approaches to security. The type of protective mechanisms we typically have in an enterprise system is sufficient to defeat that style of attack. With any consumer-orientated, publicly accessible system, some compromises may be made in favour of ease of use for market acceptance. In the enterprise we don't have the same issues: we can mandate a certain password length, and can set the systems up to reject repeated password-guessing attempts, with the capability to lock accounts if the password fails three, four or five times in a row. Consumer-orientated systems don't have that flexibility, so they do tend to be a bit more open to this form of attack.
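The account-lockout behaviour described above can be sketched in a few lines. This is a deliberately simplified illustration, assuming an in-memory account with a plaintext password for brevity (real systems compare salted hashes, add back-off delays and log every attempt); the class name and the five-attempt limit are hypothetical choices for the example.

```python
from dataclasses import dataclass

MAX_ATTEMPTS = 5  # lock the account after five consecutive failures

@dataclass
class Account:
    password: str      # plaintext only for illustration
    failures: int = 0
    locked: bool = False

    def try_login(self, attempt: str) -> bool:
        if self.locked:
            return False  # locked accounts reject everything
        if attempt == self.password:
            self.failures = 0  # success resets the failure counter
            return True
        self.failures += 1
        if self.failures >= MAX_ATTEMPTS:
            self.locked = True
        return False

acct = Account(password="s3cret")
for guess in ["a", "b", "c", "d", "e"]:  # five wrong guesses in a row
    acct.try_login(guess)

print(acct.locked)               # True: the account is now locked
print(acct.try_login("s3cret"))  # False: even the correct password is rejected
```

The flexibility Byrnes points to is exactly this: an enterprise can set `MAX_ATTEMPTS` aggressively because it controls its user population, while a consumer service risks locking out legitimate customers who simply forget their passwords.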
The full article features in our June 2015 magazine available here