By James Henry, Consulting Practice Director, Auriga
In an age where technology is continually evolving and improving, we still seem no closer to achieving the nirvana of threat-free computing. Why?
It’s now thirty years since Brain.A, the first PC virus, was created, spawning a myriad of attacks that seek out and exploit software fallibilities. Just as exposure to a virus strengthens human immunity, one would hope software would naturally improve over time. But the fact is that there is very little evidence that current security practices are working and, contrary to what we are told, it’s not simply a question of whether security is keeping pace with the bad guys, whether there’s enough money to throw at the problem, or whether executives fail to understand that security is not a hindrance.
These excuses have been bandied about for the last fifteen years but the far less savoury truth is that it’s in the interest of the security community for threats to continue to get through.
As Rupert Goodwins points out in a piece reflecting on the industry, there is very little hard evidence by which we can measure the scale of the problem, how effectively it is being dealt with, or how effective solutions are at stopping attacks. Statistics drawn from select pools of participants are inevitably turned to the vendor’s advantage and help them peddle their wares. This is especially true in the UK and across Europe, where it’s not mandatory to report a breach and seldom in the victim’s interests to do so. In contrast, in the US, reporting a data breach is compulsory in 47 out of 50 states.
This lack of visibility has made it near impossible to accurately determine the costs involved or to extrapolate any other information, such as how often mitigation succeeds. Together, these factors have created the perfect environment for industry pundits to trade in Fear, Uncertainty and Doubt (FUD) and to market solutions that are seldom called to account on whether they deliver. Take anti-virus. Once seen as a major defence mechanism, AV now seems about as effective a means of securing the network as a colander is of holding water. Signature-based detection means AV vendors are forever playing catch-up, and many were failing to update their software for months at a time. Symantec even went as far as to put the nail in the coffin by declaring AV dead.
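To see why signature-based detection always plays catch-up, consider a minimal sketch (the payloads and signature set here are invented purely for illustration, not real malware): a scanner that hashes a file and checks it against a database of known-bad hashes catches only exact matches, so even a trivially modified variant slips through until a new signature ships.

```python
import hashlib

# Hypothetical signature database: hashes of previously seen bad payloads.
KNOWN_BAD = {hashlib.sha256(b"malicious-payload-v1").hexdigest()}

def signature_scan(payload: bytes) -> bool:
    """Flag a payload only if its hash matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD

# The original sample is caught...
print(signature_scan(b"malicious-payload-v1"))  # True
# ...but a one-byte variant sails straight past the scanner.
print(signature_scan(b"malicious-payload-v2"))  # False
```

Every new variant forces a new signature, which is exactly the catch-up cycle the vendors are locked into.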
Another point made by Goodwins is the lack of ‘joined up’ thinking. There will be one hawker of phishing solutions, another purveyor of practical staff training, and yet another offering a box to sit on the network and police specific aspects, such as endpoint security. A holistic view still seems an impossible ask. But we could well be entering a golden age when it comes to security thanks to an unlikely saviour: AI.
Machine learning will enable us to transform the bells and whistles into automated, self-learning solutions that evolve in concert with attacks rather than after they have become manifest. We are already moving away from the network-centric defence model towards a more proactive posture: monitoring external as well as internal networks for evidence of anomalous behaviour, which in turn gives us the capability to anticipate how threats will evolve and to forecast rather than react to them. Unlike signature-based methods, these solutions, such as the next-generation Security Operations Center (SOC), will use rapid computing across disparate touch points (social media, forums, the dark web…) to create a more cohesive picture.
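As a rough illustration of the anomaly-based alternative (the traffic figures and threshold below are invented for illustration), a monitor can baseline a host’s normal behaviour and flag statistical outliers, with no prior signature of the attack required:

```python
from statistics import mean, stdev

# Hypothetical baseline: daily outbound megabytes for one host.
baseline = [40, 42, 38, 45, 41, 39, 44, 43]

def is_anomalous(observed_mb: float, history: list, threshold: float = 3.0) -> bool:
    """Flag behaviour more than `threshold` standard deviations from the norm."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed_mb - mu) > threshold * sigma

print(is_anomalous(41, baseline))   # False: within normal range
print(is_anomalous(400, baseline))  # True: exfiltration-sized spike
```

The point is the inversion of the model: instead of asking “have we seen this exact threat before?”, the monitor asks “does this behaviour fit what we know to be normal?”, which is what lets it catch attacks it has never seen.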
Goodwins concludes that the security industry has run amok, unchecked, because it is unanswerable to any global governing body. I would take issue with that. The industry is self-regulating and has numerous effective standards, which have made massive strides in creating and evangelising security best practice, from the CESG schemes to ISO 27001 and Cyber Essentials. The imminent adoption of the EU GDPR, or its equivalent, will also do much to streamline reporting and should give us the realistic statistics we, as a sector, so desperately need in order to improve. But do we need to invest in a global means of standardisation? I would argue that we already have one, in the form of threat intelligence.
Threats are global, and so is threat intelligence. An all-seeing capability will give us near real-time intelligence and a window into the evolution of threats. The litmus test will be the industry’s ability to gather that intelligence, present it, and mount an appropriate and proportionate response. There will be no more hiding behind miracle solutions or massaging of statistics. Those who peddle ineffective solutions will have nowhere to hide once threat intelligence becomes so pervasive that the scales fall from the eyes of the modern enterprise. In this way, threat intelligence will become a defining metric in an industry that has for too long fought an unknown enemy with dated tools.