The good news: Cybersecurity still needs humans.
The bad news: Cyber threats are so numerous that humans alone can’t handle them — and they’re so nuanced, they can’t all be farmed out to artificial intelligence.
“[T]he issue is that we’re dealing with so many vulnerabilities that we can’t possibly answer all of them by putting people in front of screens. We have to find a better solution,” said Steven Fulton, director of Regis University’s Colorado Front Range Center for Information Assurance Studies. “Machine learning is one way to do this. Several companies are leading the way in doing so … but the bottom line is that we aren’t ever going to be able to train enough ‘perfect cyber analysts’ who can quickly and efficiently recognize and respond to adversaries as quickly as the adversaries can attack our systems.”
Automation and AI are certainly impacting cybersecurity, said Dan Likarish, associate professor in Regis’ Information Systems Department — but the details need to be understood.
“Cybersecurity defense is relying on AI and automation by correlation of bad actors against known and discovered data sets,” he said in an email. “Network data generates humongous bytes of data; important infrastructure threat data is collected and reduced to consumable bits (no pun intended) through correlation across aggregate dynamic data sets. That means large computing resources applied to the threat volume, with people staring at flashing monitors. Machine learning (automation) can help analysts find the needle in the haystack or burn the haystack down.
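The "needle in the haystack" idea Likarish describes can be sketched in a few lines. The following is a minimal, hypothetical illustration — not any vendor's actual engine — that flags hosts whose traffic volume deviates sharply from the baseline of their peers; the IP addresses and byte counts are invented for the example:

```python
from statistics import mean, stdev

# Hypothetical per-host byte counts from a flow log; one host is the "needle."
flows = {
    "10.0.0.2": 1_200, "10.0.0.3": 980, "10.0.0.4": 1_150,
    "10.0.0.5": 1_050, "10.0.0.6": 1_010, "10.0.0.7": 48_000,
}

def flag_outliers(counts, threshold=2.0):
    """Flag hosts whose traffic sits more than `threshold` standard
    deviations above the mean across all observed hosts."""
    mu = mean(counts.values())
    sigma = stdev(counts.values())
    return [host for host, c in counts.items() if (c - mu) / sigma > threshold]
```

Here `flag_outliers(flows)` singles out 10.0.0.7, sparing an analyst from staring at the other five hosts. Real systems correlate far richer features across dynamic data sets, but the principle — statistics narrowing the haystack so humans triage only the anomalies — is the same.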
“The key word is ‘dynamic’ because the threat is always changing. The [next generation] approach is to move faster than the adversary — [and] that means more sophisticated, human-independent machine learning running on hot hardware.”
But this intersection of data science and cybersecurity creates a single point of failure. What happens, Likarish asked, when the decision engine produces bad decisions?
When the system works and the cyber attacker can be reasonably identified (“Remember, this is a statistically driven guessing game,” Likarish noted), their identity is shared with security vendors, and protective measures can be rapidly pushed out to infrastructure, systems and endpoints.
Regardless, Likarish said, “automation and labor-saving is in its infancy; we are many years from effective machine learning algorithms.”
The way threats are currently addressed is “messy and complicated because decision-making involves insightful human intervention,” he added. “When machine learning can simulate not just analysis but data interpretation, then we have arrived. The current state of the [cybersecurity] industry relies on analysts making decisions based on the output of the AI/automation tools. I would guess within the near future — crystal ball time — the analysts will be replaced with sophisticated decision engines that outperform the adversary.”
Likarish estimated it’s “10-plus years before human roles are replaced” by automation in cybersecurity.
“I’m not sure that computers will ever totally replace humans in this environment,” Fulton added, “but I do think that we’re seeing quicker reactions by companies which are able to point to abnormalities and suggest that humans look at these abnormalities. If I’m asked to suggest companies who are doing this, Crowdstrike jumps to mind — as well as some of the more traditional companies such as Symantec or McAfee.”
Cisco’s 2018 Cybersecurity Report notes the rise of artificial intelligence, saying that more organizations are turning to machine learning and AI in the face of elevated threats and massive increases in encryption, which reduces visibility and “provides malicious actors with a powerful tool to conceal command-and-control activity … [and] more time to inflict damage.”
AI and machine learning capabilities allow organizations to “spot unusual patterns in large volumes of encrypted web traffic,” the report said. “Security teams can then investigate further.”
Cisco’s survey of 3,600 chief information security officers for the 2018 Security Capabilities Benchmark Study revealed that most are “eager to add tools that use artificial intelligence and machine learning, and believe their security infrastructure is growing in sophistication and intelligence.
“However, they are also frustrated by the number of false positives such systems generate, since false positives increase the security team’s workload,” the report added. “These concerns should ease over time as machine learning and artificial intelligence technologies mature and learn what is ‘normal’ activity in the network environments they are monitoring.”
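The trade-off the report describes — more alerts caught versus more false positives for the team to triage — comes down to where a detection threshold is set. A toy sketch, using invented anomaly scores rather than any real product's output:

```python
# Hypothetical anomaly scores a detection model might assign (0-1 scale).
benign_scores = [0.05, 0.12, 0.30, 0.45, 0.55, 0.62, 0.10, 0.08]
malicious_scores = [0.70, 0.85, 0.91, 0.66]

def alert_counts(threshold):
    """Return (true_positives, false_positives) if every score above
    `threshold` raises an alert for an analyst to triage."""
    tp = sum(1 for s in malicious_scores if s > threshold)
    fp = sum(1 for s in benign_scores if s > threshold)
    return tp, fp
```

With these made-up numbers, a threshold of 0.5 catches all four malicious events but also raises two false alarms; raising it to 0.65 keeps all four detections and drops the false positives to zero. As systems learn what "normal" looks like in a given network, the benign and malicious score distributions separate, which is why the report expects the false-positive burden to ease as the technology matures.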
Thirty-nine percent of security professionals told Cisco researchers they are completely reliant on automation, and another 44 percent said they are heavily reliant. Thirty-four percent are completely reliant on machine learning, and 32 percent said they are completely reliant on artificial intelligence.
For locating malicious actors in networks, 92 percent of CISOs said behavior analytics tools work very well or extremely well.
Likarish said he wouldn’t be surprised if a vendor already has the first prototype learning engine that outperforms humans for simple cybersecurity problems.
“IBM’s WATSON is being prototyped within this space,” he said. “The interesting recent change in [cybersecurity] people’s life is shared open source threat intelligence. It is a game changer because the white hats [ethical hackers] are finally sharing information at an analyst level, across industry groups.
“Once information sharing is automated and pushed to endpoints, routers and firewalls, the game will get real interesting because the current adversaries are not terribly sophisticated,” he added, “rather they are avoiding deadbolts and going after open front doors. Machine learning will cause some of the adversaries to act smarter, turning the machine learning engines against the defenders.
“Remember, vendors are not just selling to the white hats.”
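The automated information sharing Likarish anticipates reduces, at its simplest, to aggregating indicator feeds and matching them against local activity. A hypothetical sketch using documentation-reserved IP addresses (no real feeds or infrastructure):

```python
# Hypothetical indicator feeds shared across industry groups, plus local logs.
feed_a = {"203.0.113.7", "198.51.100.22"}
feed_b = {"198.51.100.22", "192.0.2.99"}
local_connections = ["10.0.0.5", "192.0.2.99", "10.0.0.8"]

# Merge the shared feeds into one blocklist, then match local connections
# against it -- the step that would be pushed to endpoints and firewalls.
blocklist = feed_a | feed_b
hits = [ip for ip in local_connections if ip in blocklist]
```

Here one local connection matches the merged blocklist and can be cut automatically. It is exactly this kind of mechanical matching — cheap for defenders, visible to attackers — that Likarish expects will push adversaries toward turning machine learning against the defenders themselves.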