Artificial intelligence is raising the stakes in cybersecurity: It’s strengthening defenses even as it enables faster, more devastating attacks.
The latest report from Malwarebytes Labs, “When artificial intelligence goes awry: Separating science fiction from fact,” analyzes the advantages of using AI and machine learning in cybersecurity, as well as the emerging threats created by cybercriminals using the same technology.
“Will AI be a disruptive tech … in both the good and bad sense? The answer: a definitive yes,” the report states, noting that AI presents “true possibilities for abuse in the near future” — even if they don’t rise to a robots-will-overthrow-us threat level.
Importantly, Malwarebytes Labs Director Adam Kujawa says, AI changes the cybersecurity threat landscape for businesses — including small businesses.
“AI-empowered attacks will be able to craft very specialized emails or other communication, based on data scraped from public sources like social media,” he said.
“These attacks — that would normally require heavy research done by the criminal — would usually only be worth it for large targets. But the use of AI will likely make it far easier for them to launch these attacks at all targets.”
Right now, the biggest threat posed by malicious AI is that the technology will be used to harvest data that is already online. AI can create millions of targeted profiles, which can then be used to spread malware, scams and mass disinformation via AI-controlled bots.
“This means that any trust users will have in the protection of their data — and what they trust — will go out the window,” Kujawa said.
“Alternatively, an AI-powered malware pusher could quickly identify when its malware is being detected and modify it on the fly, to evade detection in the next generation. This happens now manually and with some automated tools — but to a very small degree, something that many security solutions can see through. Now imagine this process happening 10 times faster and the modifications becoming more complex every time.”
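The detect-and-mutate loop Kujawa describes can be understood with a toy sketch. The example below (purely illustrative, not real malware) shows why signature-based defenses struggle against automated mutation: if detection checks only a file’s hash, appending a single harmless byte produces a “new” sample that slips past the blocklist. All names and the payload are hypothetical.

```python
# Toy illustration of why hash/signature detection fails against automated
# mutation. Nothing here is functional malware; the "payload" is inert bytes.
import hashlib

known_signatures = set()  # stand-in for a defender's signature database

def signature(payload: bytes) -> str:
    """Identify a sample by its SHA-256 hash, as naive detection might."""
    return hashlib.sha256(payload).hexdigest()

def is_detected(payload: bytes) -> bool:
    return signature(payload) in known_signatures

def mutate(payload: bytes, counter: int) -> bytes:
    # Appending junk bytes changes the hash without changing "behavior."
    return payload + bytes([counter % 256])

payload = b"inert-toy-payload"
known_signatures.add(signature(payload))  # defenders blocklist the sample
print(is_detected(payload))               # True: the original is caught

generation = 0
while is_detected(payload):               # the automated evasion loop
    payload = mutate(payload, generation)
    generation += 1
print(is_detected(payload), generation)   # False 1: one mutation evades
```

A human attacker runs this cycle slowly; the report’s point is that AI could run it continuously and make each mutation structurally more complex, which is why defenders increasingly watch behavior rather than file signatures.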
If — or when — AI is used maliciously, there will be “alarming consequences,” according to the Malwarebytes report.
AI-enabled malware would be better equipped to scope out its target environment before an attack, the report states, meaning “we could expect harder-to-detect malware, more precisely targeted threats, more convincing phishing, more destructive network-wide malware infections, more effective methods of propagation, more convincing fake news and clickbait, and more cross-platform malware.”
The use of AI and machine learning in cyber attacks means businesses “have to stay ever vigilant now,” said Rodney Gullatte, Jr., a certified ethical hacker and founder of Firma IT Solutions. “The time of 58 percent of businesses not investing in cybersecurity is past — you’ve gotta stop doing that.”
Businesses are constantly falling victim to sophisticated, automated attacks that still involve a human component, he said, and thinking of that human component as primitive or disorganized is a mistake.
“A lot of these hackers don’t wear hoodies and sit in dark rooms,” Gullatte said. “They work in nice corporate offices with suits and they have lunch breaks and 401ks. And they launch these automated attacks that seem to be intelligent, but they’re just highly programmed. So it’ll go and search through Facebook, it’ll search through LinkedIn, it’ll search through all the social media sites and pull up a whole bunch of publicly accessible intel about you that they can then craft to launch attacks on you.
“There are parts of these cyber attacks that use AI,” he said, “and there are parts that are still human. It’s still very much a people-driven problem, and it will take a mix of people and artificial intelligence to help solve it.”
THE BRIGHT SIDE
AI in cybersecurity is not all bad news. In many ways, it’s opening avenues for more effective cyber defenses.
“Machine learning is going to make our defenses a lot more powerful because we’ll be able to learn attack methods — because right now we’re doing the whole cat-and-mouse game,” Gullatte said. “As soon as we create a defense, the adversary is creating an offense that we’ve got to figure out how to defend against, and as soon as we figure that out, they make a new offense.”
Using AI to take a more proactive stance can boost defenses and cut costs.
“With a well-known shortage of skilled IT workers and malware analysts, and changes in the threat landscape moving at breakneck speed, AI-enhanced technologies can step in and automate processes that might take humans much longer to complete,” the Malwarebytes report stated.
“AI used in security tools will empower small businesses beyond their current capabilities — and hopefully for less money than they would spend hiring a whole cybersecurity staff,” Kujawa added.
“These tools will identify anomalous behavior from applications and network traffic, bringing issues directly to the eyes of the business owner — likely with suggested actions to secure systems further.”
Gullatte said heuristic analysis — behavior-based detection designed to spot suspicious characteristics that indicate unknown, new and modified viruses — has been around for some time and already involves AI.
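The idea behind heuristic analysis can be sketched in a few lines: instead of matching a known signature, the scanner scores a sample on suspicious traits and flags it when the combined score crosses a threshold. The trait names, weights, and threshold below are invented for illustration; real products use far richer features and learned models.

```python
# Hypothetical heuristic scoring: weights and threshold are illustrative,
# not drawn from any real security product.
SUSPICIOUS_TRAITS = {
    "packed_executable": 3,      # contents compressed/encrypted to hide code
    "writes_to_startup": 4,      # persists across reboots
    "obfuscated_strings": 2,     # hides URLs, commands, or API names
    "spawns_shell": 3,           # launches a command interpreter
    "contacts_unknown_host": 2,  # traffic to an unrecognized server
}

FLAG_THRESHOLD = 6  # raise to cut false positives, lower to catch more

def heuristic_score(observed_traits: set) -> int:
    """Sum the weights of the suspicious traits observed in a sample."""
    return sum(SUSPICIOUS_TRAITS.get(t, 0) for t in observed_traits)

def is_suspicious(observed_traits: set) -> bool:
    return heuristic_score(observed_traits) >= FLAG_THRESHOLD

# A never-before-seen sample that packs itself and installs persistence
# still trips the heuristic, even with no matching signature on file.
print(is_suspicious({"packed_executable", "writes_to_startup"}))  # True
print(is_suspicious({"obfuscated_strings"}))                      # False
```

This is why heuristics catch modified and unknown variants that exact signature matching misses: the traits persist even when the bytes change.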
Looking ahead, he sees “incredible opportunities to use AI for not only defending systems, but to have automated systems going after adversaries as well — because the attacks are automated, for the most part.”
But Gullatte cautions that human discernment is critical for effective cybersecurity defense.
“You know, I rely on automation to a point,” he said. “I’m old-school now — I still think humans should be involved in the decision-making process, and not leave it 100 percent up to the machine. … We don’t want the machine to make all the decisions. There’s a danger with that.”
The Malwarebytes report outlines the same concerns.
“For now, while AI is mostly a beneficial addition to security solutions, incorrect implementation can result in less-than-optimal results,” it stated. “The use of AI and ML in detections requires constant fine-tuning. Today’s AI lacks the depth of human knowledge needed to ignore benign files that don’t match the expected patterns. If the weave of the neural net is too wide, malware might escape detection; too fine, and the security solution will trigger false positives.”
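The “weave of the neural net” tradeoff the report describes is, at bottom, a detection-threshold problem. The toy example below (with made-up anomaly scores, not any vendor’s model) shows how a high threshold lets malware escape while a low one floods the business owner with false alarms:

```python
# Made-up anomaly scores for the demonstration: higher = more anomalous.
malware_scores = [0.91, 0.78, 0.55]  # true malware; one sample is stealthy
benign_scores  = [0.10, 0.35, 0.62]  # benign files; one behaves unusually

def evaluate(threshold: float):
    """Count malware that escapes and benign files falsely flagged."""
    missed = sum(1 for s in malware_scores if s < threshold)
    false_positives = sum(1 for s in benign_scores if s >= threshold)
    return missed, false_positives

# A "wide weave" (high threshold) lets malware slip through...
print(evaluate(0.80))  # (2, 0): two infections missed
# ...a "fine weave" (low threshold) triggers false positives...
print(evaluate(0.30))  # (0, 2): two benign files quarantined
# ...so the threshold needs constant fine-tuning as the data shifts.
print(evaluate(0.50))  # (0, 1)
```

No single threshold is perfect here, which is the report’s point: as attackers and legitimate software both evolve, the tuning never ends.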
According to Malwarebytes researchers, we can expect to see AI implemented or used against itself for malicious purposes in the next one to three years, in minimal ways. But as developments in other fields progress, opportunities may emerge for cybercriminals to abuse new AI technology.
So far, there’s no fully automated cybersecurity strategy that would be able to overpower AI-driven malware, researchers found, “so to at least be on an equal playing field, we need to get to work.
“Our advantage over AI continues to be the sophistication of human thought patterns and creativity; therefore, human-powered intelligence paired with AI and other technologies will still win out over systems or attacks that rely on AI alone.”
“As long as people stay in the decision-making process and don’t make [AI] 100 percent ‘smart’ for it to think on its own,” Gullatte said, “I think we’ll be OK.”