Should AI Fear Cyber Attacks?

Artificial intelligence (AI) is rapidly finding applications in nearly every walk of life: self-driving cars, social media networks, cyber security companies, and everything in between use it. But a new report published by the SHERPA consortium finds that while human attackers have access to machine learning techniques, they currently focus most of their efforts on manipulating existing AI systems for malicious purposes rather than on building new attacks that use machine learning.

The study’s primary focus is on how malicious actors can abuse AI, machine learning, and smart information systems. The researchers identify a variety of potentially malicious uses for AI that are well within reach of today’s attackers, including the creation of sophisticated disinformation and social engineering campaigns.

And while the researchers found no definitive proof that malicious actors are currently using AI to power cyber attacks, they highlight that adversaries are already attacking and manipulating existing AI systems used by search engines, social media companies, recommendation websites, and more.

F-Secure’s Andy Patel, a researcher with the company’s Artificial Intelligence Center of Excellence, thinks many people would find this surprising. Popular portrayals of AI insinuate that it will turn against us and start attacking people on its own. But the current reality is that humans are attacking AI systems on a regular basis.

“Some humans incorrectly equate machine intelligence with human intelligence, and I think that’s why they associate the threat of AI with killer robots and out-of-control computers,” explained Patel. “But human attacks against AI actually happen all the time. Sybil attacks designed to poison the AI systems people use every day, like recommendation systems, are a common occurrence.

“There are even companies selling services to support this behavior. So ironically, today’s AI systems have more to fear from humans than the other way around.”

Sybil attacks involve a single entity creating and controlling multiple fake accounts in order to manipulate the data that AI uses to make decisions. A popular example of this attack is manipulating search engine rankings or recommendation systems to promote or demote certain pieces of content. However, these attacks can also be used to socially engineer individuals in targeted attack scenarios.
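To make the mechanics concrete, here is a minimal, hypothetical sketch of a Sybil attack against a toy recommendation system that ranks items by average rating. The account names, items, and scores are invented for illustration; real ranking pipelines are far more complex, but the principle of flooding the input data with ratings from controlled fake accounts is the same.

```python
# A toy recommender that ranks items by mean rating, and a Sybil
# attack that skews it. All data here is hypothetical.
from collections import defaultdict

def rank_by_average(ratings):
    """Rank items by their mean rating across all accounts."""
    scores = defaultdict(list)
    for account, item, score in ratings:
        scores[item].append(score)
    return sorted(
        ((sum(s) / len(s), item) for item, s in scores.items()),
        reverse=True,
    )

# Organic ratings from genuine users.
organic = [
    ("alice", "item_a", 5), ("bob", "item_a", 4),
    ("carol", "item_b", 3), ("dave", "item_b", 2),
]

# The attacker controls many fake (Sybil) accounts and uses them all
# to give top marks to the item being promoted.
sybils = [(f"fake_{i}", "item_b", 5) for i in range(20)]

print("Before attack:", rank_by_average(organic))
print("After attack: ", rank_by_average(organic + sybils))
# item_b's average jumps from 2.5 to about 4.8, overtaking item_a.
```

Detecting this is hard precisely because each fake account looks like an ordinary user in isolation; only aggregate behavioral patterns give the coordination away.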

“These types of attacks are already extremely difficult for online service providers to detect and it’s likely that this behavior is far more widespread than anyone fully understands,” added Patel, who’s done extensive research on suspicious activity on Twitter.

But perhaps AI’s most useful application for attackers in the future will be helping them create fake content. The report notes that AI has advanced to a point where it can fabricate extremely realistic written, audio, and visual content. Some AI models have even been withheld from the public to prevent them from being abused by attackers.
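The report presumably alludes to models like OpenAI’s GPT-2, whose larger versions were initially held back over misuse concerns while a smaller version remained public. As a hedged illustration (assuming the Hugging Face transformers library and PyTorch are installed), the following sketch shows how little effort machine-generated text requires once such a model is released; the prompt is invented, and the randomly sampled output is illustrative rather than a claim about quality.

```python
# A minimal sketch of machine-generated text using the publicly
# released small GPT-2 model via Hugging Face transformers.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Officials confirmed today that"
fakes = generator(
    prompt,
    max_new_tokens=40,       # length of each fabricated continuation
    do_sample=True,          # sample rather than decode greedily
    num_return_sequences=3,  # produce three variants of the fake
)

for i, result in enumerate(fakes, 1):
    print(f"--- Fabricated snippet {i} ---")
    print(result["generated_text"])
```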

Patel continued: “At the moment, our ability to create convincing fake content is far more sophisticated and advanced than our ability to detect it. And AI is helping us get better at fabricating audio, video, and images, which will only make disinformation and fake content more sophisticated and harder to detect.

“And there are many different applications for convincing fake content, so I expect it may end up becoming problematic.”

The study was produced by F-Secure and its partners in SHERPA – an EU-funded project founded in 2018 by 11 organizations from six different countries. Additional findings and topics covered in the study include:

  • Adversaries will continue to learn how to compromise AI systems as the technology spreads.
  • The number of ways attackers can manipulate the output of AI makes such attacks difficult to detect and harden against (one such manipulation is sketched after this list).
  • Powers competing to develop better types of AI for offensive/defensive purposes may end up precipitating an ‘AI arms race’.
  • Securing AI systems against attacks may cause ethical issues (for example, increased monitoring of activity may infringe on user privacy).
  • AI tools and models developed by advanced, well-resourced threat actors will eventually proliferate and become adopted by lower-skilled adversaries.
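To ground the second point above, here is a minimal sketch of one classic way to manipulate a model’s output: a fast-gradient-sign-style (FGSM) perturbation against a tiny logistic-regression classifier. The weights and input values are invented for illustration; real attacks target deep networks, but the idea of nudging each input feature in the direction that most changes the output is the same.

```python
# A hypothetical FGSM-style evasion attack on a toy classifier.
# Weights, bias, and input values are invented for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A "trained" model: P(malicious) = sigmoid(w . x + b).
w = np.array([2.0, -1.5, 0.5])
b = -0.2

x = np.array([1.0, 0.3, 0.8])                  # an input the model flags
print("original score:", sigmoid(w @ x + b))   # ~0.85 -> looks malicious

# The score's gradient with respect to the input is proportional to w,
# so stepping each feature against sign(w) pushes the score down fastest.
epsilon = 0.6                                  # attacker's perturbation budget
x_adv = x - epsilon * np.sign(w)
print("evasion score:", sigmoid(w @ x_adv + b))  # ~0.34 -> now looks benign
```

Hardening against this is difficult because the defender must anticipate every direction in which inputs can be nudged, while the attacker only needs to find one that works.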

SHERPA Project Coordinator Professor Bernd Stahl from De Montfort University Leicester says F-Secure’s role in SHERPA as the sole partner from the cyber security industry is helping the project account for how malicious actors can use AI to undermine trust in society.

Stahl said: “Our project’s aim is to understand the ethical and human rights consequences of AI and big data analytics, and to help develop ways of addressing them. This work has to be based on a sound understanding of technical capabilities as well as vulnerabilities, a crucial area of expertise that F-Secure contributes to the consortium.

“We can’t have meaningful conversations about human rights, privacy, or ethics in AI without considering cyber security. And as a trustworthy source of security knowledge, F-Secure’s contributions are a central part of the project.”
