Beware of the fake Obama!

Report warns of the malicious use of AI in the coming decade

Experts on the security implications of emerging technologies have authored a report warning of the potential malicious use of AI by rogue states, criminals, and terrorists. Forecasting rapid growth in cyber-crime, the report calls on governments and corporations worldwide to address the dangers inherent in malicious applications of AI.

“The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”

Alongside these warnings, the report recommends interventions to mitigate the threats posed by the malicious use of AI:

  • Policy-makers and technical researchers need to work together to prepare for the malicious use of AI.
  • AI has positive applications, but is a dual-use technology. So, researchers and engineers should be mindful of and proactive about the potential for its misuse.
  • Best practices should be learned from disciplines with a history of handling dual-use risks, such as computer security.
  • The range of stakeholders engaged in preventing and mitigating the risks of malicious use of AI should be actively expanded.

The co-authors come from organizations including Oxford University’s Future of Humanity Institute, Cambridge University’s Centre for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, and the Center for a New American Security, among others.

The report identifies three security domains – digital, physical, and political – as essential to understanding the misuse of AI. It suggests that AI will erode the traditional trade-off between the scale and efficacy of attacks, enabling attacks that are both large-scale and finely targeted.

The authors expect cyber-attacks such as automated hacking, speech synthesis for impersonating targets, finely targeted spam emails built from information scraped from social media, and the exploitation of vulnerabilities in AI systems themselves, for example through adversarial examples and data poisoning.
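
To make one of these vulnerabilities concrete, here is a minimal sketch of the fast gradient sign method, a standard technique for crafting adversarial examples. It is not taken from the report, and `model`, `image`, and `label` are assumed placeholders for a PyTorch image classifier, an input tensor with values in [0, 1], and its true label.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Craft an adversarial example with the fast gradient sign method."""
    # Work on a detached copy of the input and track gradients with respect to it.
    image = image.clone().detach().requires_grad_(True)
    # Classification loss of the model's prediction against the true label.
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel by epsilon in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return perturbed.clamp(0, 1).detach()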

Similarly, the proliferation of drones and other cyber-physical systems will allow attackers to repurpose such systems for harmful ends. The results could be as damaging as crashing fleets of autonomous vehicles, turning commercial drones into face-targeting missiles, or holding critical infrastructure to ransom.

The rise of autonomous weapons systems on the battlefield risks the loss of meaningful human control and presents tempting targets for attack.


In the political sphere, targeted propaganda and highly believable fake videos offer powerful tools for manipulating public opinion at unprecedented scale.

Rediscovering cyber-security and promoting awareness

To reduce these threats, the authors explore options such as rethinking cyber-security, promoting a culture of responsibility, and pursuing both institutional and technological solutions.

Miles Brundage, Research Fellow at Oxford University’s Future of Humanity Institute, commented: “AI will alter the landscape of risk for citizens, organizations and states. It is often the case that AI systems don’t merely reach human levels of performance but significantly surpass it.”

“It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour.”

Source: University of Cambridge