A new report from a group of national security and technology research organizations, including OpenAI, the research lab co-founded by Elon Musk, details the potential security threats posed by the misuse of artificial intelligence.
The report, titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” echoes the sentiments voiced by European leaders gathered at the Munich Security Conference last week. Officials there raised concerns that NATO allies are ill-prepared to manage the potential threats presented by artificial intelligence.
The report focuses on specific ways AI may enable “superhuman hacking” across three security domains: digital, physical, and political, each with its own vulnerabilities and countermeasures.
Here are the risks artificial intelligence presents, according to the report:
Digital Security
In recent years, both state and non-state actors have exploited networks to launch increasingly sophisticated cyberattacks and commit complex cybercrimes.
According to the report, “The use of AI to automate tasks involved in carrying out cyberattacks will alleviate the existing tradeoff between the scale and efficacy of attacks.”
The report envisions cybercriminals weaponizing artificial intelligence to design highly competent, highly realistic chatbots that elicit human trust through interactive dialogue and may eventually even masquerade visually as another person. Once these bots cultivate trust, they can manipulate users into handing over critical data and valuable private information.
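To make the scale problem concrete, here is a minimal, hypothetical Python sketch of the defensive side of that arms race: a crude keyword heuristic that scores a chat message for phishing risk. Everything in it, from the patterns to the weights to the phishing_risk function, is illustrative and not drawn from the report; real filters rely on far richer behavioral and linguistic signals.

```python
import re

# Hypothetical patterns (illustrative only): requests for secrets,
# and the rapport-building language a trust-farming chatbot might use.
SENSITIVE_PATTERNS = [
    r"\bpassword\b",
    r"\bone[- ]time code\b",
    r"\baccount number\b",
    r"\bsocial security\b",
]
RAPPORT_PATTERNS = [
    r"\btrust me\b",
    r"\bas we discussed\b",
    r"\bjust between us\b",
]

def phishing_risk(message: str) -> float:
    """Return a crude 0-1 risk score for a single chat message."""
    text = message.lower()
    sensitive = sum(bool(re.search(p, text)) for p in SENSITIVE_PATTERNS)
    rapport = sum(bool(re.search(p, text)) for p in RAPPORT_PATTERNS)
    # Requests for secrets wrapped in rapport-building language score highest.
    return min(1.0, 0.4 * sensitive + 0.2 * rapport)

if __name__ == "__main__":
    # Scores 0.6: one sensitive request plus one rapport-building phrase.
    print(phishing_risk("Trust me, just send me the one-time code from your bank."))
```

The report’s worry is precisely that AI-driven attackers can generate endless fluent variations that slip past fixed rules like these, tilting the scale-versus-efficacy tradeoff in the attacker’s favor.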
Physical Security
Physical harm inflicted by robots is no longer a futuristic concept explored exclusively on sci-fi television shows. Non-state actors such as ISIS and Hamas are already using non-autonomous drones to conduct attacks. A preview of a world of automated attacks came earlier this year, when Russian ground forces in Syria were attacked by what they described as a swarm of 13 small drones.
The report explains that artificial intelligence will give these attack robots ever greater levels of autonomy, in turn allowing a single person to inflict far more damage.
The physical security section of the report also raises the specter of attacks that subvert cyber-physical systems, like causing autonomous vehicles to crash or remotely hacking a service robot and compelling it to carry out an attack.
Political Security
From social media’s role in elections to the organization of protests and the spread of political messages, computers and the internet have become tools of political power.
The introduction of AI will likely make these existing trends more extreme and create entirely new political dynamics, the report says.
The report imagines a world in which artificial intelligence is used to create fake videos of politicians issuing terrifying statements or acting in appalling ways. The phrase “fake news” will take on an entirely new meaning. How will the average news consumer even begin to know what is true and what is an AI-produced illusion, the report asks.
Existing political strategies, such as using bots that masquerade as people to spread political ideas, will be greatly enhanced by artificial intelligence, allowing propaganda to be “hyper-personalized” based on data that can accurately predict moods, behaviors, and even vulnerable beliefs.
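As a purely illustrative sketch, with hypothetical names and message variants throughout and no code taken from the report, the mechanics of such targeting can be surprisingly simple: predict a trait, then serve the framing expected to resonate with it.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    predicted_mood: str  # hypothetical label, e.g. "anxious", "angry", "neutral"
    top_issue: str       # the topic this user engages with most

# One framing of the same underlying message per predicted mood.
VARIANTS = {
    "anxious": "Experts warn {issue} could put your family at risk.",
    "angry":   "They don't want you to know the truth about {issue}.",
    "neutral": "A new report raises questions about {issue}.",
}

def pick_message(user: UserProfile) -> str:
    """Return the framing predicted to resonate most with this user."""
    template = VARIANTS.get(user.predicted_mood, VARIANTS["neutral"])
    return template.format(issue=user.top_issue)

if __name__ == "__main__":
    profile = UserProfile(predicted_mood="anxious", top_issue="election security")
    print(pick_message(profile))
```

The report’s concern is that AI upgrades the crude segment labels of a toy example like this into fine-grained, continuously updated models of individual moods and beliefs.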
The report goes on to recommend countermeasures such as government oversight and collaboration with technical experts, learning from dual-use fields like cybersecurity, and keeping AI code open, in effect requiring AI systems to explain themselves.