Intelligence agencies have used AI since the cold war—but now face new security challenges

Recent publicity around the artificial intelligence chatbot ChatGPT has led to a great deal of public concern about its growth and potential. Italy recently banned the latest version, citing privacy concerns over its ability to use personal information without permission.

But intelligence agencies, including the CIA, which is responsible for foreign intelligence for the US, and its sister organization, the National Security Agency (NSA), have been using earlier forms of AI since the start of the cold war.

Machine translation of foreign language documents laid the foundation for modern-day natural language processing (NLP) techniques. NLP helps machines understand human language, enabling them to carry out simple tasks, such as spell checks.
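To make that concrete, a spell checker can be reduced to finding the dictionary word with the smallest edit distance to what the user typed. The minimal Python sketch below uses an invented four-word vocabulary and is purely illustrative, not any agency's system:

```python
# Minimal sketch of a simple NLP task: spell checking by edit distance.
# The vocabulary and the misspelled word are made up for illustration.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two words."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def correct(word: str, vocabulary: list[str]) -> str:
    """Return the vocabulary word closest to the input word."""
    return min(vocabulary, key=lambda v: edit_distance(word, v))

vocab = ["intelligence", "agency", "translation", "analysis"]
print(correct("inteligence", vocab))  # -> "intelligence"
```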

Towards the end of the cold war, AI-driven systems were developed to reproduce the decision-making of human experts in image analysis, helping to identify possible targets for terrorists by analyzing information over time and using it to make predictions.

In the 21st century, organizations working in international security around the globe are using AI to help them find, as former US director of national intelligence Dan Coats said in 2017, “innovative ways to exploit and establish relevance and ensure the veracity” of the information they deal with.

Coats said budgetary constraints, human limitations and increasing levels of information were making it impossible for intelligence agencies to produce analysis fast enough for policy makers.

The Office of the Director of National Intelligence, which oversees US intelligence operations, issued the AIM (Augmenting Intelligence using Machines) Initiative in 2019. The strategy is designed to enable agencies like the CIA to process huge amounts of data faster than before, freeing human intelligence officers to deal with other tasks.

AI creates both opportunities and challenges for intelligence agencies. While it can help protect networks from cyber-attacks, it can also be used by hostile individuals or agencies to attack vulnerabilities, install malware, steal information or disrupt and deny use of digital systems.

AI cyber-attacks have become a “critical threat”, according to Alberto Domingo, technical director of cyberspace at NATO Allied Command Transformation, who called for international regulation to slow the number of attacks, which he said is “increasing exponentially”.

AI that analyses surveillance data can also reflect human biases. Research into facial recognition programs has shown they are often worse at identifying women and people with darker skin tones because they have predominantly been trained using data on white men. This has led to police being banned from using facial recognition in cities including Boston and San Francisco.
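One common way researchers expose this kind of bias is to measure a model's accuracy separately for each demographic group and compare the gaps. The short sketch below does exactly that; the group labels, identities and predictions are invented for illustration only:

```python
# Hedged sketch: compare a face-recognition model's accuracy across groups.
# All records here are hypothetical; real audits use labelled benchmark sets.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_id, true_id) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

sample = [
    ("lighter-skinned men", "id_1", "id_1"),
    ("lighter-skinned men", "id_2", "id_2"),
    ("darker-skinned women", "id_3", "id_7"),
    ("darker-skinned women", "id_4", "id_4"),
]
print(accuracy_by_group(sample))
# e.g. {'lighter-skinned men': 1.0, 'darker-skinned women': 0.5}
```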

Such is the concern about AI-driven surveillance that researchers have designed counter-surveillance software aimed at fooling AI analysis of sounds, using a combination of predictive learning and data analysis.
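The general idea behind such tools can be illustrated with an adversarial-perturbation sketch: add a small, deliberately shaped signal to the audio so that an automated detector's output changes while a human hears little difference. The toy example below applies a fast-gradient-sign-style perturbation to a made-up linear "detector"; it is only a sketch of the principle, not the researchers' actual software:

```python
# Toy adversarial-audio sketch. The "detector" is an arbitrary linear model,
# the audio is random noise, and epsilon is chosen arbitrarily.
import numpy as np

rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)        # one second of fake 16 kHz audio
weights = rng.standard_normal(16000)      # toy linear "speech detector"

def score(signal: np.ndarray) -> float:
    """Toy detector: higher score = more confident 'speech detected'."""
    return float(weights @ signal)

# Fast-gradient-sign-style perturbation: for a linear model, the gradient of
# the score with respect to the input is simply the weight vector.
epsilon = 0.01
perturbation = -epsilon * np.sign(weights)   # push the score downwards
masked = audio + perturbation

print(score(audio), score(masked))           # the masked score is lower
```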
