
This article was published on April 27, 2020

UK spies must ramp up use of AI to fight new threats, says report

The RUSI study warns that AI is an essential tool to combat emerging threats


Image by: Defence Images

UK spies must use AI to counter cyber attacks and augment intelligence analysis, according to a new study commissioned by eavesdropping agency GCHQ.

The report warns that hostile states and cybercriminals “will undoubtedly seek to use AI to attack the UK,” through malware that mutates to evade detection and automated social engineering attacks that trick people into divulging confidential information.

It also predicts growing threats from deepfakes that manipulate public opinion and interfere with elections, and cyberattacks on national infrastructure.

All of these attacks must be countered with AI, according to the report by the Royal United Services Institute (RUSI), a defense and security think tank.



“Adoption of AI is not just important to help intelligence agencies manage the technical challenge of information overload,” said Alexander Babuta, one of the report’s authors.

“It is highly likely that malicious actors will use AI to attack the UK in numerous ways, and the intelligence community will need to develop new AI-based defense measures,” he added.

Threats and opportunities

The report arrives amid growing concerns about the AI capabilities of UK intelligence agencies.

MI5 reportedly believes it can no longer develop the AI it needs to tackle evolving threats, and wants the private sector to help build the spies’ tech.

The RUSI report shares these concerns, arguing that the UK government needs to take full advantage of AI advances made in the private sector and academia.

However, the authors argue that “none of the AI use cases identified in the research could replace human judgment.”

They see only “limited value” in using the tech to predict how terrorists will behave, as terrorist attacks are too rare to provide enough data to build a risk model. Instead of attempting to predict individual behavior, the report advises agencies to develop “augmented intelligence” systems that support human analysis.

They also call for new privacy safeguards on the tech to protect human rights, but add that too much focus on the risks would stifle innovation. That’s unlikely to allay concerns that the UK is becoming a high-tech surveillance state.

 
