The National Crime Agency (NCA) will “closely examine” the recommendations made by the Alan Turing Institute after it claimed the UK was ill-equipped to tackle AI-enabled crime.
A report from the institute’s Centre for Emerging Technology and Security (CETaS), published this week, had a few pointers – and advised the NCA to start by establishing a task force specifically focused on AI-enabled crime within the next five years.
The Alan Turing Institute reckons that even though AI-enabled crime is still in its infancy, malign forces are upping their skillsets and UK law enforcement needs to adapt in kind.
Asked about the recommendations, an NCA spokesperson told The Register: “The National Crime Agency highlighted the growing use of artificial intelligence to commit a range of high-harm crimes – including child sexual abuse, cybercrime, and fraud – in its National Strategic Assessment published in March, and we welcome the Alan Turing Institute bringing further attention to this issue. Their recommendations will be closely examined.”
The core findings of CETaS’ report were that the UK’s police and other law enforcement agencies have been slow to adapt to the emergence of AI. It said the country was aware of the threat, but has done little to harness AI itself for defense.
Two unnamed academics interviewed as part of the research expressed their concerns, with one saying there is an “enormous gap between the technical capability of law enforcement in the UK and the nature of the problem.”
Another said they were “very concerned about the police’s ability to understand what is out there, deal with it and utilize AI itself.”
The institute said AI-specific legislation may work in the long term to mitigate the harm AI can enable in the wrong hands, but in the short term law enforcement must be better at adopting, procuring, and mainstreaming AI as part of its routine crime-fighting efforts.
In essence, it must fight AI with AI.
“As AI tools continue to advance, criminals and fraudsters will exploit them, challenging law enforcement and making it even more difficult for potential victims to distinguish between what’s real and what’s fake,” said Ardi Janjeva, senior research associate at the Alan Turing Institute.
“It’s crucial that agencies fighting crime develop effective ways to mitigate this, including combatting AI with AI.”
The NCA acknowledged the threat of AI-enabled crime and said it is working to counter it. Alex Murray, director of threats at the UK’s premier police force and the first national lead for policing AI, is exploring the use of AI to empower crime fighters and increase efficiencies.
AI crime: A threat for now and the future
Although AI-enabled crime is still believed to be in the early stages of its evolution, it has already resulted in some highly successful heists.

The $25 million deepfake CFO story from last year is one example, and was cited by the Alan Turing Institute too. But the abuse of AI extends beyond deepfakes.
Cybersecurity experts have warned, at length, of the impact AI is having on phishing. Easy access to sophisticated tools allows low-skilled, non-native speakers of local languages to target victims in prosperous countries by having AI craft convincing email copy free of the spelling mistakes that previously helped targets weed out the spam.
The institute also warned of AI’s role in helping scammers of all stripes, including those of the romance variety, craft messages to build stronger bonds with victims, all while using deepfake tech to pass as celebrities, for example.
Early efforts to use AI to combat scams, whether those scams were AI-enabled or not, include UK telco O2’s time-wasting AI granny Daisy, but sophisticated counters to fraud have not yet come to the fore.
The current threat is clear, and as AI develops, new tools are expected to hand criminals greater capabilities, such as automating attacks that currently require manual control.
“As AI capabilities continue to advance, fraudsters will likely refine their use of LLMs and deepfake media, blending automated deception with strategic human oversight,” the report stated.
“Future scams may leverage increasingly convincing real-time AI interactions, reducing the need for direct human involvement. The evolution of AI-driven relationship-building tactics underscores the growing challenge of distinguishing between authentic and manipulated digital identities.” ®