CYBERUK Peter Garraghan – CEO of Mindgard and professor of distributed systems at Lancaster University – asked the CYBERUK audience for a show of hands: how many had banned generative AI in their organizations? Three hands went up.
“And how many, in your deepest of hearts, actually have a good grasp of the security risks involved in AI system controls, by a show of hands?”
Not a single hand was raised among the 200-strong, security-savvy crowd.
“So everyone’s using generative AI, but no one has a grasp of how secure it is in the system,” Garraghan replied. “The cat’s out of the bag.”
This snippet from a session at the UK National Cyber Security Centre’s (NCSC) annual conference last week vividly illustrates how some organizations are haphazardly deploying AI without much consideration for the wider implications.
It’s also the exact thing the agency is actively trying to dissuade businesses and government departments from doing due to the increased attack surface these risky deployments create, especially for those with roles in critical supply chains.
The NCSC launched a report on the matter on day one of CYBERUK 2025. Not only did it note that “there is a realistic possibility” that critical systems could become vulnerable to advanced attackers by 2027, but also that all organizations which fail to integrate AI into their cyber defenses before then would become materially riskier to a new breed of cybercriminals.
Launched by senior minister Pat McFadden, the report claimed that by 2027, AI-empowered attackers will be further reducing the time-to-exploitation of vulnerabilities. In recent years, this has been reduced to days, and the agency is certain this will continue to shorten as AI-assisted vulnerability research becomes more popular.
An NCSC spokesperson told The Register: “Organizations and systems that do not keep pace with AI-enabled threats risk becoming points of further fragility within supply chains, due to their increased potential exposure to vulnerabilities and subsequent exploitation. This will intensify the overall threat to the UK’s digital infrastructure and supply chains across the economy.
“The NCSC’s supply chain guidance is designed to help organizations gain effective control and oversight over their supply chains. We encourage organizations to use this resource to better understand and manage the risks.
“This is also why market incentives need to exist, to drive up resilience at scale, at an increased velocity.”
AI getting entrenched… before safeguards in place
Ensuring the cybersecurity fundamentals are applied across the board when deploying AI systems, which experts expect to be developed more swiftly than securely in a rush to gain market share, will be crucial in mitigating the threat AI presents to entities.
AI models are fast becoming more deeply entrenched in organizations’ systems, data, and operational technology, the report noted, and the common attacks associated with AI then become dangerous to those business assets.
Think direct and indirect prompt injections, as well as software vulnerabilities and supply chain attacks. With AI-connected systems, these attacks can all facilitate wider access to enterprise environments, and the necessary controls must be in place to mitigate these risks.
Garraghan told of a recent pentest his company did for a candle shop’s AI chatbot – the type of AI tech most businesses are racing to deploy right now to keep up with the corporate Joneses.
The chatbot used a large language model (LLM) to help the company sell candles. According to Garraghan, it was deployed insecurely and his firm was able to break it, causing security, safety, and business risks.
Security risks in this case could be that prompt engineering leads to a reverse shell on the application and an attacker extracts system data. Safety risks could involve engineering the chatbot to provide instructions about how many candles it would take to burn a house down, and business risks could arise if the chatbot could be engineered to divulge information about how to make the company’s candles.
These specific outcomes did not occur at the same company, but in Garraghan’s view serve as the realistic potential results of deploying AI tools without the proper governance in place.
The NCSC warned about the potential risks too, saying insecure data handling processes and configurations could result in transmitted data being intercepted, credentials being stolen, or user data being abused in targeted attacks.
Asked how it plans to support UK organizations in meeting the demand for cyber resilience to AI-assisted cyberattacks, the NCSC said to keep an eye out for its guidance and advice pieces being published throughout the year.
A spokesperson told The Reg: “Cyber threat actors are almost certainly already using AI to enhance existing tactics, techniques and procedures, and so it is vital that organizations of all sizes ensure they have a strong baseline of cybersecurity to defend themselves.
“The NCSC, alongside government, are continuously focused on improving digital resilience across the UK. This includes publishing a range of advice and guidance to help organizations take action and increase their resilience to cyber threats.
“For those most in need, we expect the largest technology companies, who are often their suppliers, to adjust to the future threat and deliver on their corporate social responsibility.” ®
0 Comments