CANALYS FORUMS APAC Generative AI is being enthusiastically adopted in almost every field, but infosec experts are divided on whether it is truly helpful for red team raiders who test enterprise systems.
“Red teaming” sees infosec pros simulate attacks to identify vulnerabilities. It’s a commonly used tactic that has been adapted to test the workings of generative AI applications by bombarding them with huge numbers of prompts in the hope some produce problematic results that developers can repair.
Red teams wield AI as well as testing it. In May, IBM’s red team told The Register it used AI to analyze info at a major tech manufacturer’s IT estate, and found a flaw in an HR portal that allowed wide access. Big Blue’s red team reckoned AI shortened the time required to find and target that flaw.
Panel prognostications
The recent Canalys APAC Forum in Indonesia convened a panel to ponder the use of AI in red teaming, but also what it means to become dependent on it – and more importantly, its legality.
IBM APAC Ecosystem CTO Purushothama Shenoy suggested using AI for red teaming could be helpful “to break the system yourself in a much more ethical manner.”
He predicts AI will speed threat hunting by scouring multiple data feeds, applications, and other sources of performance data, and do so as part of large-scale automated workflows.
But Shenoy told us he worries that as AI adopters build those systems, and other AI apps, they’ll make the classic mistake of not stopping to consider the risks they pose.
“It will replace some human tasks, but you don’t want an over-reliance on them,” said Mert Mustafa, APAC sales partner ecosystem GM for security shop eSentire.
Kuo Yoong, head of cloud at distributor Synnex’s Australian operations, warned that generative AI often doesn’t detail how it produces its output, which makes it hard for a red team to explain its actions – or defend them to governance pros or a court of law.
“AI can’t go on the stand and explain how it went through those activities to find threats,” explained Yoong.
Criminals don’t worry about those sorts of legal concerns, so will likely use AI to power their attacks.
Panelists at Canalys’s event therefore suggested AI will “transform” cyber security.
“We’re going to have to use more and more of it,” claimed Mustafa.
Another panelist, Galaxy Office Automation’s director of cybersecurity and networking Nishant Jalan, suggested there should be limits to the use of generative AI in cyber security to prevent over-consumption. He also advocated for regulations and policies to govern it.
Perhaps positions are premature
Other experts from whom The Register sought opinion questioned whether generative AI is sufficiently mature for use by red teams.
“The use of Gen AI for security operations is in the early stages. Use cases will evolve and new ones will emerge,” Canalys analyst Matthew Ball told The Reg by email. The firm expects to have more research on the topic next year.
CISO at cyber security biz Acronis Kevin Reed told us he thinks AI is not ready to join red teams, but may be suitable for their close cousins – penetration testers. “Penetration tests focus on finding vulnerabilities in a system or network, testing technical controls and are usually pretty direct, while red teaming is more about testing organizational controls and staying undetected,” explained Reed. “LLMs aren’t ready for that yet. They’re more suited for pentests.”
Some pentest efforts he is aware of that are already underway have had success at running commands in specific stages of a multi-stage attack – but struggle with full automation.
“I think current LLMs don’t have enough memory to handle all the context needed,” he concluded.
But is it legal?
When it comes to legality Bryan Tan, partner at tech-centric law firm Reed Smith, believes the relevant question to ask is who is responsible for the generative AI conducting the pentest?
His guess is that liability falls on the operator providing the pen testing service.
“This also means the operator (whether the company or its employee) will be the one hauled up to answer questions,” he added. The operator will therefore need to be sure what the AI is doing or at least explain so that there is transparency and explainability.
As for AI regulations, he referred to them as “currently at a philosophical level.” He also pointed out that a number of countries do currently regulate pen testing, meaning those laws may one day change to also touch on AI. ®
0 Comments