AI vulnerability fueling cyberattacks, U.K. cyber agency warns

The rapid adoption of artificial intelligence tools is raising new security concerns worldwide.

The U.K.’s National Cyber Security Centre is warning against the use of large language models supporting popular AI tools such as ChatGPT since they could be involved in cyberattacks.

The cyber agency is particularly worried about “prompt injection” attacks that look to take advantage of AI tools struggling to distinguish between an instruction and data provided to complete an instruction for a user.



Banks and financial institutions using LLM assistants, or chatbots for customers, are among the potential victims, according to a post on the agency’s website from its technical director for platform research.

“An attacker might be able [to] send a user a transaction request, with the transaction reference hiding a prompt injection attack on the LLM,” the cyber official wrote. “When the user asks the chatbot, ‘Am I spending more this month?’ the LLM analyzes transactions, encounters the malicious transaction and has the attack reprogram it into sending users’ money to the attacker’s account.”

The danger of large language models ranges from presenting reputational risks to causing real-world harm such as theft of dollars and secrets.

Samsung stopped its workers’ use of generative AI tools this year after discovering its employees inadvertently leaked sensitive data to ChatGPT.

Employees reportedly asked the chatbot to generate minutes from a recorded meeting and to check sensitive source code after Samsung’s semiconductor division let its employees use the new AI tools.

Some cyber professionals are wary of the new AI tools, too. Software company Honeycomb said in June it saw people attempting prompt injection attacks against its systems, including extracting customer information, but its LLM tools are not connected to such data.

“We have absolutely no desire to have an LLM-powered agent sit in our infrastructure doing tasks,” Honeycomb’s Phillip Carter wrote on the company’s blog. “We’d rather not have an end-user reprogrammable system that creates a rogue agent running in our infrastructure, thank you.”

Rules regulating AI and efforts to limit such danger are under development worldwide.

When lawmakers return to Washington in September, Senate Majority Leader Charles E. Schumer is bringing in major tech leaders such as Elon Musk, Meta’s Mark Zuckerberg and Google’s Sundar Pichai for a forum about AI. Mr. Musk, who has met with Mr. Schumer on potential AI legislation, says he sees a role for China in writing international AI rules. 

AI rules crafted by China are likely to be frowned on by U.S. officials worried about intellectual property theft and fretting that the communist government doesn’t share American values surrounding free digital discourse.

While new tools built on top of large language models pose risks, they also present an opportunity to enhance security.

The Office of the Director of National Intelligence’s Rachel Grunspan said last month that America’s spy agencies are planning to be ‘AI-first.’

Ms. Grunspan, who oversees the intelligence community’s use of AI, said at a summit in Maryland that the government is preparing for a future where all spies use AI.

“Anything that is getting AI in the hands of individual officers regardless of their job, regardless of their role, regardless of their background, technical or not, and just maximizing the capacity of the entire workforce — that’s where I see us going,” she said.

Source: WT