NSA, FBI warn of expanding use of ‘deepfakes’ in new report

Criminals and intelligence services are expected to make greater use of “deepfakes” (manipulated and misleading audio, video, and images) to target government and the private sector for disinformation operations or financial gain, according to a new joint intelligence report.

“Deepfakes are a particularly concerning type of synthetic media that utilizes artificial intelligence/machine learning (AI/ML) to create believable and highly realistic media,” wrote the authors of the joint report by the National Security Agency, FBI, and Cybersecurity and Infrastructure Security Agency.

The 18-page report, “Contextualizing Deepfake Threats to Organizations,” was published Wednesday.

In one illustration of the potential for abuse, an AI-generated image that circulated in May 2023, purporting to show an explosion near the Pentagon, sparked brief confusion and turmoil on the stock market.

Other examples included a false video of Ukrainian President Volodymyr Zelenskyy telling his countrymen to surrender, and a fake video of Russian President Vladimir Putin announcing the imposition of martial law.

Deepfakes are video, audio, images and text created or edited using artificial intelligence. To date, the report said, there have been only limited signs of significant use of deepfakes by malicious actors from nation-states such as Russia and China.

However, with growing access to software and other synthetic media tools, the use of deepfake techniques is expected to increase in both frequency and sophistication, the report concluded.

The primary dangers from synthetic media are impersonation of leaders and financial officers, damage to an organization’s image and public standing, and fake communications used to gain access to computer networks, communications, and sensitive data.

The report urged government and private sector organizations to deploy deepfake detection technology and to archive authentic media, so that fraudulent versions can be identified more easily.
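The “archive media” recommendation amounts to keeping trusted originals along with a record of their cryptographic fingerprints, so a clip circulating online can be compared against what an organization actually released. A minimal sketch of that idea in Python follows; the directory name, manifest format, and helper functions here are hypothetical illustrations, not anything specified in the report.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical locations; an organization would point these at its own
# trusted archive of released media.
ARCHIVE_DIR = Path("media_archive")
MANIFEST = Path("archive_manifest.json")

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 fingerprint of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest() -> None:
    """Record a fingerprint for every archived file at publication time."""
    manifest = {p.name: sha256_of(p) for p in ARCHIVE_DIR.iterdir() if p.is_file()}
    MANIFEST.write_text(json.dumps(manifest, indent=2))

def verify(suspect: Path) -> bool:
    """Check whether a circulating file is byte-identical to the archived original."""
    manifest = json.loads(MANIFEST.read_text())
    expected = manifest.get(suspect.name)
    return expected is not None and sha256_of(suspect) == expected
```

Note that a cryptographic hash proves only an exact, byte-for-byte match; a re-encoded or edited copy will not match even if it looks identical, which is one reason the report pairs archiving with dedicated detection tools.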

Deepfakes aren’t limited to manipulated images or faces: Cybercriminals recently used deepfake audio in a theft of $243,000 from a British company. The chief executive of a British energy firm was conned into believing he was on the phone with the head of its German parent company, who ordered him to send the money within a short period of time.

The report said recent incidents indicate “there has been a massive increase in personalized AI scams given the release of sophisticated and highly trained AI voice-cloning models.”

The main threats posed by deepfakes include the dissemination of disinformation during conflict, national security challenges for the U.S. government and critical infrastructure, and the use of falsely generated images and audio to gain access to computer networks for cyber espionage or sabotage.

What distinguishes deepfakes from earlier forms of manipulated media is the use of artificial intelligence techniques such as machine learning and deep learning, which make spies and criminals more effective in their operations.

In addition to the Ukrainian and Russian examples, the report noted the social media platform LinkedIn has seen “a huge increase” in fake images used in profile pictures.

In the past, malicious operators needed specialized software and days or weeks of work to produce sophisticated disinformation media.

Now, however, deepfakes can be produced in a fraction of that time, with little or no technical expertise, thanks to advances in computing power and deep learning.

“The market is now flooded with free, easily accessible tools (some powered by deep learning algorithms) that make the creation or manipulation of multimedia essentially plug-and-play,” the report said, noting that the spread of these tools puts deepfakes on the list of top risks for 2023.

Computer-generated imagery also is being used to produce fake media. A year ago, malicious actors used synthetic audio and video during online interviews to steal personal data that could then be used to obtain financial, proprietary or internal security information.

Manipulated media also can be used to impersonate specific customers to gain access to individual customer accounts or for information-gathering purposes.

In May 2023, one company was targeted by an impostor posing as its chief executive on a WhatsApp call that used a faked voice and image of the CEO.

Another attempted use of deepfake technology involved a person who posed as a CEO, called on a poor video connection and urged switching to text messages. The impostor then sought money from a company employee but was thwarted.

Source: WT