New alarms sounded about China deploying generative AI as a social media weapon

The Rand Corp. is warning that new artificial intelligence tools will provide China with a pathway to use social media to more effectively manipulate people worldwide. 

Major tech platforms and their users have quickly adopted generative AI tools such as the popular chatbot ChatGPT to create fresh content, make work more efficient, and get quick answers to complex questions. 

The People’s Liberation Army of China wants a long-term and high-impact way to orchestrate large digital media campaigns, and generative AI will be especially good at helping China accomplish that task, according to a new report from the Rand Corp., a nonprofit research organization.

“For the Chinese military, generative AI offers the possibility to do something it could never do before: manipulate social media with human-quality content, at scale,” the report said. “Chinese military researchers routinely complain the PLA lacks the necessary amount of staff with adequate foreign-language skills and cross-cultural understanding.”

The report, which was published this month, said China’s crackdown on an open internet has substantially limited the country’s understanding of its American adversaries, the kind of understanding needed to manipulate people online.

The Chinese Communist Party will look to close that gap by using new AI tools built on large language models, which are trained on information blocked inside China, to manipulate people, according to the Rand National Security Research Division, which authored the new report. 

“While PRC social media manipulation has historically been a limited concern outside Taiwan and the United States, generative AI has the potential to extend China’s capability to a much wider range of target countries, such as Japan, South Korea, and the Philippines, as well as other countries in Southeast Asia and Europe,” it said.

The researchers’ concerns about generative AI’s ability to help malicious actors influence people are not limited to high-visibility online campaigns. The technology can also influence more niche and targeted audiences. 

The report said a special information warfare unit for China, Base 311, has espoused infiltrating preexisting online communities to participate in nonpolitical conversations and then inject desired political narratives at an opportune moment. The tactic was described in a 2018 how-to guide likely intended for use in manipulating Facebook content. 

Funding for the research came from Rand’s contracts with Department of Defense federally funded research and development centers, according to the organization’s website.

U.S. national security officials are concerned that China’s use of generative AI will have damaging effects on American society. 

President Biden’s pick to run the National Security Agency and U.S. Cyber Command has raised concerns about foreign adversaries using generative AI to influence elections. 

Air Force Lt. Gen. Timothy Haugh, who worked on election defense efforts in 2018, 2020 and 2022, told senators in July he was worried about the AI tools’ impact on the 2024 elections.

“As we look at this election cycle, the area that we do have to consider that will be slightly different will be the role of generative AI as part of this,” Lt. Gen. Haugh said at a Senate Armed Services Committee hearing. “And so our concern is foreign use attempting to be a part of our electoral process.”

America’s allies are worried too. Last month, the U.K.’s National Cyber Security Centre urged caution for people integrating new generative AI tools into their work. The cyber agency fears large language models can enable new cyberattacks, such as hackers manipulating bank chatbots intended to help customers into instead helping cybercriminals rob them blind.

Source: WT