Senate’s AI probe expands to high-tech manipulation of politics and weapons

Fears of political destabilization, deployment of weapons of mass destruction, and catastrophic cyberattacks are top concerns of the Senate Judiciary Committee’s broadening probe of artificial intelligence tools.

The committee has become a hub for oversight hearings of AI amid Senate Majority Leader Charles E. Schumer’s calls for new rules to govern the technology.

The Senate Judiciary human rights subcommittee held an AI-focused hearing on Tuesday. The panel’s chairman, Sen. Jon Ossoff, stressed the need for new scrutiny of AI, citing the potential for automated kill chains and the proliferation of AI-fueled dangers.

“At a moment like this, it is imperative that Congress understand the full range of risks and potentials to ensure this technology can be developed, deployed, used and regulated consistent with our core values, consistent with our national interest, consistent with civil and human rights,” said Mr. Ossoff, Georgia Democrat.

The panel is not the first to dig into AI. The Senate Judiciary’s subcommittee on privacy, technology and the law heard testimony last month from OpenAI CEO Sam Altman, whose company makes the popular chatbot ChatGPT. Mr. Altman urged lawmakers to regulate his industry, citing the potential abuse of AI tools to manipulate people.

Last week, the Judiciary subcommittee on intellectual property held its first AI hearing, examining questions about patents and whether AI systems and works generated by AI tools can be copyrighted.

The intellectual property panel’s top-ranking Republican, Sen. Thom Tillis of North Carolina, said at that hearing that he anticipated lawmakers would need to hold an “endless number of hearings” to get new laws for AI correct.  

Senate Judiciary Committee Chairman Richard Durbin this week outlined his desire to develop an “accountability regime for AI” that would include the potential for federal and state civil liability when AI tools cause harm.

“Such a regime can — and should — include pre-deployment testing, ongoing audits, transparency measures, and other regulatory safeguards like those suggested by the NTIA, the White House Office of Science and Technology Policy and others,” Mr. Durbin said in a letter Monday to the National Telecommunications and Information Administration.

Other Senate committees are looking to have a say in writing rules for AI, too. Senate Homeland Security and Governmental Affairs Committee Chairman Gary Peters partnered with a pair of Republicans on legislation designed to force the government to explain when it uses AI to make decisions.

As an example of concerning behavior the lawmakers hope to address, Mr. Peters’ office cited the Internal Revenue Service’s use of an automated system that is more likely to recommend Black taxpayers for an audit than White taxpayers.

Regardless of whether Congress passes AI laws, the Biden administration is already busy crafting new rules. 

The White House Office of Science and Technology Policy is working on a National AI Strategy. Last month, the office also updated its research and development strategic plan to include a call for more taxpayer spending on AI.

Urging the government to press ahead with new AI rules are Big Tech companies including Google and Microsoft. Both tech behemoths have advocated for AI regulations, and Microsoft president Brad Smith called for a new federal agency to police AI.

Source: WT