Big Tech invites Washington to create new agency, new rules to govern AI
Big Tech’s solution to the dangers posed by its artificial intelligence products is big government.
Microsoft wants the federal government to create a new agency to regulate artificial intelligence, pushing for more bureaucracy soon after the company’s executives met with top Biden administration officials about AI.
Microsoft president Brad Smith unveiled a blueprint for governing AI on Thursday that said after better enforcing existing laws and rules, the federal government must impose new regulations that would be “best implemented by a new government agency.”
He wrote on the company’s blog that action is needed to ensure that AI helps “protect democracy,” advances the “planet’s sustainability needs,” and provides access to AI skills that “promote inclusive growth.”
“Perhaps more than anything, a wave of new AI technology provides an occasion for thinking big and acting boldly,” Mr. Smith wrote. “In each area, the key to success will be to develop concrete initiatives and bring governments, respected companies, and energetic NGOs together to advance them.”
Microsoft is not the only Big Tech behemoth calling for new AI regulations. Google published a white paper detailing its AI policy agenda last week that said it was encouraged to see countries busy writing new regulations.
“AI is too important not to regulate, and too important not to regulate well,” wrote Kent Walker, president of global affairs at Google.
Microsoft and Google have more openly embraced new government rulemaking since meeting with top Biden administration officials.
Mr. Smith wrote in February that tech companies’ self-regulatory efforts would lead the way for the government to craft new rules for artificial intelligence. He urged countries to use “democratic law-making processes” and rely upon “whole-of-society conversations” to help determine the rules.
Microsoft CEO Satya Nadella and other tech executives including Google CEO Sundar Pichai then met with President Biden, Vice President Kamala Harris and senior administration officials at the White House earlier this month to discuss AI tools.
Following the meeting, Mr. Smith delivered remarks at a Center for Strategic and International Studies event and said he welcomed new AI rules and laws from Washington policymakers. Mr. Smith’s blog post on Thursday said Microsoft’s new AI blueprint was responsive to his company’s meeting with White House officials.
Big Tech’s call for regulation is music to the ears of the Biden administration and its allies on Capitol Hill.
The White House Office of Science and Technology Policy said earlier this week it is developing a new “National AI Strategy.”
“The Biden-Harris administration is undertaking a process to ensure a cohesive and comprehensive approach to AI-related risks and opportunities. By developing a National AI Strategy, the federal government will provide a whole-of-society approach to AI,” the Office of Science and Technology Policy said when launching the effort.
The office also released an updated national AI research and development strategic plan that emphasized the government’s desire to spend more taxpayer cash on AI.
Meanwhile, Senate Majority Leader Charles E. Schumer, New York Democrat, has jump-started the process to write new AI rules in the Senate.
The Senate Judiciary Committee’s first hearing toward writing new AI rules earlier this month featured testimony from OpenAI CEO Sam Altman, who was also at the White House meeting on artificial intelligence.
OpenAI, the maker of the popular AI chatbot ChatGPT, is closely tied to Microsoft, which said earlier this year it was pouring billions of dollars into the company.
Mr. Altman’s testimony made a splash on Capitol Hill: He urged lawmakers to regulate AI systems even as competitors to his products spring up.
“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Mr. Altman told lawmakers. “For example, the U.S. government might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities.”
Pushed at the hearing to describe the kinds of AI capabilities that concerned him, Mr. Altman was reluctant but cited AI models that may influence a person’s behavior and beliefs and models that could “help create novel biological agents.”
Lawmakers have also heard Big Tech raise concerns in private about how foreign countries may use new AI tools.
Google DeepMind, the company’s AI team, worries about China stealing AI research and using AI for malign influence operations. Those fears prompted the company to rethink its approach to how it publishes its work, according to a source close to the House Select Committee on the Chinese Communist Party.
Google’s message to the House lawmakers in a closed-door meeting in the U.K. last week was that it did not matter if Google was the only company changing how it publishes its work; lawmakers needed to consider new rules of the road for other researchers to follow as well.