Regulation of artificial intelligence has been a hot topic in Washington in recent months, with lawmakers holding hearings and press conferences and the White House announcing voluntary AI safety commitments by seven tech companies on Friday.
But a closer look at the activity raises questions about how meaningful those actions are in setting policies around the rapidly evolving technology.
The answer is that they are not yet very meaningful. Lawmakers and policy experts said the United States is only at the beginning of what is likely to be a long and difficult road toward creating AI rules. While there have been hearings, meetings with top technology executives at the White House and speeches introducing AI bills, it is too early to predict even the roughest outlines of regulations to protect consumers and contain the risks the technology poses to jobs, the spread of disinformation and security.
“It’s still early days, and no one knows what the law will look like yet,” said Chris Lewis, president of Public Knowledge, a consumer group that has advocated for the creation of an independent agency to regulate artificial intelligence and other technology companies.
The United States lags behind Europe, where lawmakers are preparing to enact an artificial intelligence law later this year that would place new restrictions on what are seen as the technology’s most dangerous uses. In contrast, there is still deep disagreement in the United States over the best way to handle a technology that many American lawmakers are still trying to understand.
That slow pace suits many tech companies, policy experts said. While some companies have said they welcome rules around AI, they have also argued against strict regulations like those being created in Europe.
Here’s a summary of the state of AI regulations in the United States.
At the White House
The Biden administration has been on a whirlwind listening tour with AI companies, academics and civil society groups. The effort began in May with Vice President Kamala Harris meeting at the White House with the CEOs of Microsoft, Google, OpenAI and Anthropic, pushing the tech industry to take safety more seriously.
On Friday, representatives of seven technology companies appeared at the White House to announce a set of principles for making their AI technologies safer, including third-party security checks and watermarking AI-generated content to help stop the spread of misinformation.
Many of the practices announced were already in place at OpenAI, Google and Microsoft, or were in the process of being implemented. The commitments are voluntary and not enforceable by law, and such promises of self-regulation fell short of what consumer groups had hoped.
“Voluntary commitments are not enough when it comes to Big Tech,” said Caitriona Fitzgerald, deputy director of the Electronic Privacy Information Center, a privacy group. “Congress and federal regulators must put in place meaningful and actionable barriers to ensure that AI is used fairly and transparently and protects individuals’ privacy and civil rights.”
Last fall, the White House introduced a blueprint for an AI Bill of Rights, a set of guidelines for protecting consumers from the technology’s harms. The guidelines also are not regulations and are not enforceable. This week, White House officials said they were working on an executive order on artificial intelligence but did not disclose details or timing.
In Congress
The loudest drumbeat on regulating AI has come from lawmakers, some of whom have introduced bills on the technology. Their proposals include creating an agency to oversee artificial intelligence, holding companies liable for AI technologies that spread disinformation, and requiring licenses for new AI tools.
Lawmakers have also held hearings on AI, including one in May with Sam Altman, chief executive of OpenAI, which makes the ChatGPT chatbot. Some lawmakers floated ideas for other regulations during the hearings, including nutrition-style labels to notify consumers of AI risks.
The bills are in their early stages and so far lack the support needed to move forward. Last month, Senate Majority Leader Chuck Schumer, Democrat of New York, announced a monthslong process to create AI legislation that would include educational sessions for members in the fall.
“In many ways we’re starting from scratch, but I think Congress is up to the challenge,” he said during a speech at the time at the Center for Strategic and International Studies.
In federal agencies
Regulatory agencies are beginning to take action by policing some of the problems arising from AI.
Last week, the Federal Trade Commission opened an investigation into OpenAI’s ChatGPT, requesting information about how the company secures its systems and how the chatbot could harm consumers by creating false information. FTC Chair Lina Khan has said she believes the agency has broad power under consumer protection and competition laws to police problematic behavior by AI companies.
“Waiting for Congress to act is not ideal given the usual timetable for Congress,” said Andres Sawicki, a law professor at the University of Miami.