ChatGPT’s creator warns Congress the technology could cause ‘significant harm’

OpenAI Chief Executive Sam Altman speaks at a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing Tuesday on Capitol Hill in Washington.
(Patrick Semansky / Associated Press)

The creator of ChatGPT and the privacy chief of IBM both called on U.S. senators during a hearing Tuesday to more heavily regulate artificial intelligence technologies that are raising ethical, legal and national security concerns.

Speaking to a Senate Judiciary subcommittee, OpenAI Chief Executive Sam Altman praised the potential of the new technology, which he said could solve humanity’s biggest problems. But he also warned that artificial intelligence is powerful enough to change society in unpredictable ways, and “regulatory intervention by governments will be critical to mitigate the risks.”

“My worst fear is that we, the technology industry, cause significant harm to the world,” Altman said. “If this technology goes wrong, it can go quite wrong.”

IBM’s chief privacy and trust officer, Christina Montgomery, focused on a risk-based approach and called for “precision regulation” on how AI tools are used, rather than how they’re developed.

The senators openly questioned whether Congress is up to the task. Political gridlock and heavy lobbying from big technology firms have complicated efforts in Washington to set basic guardrails for challenges, including data security and child protections for social media. And as senators pointed out in their questions, the deliberative process of Congress often lags far behind the pace of technology advancements.

Demonstrating AI’s power to deceive, Sen. Richard Blumenthal, the Connecticut Democrat who chairs the panel, played during his opening statement an AI-written and -produced recording that sounded exactly like him. While he urged AI innovators to work with regulators on new restrictions, he acknowledged that Congress hasn’t passed adequate protections for existing technology.

“Congress has a choice now. We had the same choice when we faced social media,” Blumenthal said. “Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real.”

Several senators advocated for a new regulatory agency with jurisdiction over AI and other emerging technologies. Altman welcomed that suggestion as a way for the U.S. to continue leading on the technology that springs from American companies.

But Gary Marcus, a New York University professor who testified alongside Altman and Montgomery, warned that a new agency created to police AI would risk being captured by the industry it’s supposed to regulate.

Lawmakers questioned the potential for dangerous disinformation and the biases inherent in AI models trained on internet content. They raised the risks that AI-fabricated content poses for the democratic process, while also fretting that global adversaries like China could surpass U.S. capabilities.

AI ‘hallucinations’

Blumenthal asked about “hallucinations” when AI technology gets information wrong. Sen. Marsha Blackburn (R-Tenn.) asked about protections for singers and songwriters in her home state, drawing a pledge from Altman to work with artists on rights and compensation.

Missouri Sen. Josh Hawley, the ranking Republican on the subcommittee, asked whether AI will prove as transformative as the printing press, disseminating knowledge more widely, or as destructive as the atomic bomb.

“To a certain extent, it’s up to us here, and to us as the American people, to write the answer,” Hawley said. “What kind of technology will this be? How will we use it to better our lives?”

Much of the discussion focused on generative AI, which can produce images, audio and text that seem human-crafted. OpenAI has driven many of these developments by introducing products like ChatGPT, which can converse or produce human-like, but not always accurate, blocks of text, as well as DALL-E, which can produce fantastical or eerily realistic images from simple text prompts.

But there are boundless other ways that machine learning is being deployed across the modern economy. Recommendation algorithms on social media rely on AI, as do programs that analyze large data sets or weather patterns.

Requiring registration

The Biden administration has put forth several nonbinding guidelines for artificial intelligence. The National Institute of Standards and Technology in January released a voluntary risk management framework for the most high-stakes applications of AI. The White House last year published a blueprint for an “AI Bill of Rights” to help consumers navigate the new technology.

Federal Trade Commission Chair Lina Khan pledged to use existing law to guard against abuses enabled by AI technology. The Department of Homeland Security last month created a task force to study how AI can be used to secure supply chains and combat drug trafficking.

In Tuesday’s hearing, Altman focused his initial policy recommendations on required registration for AI models of a certain sophistication. He said companies should be required to get a license to operate and conduct a series of tests before releasing new AI models.

Montgomery said policymakers should require AI products to be transparent about when users are interacting with a machine. She also touted IBM’s AI ethics board, which provides internal guardrails that Congress has yet to set.

“It’s often said that innovation moves too fast for government to keep up,” Montgomery said. “But while AI may be having its moment, the moment for government to play its proper role has not passed us by.”