
What AI will bring in 2024: 4 predictions

One thing skeptics and enthusiasts alike can agree on: Artificial intelligence will have a banner year in 2024.
(Jim Cooke / Los Angeles Times)

If 2023 was the year that AI finally broke into the mainstream, 2024 could be the year it gets fully enmeshed in our lives — or the year the bubble bursts.

But whatever happens, the stage is set for another whirlwind 12 months, coming in the wake of Hollywood’s labor backlash against automation; the rise of consumer chatbots, including OpenAI’s ChatGPT and Elon Musk’s Grok; a half-baked coup against Sam Altman; early inklings of a regulatory crackdown; and, of course, that viral deepfake of Pope Francis in a puffer jacket.

To gauge what we should expect in the new year, The Times asked a slate of experts and stakeholders to send in their 2024 artificial intelligence predictions. The results alternated between enthusiasm, curiosity and skepticism — an appropriate mix of sentiments for a technology that remains both polarizing and unpredictable.


Regulators will step in, and not everyone will be happy about it.

When a surgeon or a stockbroker goes to work, they do so with the backing of a license or certification. Could 2024 be the year we start holding AI to the same standard?

“In the next year, we may require AI systems to get a professional license,” said Amy Webb, chief executive of the Future Today Institute, a consulting firm. “While certain fields require professional licenses for humans, so far algorithms get to operate without passing a standardized test. You wouldn’t want to see a urologist for surgery who didn’t have a medical license in good standing, right?”


It’d be a development in line with political changes over the last few months, which saw several efforts to more conscientiously regulate this powerful new technology, including a sweeping executive order from President Biden and a draft Senate policy aimed at reining in deepfakes.


“I’m particularly concerned about the potential impact [generative AI] could have on our democracy and institutions in the run-up to November’s elections,” Sen. Chris Coons (D-Del.), who co-sponsored the deepfakes draft, said of the coming year. “Creators, experts and the public are calling for federal safeguards to outline clear policies around the use of generative AI, and it’s imperative that Congress do so.”

Regulation isn’t just a domestic concern, either. Justin Hughes, a professor of intellectual property and trade law at Loyola Law School, said he expects the European Union to finalize its AI Act next year, triggering a 24-month countdown before broad AI regulations take effect in the EU. Those would include transparency and governance requirements, Hughes said, as well as bans on dangerous uses of AI, such as inferring someone’s ethnicity or sexual orientation or manipulating their behavior. And as with many European regulations, the effects could trickle down to American firms.

Yet the rising calls for guardrails have already triggered a backlash. In particular, a movement known as effective accelerationism — or “e/acc” — has picked up steam by calling for rapid innovation with limited political oversight.


Julie Fredrickson, a tech investor aligned with the e/acc movement, said she envisions the new year bringing further tensions around regulation.

“The biggest challenge we will encounter is that using [tools that] compute IS speech and that raises critical constitutional issues here in the United States that any regulatory framework will need to deal with,” Fredrickson said. “The public must make our government understand that it cannot make trade-offs restricting our fundamental rights like speech.”


Authenticity will grow more important than ever.

Imagine being able to know with certainty whether that vacation photo your friend just posted on Instagram was taken in real life or generated on a server farm somewhere.

Mike Gioia, co-founder of the AI workflow startup Pickaxe, thinks it might soon be possible. Specifically, he predicts Apple will launch a “Photographed on iPhone” stamp next year that would certify AI-free photos.

Other experts agree that efforts to bolster trust and authenticity will only grow more important as AI floods the internet with synthetic text, photos and videos (not to mention bots aimed at imitating real people). Andy Parsons, senior director of Adobe’s Content Authenticity Initiative, said he anticipates the increased adoption of “Content Credentials,” or metadata embedded in digital media files that, almost like a nutrition label, would record who made something and with what tools.

Such stopgaps could prove particularly important as America enters a presidential election year — its first in history that will take place amid a torrent of cheap, viral AI media.


Bill Burton, former deputy press secretary for the Obama administration, predicted: “The most viewed and engaged videos in the 2024 election are generated by AI.”


The steam engine of innovation will keep chugging along …

Last year brought substantial advances in AI technology, from milestones for mainstream products (ChatGPT, deemed the fastest-growing consumer app in history, gained the more powerful GPT-4 model) to continued breakthroughs in AI research and development.

Many AI insiders think that pace of innovation will continue into the new year.

“Every business and consumer app user will be using AI and they won’t know it,” said Ted Ross, general manager of the City of Los Angeles Information Technology Agency. “I predict that artificial intelligence features and high-visibility [generative] AI platforms, such as ChatGPT, will rapidly integrate into existing business and consumer applications with the user often unaware.”

Other developments could be more niche but no less impactful. Some experts predict a rise in leaner and more targeted alternatives to the “large language models” that underlie ChatGPT and Grok. The AI itself could get better at self-improvement, too.

“There hasn’t been a lot of tooling that targets speeding up AI research,” said Anastasis Germanidis, chief technology officer of the synthetic video startup Runway. “We’ll likely see more of those tools emerge in the coming year,” including to help write or debug code.


… Unless the bubble bursts.

The AI market is frothy right now, but not everyone thinks the glory days can last.

“A hyped AI company will go bankrupt or get acquired for a ridiculously low price” at some point in 2024, Clément Delangue, chief executive of the open source AI development community Hugging Face, wrote in a recent tweet.


Eric Siegel, a former Columbia University professor and the author of “The AI Playbook: Mastering the Rare Art of Machine Learning Deployment,” has struck an even warier tone.

“There will be growing consternation as the lack of a killer [generative] AI app becomes increasingly apparent,” Siegel told The Times, referencing an app that would drive widespread adoption of AI. “Disillusionment will ultimately set in as today’s grandiose expectations fail to be met.”

Eventually, he warned, we could even enter an “AI Winter,” or a period of declining interest — and investment — in the technology.


But that is probably still a few years away, he added: “The current ‘craze’ has built incredible momentum, and that momentum will continue to be fueled as new impressive-looking and potentially valuable capabilities continue to pop up.”

Even the skeptics, it seems, anticipate a banner year for AI.
