X promises ‘highest level’ response on posts about Israel-Hamas war. Misinformation still flourishes

Workers install lighting on an "X" sign atop the company headquarters
Social media platform X says it is trying to take action on hateful and graphic posts about the latest war between Israel and Hamas. But watchdog groups say misinformation abounds on the platform.
(Noah Berger / Associated Press)
The social media platform X, formerly known as Twitter, says it is trying to take action on a flood of posts sharing graphic media, violent speech and hateful conduct about the war between Israel and Hamas.

X says it’s treating the crisis with its highest level of response. But outside watchdog groups say misinformation about the war abounds on the platform that billionaire Elon Musk bought last year.

Fake and manipulated imagery circulating on X includes “repurposed old images of unrelated armed conflicts or military footage that actually originated from video games,” said a Tuesday letter to Musk from European Commissioner Thierry Breton. “This appears to be manifestly false or misleading information.”

Breton also warned Musk that authorities have been flagging “potentially illegal content” that could violate EU laws and “you must be timely, diligent and objective” in removing it when warranted.

X didn’t immediately respond to a request for comment about Breton’s letter. But a post late Monday from X’s safety team said: “In the past couple of days, we’ve seen an increase in daily active users on @X in the conflict area, plus there have been more than 50 million posts globally focusing on the weekend’s attack on Israel by Hamas. As the events continue to unfold rapidly, a cross-company leadership group has assessed this moment as a crisis requiring the highest level of response.”

That includes continuing a policy frequently championed by Musk of letting users help rate what might be misinformation, which adds a note of context to those posts rather than removing them from the platform.

The struggle to identify reliable sources for news about the war was exacerbated over the weekend by Musk, who on Sunday posted the names of two accounts he said were “good” for “following the war in real-time.” Analyst Emerson Brooking of the Atlantic Council called one of those accounts “absolutely poisonous.” Journalists and X users also pointed out that both accounts had previously shared a fake AI-generated image of an explosion at the Pentagon, and that one of them had posted numerous antisemitic comments in recent months. Musk later deleted his post.

Brooking posted on X that Musk had enabled fake war reporting by abandoning the blue check verification system for trusted accounts and allowing anyone to buy a blue check.

Brooking said Tuesday that it is “significantly harder to determine ground truth in this conflict as compared to Russia’s invasion of Ukraine” last year and “Elon Musk bears personal responsibility for this.”

He said Musk’s changes to X have made it impossible to quickly assess the credibility of accounts while his “introduction of view monetization has created perverse incentives for war-focused accounts to post as many times as possible, even unverified rumors, and to make the most salacious claims possible.”

“War is always a cauldron of tragedy and disinformation; Musk has made it worse,” he added. Further, Brooking said via email, “Musk has repeatedly and purposefully denigrated the idea of an objective media, and he made platform design decisions that undermine such reporting. We now see the result.”

Part of Musk’s drastic changes over the last year included gutting X’s staff, including many of the people responsible for moderating toxic content and harmful misinformation.

One former member of Twitter’s public policy team said the company is having a harder time taking action on posts that violate its policies because there aren’t enough people to do that work.

“The layoffs are undermining the capacity of Twitter’s trust and safety team, and associated teams like public policy, to provide needed support during a critical time of crisis,” said Theodora Skeadas, one of thousands of employees who lost their jobs in the months after Musk bought the company.

X says it changed one policy over the weekend to let people more easily choose whether to see sensitive media, without the company actually taking down those posts. “X believes that, while difficult, it’s in the public’s interest to understand what’s happening in real time,” its statement says.

The company says it is also removing newly created Hamas-affiliated accounts and working with other tech companies to try to prevent “terrorist content” from being distributed online. The company said it is “also continuing to proactively monitor for antisemitic speech as part of all our efforts. Plus we’ve taken action to remove several hundred accounts attempting to manipulate trending topics.”

Linda Yaccarino, whom Musk named in May as X’s top executive, withdrew from an upcoming three-day tech conference where she was scheduled to speak, citing the need to focus on how the platform was handling the war.

“With the global crisis unfolding, Linda and her team must remain fully focused on X platform safety,” X told organizers in advance of the WSJ Tech Live conference being held next week in Laguna Beach.

AP writer Ali Swenson contributed to this report.