
States try to combat election interference as Washington deadlocks

Clement Wolf, Google’s global public policy leader for information integrity, speaks Tuesday at a symposium on digital disinformation hosted by the Federal Election Commission.

With the White House and Congress paralyzed over how — or even whether — to act on intelligence agency warnings about foreign interference in U.S. elections, Maryland opted to take matters into its own hands.

The state adopted transparency rules for political advertising on Facebook, Twitter and elsewhere online. The pioneering move drew praise from election reformers as a blow against foreign meddling.

Then came the backlash. And it wasn’t from Russia.

Newspaper publishers hauled the state into federal court, protesting that the new rules ran afoul of the 1st Amendment and imposed burdens on media organizations that could push struggling local papers under.


Even one of the world’s most vocal advocates for transparency, the Reporters Committee for Freedom of the Press, joined the objectors. Along with the Washington Post, the Associated Press and others, the group successfully blocked the state’s effort in federal court.

Three years after a presidential campaign in which foreign operatives used social media platforms to mislead and manipulate voters, the nation still can’t figure out how to shore up its defenses. With partisan divisions preventing any effective federal action, the states have responded with a patchwork of moves of their own, some of which, as in Maryland, run up against other cherished American values, including free expression.

“We haven’t come to grips with the sophistication and scale of the attack in 2016 and the coordinated action that is needed to confront it going forward,” said Nathaniel Persily, co-director of the Cyber Policy Center at Stanford University. “There is plenty of blame to go around.”

Even as other nations take aggressive action, some 40 election security bills in Congress have been left to fester. President Trump’s refusal to admit that Russia intervened in the election on his side, coupled with Republican unwillingness to contradict him, has blocked any congressional action.

That leaves states to try to figure out piecework solutions — an effort that security experts say is inevitably too limited.

“This doesn’t seem like the kind of thing that ought to be worked out in a laboratory,” said Andrew Grotto, who was director of cybersecurity policy in the White House under Presidents Obama and Trump, nodding to the role states can play as laboratories of democracy. “There should be a federal answer.”


Maryland’s plan spooked newspaper publishers, who warned the online rules would create costly reporting requirements that could force them to stop running political ads. Some of the much bigger, and far more financially stable, online platforms planned to stop accepting paid state and local political ads because of the same concerns.

Platforms already ban such ads in Washington state, which has its own law.

Those state efforts parallel the approach taken in Canada, which is pushing ahead with strict transparency rules for ads and a new government agency that will work with intelligence officials to alert the public when foreign disinformation campaigns get traction with voters. Several platforms have halted selling political ads in Canada as a result.


State legislators championing such rules in America have no plans to pull back, though several are working with media groups to mitigate their worries. State officials see their efforts as crucial to nudging Congress and the Trump administration along. The bigger the patchwork of state laws, the thinking goes, the bigger the headache for Facebook and Google and Twitter, and the more pressure companies will put on Washington to act.

“Each time a state creates a new regulatory system, it adds impetus to devising a federal solution to this,” said Adav Noti, a senior director at the nonprofit Campaign Legal Center.

California plans to require social media companies to keep ads in a publicly accessible archive, along with information about who purchased them. The new Social Media DISCLOSE Act, which takes effect in January, creates other burdens for the tech firms. They will be required to clearly reveal on each political ad who paid for it, along with background information on the purchaser and the amount spent. New York has adopted a similar law.

“Individual states are raising the floor,” Sen. Mark Warner (D-Va.) said Tuesday at a forum that the Federal Election Commission hosted in an effort to break the federal logjam. “You are ending up with a much tougher floor than you would if you collaborated with us [Congress] on the front end,” he warned tech executives.


Facebook and Twitter officials were in the audience but declined invitations to participate in the forum. An executive from Google, which owns YouTube, warned of the dangers of a slippery slope.

“Quite often, solutions have tradeoffs that are not easy to navigate without causing harm in other ways,” said Clement Wolf, Google’s global public policy leader for information integrity.

But it is unclear how much the state measures will actually do to curb interference. Archiving and transparency requirements can help deter and root out paid ads placed by foreign entities. Propaganda campaigns, however, often rely not on ads but on unpaid, or “organic,” social media postings that are amplified by bots.

Confronting that challenge is proving more fraught. That much was clear after the Democratic National Committee recently alerted Beto O’Rourke’s presidential campaign that the candidate was the target of a misinformation attack that aimed to link the shooter who killed 22 people in El Paso in August to O’Rourke’s campaign.

The falsehood lives on in social media. One tweet amplifying the lie has been retweeted 11,000 times, including by a member of President Trump’s campaign advisory board, who has 89,000 of his own followers. A single Facebook post repeating the false claim, apparently with the help of bots, got 34,000 shares.

“As a campaign, we’re almost entirely powerless to stop misinformation,” Jen O’Malley Dillon, O’Rourke’s campaign manager, wrote on Twitter. One of her colleagues was more blunt in an interview: “We’re all sitting ducks if these are the rules the social media companies are going to follow. It’s insane.”


Scholars are struggling to gauge the effectiveness of disinformation campaigns, and the tech companies so far aren’t giving them much of the data they say they need. The firms express support for the scholarly efforts but are moving slowly, citing worries about user privacy and the trade secrets baked into their algorithms.

Facebook’s hesitance to turn over such data now threatens to sidetrack an alliance between the company and academics tracking disinformation. An ideologically diverse coalition of foundations sponsoring the effort, called Social Science One, has warned it will suspend funding at the end of this month if Facebook does not start releasing more data.

The company, which says it has 20 staffers working diligently to deliver the data, is still reeling from the fallout of the 2016 election, when it inappropriately shared reams of private user data with political operatives. Now, it’s under fire for not being more open with such data. Those are not ideal conditions for swift action, and the lack of clarity from Washington isn’t helping.

Researchers like Sinan Aral, an MIT professor investigating how social media misinformation affects voters, voice frustration that the charged politics around meddling are making it impossible to forge common-sense solutions.

“There has to be a nuanced approach,” he said. “We need to understand who is exposed to which messages. We don’t have that data. The platforms do.”
