At the heart of Trump's Twitter spat, a 'shocking level of bipartisan support' for Big Tech change


Once seen as a critical tool for internet platforms to police lewd and objectionable online speech, Section 230 of the Communications Decency Act has gained growing bipartisan support as a law in need of fixing.

Enacted in 1996, Section 230 exempts online platforms from liability for most user-generated speech. President Donald Trump has taken aim at changing the law in a fight against Twitter (TWTR), putting tech giants in legal and regulatory crosshairs that are likely to outlast the current election cycle.

Democrats and Republicans alike voice increasing antipathy over sweeping liability protections that 230 affords to all online platforms — including Facebook (FB), Instagram, YouTube (GOOG) (GOOGL). All told, experts say it’s becoming clear that change is coming.

Logos for Twitter, YouTube and Facebook. NATO-affiliated researchers reported in December 2019 that they were easily able to buy tens of thousands of fake likes, comments and views on Facebook, Twitter, YouTube and Instagram. (AP Photos/File)

“If Trump is reelected, frankly even if he isn’t reelected, you might see variations on this proposal coming into some type of effect next year, with a shocking level of bipartisan support,” Florida’s former consumer protection czar, Richard Lawson, told Yahoo Finance recently.

It’s happened before. Back in 2018, bipartisan action on Section 230 created a carve-out that removed immunity for user content promoting or facilitating sex trafficking of minors.

“The fact that they would come together is huge,” Lawson said, speaking of the 2018 change.

In the wake of ongoing efforts by lawmakers and the Trump administration to claw back the protections, some say that’s reason enough for all companies with an online presence to seriously assess the cost of the law’s potential makeover.

“There’s a tremendous amount of liability at risk,” according to Rob McDowell, a former commissioner of the Federal Communications Commission (FCC) under presidents George W. Bush and Barack Obama, and a partner at Cooley LLP.

‘Something’s probably going to happen’

Sen. Ted Cruz (R-TX) speaks during an oversight hearing on the Federal Communications Commission, held by the Senate Committee on Commerce, Science, and Transportation on June 24, 2020, in Washington, D.C. (Photo by Alex Wong/Pool/AFP via Getty Images)

The Trump administration has twice targeted Section 230 in the space of a month. The first move was a controversial executive order that, in theory, would hand power to the FCC to reinterpret the statute; the second came via legislative “reform proposals” published by the Department of Justice (DOJ).

Yet independent of what the president may or may not accomplish, Congress is also growing restive, with Democrats and Republicans teaming up on bills to address the law’s perceived shortcomings.

In June, Senators Brian Schatz (D-HI) and John Thune (R-SD) introduced the “PACT Act,” which would expose platforms to federal civil claims, and allow users to appeal a platform’s moderation actions.

Separately, the Senate’s “EARN IT Act” — introduced by South Carolina Republican Lindsey Graham (R-SC), with backing from Connecticut’s Richard Blumenthal and other Democrats — is headed to the full Senate for debate.

The measure seeks to remove the liability shield from platforms that fail to meet standards, set by the government or by the platforms themselves, for preventing the sexual exploitation of children. However, security and privacy experts argue the bill is a backdoor way to rob platforms and their users of encrypted communication.

House Rep. Greg Walden (R-OR) has said he’s working on legislation that would impose additional responsibility on companies earning more than $1 billion in annual revenue. In February, Rep. David Cicilline (D-RI) introduced legislation that would remove liability protections for Big Tech companies that knowingly publish false political advertisements.

All of which suggests that the writing appears to be on the wall, according to Cooley’s McDowell — especially as a mix of political bedfellows join forces.

“If you have Sen. Ted Cruz (R-TX), Sen. Mark Warner (D-VA), Sen. Bernie Sanders (D-VT), Sen. Mike Lee (R-UT), all coming together...something’s probably going to happen,” he said.

Change that could be costly

Facebook Chairman and CEO Mark Zuckerberg testifies at a House Financial Services Committee hearing in Washington, U.S., October 23, 2019. (REUTERS/Erin Scott)

New changes are unlikely before next year. Yet platforms across industries can count on a host of rising compliance costs if user content must be policed more aggressively, along with the cost of defending against ensuing lawsuits.

“The cost of [platforms] actually having to meet the level of granularity that’s expected by some of these regulators -- that’s where the tension lies,” said Peter Hyun, a Section 230 expert and partner at Wiley LLP who once served as chief counsel to Sen. Dianne Feinstein (D-CA).

Legal experts say it’s unclear how big of a war chest companies will need, as it would depend on the nature of a platform’s business, and how significantly any changes shift liability.

While it’s hard to put numbers on it, big platforms “could potentially be facing millions of dollars in damages costs, depending on what the actual changes to the law are, and they could be spending millions of dollars on increased moderation costs,” according to James Rosenfeld, an attorney who handles defense of 230 claims for major media and tech companies, and previously for Yahoo Finance.

He explained that from a defense perspective, one advantage of 230 is that it wipes out the bulk of claims based on user-generated content in the early stages of litigation.

“So it cuts your bills a lot,” he said. “If a single case goes forward, it could be several hundred thousand dollars to millions of dollars to defend.”

Additional responsibility would require platforms to scale up on content moderation. Facebook, for example, pays around 15,000 contractors who work as content moderators, earning between $16 and $18 per hour. Collectively, they wade through around 3 million posts per day.

Meanwhile, back in 2017, Google announced plans to build out its moderation team to at least 10,000 people by 2018. And according to Reuters, Twitter’s content moderation arm comprises about 1,500 workers.

‘A monumental shift’

Jade Giltrap, head of the media coverage team for cyber insurance provider CFC, explained that even under the current wide-ranging shield, platforms do purchase coverage for claims arising from user-generated content. That includes coverage for defamation, duty of care, intellectual property, and privacy.

“It’s a defense, not a right,” Giltrap said of the practical application of 230. Accordingly, “the law’s protection doesn’t kick in from a monetary perspective until a company needs to invoke it as a defense.”

Premiums to defend such claims vary widely, and depend on a platform’s revenue, level of reliance on user-generated content, and lawsuit history. Big Tech platforms that rely heavily on user-generated content, Giltrap said, would tend to face higher premiums. However, they also tend to curb coverage costs through self-insurance and robust in-house legal teams that handle smaller claims, she said.

She added that an errors and omissions policy with a $10 million cap can run well into six figures for a larger online platform, and around $7,500 to $10,000 for small firms. But all that would change under a less comprehensive 230 liability shield.

“Where these platforms stop becoming only a conduit, and become responsible for most or all of the content on their platforms, that is a monumental shift, because it doesn't allow for them to adequately control that risk,” Giltrap said. “They can't control all of their content that moves through their sites.”

More moderation responsibility is exactly what most proposals to amend 230 have in mind.

Under the DOJ’s proposals, platforms would face civil liability if they purposefully facilitate third-party content that violates federal criminal law.

According to Wiley’s Hyun, one problem with that standard is that it tasks platforms with identifying potentially criminal content, as well as guessing what users intended when they posted it. That’s because criminal violations depend on the state of mind of the alleged violator, he said.

A cyberstalking threat posted to social media, for example, could be an obvious threat to the perpetrator who posted it, yet undetectable as such by platform moderators.

Even for experienced prosecutors and investigators, Hyun said that “painstaking” effort goes into uncovering enough evidence to allege the level of intent required for a single criminal case to go forward, based on a user’s online speech.

“That makes it incredibly difficult when you then try to assign that role to a company,” Hyun said.

The DOJ also wants to permit civil suits against platforms for content-based claims that address a broad array of cyber crimes, and permit government-initiated civil enforcement actions to “protect citizens from harmful and illicit content.”

The department also aims to remove the phrase “otherwise objectionable,” language that has long given platforms liability-free wiggle room to perform “good faith” content moderation. Under a redefined “good faith,” only moderation that adheres to a platform’s own terms of service would qualify for protection.

Trump’s Executive Order

A woman in Los Angeles views President Donald Trump's official Twitter account on June 23, 2020, showing a tweet the platform hid for violating its rules against abusive content. (Photo by AFP via Getty Images)

Trump’s executive order takes a different approach. It looks to hand power to the FCC to reinterpret and issue rules for Section 230, and to the FTC to police content moderation under its authority to enforce unfair and deceptive trade practices.

Because 230 extends no authority to either agency, McDowell said efforts to carry out the executive order would probably face legal challenges. On the other hand, the agencies, both independent of the executive branch, could choose to ignore the order altogether.

“You don’t want the FCC to legislate,” McDowell said. Absent action by the FCC, the FTC could still pursue enforcement action that could prove costly for platforms to defend, even though the agency lacks effective tools for penalizing alleged bad actors, according to McDowell.

Although some proposals to weaken 230 protections may lack teeth, the attorney cautioned that the mere contemplation of altering the law dredges up intangible costs.

“There’s a big unknown here. There are opportunity costs with the inhibition to innovate, or the chilling of taking positive, constructive risks with a new business idea, or with a post,” he said. “That’s harder to measure. What technologies or services don’t happen if this law is changed?”

Lawson echoed the point, saying the current crop of platforms is essentially a creation of Congress’s passage of CDA 230. “This massive growth really probably could not have happened if all of this time all of these websites were having to have armies of lawyers and other people, vetting the content.”

Newer platforms, and not Big Tech, tend to be the entities that test the waters on 230’s reach, Giltrap said. That is because they are more likely to introduce leading-edge technologies that push its boundaries.

Rosenfeld said any changes should be looked at broadly — not just in terms of what they can do to Big Tech and companies with an online presence, but at how those changes will impact people's experiences on the internet.

So far, most alterations to 230 have taken shape through courts’ interpretations of the law, without intervention from lawmakers or administration officials. Still, they do shift risk for online platforms. A recent Ninth Circuit Court of Appeals decision, for example, held that a software company could not use the law as a shield for content moderation that amounted to anticompetitive behavior.

“That’s sort of the first gap in the armor,” McDowell said.

Alexis Keenan is a reporter for Yahoo Finance. Follow on Twitter @alexiskweed.



