Facebook can’t police Live video, and neither can anyone else

Mark Zuckerberg speaking at F8 in 2015. REUTERS/Robert Galbraith

Facebook (FB) forgot one thing before it began exhorting everybody and anybody to share moments from their lives as video streams on the social network: The “everybody and anybody” demographic, statistically speaking, will include some awful people.

We got a terrible reminder of that this weekend when Steve Stephens used Facebook to share a video of himself killing Robert Godwin Sr., a retiree and great-grandparent, on Easter Sunday.

Stephens uploaded the video at 2:09 p.m. and it stayed up until Facebook took it down at 4:22 p.m., according to Facebook’s timeline of events. Two days later, Stephens shot himself after police cornered him in Erie, Pennsylvania.

This abuse of Facebook’s video-sharing feature was upsetting, but isn’t surprising given the recent history of Facebook being used to live stream suicides, rapes and worse. Unfortunately, neither human moderators nor artificial intelligence can readily stop users from broadcasting these crimes — but Facebook still seems in a state of denial about that.

What Facebook says it will do

Facebook recognizes it has a problem with the video streaming option it constantly pushes on its users. But it’s not clear that management recognizes the extent of the issue.

“We have a lot more to do here,” founder Mark Zuckerberg said about the Cleveland killing video at the start of his keynote Tuesday opening the company’s F8 developer conference in San Jose. “We have a lot of work, and we will keep doing all we can to prevent tragedies like this from happening.”

And what exactly is that? Facebook has traditionally relied on users to call out abusive content — meaning somebody has to see something they can’t unsee and then tell Facebook about it.

That will never work on any large social network, much less one with Facebook’s 1.15 billion mobile daily active users, according to an expert on social-network abuse.

“No live streaming service that relies on user flags to trigger the moderation process can possibly keep rapes, suicides and murders out of public view,” explained Mary Anne Franks, a professor at the University of Miami School of Law and vice president of the Cyber Civil Rights Initiative. “A suicide, rape or murder video only needs a few seconds to go viral, at which point removal by the platform has limited impact.”

We asked Facebook for comment Tuesday afternoon; a publicist pointed to a company statement pledging improvements to its reporting and review process and noting its work in using software to prevent sharing of a horrific video in its entirety.

“In addition to improving our reporting flows, we are constantly exploring ways that new technologies can help us make sure Facebook is a safe environment,” Facebook’s vice president of global operations Justin Osofsky said in the statement.

“Artificial intelligence, for example, plays an important part in this work, helping us prevent the videos from being reshared in their entirety.”

Some ugly sights should be shared

At the same time, a blanket ban on violent content would also block a vital application of Facebook Live: documenting abuses of power.

“We saw an explosion of videos caught on Facebook Live because of Philando Castile’s death,” said Caroline Sinders, an online-harassment researcher at Wikimedia. Without that broadcast, the police shooting of that man outside St. Paul, Minnesota, might have gone unremarked.

“We need something that is good for protests but bad for harassment,” Sinders concluded.

The law itself places no liability on Facebook for hosting explicit or gruesome content. “Section 230 of the Communications Decency Act broadly immunizes internet intermediaries from being responsible for user-generated content,” wrote Electronic Frontier Foundation staff attorney Sophia Cope in a forwarded statement.

She added that while Facebook can set its own rules, it should “ensure that online speech in the public interest, that has journalistic or artistic value, for example, is not unduly subject to private censorship.”

The machines won’t save us from ourselves

This isn’t the first inevitable hangup with Facebook video. Two years ago, the problem was “freebooting”—people sharing copyrighted video and depriving the creators of ad revenue.

Facebook responded last April by launching Rights Manager, an automated video-matching system akin to the Content ID system Google (GOOG, GOOGL) uses on YouTube to flag copyright-infringing clips.

But freebooting persists despite Facebook’s efforts. For instance, in January, Fortune’s Jeff John Roberts found copyrighted videos of mixed martial arts fighter Ronda Rousey’s late-2016 defeat all over Facebook.

“I can’t say that it’s made a dent in the problem from a personal perspective,” said Joe Hanson, host of the PBS digital series “It’s Okay To Be Smart,” who has complained about the freebooting problem before.

With copyright infringement, however, Facebook has a relatively easy job of pattern matching. That’s also true of Facebook’s attempts to combat “revenge porn” shared against the wishes of a person shown in it: You can teach a computer to recognize unwanted content that already exists.
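To make that distinction concrete, here is a minimal sketch, not Facebook’s actual system, of how previously flagged imagery can be matched: a perceptual “average hash” of a frame is compared against hashes of known bad frames. The file names, helper functions and distance threshold are illustrative assumptions.

```python
# Minimal sketch of matching *already-known* content by perceptual hashing.
# This is an illustration, not Facebook's implementation; names and the
# threshold are hypothetical.
from PIL import Image


def average_hash(path, size=8):
    """Shrink an image to an 8x8 grayscale grid and hash pixels against the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a, b):
    """Count the bits that differ between two hashes."""
    return bin(a ^ b).count("1")


def matches_known_content(frame_path, known_hashes, threshold=5):
    """Return True if a frame is close to any previously flagged frame."""
    h = average_hash(frame_path)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)


# Hypothetical usage:
# known = [average_hash("flagged_frame.png")]
# print(matches_known_content("uploaded_frame.png", known))
```

A system like this can only recognize content it has seen before, which is exactly why it helps against re-uploads but not against a live stream of something new.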

For Facebook’s extensive machine-vision work to police live video (which, as Franks noted, could also mean “sparing human workers the trauma of sifting through gruesome murders and rapes”), a computer would have to recognize footage nobody has seen before. Then it would need to know when a horrible video represented somebody bearing witness to an atrocity.

At some point, Facebook needs to think deeply about the reality that any tool it puts before a billion-plus people will be abused grotesquely by some of them. Instead, it keeps being surprised that bad people are on the internet.

Consider what came after Zuckerberg’s F8 apology: an extended sales pitch for the company’s ambitions in augmented reality — in which mobile devices overlay user-generated content on live video of your surroundings.

It all sounded great, but the presentation showed no evidence that the company had pondered the question Sinders suggests every social-media developer ask: “When you make something, how can this be used against a person with less privilege than me?”

Email Rob at [email protected]; follow him on Twitter at @robpegoraro.