Why Social Media Keeps Failing Us
May 11, 2017
By Lata Nott
Inside the First Amendment
Was there a time when people regarded social media in a wholly positive light? It's hard to remember. The honeymoon's been over for a while. We still recognize the benefits of social media — after all, the majority of Americans use platforms like Facebook, Twitter and Instagram on a daily basis — but when we talk about these companies, it's usually not to laud them for bringing the world closer together.
Our conversations about social media often revolve around the problems that have come with it. There are the usual laments about how these applications have ruined our ability to focus and made us all unhappier. And then there are the more serious concerns: That social media can serve as a fertile recruiting ground for terrorist organizations. That it enables, and perhaps encourages, people to broadcast themselves committing heinous acts. That it allows the unbridled dissemination of fake news, which may or may not have affected the outcome of the 2016 U.S. presidential election.
Here's a snapshot of the controversies that Facebook has encountered just within the past year:
- In May 2016, news broke that Facebook employed human curators to decide which news stories would show up as trending topics, and that these curators frequently suppressed conservative news stories.
- In September 2016, Facebook censored an iconic and historically important photograph of a naked child fleeing a napalm attack during the Vietnam War.
- In November 2016, Facebook was "embroiled in accusations that it helped spread misinformation and fake news stories that influenced how the American electorate voted."
- In April 2017, a video of a 74-year-old man's murder was posted on Facebook; shortly afterwards, the killer made a video confession via Facebook Live.
Each controversy was followed by a public outcry for Facebook to do better. Each outcry led to Facebook quickly rolling out some sort of triage solution, such as its partnerships with third-party fact checkers to deal with fake news stories, or its recent hiring of 3,000 contract employees to screen violent videos. Each solution was derided either as a token, too-small step in the right direction or as a misguided attempt to curtail free expression.
Is it possible that we expect too much from social media companies?
This isn't to say that Facebook shouldn't be held responsible for the increasingly large role it plays in disseminating the news. Nor is it to say that we, as news consumers, don't have the right to ask the tech giant to do better. But it's worth acknowledging that there may not be any obvious solutions or quick fixes to the problems that have emerged with our growing reliance on social media.
Pushing social media platforms to shut down terrorist-related accounts can help to curtail ISIS recruitment efforts. It can also deprive legitimate opposition groups in politically oppressive countries of a key communication tool.
Identifying a fake news story isn't always a straightforward endeavor, partly because "a clean database with a complete list of verified facts" does not — and cannot — exist.
The algorithms that Facebook and other social media platforms use to filter out offensive or obscene content are often criticized for lacking the nuance and common sense that human beings possess. But human curators, of course, bring their own biases and blind spots to the filtering process.
There may not be a way for any social media platform to get things right, especially since every one of us has a different idea of what "right" looks like. This doesn't let social media platforms off the hook; they have a responsibility to try to mitigate the problems they've created, and imperfect, "good enough" solutions are better than none at all.
But it's likely that our collective frustration with social media will never really go away. Perhaps the root of this frustration is that social media amplifies the worst elements of human nature: our comfort in our own "filter bubbles," our tendency to disregard facts in favor of the stories we want to be true, our viciousness towards each other (and how access to an audience can encourage that viciousness).
I've been told that one day, sooner than most of us think, artificial intelligence will develop to the point that it will mimic, and then quickly surpass, human intuition. Perhaps AI will be up to the task of sifting through vast amounts of human communication and striking the right balance between free expression and truth, security and decency.
You know, if our robot overlords actually care about that sort of thing.
Lata Nott is executive director of the First Amendment Center of the Newseum Institute. Contact her via email at lnott@newseum.org. Follow her on Twitter at @LataNott.