A documented contributor to our societal disconnect is a tool most of us in internet-land (and certainly agency-land) use on a daily basis. While Facebook keeps us connected with folks we seldom see in person, it also puts us in social bubbles of like-minded individuals sharing like-minded content. The extent to which social media “echo chambers” have contributed to our political disconnect is not entirely known, but logically, we can deduce that being repeatedly reminded of one’s “correctness” while easily filtering out any dissenting viewpoint cannot be healthy for critical, analytical thought. Those who are not isolated but are connected to a variety of opinions might grow to despise those with opposing viewpoints (or to despise politics altogether) as they are bombarded with articles they feel are untrue, misleading or uninteresting.
So how was this election different from previous ones in which Facebook played a significant role? People were sharing opinion posts in 2008, 2010, 2012, and 2014, and there didn’t seem to be much controversy surrounding them. What caused the debate to become so bitter and divisive this time around?
In times past, we’d look to objective fact as a mediator between viewpoints. Former Senator Daniel Patrick Moynihan is credited with proclaiming, “You’re entitled to your own opinions. You’re not entitled to your own facts.” People boldly declare that the facts are on their side in social media disputes, but ultimately, the facts will support certain viewpoints better than others. During the presidential debates, comparatively neutral outlets like NPR and PolitiFact tracked candidates’ claims and accusations and provided real-time fact-checking to help better inform the electorate. Unfortunately, much of this thorough, even-handed reporting was met with disdain as we came to realize that social media echo chambers had moved beyond basic opinion bias and were now perpetuating false information masquerading as fact.
Facebook has acknowledged but downplayed the role of “fake news” in the 2016 election, while third parties analyzing its impact beg to differ. With 44% of U.S. adults getting news from Facebook, how can we promote objective fact to keep us from falling into alternate realities perpetuated by the comforts of our personal echo chambers? Better yet, how can we do so without censoring free speech, taking special care not to marginalize any particular point of view?
Step 1: Make filtering a community responsibility.
Rather than censoring information in-house (as Facebook does with content it deems hateful or pornographic), why not make this a community duty? Two reasons:
Reason 1: Leaving it up to Facebook users to report inaccurate content removes the Facebook corporation from the role of gatekeeper of information. Facebook has caught flak in the past for favoring certain viewpoints. This would create much-needed separation and bring in the diversity of worldviews needed to lay claim to inclusivity.
Reason 2: Having users flag content they feel is inaccurate encourages them not to “un-friend” or unfollow family and friends with opposing viewpoints. Rather than yelling at a friend through the comments section, one could simply report that person’s content.
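To make the idea a little more concrete, here is a minimal sketch of what community flagging could look like behind the scenes. It is purely illustrative: the report threshold, the data structures and the function names are assumptions, not anything Facebook actually exposes.

```typescript
// Hypothetical sketch: users report a post as inaccurate; once enough
// distinct users agree, the article is queued for outside review.

interface InaccuracyReport {
  postId: string;
  reporterId: string;
  reportedAt: Date;
}

const REVIEW_THRESHOLD = 25; // assumed number of distinct reporters

const reportsByPost = new Map<string, Set<string>>();
const reviewQueue: string[] = [];

function reportAsInaccurate(report: InaccuracyReport): void {
  const reporters = reportsByPost.get(report.postId) ?? new Set<string>();
  reporters.add(report.reporterId);
  reportsByPost.set(report.postId, reporters);

  // The community, not Facebook, decides what gets a second look.
  if (reporters.size >= REVIEW_THRESHOLD && !reviewQueue.includes(report.postId)) {
    reviewQueue.push(report.postId);
  }
}
```

The key design point is that the company never judges the content itself; it only counts how many independent people have raised a hand.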
Step 2: Promote third-party fact-checking.
There are a number of great fact-checking sites out there. Why not allow them to fact-check a disputed article? This would take more power away from “Big Brother” Facebook and add more credibility to the process.
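As a rough sketch of how a disputed article might be farmed out to independent checkers rather than judged in-house (the checker list, verdict labels and result shape below are invented for illustration, and the code assumes an ES2020 target for Promise.allSettled):

```typescript
// Hypothetical sketch: a disputed article is fanned out to independent
// fact-checking services; the platform only records the verdicts returned.

type Verdict = "TRUE" | "MOSTLY_TRUE" | "MIXED" | "FALSE" | "PANTS_ON_FIRE";

interface FactCheckResult {
  checker: string;        // e.g. "PolitiFact" or "Snopes" (illustrative)
  verdict: Verdict;
  explanationUrl: string; // the checker's own write-up
}

// Each third party would expose its own review process; this is a stand-in.
type FactChecker = (articleUrl: string) => Promise<FactCheckResult>;

async function requestFactChecks(
  articleUrl: string,
  checkers: FactChecker[]
): Promise<FactCheckResult[]> {
  // Ask every checker and keep whatever verdicts actually come back.
  const results = await Promise.allSettled(checkers.map((check) => check(articleUrl)));
  return results
    .filter((r): r is PromiseFulfilledResult<FactCheckResult> => r.status === "fulfilled")
    .map((r) => r.value);
}
```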
Step 3: Let people know.
When an article that had not previously been fact-checked is evaluated, two things must happen:
1. A notification must be sent to everyone who has shared the article. If PolitiFact rates a “fake news” article PANTS ON FIRE, there should be a simple explanation of that decision when the notification is clicked.
2. Existing postings of the article in question should feature an icon to let viewers know that it has been rated by a fact-checking service. Clicking said icon should open a tooltip giving the user the same information already delivered to the sharers via notification (a sketch of both pieces follows below).
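Here is one way the two follow-ups might hang together. The notification callback, badge field and message wording are hypothetical; none of this reflects Facebook’s real internals.

```typescript
// Hypothetical sketch: once a verdict exists, notify everyone who shared
// the article and attach a rating badge to every existing post of it.

interface RatedArticle {
  articleUrl: string;
  verdict: string;         // e.g. "PANTS ON FIRE"
  explanationUrl: string;  // the fact-checker's explanation
}

interface Post {
  postId: string;
  authorId: string;
  articleUrl: string;
  badge?: { label: string; detailsUrl: string };
}

function applyRating(
  article: RatedArticle,
  posts: Post[],
  notify: (userId: string, message: string, link: string) => void
): void {
  for (const post of posts) {
    if (post.articleUrl !== article.articleUrl) continue;

    // 1. Notify the person who shared it, linking to the explanation.
    notify(
      post.authorId,
      `An article you shared was rated "${article.verdict}" by a fact-checking service.`,
      article.explanationUrl
    );

    // 2. Badge the existing post so later viewers see the same information.
    post.badge = { label: article.verdict, detailsUrl: article.explanationUrl };
  }
}
```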
Step 4: Don’t outright censor, except in extreme circumstances.
Unless someone is being falsely accused of a heinous crime, or of something that is not only a proven lie but seeks to tarnish or destroy character, “fake news” articles should be allowed to remain posted, much like tobacco can still be sold as long as it carries ample, obvious warning labels. That said, Facebook would be wise to adjust its algorithm so repeat liars are given lower priority as link sources, ultimately decreasing their visibility and impact. This would help accurate information go viral while filtering out much of the mess. So long as investigative journalism remains the standard-bearer of factual legitimacy, the updated newsfeed should help organizations that invest in thorough reporting gain visibility and see returns on their labors.
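To illustrate what “lower priority as link sources” could mean in practice, here is a minimal sketch of a credibility-weighted feed score. The credibility values, domain names and weighting scheme are assumptions made for the example, not Facebook’s actual ranking algorithm.

```typescript
// Hypothetical sketch: down-weight links from sources with a track record
// of failed fact-checks so repeat offenders spread less far.

interface FeedItem {
  postId: string;
  sourceDomain: string;
  baseEngagementScore: number; // whatever the normal ranking would assign
}

// 0.0 = fails every fact-check, 1.0 = clean record (values are made up).
const sourceCredibility = new Map<string, number>([
  ["well-sourced-journal.example", 0.95],
  ["pants-on-fire-news.example", 0.15],
]);

function adjustedScore(item: FeedItem): number {
  // Unknown sources are treated neutrally rather than punished.
  const credibility = sourceCredibility.get(item.sourceDomain) ?? 0.5;
  return item.baseEngagementScore * credibility;
}

function rankedFeed(items: FeedItem[]): FeedItem[] {
  return [...items].sort((a, b) => adjustedScore(b) - adjustedScore(a));
}
```

Nothing is deleted under this scheme; a story from a chronically inaccurate source simply has to earn far more engagement to travel as far as one from a reputable outlet.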
As people who design, implement and improve websites, we know that a lot of imagination, testing and man-hours go into building great user experiences. With a brilliant staff, record profitability and an anxious public, Facebook would be wise to put on a pot of coffee and find a fair, reasoned solution to its echo chamber problem before we face even greater turmoil in the next election. Writing code can be a challenging, laborious process; however, it does not require unanimous consent from three often conflicting branches of government. It’s time to restore sanity to millions of newsfeeds.