The unexpected result of the presidential election has many people pointing fingers in an attempt to explain what happened. One of the biggest targets of blame has been social media, particularly platforms like Facebook and Twitter, for the role critics say they played in the proliferation of fake news articles. But focusing on this one aspect of the election ignores larger changes underway in the nature of news media, as Ruth Reader argues in the Fast Company article “How We Got To Post-Truth.”

The news media have become saturated with content in the past few years. The Internet lets far more people and groups publish for millions of readers, but it has also opened the door to a flood of unprofessional, unvetted information. Local news organizations and newspapers have suffered as their reporters compete for attention with a growing field of online blogs. And while more Americans still get their news from TV than from social media, the latter’s influence is growing: according to a Pew survey, more adults are using Facebook daily, and an increasing number of them say social media content influences their views on political issues.

The shift away from newspapers and toward Internet-based content has brought a surge in “fake news”: stories written without factual basis, not by professional journalists but by people chasing page views or ad revenue. This is where criticism of social media sites comes into play, since they are the major vehicles by which these stories spread.

Post-election reports offered striking statistics: in one sample of election-related tweets on Twitter, one in five was sent by a bot, and in the week before Election Day the top twenty fake news stories on Facebook generated more engagement than the top twenty real news stories. Social media highlights both the concern about fake news and the danger of ideological isolation. The algorithms used by platforms like Facebook help spread fake news by promoting posts from friends over those from liked pages, and social media also lets users pick and choose which kinds of content they are exposed to. Over-reliance on such a source of information can easily create self-reinforcing ideological bubbles in which people are shown only more of what they already believe.

The dangers of polarizing online echo chambers are many, but fortunately so are the possible solutions. In the short term, social media companies can do their part by supplementing their algorithms with human moderation of content and activity. The algorithms themselves could also be modified to counter the problem: if a story is being shared only within a small group of ideologically similar accounts, Facebook could avoid pushing that content to a wider audience. Google and Facebook have already barred fake news websites from their ad platforms, a move intended to defund such sites by having humans vet those seeking promotion. But we as readers can also do our part by learning to recognize the telltale characteristics of fake news articles.

Becoming stewards of our own media intake and making an effort to expose ourselves to viewpoints different from our own are some of the best ways to avoid echo chambers and preserve a democracy based on the healthy exchange of ideas.

If you’re in healthcare, insurance, technology or other professional services industries, and need help with a PR, marketing or social media campaign, contact Scott Public Relations.

Download our e-book, “The C-Suite Asks, We Answer: The Top 6 Questions About Healthcare PR.”

Like what you’ve read? Follow Scott Public Relations on LinkedIn, Twitter, Facebook, Pinterest, and Google+.

Sign up to receive our monthly advice on healthcare, insurance and technology PR: https://scottpublicrelations.com/.