The credibility of the mainstream media has been called into question. Is there any difference between what the mainstream media says and what the average person says? After all, much of our news today comes from real-time video shot by people who happen to be on the scene, not professional reporters. We see much more unfiltered reality this way than we ever did before. Whether it's a natural disaster or a police shooting caught on video, we don't rely on the mainstream media as much as we used to.
Adding to its credibility problem, the mainstream media has demonstrated strong biases in its reporting. This has become more pronounced as print media has suffered from the onslaught of online information. Budgets have been cut, and formerly reliable print outlets have been purchased by large corporations focused on profits, not truth. Many of these corporate owners demand that once-reliable sources slant their coverage or sensationalize it in ways that were never done before.
But if news can come from anyone, and people have their own agendas too, who can you trust?
Fake news has made headlines lately, especially around the election. Was it the Russians, teenagers looking to make fast cash by driving hits to their nascent sponsored sites or posts, or political operatives trying to sway voters? Will the Internet always be susceptible to these kinds of schemes? Will we see more incidents of people so convinced that falsehoods are true that they resort to violence to right imagined wrongs? Will propaganda become the currency of our modern internet age? Or has it already?
If you have millions of social media contacts and can shout your message out to them, a large percentage will believe almost anything you say. The more contacts you have and the more people who propagate your claims, the truer your message sounds. Purveyors of fake news use this, jihadists use it, political extremists use it, and now it's becoming mainstream. We're overloaded with propaganda. How do we find the truth?
Many pundits claim that we just need to educate users to differentiate what's true from what's not. Information consumers need to learn to take the time to look deeper into stories and their sources. Unfortunately, I don't think this can work. Some of the most intelligent people I know have shared misleading stories that cherry-pick facts to appear more credible. Usually they've propagated these stories based on headlines that seem to support their own points of view. After all, we love it when we're proven right, don't we?
If college-educated, internet-savvy people can do this, is there any hope for internet neophytes?
Is it really reasonable to expect us to fact-check everything we see on the Internet, especially when it comes from our favorite, trusted social media sites and our friends?
I don't think so.
We know that major players like Facebook, Twitter, Google, and others are now looking at how to protect their users from misleading and potentially damaging information without violating First Amendment rights to free speech and freedom of expression. But so far, interviews with their technologists seem to indicate that the problem may be intractable. I'm not so sure. It might not be that difficult.
I've noticed that some sites, like Yahoo News, offer a scoring mechanism for the stories they publish. If you roll over a story's headline, a meter pops up showing the number of people who liked the story versus the number who disliked it or were neutral. This doesn't address the problem at hand, as it only indicates how popular a story (and its positions) might be. But it might be an idea that can be built upon. What if we could roll over a story, post, tweet, or search result and see a credibility meter instead?
As a technologist, I often look to working solutions to see if they can be applied to new problems. In this case, a proven model has been staring us in the face.
A Proven Model We Can Start With
Not so long ago, email spam was a major problem. We'd receive hundreds or even thousands of emails a day. The majority were just junk; some were dangerous (with viruses attached); many were scams; and then, lost in the midst of all that junk, were the ones you really wanted to read. It was a disaster for most of us, causing lost productivity, wasted time, and in some cases damage to systems or pocketbooks.
And yet, in spite of the fact that there are a reported 400 BILLION spam messages per day on the internet, you don't hear much about spam anymore. It exists, but we now have spam filters that protect us.
Spam filters work through a combination of software running on our Internet Service Providers' (ISPs) mail servers and on our own computers.
At the highest level, these programs look at the email headers to trace the path of the message back to its source. They then validate the source against blacklists of known spammers. Many also compare it against whitelists of known, approved email servers. Next, they apply content filtering, using algorithms that recognize content common in known spam. They assign a score to each email. Based on that score, which combines the reliability of the source with the content, they decide whether to delete the message, to designate it as probably spam, or to pass it along as valid. If you look closely at your complete email headers, you can usually see each message's spam score. It's not a perfect system, but it works well enough to spare us from being inundated by unidentified spam.
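To make that concrete, here's a minimal sketch of the scoring approach in Python. The blacklist, whitelist, phrases, weights, and thresholds are all invented for illustration; real filters use far more signals and far more careful tuning.

```python
# Toy spam scorer: combine source reliability and content signals into
# one score, then map the score to delete / probably spam / valid.
# All lists, weights, and thresholds below are illustrative assumptions.

KNOWN_SPAMMERS = {"mail.bulk-offers.example"}       # hypothetical blacklist
APPROVED_SERVERS = {"mail.trusted-corp.example"}    # hypothetical whitelist
SPAM_PHRASES = ("act now", "free money", "winner")  # illustrative content rules

def spam_score(source_server: str, body: str) -> float:
    """Score an email from its source's reputation and its content."""
    score = 0.0
    if source_server in KNOWN_SPAMMERS:
        score += 5.0          # a known bad source weighs heavily
    elif source_server not in APPROVED_SERVERS:
        score += 1.0          # an unknown source is mildly suspect
    for phrase in SPAM_PHRASES:
        if phrase in body.lower():
            score += 1.5      # each spammy phrase adds weight
    return score

def classify(score: float) -> str:
    """Map the score to the three outcomes described above."""
    if score >= 6.0:
        return "delete"
    if score >= 3.0:
        return "probably spam"
    return "valid"

print(classify(spam_score("mail.bulk-offers.example", "You are a WINNER! Act now!")))
# -> "delete" (5.0 for the blacklisted source + 3.0 for two spammy phrases)
```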
Social media networks, search engines, news outlets, and the like could use a very similar method to validate posts. It's not terribly hard to find the source of a post, message, or story, and from that, to assign a credibility score based on the originator's history of reliability. When displaying the story, the site could include a rollover 'Credibility Meter': move your mouse over the search result, post, or message, and a widget would appear giving you the score for that particular content.
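As a rough sketch of how that score might be computed, here's a toy version in Python. The track-record format and the smoothing constants are my own assumptions; a real system would draw on much richer history.

```python
# Toy credibility score from an originator's verified track record.
# The record format and the smoothing constants are assumptions.

from dataclasses import dataclass

@dataclass
class SourceHistory:
    accurate_posts: int   # posts later verified as accurate
    debunked_posts: int   # posts later shown to be false or misleading

def credibility_score(history: SourceHistory) -> float:
    """Return a 0-100 score from the source's track record.

    Laplace smoothing (+1/+2) keeps brand-new sources near 50,
    so a single post doesn't earn a perfect or a zero score.
    """
    total = history.accurate_posts + history.debunked_posts
    return 100.0 * (history.accurate_posts + 1) / (total + 2)

# What the rollover widget might display:
print(f"Credibility: {credibility_score(SourceHistory(42, 3)):.0f}/100")  # ~91/100
```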
A simple version of this filter could be developed and deployed very quickly. Later, as content-dissecting algorithms become more sophisticated, the accuracy of the credibility score would improve along with them.
I note that even in its simplest form, a Credibility Meter of this sort would at least let us know if we should dig deeper into the source and credibility of the story. At the same time, sources with low credibility scores would be motivated to create more factual posts to raise their scores. Ultimately, we'd see more reliable information on the internet.
Clearly, I'm not advocating an absolute right-or-wrong, fact-or-lie approach here. I'm just suggesting that news and social media platforms assign scores to the information we receive, based on the likelihood that it is reliable and on the credibility scores of the originator and of the people who repost it. That way, we can decide whether we need to dig deeper or are content to believe what we see on the Internet.
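One illustrative way to fold reposters into the score, extending the toy scorer above (the 70/30 weighting is an arbitrary placeholder, not a recommendation):

```python
# Blend the originator's credibility with the average credibility of
# the repost chain. The 70/30 split is an arbitrary assumption.

def combined_score(originator: float, reposter_scores: list[float]) -> float:
    """Weight the originator's score with that of the people reposting."""
    if not reposter_scores:
        return originator
    repost_avg = sum(reposter_scores) / len(reposter_scores)
    return 0.7 * originator + 0.3 * repost_avg

# A high-credibility source reposted by a mixed-credibility crowd:
print(f"{combined_score(91.0, [40.0, 85.0, 60.0]):.0f}/100")  # ~82/100
```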
I mentioned this idea to my former team over lunch today, and one of my engineers is already at work on a prototype. But as I told him, with such an obvious solution, I'd be surprised if someone else isn't already working on it.