Introducing an add-on for Chrome that measures the trustworthiness of websites you visit every day.
About a year ago, my wife (who’s also a User Researcher at eyeo) told me about an idea she had for a browser extension that could identify fake news. I shared this idea with the Product Team a few months later, only to discover that many of them already had a passionate interest in the subject. Knowing there was a collective will to act, and a unique opportunity in the market, I pitched the idea to our executives, who gave me their approval to explore a minimum viable product.
So, I formed a small team of core contributors and a few advisors—just enough people to get shit done. Together, we defined the opportunity, designed and tested a prototype, and then built something real. Today, the Trusted News Beta add-on is available on the Chrome Web Store in English-speaking markets.
This is the story of how Trusted News came to be.
In the beginning, we saw a huge opportunity to address fake news, and had lots of ideas for an extension. We wanted to do things like measure each page for accuracy, suggest alternative articles from websites across the political spectrum, and offer users the ability to categorize and rate content themselves. But an initial product, built with limited resources, would have to be the best possible compromise between what's possible now and what will be possible later, with more resources.
Therefore, we started with the basics: (1) a solid vision, (2) great data sources, and (3) a straightforward user experience.
The vision itself is simple. Essentially, we believe that the distribution of misinformation online is a huge problem. Brexit, the 2016 US Presidential Election, and the polarization of society in general are just recent examples that have had enormous consequences. We wanted to build a tool that would rate the quality of content simply, accurately, and fairly as a user browses the web. However, it's not within our authority (or ability) to declare what's true or what's false; that should be up to each individual to decide.
Our primary goal, then, should only be to inform users of the trustworthiness of a particular source of information. They can believe or trust in whatever they want, but at least they have a tool that helps them separate fact from fiction. With this as our primary directive, we had a foundation to build on.
The really hard part was also the most critical. Reliable, unbiased data sources were the key to delivering any product worth installing. But such data sources are exceedingly hard to come by (or openly accessible). Nearly all of the ones that do exist lean one way or another politically, and they don't necessarily measure or score the same content sources.
Luckily, I happened to know the CEO of a company dedicated to information security and the accurate tagging of content. MetaCert started off providing cyber security tools to help developers and service providers keep porn off their networks. Then they began labeling all sorts of content that impacted corporate communication technologies, such as phishing attacks and malicious websites that were known to distribute viruses or malware through chat bots. They recently launched MetaCert Protocol, an anti-fraud and URL registry that pulls data from several sources, including PolitiFact, Melissa Zimdars' work for its News Reputation category, and their own extensive database, which identifies content across a variety of categories like fake news, far left, and far right. Plus, they offered a custom integration of their database into our own extension. MetaCert Protocol thus became our initial data provider.
We had a vision. We had data. Now it was time to design a user experience and an interface that was easy to understand.
As users navigate from one website to another, we wanted to arm them with contextual information without getting in their way. The first, most obvious way to communicate this information is through the Trusted News icon in the browser toolbar. We decided to make the icon display a traffic light system that immediately tells users whether a website is trustworthy (green), biased (amber), or untrustworthy (red). That way, a user already has an indication of the trustworthiness of a website without having to click on anything.
But if they do want more information, it should be simple to find and understand. The best way to do this is through the extension interface (aka the “bubble UI”), which is displayed whenever a user clicks the toolbar icon. Here, we had a limited amount of space to show information, and knew that whatever we showed would make or break the entire experience.
One challenge in particular was the labels themselves. We didn't want too many categories, or to be overtly political. Yet we did want the labels to be meaningful and genuinely helpful in making decisions. In the end, we came up with several categories that would show prominently in the bubble UI using the data available from the MetaCert Protocol registry…
- Trustworthy: Websites that are known to consistently provide quality, accurate information, regardless of whether the publisher is liberal, conservative, or moderate.
- Untrustworthy: Websites that are proven to produce false or purposefully misleading content.
- Biased: Websites that contain politically biased content or promote unproven or skewed views.
- Satire: Websites that produce satirical content, and are not intended to be sources of actual news.
- Malicious: Websites that are known to distribute viruses or malware.
- Clickbait: Websites that knowingly use misleading headings or article titles to attract readers in an effort to increase traffic and revenue.
- User-Generated Content: Websites that contain user-generated content and therefore can't be accurately evaluated.
- Unknown: Whether there's too little data or none at all, there isn't enough agreement between the data sources to assign any label to the website. The content may or may not be trustworthy; we simply don't know (yet).
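To picture how these labels connect to the traffic-light icon described earlier, here is a minimal sketch in JavaScript. The category names follow the list above, but the specific color mapping (and the grey fallback for unknown sites) is an illustrative assumption, not the extension's actual implementation.

```javascript
// Hypothetical sketch: map a content category to the traffic-light
// color shown on the Trusted News toolbar icon. The mapping is an
// assumption for illustration, not the shipped logic.
const CATEGORY_COLORS = {
  trustworthy: "green",
  biased: "amber",
  satire: "amber",
  clickbait: "amber",
  "user-generated": "amber",
  untrustworthy: "red",
  malicious: "red",
};

// Fall back to a neutral grey when the category is unknown or
// unrecognized, so the icon never claims more than the data supports.
function iconColor(category) {
  return CATEGORY_COLORS[category] || "grey";
}
```

In a real extension, the result of `iconColor` would then be passed to the browser's toolbar-icon API to swap in the matching image.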
The data sources used to make the decision are displayed below the label in the bubble UI, meaning that all the sources shown contributed to the final result. The user can click on any of the logos to learn more about the owner of the data source. Ultimately, they must then decide for themselves whether to trust the individual sources and/or agree with the label.
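The "agreement between data sources" behind the Unknown label can be pictured as a simple vote among sources. This is only an illustrative sketch under assumed rules (a majority threshold), not the actual Trusted News algorithm:

```javascript
// Illustrative sketch only: combine labels reported by several data
// sources into one result, falling back to "unknown" when the sources
// don't sufficiently agree. The vote-counting approach and the 50%
// threshold are assumptions, not the extension's real logic.
function combineLabels(labels, minAgreement = 0.5) {
  if (labels.length === 0) return "unknown"; // no data at all

  // Count how many sources reported each label.
  const counts = {};
  for (const label of labels) {
    counts[label] = (counts[label] || 0) + 1;
  }

  // Find the most frequently reported label.
  const [top, count] = Object.entries(counts).sort((a, b) => b[1] - a[1])[0];

  // Require a strict majority of sources to agree before labeling.
  return count / labels.length > minAgreement ? top : "unknown";
}
```

For example, two "trustworthy" votes out of three sources would yield "trustworthy", while a one-to-one split would yield "unknown".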
After user testing, we also agreed that users should have the means to provide anonymous feedback to the system. Our sources are limited (for now), and they all certainly have biases to varying degrees. But over the long term, we plan to add more data sources that measure more content from all over the world. Until then, we can at least ask users whether they agree or disagree with the label shown (essentially crowdsourcing quality assurance), a feature that will be included in the product soon.
The Trusted News Beta is the only thing we had the authority, resources, and time to build: a minimum viable product. We of course aim to nail the initial offering, but it will inevitably be prone to flaws or lacking seemingly obvious functions. That's why this is just a "Beta". We'll be conducting further user testing in the future that will lead to more features and a better experience.
Until then, here are some things to keep in mind:
- There are many ways a website could be labeled, from pornographic to left- or right-leaning content. Instead, we chose to keep the categories to a helpful minimum. When it comes to news, knowing which websites are most trustworthy is a more positive way to avoid fake news. Every other label, then, should provide only the information a user needs to determine whether the content they're reading is actual, reliable news or something they should consider more critically.
- As for rating websites versus individual webpages, that's both a technical and a user-interface problem. For one, it's nearly impossible to review and score every article or content page out there; there is no automated process for such a thing. This work takes serious people power, as only human beings can understand and categorize content with any accuracy. So for most pages on lesser-known websites, there would be nothing to show the user.
- Then there are the people themselves. Regardless of whether the human reviewer is from MetaCert, PolitiFact, or Melissa Zimdars herself, all people are prone to some form of bias. They are not the arbiters of truth. They're just humans making decisions. However, each organization has its own criteria (sometimes very strict), and we're confident that the results are as fair and accurate as possible.
Armed with the information Trusted News provides, we hope users will at least think more critically about the websites they visit. Information—especially news—is often conflated with "truth" simply because it's real content online. But just because somebody writes something we either agree or disagree with doesn't make it true or untrue.
Beyond that, we intend to build a community of users, data providers, and partners to provide a more comprehensive, accurate, and fair product, across more countries and in other languages. We strongly believe that this is a long-term project that will take a lot of work to perfect (knowing fully it will never be “perfect”).
Also, everyone here at eyeo is a fierce advocate of the open web and personal privacy. We seek to bring transparency to the project, while being transparent ourselves, by making our product open source and holding data providers accountable. Meanwhile, users should be able to trust that their search history and personally identifiable information are not being collected, stored, or shared with third-party partners for profit. In time, we seek to establish credibility across the web publishing industry and earn the trust of our users. Until then, the Trusted News Beta extension is a great start.
If you want to participate in this experiment, or provide feedback for future development, download it now on the Chrome Web Store or visit trusted-news.com for more information. We invite anyone who cares about preserving journalistic quality and integrity online to join us.
This project is the result of many people who have contributed to the Trusted News Beta extension one way or another:
Misha Thornburgh, the User Researcher who developed a test plan, provided the information we needed to make the Beta better before launch, and came up with the original idea.
Ann-Lee Chou, the other User Researcher who tested the initial prototype, conducted interviews, and supplied the team with invaluable insights.
Mario König, the Technical Project Manager who worked with everyone involved to keep everything moving forward in perfect synchronization.
Martin Velchevski, the Product Designer who refined the user experience, defined the final interface, and built the front-end.
Tom Woolford and Lisa Bielik, the Content Managers who found all the right words for all the things.
Vasily Kuznetsov, the Lead Developer who took care of everything on the backend, including integrating the MetaCert Protocol database in a way that respected user privacy.
Special thanks to Paul Walsh, CEO of MetaCert/Founder of MetaCert Protocol, and his team. Without their help, commitment, and generosity, we wouldn’t have been able to produce anything meaningful whatsoever.