Recently, Jack made me aware of this article by WikiTribune about NewsGuard, a browser plug-in (also integrated into Microsoft's Edge mobile browser) that rates the trustworthiness of news websites. Part of the TRS is calculating what we call a "Source Credibility Rating" (CR), which makes it interesting to take a look at similar efforts by others.
These are some of my first impressions, and questions that could be worth asking, if we contacted them in the future.
Comments
- The binary green vs. red rating is better than nothing, but pretty simplistic. We are offering far more precision than that.
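To make the point concrete, here is a minimal sketch of why a binary label loses information that a continuous score keeps. The function names, the 0-100 scale, and the 60-point threshold are illustrative assumptions on my part, not NewsGuard's actual cut-off or our implementation:

```python
# Illustrative only: the threshold and scale are assumptions,
# not NewsGuard's real criteria.

def binary_rating(score: float, threshold: float = 60.0) -> str:
    """Collapse a 0-100 credibility score into a green/red label."""
    return "green" if score >= threshold else "red"

def granular_rating(score: float) -> float:
    """Keep the full 0-100 score so consumers can rank sources."""
    return score

# Two very different sites get the same binary label...
print(binary_rating(61.0))  # green
print(binary_rating(99.0))  # green
# ...but the granular score still distinguishes them.
print(granular_rating(61.0), granular_rating(99.0))
```

A barely-passing site and an excellent one become indistinguishable under the binary scheme, which is exactly the precision a finer-grained CR preserves.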
- One measure they take to ensure reliability is that each rating is set by more than one analyst. This is good but may not be enough: critics could say the analysts are a cluster of like-minded people.
- Their highest weighted criterion (22/100 points) is that the site "does not reportedly publish false content". To determine whether this is the case, they rely on the assessments of "journalists of NewsGuard or elsewhere". That can of course be (1) too subjective if relying solely on their own journalists' assessments, or (2) fall into a circular trap of using source A to determine the truthfulness of information published in source B, source B to determine the truthfulness of source C, and source C to determine the truthfulness of source A.
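The circular trap described above can be phrased as a graph problem: if "A is assessed using B" edges form a cycle, no assessment in the cycle rests on independent ground truth. The sketch below is my own illustration of that idea (the graph data and function are hypothetical, not any real assessment map):

```python
# Hypothetical sketch: detect circular reliance in an
# "assessed using" graph via depth-first search.

def find_cycle(assessed_by: dict[str, list[str]]) -> bool:
    """Return True if the 'assessed using' graph contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / in progress / done
    color = {node: WHITE for node in assessed_by}

    def dfs(node: str) -> bool:
        color[node] = GRAY
        for ref in assessed_by.get(node, []):
            if color.get(ref, WHITE) == GRAY:
                return True            # back edge: cycle found
            if color.get(ref, WHITE) == WHITE and dfs(ref):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in assessed_by)

circular = {"A": ["B"], "B": ["C"], "C": ["A"]}
anchored = {"A": ["B"], "B": ["C"], "C": []}  # C rests on primary evidence
print(find_cycle(circular))  # True
print(find_cycle(anchored))  # False
```

In the second graph the chain bottoms out at a source backed by primary evidence, so the assessments are anchored; in the first, they only vouch for each other.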
- Their criterion "handles the difference between news and opinion responsibly" is the only one that includes some of what we call Patterns of Deception. The patterns mentioned in the description seem like nothing more than a couple of illustrative examples, but even if they had a more comprehensive list of PoDs, to me they are missing the central concept of a Deception Impact Density (DID). Without that, it is hard to tell how significant the impact is. Their weighting gives this criterion 12.5/100 points, not very high for the one criterion that identifies explicit deception patterns.
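Since DID has no formula in this post, here is a made-up placeholder definition (severity-weighted pattern hits per 1000 words) just to show why a density matters; it is not our actual metric:

```python
# Illustrative placeholder only: this is NOT the project's real DID
# definition, just a sketch of why normalizing by content length matters.

def deception_impact_density(pattern_hits: list[float], word_count: int) -> float:
    """Severity-weighted deception-pattern occurrences per 1000 words."""
    if word_count <= 0:
        raise ValueError("word_count must be positive")
    return sum(pattern_hits) * 1000 / word_count

# The same two pattern hits land very differently in a short piece
# versus a long one:
print(deception_impact_density([1.0, 0.5], word_count=200))   # 7.5
print(deception_impact_density([1.0, 0.5], word_count=5000))  # 0.3
```

The point is that a raw count of deception patterns, which is all a checklist criterion can capture, says little without relating it to how much content carries them.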
- How often do you update the rating of the websites?
- What analytical procedure did you follow to set the criteria and their weighting?
I hope this is interesting, and feel free to add any thoughts on the matter to this thread!