I'm a fan of PolitiFact. They rate political statements based on truthfulness, using one of six ratings: True, Mostly True, Half True, Barely True, False, and Pants on Fire. I think they do a pretty good job of not being biased. Although I sometimes disagree with their ratings, the write-up is usually pretty good.
I had an idea. Their site lists 625 people and groups whom they have rated at least once, and each person's page gives counts of their different ratings. I made a script to scrape the site for these stats, then computed some averages. Since there are six ratings, I assigned a score of 7, 6, 5, 3, 2, or 1 to each. Note the missing 4: I feel there is a division between the top three ratings and the bottom three, so I left a gap between them.
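The scoring step can be sketched in Python. The rating counts below are made up for illustration, not real PolitiFact data, and the function names are my own:

```python
# Score assigned to each PolitiFact rating (note the deliberate gap at 4,
# splitting the top three ratings from the bottom three).
SCORES = {
    "True": 7,
    "Mostly True": 6,
    "Half True": 5,
    "Barely True": 3,
    "False": 2,
    "Pants on Fire": 1,
}

def average_score(counts):
    """Weighted average of the 1-7 scores, weighted by rating counts."""
    total = sum(counts.values())
    if total == 0:
        return None  # person has no ratings at all
    return sum(SCORES[rating] * n for rating, n in counts.items()) / total

# Hypothetical person with 15 ratings scraped from their page.
example = {"True": 3, "Mostly True": 5, "Half True": 4,
           "Barely True": 2, "False": 1, "Pants on Fire": 0}
print(average_score(example))
```

This person's 15 ratings sum to a score of 79, so their average lands a bit above Half True.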
Also, I did something I've been thinking about for a while. It bothers me that whenever you sort a list by rating, the top items are always the ones that have received a single maximum rating. I've long thought an easy way to combat this would be to start each item off with a hard-coded middle rating; in this 1-7 example, that means every person started with a single 4 rating. I'll admit this didn't work quite as well as I had expected. It did push the single-rating entries down a bit, but because these ratings tend to hover in the middle, it worked less well than it would for a site where the ratings cluster near the extremes. So I also used the other method of combating high averages from single ratings: I restricted the list to people with at least five ratings.
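Both fixes can be sketched together: seed each person with one phantom middle rating, and drop anyone with too few real ratings. This is a minimal sketch under my own naming, operating on a list of already-assigned scores rather than the raw scraped counts:

```python
def average_with_prior(scores, prior=4, min_ratings=5):
    """Average of a person's 1-7 scores, seeded with one hard-coded
    middle rating (the prior). People with fewer than min_ratings
    real ratings are excluded entirely (returns None)."""
    if len(scores) < min_ratings:
        return None
    # The single phantom rating of 4 pulls small samples toward the middle,
    # so one lone 7 can no longer top the sorted list.
    return (sum(scores) + prior) / (len(scores) + 1)

# Hypothetical person with five ratings scored 7, 7, 6, 5, 3.
print(average_with_prior([7, 7, 6, 5, 3]))

# A person with a single perfect rating is filtered out, not ranked first.
print(average_with_prior([7]))
```

The phantom rating matters most for people with few ratings; as the real rating count grows, the extra 4 washes out of the average.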
Here are the top and bottom five by average (including the hard-coded 4, and only people with five or more total ratings):
Here's the data:
http://daleswanson.org/blog/truthmeter.ods
http://daleswanson.org/blog/truthmeterresults.csv