Google’s new ‘readability’ score – An analysis
Google has released a new feature (available under Advanced Search) that measures the readability of web pages, making it possible to filter your results to include only pages at a certain reading level (Basic, Intermediate or Advanced).
From our initial trials it is clear that there are some bugs: badly translated pages, or pages full of unusual word structures (think addresses), get classified as "Advanced", for example. But this is not widespread and does not really affect the overall patterns.
Example: “In English I am new to this, and I want to clarify that I speak English, I communicate with you enter through a translator” is classified as Advanced (which it may well be – but for the wrong reasons).
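Google has not published how its classifier works, but misfires like the one above suggest it leans on surface statistics such as word and sentence length. A classic, publicly documented formula of that kind is the Flesch reading ease score; the sketch below is a rough illustration of the general approach, not Google's actual method, and the syllable counter is a deliberately naive heuristic.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, with a naive silent-'e' rule."""
    w = word.lower()
    count = len(re.findall(r"[aeiouy]+", w))
    # Treat a trailing 'e' as silent unless it follows an 'l' (e.g. "table").
    if w.endswith("e") and not w.endswith("le") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease: higher scores mean easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Run on a short, plain sentence versus a long, polysyllabic one, the formula produces a large gap in scores – but, as with Google's classifier, a sentence can be garbled nonsense and still score as "hard" simply because its words are long.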
We thought it would be fun to test it on some of the top web properties globally, with some surprising and some expected results. It is a bit strange that Google uses an exact three-way split – perhaps they used their own domain as the benchmark for the rest of the web.
The myth that YouTube comments are the most mindless on the internet is confirmed by their consistently low readability levels, although this might also reflect the fact that most of the content there is not textual.
Comparing a clever chap’s website (stephenwolfram.com) with a site populated by mindless drooling drones (justinbiebermusic.com) gave exactly the results we expected.
Facebook and Twitter actually scored much higher than expected – although that may just be the characters in my social network having set my expectations unfairly low.