Is real-time the future of the web?
There has been a lot of talk about the real-time web and how it is ‘the future’ but what exactly is the real-time web and why should we care?
Put simply, the real-time web is instant. You do something, the system you are interacting with registers it, acts on it and lets the rest of the web know with near-zero delay. A good example of this is Twitter: you post your Tweet and seconds later it appears in search results and is visible to all your followers. Facebook has similar qualities applied across a much wider range of interactions, but it is based on the same principles.
Imagine all of your online experiences having the same kind of response. The web would become less linear and more like a conversation. This is already happening with Twitter becoming so popular, but fundamentally Twitter is for talking about things, not interacting with them.
The challenge is to use the real-time qualities the web offers in things other than social media.
If you build it, they will come
There are a few technologies being explored in the creation of the real-time web.
The first is XMPP (Extensible Messaging and Presence Protocol), currently the most popular. It is built on instant messaging, which has a proven track record, and it is what Twitter uses.
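At the heart of XMPP's push model is the message stanza: a small piece of XML the server delivers to the recipient the instant it arrives, rather than waiting for a client to poll. A minimal sketch of what one looks like, built with Python's standard library (the addresses are hypothetical):

```python
import xml.etree.ElementTree as ET

def build_message_stanza(sender, recipient, text):
    # An XMPP <message> stanza: the server routes this to `recipient`
    # as soon as it is received, which is what makes delivery real-time.
    msg = ET.Element("message", {"from": sender, "to": recipient, "type": "chat"})
    body = ET.SubElement(msg, "body")
    body.text = text
    return ET.tostring(msg, encoding="unicode")

stanza = build_message_stanza("alice@example.com", "bob@example.com",
                              "Hello in real time")
print(stanza)
```

In a real deployment the stanza travels over a persistent connection to an XMPP server, so there is no polling interval at all; the delay is just network latency.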
The second is SUP (Simple Update Protocol), developed by FriendFeed (recently purchased by Facebook). It works in a similar way to RSS but integrates a 'push' style of notification, telling you when there is something new so you only fetch what has changed.
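The idea behind SUP is that instead of re-fetching every feed, a consumer fetches one small JSON document listing which feeds changed recently, then updates only those. A hedged sketch of the consumer side, using an illustrative (not live) SUP document and made-up feed IDs:

```python
import json

# An example SUP-style document: a list of (sup_id, update_id) pairs
# naming the feeds that changed in the last period. Illustrative data.
sup_document = json.loads("""
{
  "period": 60,
  "updates": [
    ["a1b2c3", "1632"],
    ["d4e5f6", "1633"]
  ]
}
""")

# Feeds we are subscribed to, keyed by their sup_id (hypothetical).
subscribed = {"a1b2c3": "http://example.com/alice.rss",
              "zzz999": "http://example.com/carol.rss"}

# Only re-fetch the feeds whose sup_id appears in the update list.
changed = [subscribed[sup_id]
           for sup_id, _update_id in sup_document["updates"]
           if sup_id in subscribed]
print(changed)
```

The saving is in what is *not* fetched: one subscriber polling one tiny document replaces thousands of full-feed requests.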
Another contender is GNIP (http://www.gnip.com/) which is designed to sit in between the services that are updated, and the applications expecting updates, providing one access point to multiple streams of real-time data.
This means applications can be developed that don't have to actively poll each service as they do at the moment. Facebook for the iPhone is likely to be the first mobile application to demonstrate push notifications at any real scale. Even Google's recent push email update for the iPhone has been plagued with inconsistent notifications.
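The aggregator pattern Gnip sits on can be sketched in a few lines: many producers push into one hub, and each consumer registers a single callback instead of polling every service itself. A toy version, with all names hypothetical:

```python
class UpdateHub:
    """One access point multiplexing many real-time sources."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        # Consumers register once instead of polling each service.
        self.subscribers.append(callback)

    def publish(self, source, update):
        # Fan a producer's update out to every registered consumer.
        for callback in self.subscribers:
            callback(source, update)

received = []
hub = UpdateHub()
hub.subscribe(lambda source, update: received.append((source, update)))

hub.publish("twitter", "new tweet")
hub.publish("delicious", "new bookmark")
print(received)
```

The real service adds authentication, filtering and delivery guarantees on top, but the shape is the same: the hub absorbs the polling problem once, for everyone.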
Searching in real-time
There are a few tools you can try to test the current state of real-time search. Most of them use Twitter or other social networks for their up-to-the-second results, as these are the places that provide most of that type of information.
One such site is Scoopler (http://www.scoopler.com/), a real-time search engine. It sounds impressive, but it is really only searching sites like Twitter and Delicious and pulling out two sets of results: one showing the most recent results (mostly from Twitter), the other showing the most popular. It's a good combination, and probably the only way real-time search could ever really work: traditional search results supplemented by up-to-the-second ones.
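The two-column approach described above amounts to running one query and ranking the same pool of results twice, once by freshness and once by popularity. A minimal sketch with illustrative data and field names:

```python
# Illustrative results: `timestamp` is when the item was posted,
# `shares` stands in for whatever popularity signal the engine uses.
results = [
    {"url": "a", "timestamp": 300, "shares": 5},
    {"url": "b", "timestamp": 100, "shares": 90},
    {"url": "c", "timestamp": 250, "shares": 40},
]

# Column one: freshest first (the real-time side of the page).
recent = sorted(results, key=lambda r: r["timestamp"], reverse=True)

# Column two: most popular first (the traditional side of the page).
popular = sorted(results, key=lambda r: r["shares"], reverse=True)

print([r["url"] for r in recent])   # -> ['a', 'c', 'b']
print([r["url"] for r in popular])  # -> ['b', 'c', 'a']
```

Keeping the two rankings side by side, rather than blending them, is what lets a brand-new result surface before anyone has had time to vouch for it.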
A similar effect can be achieved with a Greasemonkey script (available here) that integrates a Twitter search into Google results, showing you Tweets related to your query. While most results tend to be irrelevant, you do find the occasional helpful Tweet, especially when looking for answers about problems or current events.
Then there is the problem of accuracy and authority. Real-time search is all about finding the most recent, up-to-the-second information, but because that information is so new there is no way to determine whether it is accurate or relevant in the way Google does so well. Twitter has tried with trending topics, but some topics are perpetuated largely by Tweets asking why they are trending.
What about Google?
Google is closing the gap between traditional and real-time search with the ability to filter results by time, including 'recent results', 'past 24 hours', 'past week', 'past month' and 'past year'. It has also introduced an option to set a specific date range to search, and an option for results from the previous hour. It is unlikely Google will go much further, given the types of content its search engine indexes.
If real-time is the way forward, how does this affect SEO? The current process of SEO involves keywords, anchor text, URL structures, link building and so on, but the only factor likely to survive into real-time search is keywords, something that, by itself, can do little more than show you the most recent matches. Google and most of the major search engines would probably argue there is more benefit in finding sites naturally through links, because a site with no links is probably not worth indexing. Perhaps a combination of social recommendations from friends and people you follow, recent results and authoritative sites could work.
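That speculative combination could be sketched as a single blended score per result: one term for freshness, one for link authority, one for recommendations from people you follow. Every field name and weight below is hypothetical; this is a sketch of the idea, not anyone's actual ranking:

```python
import math

def blended_score(result, now, followed):
    # Freshness fades over roughly a day (recency term).
    age_hours = (now - result["timestamp"]) / 3600.0
    freshness = math.exp(-age_hours / 24.0)
    # Log-scaled inbound links give diminishing returns (authority term).
    authority = math.log1p(result["inbound_links"])
    # Count of shares from people you follow (social term).
    social = sum(1 for user in result["shared_by"] if user in followed)
    return 0.5 * freshness + 0.3 * authority + 0.2 * social

now = 1_000_000
fresh_but_unknown = {"timestamp": now - 3600,
                     "inbound_links": 0, "shared_by": []}
old_but_authoritative = {"timestamp": now - 7 * 86400,
                         "inbound_links": 1000, "shared_by": ["friend"]}

followed = {"friend"}
print(blended_score(fresh_but_unknown, now, followed))
print(blended_score(old_but_authoritative, now, followed))
```

The tension the paragraph describes shows up directly in the weights: push the freshness weight high enough and spam wins; push authority high enough and the results stop being real-time.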
But what happens when someone spams the real-time web? With search results ranked chronologically by freshness, there may be very little anyone could do to avoid it. Twitter has struggled with this, and a number of spam Tweets have, at times, become trending topics. Could it be as easy as adding a CAPTCHA before posting? Would that have a detrimental effect on the experience of Tweeting? And what would happen to all the third-party applications that make Tweeting a pleasure?
There is certainly room for more real-time applications and more ways to utilise the unique, searchable and instant qualities of the real-time web, but it is most likely going to be geared even further towards the social side of the web. It will be interesting to see what new distribution models form around the real-time delivery of data to applications.
In conclusion, if Google thought real-time search was a good idea, I'm sure it would have already done it, and the most useful aspect of the real-time web will be content delivery. Traditional web indexing and SEO techniques will live to fight another day. That is, until the web gains intelligence. What happens when the web truly understands what you want?