Thursday, February 07, 2008
How to make sense of the noise of the Web 2.0 generation ... Web 3.0?
Looking back over the last 10-15 years, beyond the desktop that is, you could sum up the first real movement on the web as being about discovery and communication. The underlying trend of Web 2.0 has really been about sharing and connecting. One side-effect of the Web 2.0 generation is that everyone creates a lot of noise: things I will not be interested in cloud my view of the things I might be. So how do we make sense of all this?

This is where it all gets complicated, of course. If in the last year I have gone on holiday, joined social groups, attended events, gone to parties, taken pictures and video, blogged, communicated and so on, how do these things relate to each other and what is relevant to other people? A recent conversation I had about deep tagging (at the Creative Coffee Club in Central London) provides an interesting step towards how you can flag things within richer assets such as video. Tagging right now appears to be the way the web is being sewn together. It is easy to implement, of course, but across multiple cultures it lacks relevancy and can be very tenuous: two things carrying the same tag may have nothing to do with each other.

Relevancy is a deep subject. The things I have mentioned above concern who I am, where I was, when I was there, what I was doing, what interesting things happened, who was there with me, and the same information about them. As I have said, things become really complicated because, chances are, once you upload these to current Web 2.0 sites you may not be able to answer all of those questions, especially if you were in a room with lots of people you don't know or can't remember. So the point at which 'things' are captured is going to become more important.

The mobile device right now is quite good at capturing basic things like photos, low-quality video, texts and so on, but all they typically have in common is a timestamp (which relies on the device time being right). It is hard to apply metadata that relates them together; right now that is mainly done in retrospect. Some devices can geotag, and some file formats carry additional metadata, such as EXIF on photos, but it only goes so far. I think geographical information will only move forward once devices really exploit it; GPS is limited, especially when you are indoors.

So how can I capture things and apply the metadata right there and then? If I take a photo on my mobile, yes, I might be able to tag a location, and yes, it will be timestamped, but if my phone and the file format provided a way to immediately add some richer metadata, that could go a long way towards making sense of everything when it is uploaded (there is a rough sketch of what I mean at the end of this post). If I am videoing an event, how can I pin/tag things within that event so I can find them later on? Deep tagging is not something you find in video/audio capture devices right now, but it would be a great addition that could allow a system to automatically edit some of your assets when uploaded (again, sketched below).

Another interesting trend has been micro-blogging, where people write short bursts about what they are up to on things like Twitter, Facebook and so on. Because this too is timestamped, it could help sew together other captured assets and apply some level of relevancy in a manner users might be more comfortable with (the third sketch below shows the idea).

I realise some thought has been put into areas such as the semantic web, but I think the factor that will make all of this work is when the user needs to do nothing, or very little, to apply this information at the point of capture.
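To make the point-of-capture idea a bit more concrete, here is a minimal sketch (in Python, purely for illustration) of the kind of record a device could attach to a photo the moment it is taken. None of these field names come from any real phone or file format; they are assumptions about what who/where/when/what might look like if it travelled with the asset from the start.

    # Hypothetical capture-time metadata attached by the device itself,
    # rather than filled in retrospectively after upload.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class CaptureContext:
        captured_at: datetime                # the one thing we usually do get
        location: Optional[str] = None       # geotag, cell id, or a named place
        event: Optional[str] = None          # e.g. "Creative Coffee Club"
        people: List[str] = field(default_factory=list)  # who was there with me
        tags: List[str] = field(default_factory=list)    # added on the spot

    @dataclass
    class CapturedAsset:
        filename: str
        context: CaptureContext

    photo = CapturedAsset(
        filename="IMG_0042.jpg",
        context=CaptureContext(
            captured_at=datetime(2008, 2, 7, 19, 30),
            location="Central London",
            event="Creative Coffee Club",
            people=["me"],
            tags=["deep tagging", "web 2.0"],
        ),
    )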
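Deep tagging within a recording could be as simple as a list of markers pinned to offsets inside the file. This sketch is again hypothetical, not based on any real capture device or format, but it shows how an upload service could use those markers to automatically pull out clips.

    # Hypothetical "deep tags": markers pinned to offsets inside one recording,
    # so a moment can be found (or automatically clipped) after upload.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class DeepTag:
        offset_seconds: float   # position within the recording
        label: str              # what happened at that moment

    @dataclass
    class Recording:
        filename: str
        duration_seconds: float
        deep_tags: List[DeepTag]

    def clips_for(recording: Recording, label: str,
                  padding: float = 10.0) -> List[Tuple[float, float]]:
        """Return (start, end) windows around every deep tag matching a label,
        the sort of thing an upload service could use to auto-edit highlights."""
        windows = []
        for tag in recording.deep_tags:
            if tag.label == label:
                start = max(0.0, tag.offset_seconds - padding)
                end = min(recording.duration_seconds, tag.offset_seconds + padding)
                windows.append((start, end))
        return windows

    event_video = Recording(
        filename="party.3gp",
        duration_seconds=1800,
        deep_tags=[DeepTag(245, "speech"), DeepTag(1320, "cake")],
    )
    print(clips_for(event_video, "cake"))   # [(1310.0, 1330.0)]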
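And on the micro-blogging point, here is a small sketch of sewing assets together by timestamp alone: a short post written at an event lends its words to photos taken around the same time. The 30-minute matching window is an arbitrary assumption, just to show the idea.

    # Match each photo to any micro-blog posts written close to its timestamp.
    from datetime import datetime, timedelta
    from typing import Dict, List, Tuple

    def stitch(posts: List[Tuple[datetime, str]],
               photos: List[Tuple[datetime, str]],
               window: timedelta = timedelta(minutes=30)) -> Dict[str, List[str]]:
        """Map each photo filename to the posts written near the time it was taken."""
        matches: Dict[str, List[str]] = {}
        for taken_at, filename in photos:
            matches[filename] = [text for posted_at, text in posts
                                 if abs(posted_at - taken_at) <= window]
        return matches

    posts = [(datetime(2008, 2, 7, 19, 45),
              "at the creative coffee club talking deep tagging")]
    photos = [(datetime(2008, 2, 7, 19, 50), "IMG_0042.jpg"),
              (datetime(2008, 2, 7, 9, 10), "IMG_0035.jpg")]
    print(stitch(posts, photos))
    # IMG_0042.jpg picks up the post's text; IMG_0035.jpg matches nothing.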
Anyway, food for thought.