The World Wide Web, after just 12 years in mainstream use, has become an infrastructure accessed by hundreds of millions of people every day, and the medium through which trillions of dollars a year are transacted. In this short period, the Web has already been through a boom, a crippling bust, and a renewed ascent in the modern era of 'Web 2.0'.
But imagine, if you could, a Web in which web sites are not just readable by humans, but in which information is structured so that software itself can share and combine it. In other words, a Web that machines can interpret more readily, in order to make it more useful for humans. This vision of a future Web is known as the Semantic Web.
Why is this useful? Suppose that scientific research papers were published in a Semantic Web language that enabled their content to be integrated with other research publications around the world, making research collaboration vastly more efficient. For example, a scientist running an experiment could publish the data in a Semantic format, and another scientist with no connection to the first could find that data and build on it in real time. Tim Berners-Lee, as far back as 2001, predicted that this "will likely profoundly change the very nature of how scientific knowledge is produced and shared, in ways that we can now barely imagine."
Some are already referring to the Semantic Web as 'Web 3.0'. This type of labeling is a reliable litmus test of a technology falling into the clutches of emotional hype, so caution is warranted in assessing its true impact. I believe that the true impact of the Semantic Web will not manifest itself until 2012 or later. Nonetheless, the Semantic Web could do for scientific research what email did for postal correspondence and what MapQuest did for finding directions: eliminate almost all of the time wasted in the exchange of information.