The idea behind the "Semantic Web" starts from the observation that the internet as it stands is designed for humans to read, not computers - which means data is presented in a form that a computer has to be very, very smart to interpret accurately. The Semantic Web basically entails marking up existing web pages with computer-readable tags, so that a computer can see in black and white what's on each page. So, for example, when you do a Google image search for "cats", instead of searching the internet for images with "cats" in the filename, or in the alternate text, or in the caption, or just generally nearby, it simply searches for images with the word "cats" attached as a tag.
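To make that concrete, here's a minimal sketch in Python - hypothetical data and made-up field names, not anybody's actual search code - of the difference between guessing from context and looking up an explicit tag:

    # Roughly how image search works today: check whether the query turns up in
    # the filename, alt text, caption or surrounding text, and hope for the best.
    def heuristic_search(images, query):
        hits = []
        for img in images:
            clues = [img["filename"], img["alt_text"], img["caption"], img["nearby_text"]]
            if any(query in clue.lower() for clue in clues):
                hits.append(img["url"])
        return hits

    # How it would work on the Semantic Web: the page author has already attached
    # machine-readable tags, so the search is a plain lookup - no interpretation needed.
    def semantic_search(images, query):
        return [img["url"] for img in images if query in img["tags"]]

    images = [
        {"url": "a.jpg", "filename": "IMG_0042.jpg", "alt_text": "", "caption": "my cats",
         "nearby_text": "", "tags": {"cats", "pets"}},
        {"url": "b.jpg", "filename": "cats-the-musical-poster.jpg", "alt_text": "", "caption": "",
         "nearby_text": "review of the stage show", "tags": {"musical", "theatre"}},
    ]

    print(heuristic_search(images, "cats"))  # ['a.jpg', 'b.jpg'] - the poster sneaks in
    print(semantic_search(images, "cats"))   # ['a.jpg'] - only what's actually tagged "cats"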
Dead easy, right? Yes. Yes it is. Assuming everybody always tags everything correctly.
Which they WON'T.
People can't be trusted or expected to tag their stuff correctly. People ALWAYS try to finagle their way to the top of the search engine listings. It will become common practice for the unscrupulous to simply attach every tag they can imagine, whether applicable or not, to everything they ever put online - just as keyword stuffing already happens in regular search engines. You'll just end up getting porn after every search, regardless of what you search for.
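Run the same naive, trusting tag lookup from the sketch above against a tag-stuffed page (again, purely hypothetical data) and you can watch the whole thing collapse:

    # The same trusting tag lookup as before.
    def semantic_search(pages, query):
        return [p["url"] for p in pages if query in p["tags"]]

    honest_page = {"url": "cat-photos.example", "tags": {"cats"}}

    # A hypothetical spammer tags their page with every word they can think of...
    spam_page = {"url": "totally-legit.example", "tags": {"cats", "dogs", "recipes", "porn"}}

    # ...so it comes back for every single query, no matter what you search for.
    for query in ("cats", "dogs", "recipes"):
        print(query, "->", semantic_search([honest_page, spam_page], query))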
The only way to avoid that situation is for the tags to be applied by the people who actually have an interest in getting them right - namely, the search engine operators. But that means the tags have to be applied by automated crawlers. Which means the crawlers have to be able to interpret data that was meant for humans. We're back where we started.
This is really just a different way of looking at the same problem. It's one of many possible alternatives to Google's PageRank system and so on. I'm not saying it can't work - it could work better than PageRank, it could be AWESOME, who knows? But I, for one, am waiting to be impressed.