Unsupervised learning of semantics?

Language understanding has been studied for years, but so far progress has been made only within limited domains (think of controlled vocabularies).

In this article, researchers at CMU, Google, and Yahoo come together in an ambitious attempt to build an ontology that captures the meaning of language from the contents of billions of webpages.

"Still, artificial intelligence experts agree that the Carnegie Mellon approach is innovative. Many semantic learning systems, they note, are more passive learners, largely hand-crafted by human programmers, while NELL is highly automated. “What’s exciting and significant about it is the continuous learning, as if NELL is exercising curiosity on its own, with little human help,” said Oren Etzioni, a computer scientist at the University of Washington, who leads a project called TextRunner, which reads the Web to extract facts."

The proposed system seems to have the ability to continuously "learn" knowledge from massive amounts of Internet data. But does that actually solve the underlying problem, that the "meaning of words and phrases hinges not only on their context, but also on background knowledge that humans learn over years, day after day"?
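To make the idea of continuous learning concrete, here is a minimal sketch of bootstrapped fact extraction in the spirit of NELL. This is not NELL's actual algorithm (which couples many extractors and constraints); it only illustrates the basic loop: start from a few seed facts, learn the textual patterns those facts appear in, then apply the patterns to propose new candidate facts. The corpus, seed set, and function names here are all invented for illustration.

```python
import re

def learn_patterns(corpus, seeds):
    """Turn sentences mentioning known entities into reusable patterns.

    Each pattern replaces the entity with a slot marker "<X>".
    """
    patterns = set()
    for sentence in corpus:
        for entity in seeds:
            if entity in sentence:
                patterns.add(sentence.replace(entity, "<X>"))
    return patterns

def extract_candidates(corpus, patterns):
    """Apply learned patterns to the corpus to propose new entities."""
    candidates = set()
    for pattern in patterns:
        # Build a regex with a capture group where the slot marker was.
        parts = pattern.split("<X>")
        regex = r"(\w+)".join(re.escape(p) for p in parts)
        for sentence in corpus:
            match = re.fullmatch(regex, sentence)
            if match:
                candidates.add(match.group(1))
    return candidates

# A tiny toy "web" corpus and a single seed fact: Paris is a city.
corpus = [
    "Paris is a European city",
    "Berlin is a European city",
    "Madrid is a European city",
    "Einstein was a physicist",
]
seeds = {"Paris"}

patterns = learn_patterns(corpus, seeds)
new_facts = extract_candidates(corpus, patterns) - seeds
print(new_facts)  # proposes Berlin and Madrid as cities, not Einstein
```

A never-ending learner would feed `new_facts` back into `seeds` and repeat indefinitely, which is also where the hard part lies: without the background knowledge the quoted passage mentions, a single bad pattern can flood the ontology with wrong facts, so NELL-style systems rely on coupled constraints and occasional human corrections to keep the bootstrap from drifting.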