Researchers are fine-tuning a computer system that is trying to master semantics by learning more like a human, reports the New York Times.

Give a computer a task that can be crisply defined—win at chess, predict the weather—and the machine bests humans nearly every time. Yet when problems are nuanced or ambiguous, or require combining varied sources of information, computers are no match for human intelligence. Few challenges in computing loom larger than unraveling semantics, or understanding the meaning of language. One reason is that the meaning of words and phrases hinges not only on their context, but also on background knowledge that humans learn over years, day after day.

Now, a team of researchers at Carnegie Mellon University—supported by grants from the Defense Advanced Research Projects Agency (DARPA) and Google, and tapping into a supercomputing cluster provided by Yahoo—is trying to change that. The researchers’ computer was primed with some basic knowledge in various categories and set loose on the web with a mission to teach itself.

The Never-Ending Language Learning system, or NELL, has made an impressive showing so far. NELL scans hundreds of millions of web pages for text patterns that it uses to learn facts—390,000 to date—with an estimated accuracy of 87 percent. These facts are grouped into semantic categories: cities, companies, sports teams, actors, universities, plants, and 274 others. NELL also learns facts that are relations between members of two categories…
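To give a rough sense of what "learning facts from text patterns" can look like, here is a minimal, hypothetical Python sketch. It is not NELL's actual code or method; the seed data, the `extract` function, and the hand-written patterns are all illustrative assumptions. It simply shows the general idea of matching surface patterns in text, filing new items into semantic categories, and recording relations between members of two categories.

```python
# Hypothetical sketch only -- not NELL's implementation.
# Illustrates pattern-based extraction of category members and relations.
import re

# Seed knowledge: a few known members of two categories (assumed examples).
categories = {
    "city": {"Pittsburgh", "Boston"},
    "sports_team": {"Steelers", "Red Sox"},
}

# Relations pair members of two categories, e.g. (team, city).
relations = {
    "team_plays_in_city": set(),
}

# Hand-written surface patterns; a real system would learn patterns like these.
category_patterns = {
    "city": [re.compile(r"the city of ([A-Z][a-z]+)")],
}
relation_patterns = {
    "team_plays_in_city": [re.compile(r"the ([A-Z][a-z]+) play in ([A-Z][a-z]+)")],
}

def extract(text: str) -> None:
    """Scan text and add new category members and relation instances."""
    for cat, patterns in category_patterns.items():
        for pat in patterns:
            for match in pat.finditer(text):
                categories[cat].add(match.group(1))
    for rel, patterns in relation_patterns.items():
        for pat in patterns:
            for match in pat.finditer(text):
                team, city = match.group(1), match.group(2)
                # Accept the relation only if an argument is already a known
                # (or just-learned) member of the relevant category.
                if team in categories["sports_team"] or city in categories["city"]:
                    relations[rel].add((team, city))

# Toy usage: one sentence yields a new city and a new relation instance.
extract("Fans filled the city of Cleveland when the Steelers play in Pittsburgh.")
print(categories["city"])               # includes 'Cleveland'
print(relations["team_plays_in_city"])  # includes ('Steelers', 'Pittsburgh')
```

In a system operating at NELL's scale, the patterns themselves would be learned and refined continuously, and each candidate fact would carry a confidence score rather than being accepted outright; this sketch leaves both of those out for brevity.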

About the Author:

Meris Stansbury

Meris Stansbury is the Editorial Director for both eSchool News and eCampus News, and was formerly the Managing Editor of eCampus News. Before working at eSchool Media, Meris worked as an assistant editor for The World and I, an online curriculum publication. She graduated from Kenyon College in 2006 with a BA in English, and enjoys spending way too much time either reading or cooking.