A new breed of technologies is taking shape that will extend the reach of search engines into the web's hidden corners, such as internet-connected databases they can't penetrate today, reports the New York Times. Search engines rely on programs, known as crawlers, that gather information by following the trails of hyperlinks that tie the web together. While that approach works well for the pages that make up the surface web, these programs have a harder time penetrating databases that are set up to respond to typed queries. "The crawlable web is the tip of the iceberg," says Anand Rajaraman, co-founder...
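The distinction the article draws, between pages tied together by hyperlinks and databases that answer only typed queries, can be illustrated with a minimal sketch. The page contents below are hypothetical examples, and the `LinkExtractor` class is an illustrative stand-in for the link-discovery step inside a real crawler:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags: the trails a crawler follows."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A static "surface web" page exposes its links directly in the markup...
surface_page = '<a href="/page1">One</a> <a href="/page2">Two</a>'
# ...while a database-backed page offers only a query form, with no
# hyperlinks for a crawler to enqueue (hypothetical example markup).
form_page = '<form action="/search"><input name="q"></form>'

extractor = LinkExtractor()
extractor.feed(surface_page)
print(extractor.links)   # → ['/page1', '/page2']: URLs the crawler can visit next

extractor2 = LinkExtractor()
extractor2.feed(form_page)
print(extractor2.links)  # → []: nothing for a link-following crawler to find
```

A link-following crawler repeats this extraction on every page it fetches, so the form-only page is a dead end unless the engine learns to submit queries, which is the gap the technologies described here aim to close.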

About the Author: eSchool News