THE EVOLUTION OF SEARCH ENGINES

In the early days of the Web, directories were built to help users navigate to various websites. Generally, these directories were created by hand: people categorized websites so that users could browse to what they wanted. As the Web grew larger, this effort became more difficult, and Web spiders that “crawled” websites were created. Web spiders, also known as robots, are computer programs that follow links from known web pages to other web pages. These robots access each page, download its contents (into a storage mechanism generically referred to as an “index”), and add the links found on that page to their list for later crawling.
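To make that crawl loop concrete, here is a minimal sketch in Python. It is not any particular search engine's code, just an illustration of the cycle the paragraph describes: fetch a page, store its contents in a simple in-memory “index,” and queue the links found on it for later crawling. The seed URL and page limit are illustrative assumptions.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href values of <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_url, max_pages=10):
    """Breadth-first crawl: follow links from known pages to new ones."""
    index = {}                     # url -> page contents (the "index")
    queue = deque([seed_url])      # links waiting to be crawled
    while queue and len(index) < max_pages:
        url = queue.popleft()
        if url in index:
            continue               # already fetched this page
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue               # unreachable page; skip it
        index[url] = html          # download the contents into the index
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:  # add discovered links for later crawling
            queue.append(urljoin(url, link))
    return index

if __name__ == "__main__":
    pages = crawl("https://example.com")  # hypothetical seed URL
    print(f"Fetched {len(pages)} page(s)")
```

A real crawler adds politeness delays, robots.txt handling, and persistent storage, but the core loop is the same queue-and-fetch cycle shown here.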

Although Web crawlers gave the early search engines a far larger list of sites than manual collection could, the crawlers couldn’t perform the other manual tasks: figuring out what the pages were about and ranking the best ones first. So these search engines began building computer programs to automate those tasks as well. For instance, a program could catalog all the words on a page to help determine what that page was about.
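Such a word catalog can be as simple as an inverted index: a map from each word to the pages that contain it. The sketch below (an illustrative assumption, not the book’s or any engine’s actual method, with hypothetical page contents) shows the idea in Python.

```python
import re
from collections import defaultdict

def build_word_index(pages):
    """Map each word to the set of pages containing it --
    a simple inverted index, the basic structure behind keyword search."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            index[word].add(url)
    return index

# Hypothetical page contents for illustration.
pages = {
    "a.html": "Search engines crawl the web",
    "b.html": "Directories were built by hand",
}
index = build_word_index(pages)
print(index["web"])   # {'a.html'}
```

A query for a word then reduces to a dictionary lookup, which is what made automated cataloging so much faster than human categorization.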
