4.1. Designing the Search Engine

Dissecting the typical search engine reveals the anatomy of a simple beast in three parts: a crawler, an indexer, and a front end. A crawler goes out looking for fresh content to queue up for the indexer. In larger search engines, the crawler may actually download these pages and scan them for links to still more pages. The indexer in turn takes content, analyzes it, and formulates a searchable index. A front-end component of the search engine then accepts a query, uses it to search the index, and presents the results back to the user. The devil, as always, is in the details.

The best algorithms to create an efficient index or rank results by perceived relevancy are closely guarded secrets in the search engine industry, and developing these is where a few lucky programmers have earned millions of dollars.

The full-text search capabilities of MySQL make it possible to build low-grade search engines for small amounts of content, but this approach has some drawbacks, as I will discuss. Instead, this project will have a crawler/indexer and a front end.
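To make the low-grade approach concrete, here is a minimal sketch of MySQL's built-in full-text search; the table and column names are illustrative assumptions, not part of this project.

```sql
-- A FULLTEXT index on the body column enables MATCH ... AGAINST queries.
-- (In older MySQL versions, FULLTEXT indexes require the MyISAM engine.)
CREATE TABLE page (
  page_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  url     VARCHAR(255) NOT NULL,
  body    TEXT NOT NULL,
  FULLTEXT KEY ft_body (body)
) ENGINE=MyISAM;

-- Natural-language mode returns matching rows ranked by relevance.
SELECT url,
       MATCH(body) AGAINST('php search engine') AS score
FROM page
WHERE MATCH(body) AGAINST('php search engine')
ORDER BY score DESC;
```

This works for small data sets, but it ties searching to a single table scan of stored documents, which is one reason the project builds its own index instead.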

The functionality of the crawler and indexer components will be combined in the same code file for this project. Each document will be indexed into an inverted index in a MySQL database immediately after it is retrieved. The script will be written to run as a stand-alone cron job or scheduled task, the same way the scripts in the mailing list project from the previous chapter work.
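An inverted index maps each word to the documents that contain it, rather than each document to its words. One possible MySQL layout for such an index is sketched below; the table and column names are assumptions for illustration, not necessarily the schema this project will use.

```sql
-- Each distinct word gets one row.
CREATE TABLE word (
  word_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  word    VARCHAR(64) NOT NULL UNIQUE
);

-- The inverted index proper: which pages contain which words.
-- A hit count per (word, page) pair gives a crude relevance signal.
CREATE TABLE occurrence (
  word_id INT UNSIGNED NOT NULL,
  page_id INT UNSIGNED NOT NULL,
  hits    INT UNSIGNED NOT NULL DEFAULT 1,
  PRIMARY KEY (word_id, page_id)
);
```

With this layout, answering a query becomes a join from `word` to `occurrence`, touching only the rows for the queried terms instead of scanning every stored document.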

The front end of the search engine ...
