Chapter 17. Search Engine Application

Search engine applications were among the first applications built for the World Wide Web. Collecting data from sources across the web, storing it, and giving users an interface for searching it is functionality that has been implemented in many ways throughout the history of computing and, more recently, the Web. It has several core requirements: a way to store the data, a way to build a full-text index of that data so that terms or queries can be run against it and results retrieved, hits that are returned quickly, and a user interface for the person performing the search.

This chapter presents a search engine application, one that is probably implemented differently from other search engine applications you have seen before. Specifically, this chapter shows you the following:

  • How to implement a search engine from top to bottom!

  • How to use Sphinx for a real-life application.

  • How to use Gearman to distribute work, including both the Perl client and worker APIs for Gearman and the Gearman MySQL UDFs, which you can use with triggers to further automate job assignments (a minimal client sketch appears after this list).

  • A simple web crawler, implemented in Perl, that runs as a worker to which the Gearman job server assigns work (a minimal worker sketch appears after this list).

  • Yet another application that takes advantage of memcached and MySQL together, and even more practical examples of using the Memcached Functions for MySQL.
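
To make the worker side concrete before diving in, here is a minimal sketch of what a Gearman crawler worker might look like in Perl. It assumes the CPAN Gearman::Worker and LWP::UserAgent modules and a gearmand job server listening on 127.0.0.1:4730; the function name fetch_url and the job server address are illustrative assumptions only, and the chapter builds the real crawler worker step by step.

#!/usr/bin/perl
use strict;
use warnings;
use Gearman::Worker;
use LWP::UserAgent;

# Assumed job server address; adjust to wherever gearmand is listening.
my $worker = Gearman::Worker->new;
$worker->job_servers('127.0.0.1:4730');

my $ua = LWP::UserAgent->new(timeout => 10);

# "fetch_url" is a hypothetical function name used only for this sketch.
# The job's argument is the URL to crawl; the page content is returned
# to whoever submitted the job.
$worker->register_function(
    fetch_url => sub {
        my $job = shift;
        my $url = $job->arg;
        my $response = $ua->get($url);
        return $response->is_success ? $response->decoded_content : '';
    }
);

# Loop forever, waiting for the job server to hand this worker work.
$worker->work while 1;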
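
The client side is correspondingly small. This sketch assumes the CPAN Gearman::Client module and the same job server address as the worker above; it submits a single fetch_url job synchronously, whereas a production crawler would more likely submit jobs in the background (for example, with dispatch_background) and let workers store their own results.

#!/usr/bin/perl
use strict;
use warnings;
use Gearman::Client;

# Assumed job server address; must match the one the workers connect to.
my $client = Gearman::Client->new;
$client->job_servers('127.0.0.1:4730');

# Run a "fetch_url" job synchronously and print whatever the worker returned.
my $result_ref = $client->do_task(fetch_url => 'http://example.com/');
print defined $result_ref ? $$result_ref : "job failed\n";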
