Robots are no different from any other HTTP client program. They too need to abide by the rules of the HTTP specification. A robot that makes HTTP requests and advertises itself as an HTTP/1.1 client needs to use the appropriate HTTP request headers.
Many robots try to implement the minimum amount of HTTP needed to request the content they seek. This can lead to problems; however, it's unlikely that this behavior will change anytime soon. As a result, many robots make HTTP/1.0 requests, because that protocol has few requirements.
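As a rough sketch of just how little HTTP such a minimal robot sends, the request below is composed by hand (the hostname and path are placeholders, not values from the text). An HTTP/1.0 request can be as bare as a request line followed by a blank line:

```python
def minimal_http10_request(host: str, path: str) -> bytes:
    """Compose the bare-bones HTTP/1.0 request a minimal robot might send:
    a request line, a Host header (optional in HTTP/1.0 but widely expected),
    and the blank line that terminates the headers."""
    lines = [
        f"GET {path} HTTP/1.0",
        f"Host: {host}",
        "",
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

request = minimal_http10_request("www.example.com", "/index.html")
print(request.decode("ascii"))
```

Note that nothing here identifies the robot or its operator, which is exactly the problem the identification headers below address.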
Despite the minimum amount of HTTP that robots tend to support, most do implement and send some identification headers—most notably, the User-Agent HTTP header. It's recommended that robot implementors send some basic header information to notify the site of the capabilities of the robot, the robot's identity, and where it originated.
This is useful information both for tracking down the owner of an errant crawler and for giving the server some information about what types of content the robot can handle. Some of the basic identifying headers that robot implementors are encouraged to implement are:
User-Agent
Tells the server the name of the robot making the request.
From
Provides the email address of the robot's user/administrator.
Accept
Tells the server what media types are okay to send. This can help ensure that the robot receives only content in which it's interested (text, images, etc.).
Provides the ...
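A sketch of how a robot might attach these identification headers to an outgoing request, using Python's standard urllib. The robot name, contact address, and URLs are illustrative placeholders, not values from the text:

```python
import urllib.request

# Build a request carrying the identification headers a well-behaved
# robot is encouraged to send. The values are placeholders.
req = urllib.request.Request(
    "http://www.example.com/index.html",
    headers={
        "User-Agent": "ExampleBot/1.0 (+http://www.example.com/bot.html)",
        "From": "bot-admin@example.com",    # contact address for the operator
        "Accept": "text/html, text/plain",  # only media types the robot handles
    },
)

# urllib normalizes header names to capitalized form internally,
# so the stored key for User-Agent is "User-agent".
print(req.get_header("User-agent"))
```

Including a URL in the User-Agent string, as many real crawlers do, gives an annoyed server administrator somewhere to look before resorting to blocking the robot outright.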