Log Analysis

Some system administrators never get past the rotation phase in their relationship with their log files. As long as the necessary information exists on disk when it is needed for debugging, they never put any thought into using their log file information for any other purpose. I’d like to suggest that this is a shortsighted view, and that a little log file analysis can go a long way. We’re going to look at a few approaches you can use for performing log file analysis in Perl, starting with the simplest and getting more complex as we go along.

Most of the examples in this section use Unix log files for demonstration purposes, since the average Unix system has more log files than sample systems from either of the other two operating systems put together, but the approaches offered here are not OS-specific.

Stream Read-Count

The easiest approach is the simple “read-and-count.” We read through a stream of log data looking for items of interest and increment a counter each time we find one. Here’s a simple example that counts the number of times a machine has rebooted based on the contents of a Solaris 2.6 wtmpx file:[34]

# template for Solaris 2.6 wtmpx, see the pack() doc
# for more information
$template = "A32 A4 A32 l s s2 x2 l2 l x20 s A257 x";

# determine the size of a record
$recordsize = length(pack($template,()));

# open the file
open(WTMP,"/var/adm/wtmpx") or die "Unable to open wtmpx:$!\n";

# read through it one record at a time
while (read(WTMP,$record,$recordsize)) ...

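The excerpt above stops at the top of the read loop, so here is a minimal, self-contained sketch of how the read-and-count idea plays out once the loop body is filled in. The field names (such as $ut_line and $tv_sec) and the test for a "system boot" record are assumptions made for illustration, not a quote of the full listing:

# a sketch of a complete read-and-count pass over wtmpx; field names
# and the "system boot" test are illustrative assumptions
my $template   = "A32 A4 A32 l s s2 x2 l2 l x20 s A257 x";
my $recordsize = length(pack($template,()));
my $reboots    = 0;
my $record;

open(WTMP,"/var/adm/wtmpx") or die "Unable to open wtmpx:$!\n";
while (read(WTMP,$record,$recordsize)) {
    # unpack one record; we only need the line name and the timestamp
    my ($ut_user,$ut_id,$ut_line,$ut_pid,$ut_type,$ut_term,$ut_exit,
        $tv_sec,$tv_usec,$ut_session,$ut_syslen,$ut_host) =
            unpack($template,$record);
    # each "system boot" record marks one reboot
    if ($ut_line eq "system boot") {
        print "rebooted ".scalar(localtime($tv_sec))."\n";
        $reboots++;
    }
}
close(WTMP);
print "Total reboots: $reboots\n";

The same pattern works on any record-oriented or line-oriented log: read, match, increment, and report the totals at the end.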