Failure Analysis

If your tests are doing anything interesting, some of them are bound to fail. Some failures will be caused by product bugs, and others by errors in the tests themselves. Several failures in related areas may stem from the same bug, and a single test run across multiple configurations may fail on several (or all) of them. If the same test (e.g., “verify widget control activates menu items”) fails on both Windows XP and Windows Vista, failure analysis can examine logfiles or other test collateral, and if it determines that the same issue caused both failures, it can report just one.
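As a rough illustration of that cross-configuration deduplication, here is a minimal sketch (not taken from any particular tool) that keys each failure by its test name and the first logged error line, ignoring which configuration it ran on. The TestFailure type, the signature rule, and the sample data are all illustrative assumptions.

    # Sketch: report one issue per failure signature, regardless of configuration.
    # All names and the signature heuristic here are hypothetical.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TestFailure:
        test_name: str       # e.g. "verify widget control activates menu items"
        configuration: str   # e.g. "Windows XP", "Windows Vista"
        log_excerpt: str     # error line(s) captured in the test log

    def failure_signature(failure: TestFailure) -> tuple[str, str]:
        """Key a failure by test name plus its first logged error line."""
        first_error = next(
            (line for line in failure.log_excerpt.splitlines() if "ERROR" in line),
            failure.log_excerpt,
        )
        return (failure.test_name, first_error.strip())

    def deduplicate(failures: list[TestFailure]) -> dict[tuple[str, str], list[TestFailure]]:
        """Group failures sharing a signature so each issue is reported once."""
        groups: dict[tuple[str, str], list[TestFailure]] = defaultdict(list)
        for failure in failures:
            groups[failure_signature(failure)].append(failure)
        return groups

    failures = [
        TestFailure("verify widget control activates menu items", "Windows XP",
                    "ERROR: menu item 'File' not enabled"),
        TestFailure("verify widget control activates menu items", "Windows Vista",
                    "ERROR: menu item 'File' not enabled"),
    ]
    for (test, error), hits in deduplicate(failures).items():
        configs = ", ".join(f.configuration for f in hits)
        print(f"1 issue: {test!r} -- {error} (seen on: {configs})")

Both configurations collapse into a single reported issue, with the affected platforms listed alongside it.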

If a team has a lot of tests and a significant number of them are failing, the test team can end up spending a lot of time investigating failed tests—so much time, in fact, that little time is left for actually testing. The unfortunate alternative to this analysis paralysis is to simply gloss over the failure investigation and hope for the best (a practice that often ends with a critical bug finding its way into a customer’s hands).

The solution to this predicament is to automate the analysis and investigation of test failures. The primary enabler of effective analysis is consistent logging across all tests. Matching algorithms implemented by the failure analysis system can then look for similarities in the logs of failed tests and identify failures potentially triggered by the same root cause. The failure analysis system ...
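One way to picture the matching step is a small sketch (not the book’s system) that normalizes each failure log to strip run-specific noise, then treats two failures as sharing a root cause when their cleaned logs are nearly identical. The normalization rules, the 0.9 similarity threshold, and the sample logs are illustrative assumptions.

    # Sketch: greedy clustering of failure logs by textual similarity.
    # Thresholds, regexes, and data are hypothetical.
    import re
    from difflib import SequenceMatcher

    def normalize(log: str) -> str:
        """Remove timestamps, addresses, and numbers so logs from the same
        underlying bug compare as near-identical."""
        log = re.sub(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}", "<TIME>", log)
        log = re.sub(r"0x[0-9A-Fa-f]+", "<ADDR>", log)
        log = re.sub(r"\d+", "<N>", log)
        return log

    def same_root_cause(log_a: str, log_b: str, threshold: float = 0.9) -> bool:
        """Treat two failures as duplicates when their cleaned logs nearly match."""
        return SequenceMatcher(None, normalize(log_a), normalize(log_b)).ratio() >= threshold

    def cluster(logs: list[str]) -> list[list[int]]:
        """Greedily bucket logs: each log joins the first cluster whose
        representative it matches, otherwise it starts a new cluster."""
        clusters: list[list[int]] = []
        for i, log in enumerate(logs):
            for bucket in clusters:
                if same_root_cause(logs[bucket[0]], log):
                    bucket.append(i)
                    break
            else:
                clusters.append([i])
        return clusters

    logs = [
        "2009-07-14 10:02:11 ERROR widget 0x3f2a failed: menu item 12 not enabled",
        "2009-07-14 11:45:03 ERROR widget 0x7c10 failed: menu item 12 not enabled",
        "2009-07-14 12:00:00 ERROR database connection refused on port 5432",
    ]
    print(cluster(logs))  # -> [[0, 1], [2]]: the first two share a likely root cause

The sketch also shows why consistent logging matters: the normalization step can only do its job if every test records failures in a predictable format.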
