Automated Testing Isn’t

It’s very tempting to speak of “automated testing” as if it were “automated manufacturing”—where we have the robot doing the exact same thing as the thinking human. So we take an application like the one shown in Figure 16-1 with a simple test script like this:

  1. Enter 4 in the first box.

  2. Enter 4 in the second box.

  3. Select the Multiply option from the Operations drop-down.

  4. Press Submit.

  5. Expect “16” in the answer box.

Figure 16-1. A very simple application
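
Automating those five steps is straightforward. A minimal sketch using Selenium WebDriver in Python might look like the following; the URL and the element IDs (first, second, operation, submit, answer) are assumptions for illustration, not taken from the real application:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import Select

    driver = webdriver.Firefox()
    try:
        driver.get("http://example.com/calculator")  # hypothetical URL

        driver.find_element(By.ID, "first").send_keys("4")    # step 1
        driver.find_element(By.ID, "second").send_keys("4")   # step 2
        Select(driver.find_element(By.ID, "operation")) \
            .select_by_visible_text("Multiply")               # step 3
        driver.find_element(By.ID, "submit").click()          # step 4

        # Step 5: the one expectation the script actually states
        assert driver.find_element(By.ID, "answer") \
            .get_attribute("value") == "16"
    finally:
        driver.quit()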

We get a computer to do all of those steps, and call it automation. The problem is that there is a hidden second expectation at the end of every test case documented this way: “And nothing else odd happened.”

The simplest way to deal with this “nothing else odd” is to capture the entire screen and compare runs, but then any time a developer moves a button, or you change the screen resolution, color scheme, or anything else, the comparison reports a false failure.
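
A whole-screen comparison can be as blunt as this sketch, here using Pillow’s ImageChops (the file names are assumptions):

    from PIL import Image, ImageChops

    # Compare a saved "known good" screenshot against this run's capture.
    baseline = Image.open("baseline.png").convert("RGB")
    current = Image.open("current.png").convert("RGB")

    # getbbox() returns None only when every pixel matches, so a
    # one-pixel button move or a changed color scheme fails the test;
    # a different resolution won't even compare.
    diff = ImageChops.difference(baseline, current)
    if diff.getbbox() is not None:
        raise AssertionError("Screens differ in region %s" % (diff.getbbox(),))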

These days it’s more common to check only the exact assertion, which means you miss things like the following (a few of these are checked explicitly in the sketch after the list):

  • An icon’s background color is not transparent.

  • After the submit, the Operations drop-down reverts to the default of Plus, so the screen reads “4 + 4 = 16”.

  • After the second value is entered, the Cancel button becomes disabled.

  • The Answer box is editable when it should be disabled (grayed out).

  • The operation took eight ...
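
You can, of course, chase each oddity with its own explicit assertion. Continuing the hypothetical Selenium session from the first sketch (the element IDs are still assumptions), a few such checks might look like this:

    # Each oddity needs its own explicit assertion.
    op = Select(driver.find_element(By.ID, "operation"))
    assert op.first_selected_option.text == "Multiply"            # didn't reset to Plus
    assert not driver.find_element(By.ID, "answer").is_enabled()  # Answer box grayed out
    assert driver.find_element(By.ID, "cancel").is_enabled()      # Cancel still usable

But the list never closes: the icon’s background, how long the operation takes, and everything you haven’t thought to assert all stay unchecked.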
