If you’ve ever been using a piece of software that crashed or went into some catatonic state for no apparent reason, your first thought, probably, was “why didn’t they test this?” followed by some choice words for the developers.

Chances are good it was tested. How extensively or thoroughly is another question. Software testing, it turns out, is a gargantuan task, even for moderately complex software.

Types of Software Testing

There are three basic types of software testing.

  • Exploratory: A tester “pokes around” in the software, making sure it doesn’t crash right out of the gate, checking the look and feel, and ensuring there are no glaring problems. This type of testing might be done on an early build of the software to catch big problems while they are still easy to fix.
  • Case-based: In this type of testing, test cases and step-by-step scripts are written beforehand. The test cases are based on the documented software requirements. This type of testing is much more thorough than exploratory testing, and because the test cases are based on requirements, there’s little chance that some functionality will go out the door untested.
  • Regression: This type of testing is used for versions of the software after the first. Its purpose is to test existing functionality to ensure that implementation of new features didn’t break something that was working previously.

The trouble with any software testing is that as the complexity of the software increases, the time and effort required to test it increases exponentially. Consider, for example, a simple dialog box with a six-item drop-down list and an OK button. To test this dialog box, the tester tries each of the six items in the drop-down list followed by clicking the OK button: six scenarios in all.

Now put a second six-item drop-down list on the dialog box. All of a sudden the tester has 6 × 6 = 36 different scenarios to test, just on that one dialog box.
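To see how fast the combinations pile up, here is a minimal sketch in Python that enumerates every scenario for the two-drop-down dialog; the list contents are hypothetical stand-ins for whatever the drop-downs actually hold:

    from itertools import product

    # Hypothetical drop-down contents; any six values each would do.
    font_sizes = ["8", "9", "10", "11", "12", "14"]
    colors = ["Black", "Red", "Green", "Blue", "Gray", "White"]

    # Every pairing of the two lists is a distinct scenario to walk through.
    scenarios = list(product(font_sizes, colors))
    print(len(scenarios))  # 6 x 6 = 36

    for size, color in scenarios:
        print(f"Select size {size} and color {color}, then click OK")

Each additional control multiplies the total again, which is why the testing effort grows so much faster than the software itself.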

So something with the complexity of a word processing or spreadsheet application requires an army of testers, and even the most thorough testing won’t hit every imaginable condition that the software might see in the hands of the end users. Some bugs, alas, will go undetected before release to customers.

Automated Software Testing

The highly iterative nature of agile software development means that software testers find themselves testing the same cases over and over again. Wouldn’t it be nice if that repetitive work could be automated in some way?

Fortunately, it is possible to automate at least some software testing. Certain types of software, such as programs that run in the background without human interaction, are especially good candidates for automated testing because their inputs and outputs tend to be fewer and have specific, known formats. These types of programs can be tested with command-line scripts.
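As a rough sketch of what such a script might look like, the Python snippet below drives a hypothetical command-line converter (the program name, flags, and expected output are illustrative, not from any real tool) and checks that it exits cleanly and produces output in the expected format:

    import subprocess

    # Hypothetical background program; substitute your own tool and arguments.
    result = subprocess.run(
        ["csv2json", "--input", "sample.csv"],
        capture_output=True,
        text=True,
    )

    # With few, well-defined inputs and outputs, simple assertions on the
    # exit code and the output format cover a lot of ground.
    assert result.returncode == 0, f"unexpected exit code {result.returncode}"
    assert result.stdout.lstrip().startswith("{"), "output does not look like JSON"
    print("csv2json smoke test passed")

A batch of scripts like this can run unattended against every build, which is exactly the property that makes background programs such good automation candidates.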

Graphical user interface (GUI) testing is harder to automate, but it still can be done. Several commercial testing automation platforms are available for exactly this purpose. These platforms can be used to automate testing for native applications, web apps, and even mobile apps using mobile platform emulators.

These platforms have to be “taught” how to perform each test. This is typically done by having a human tester run through each test case while the automated testing system records the tester’s actions, similar to how one would record a macro in Microsoft Word or Excel.
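The script that comes out of a recording session is usually editable code. As an illustration, here is roughly what a recorded login test might look like when played back through Selenium, a widely used open-source browser automation library (the URL and element IDs are hypothetical):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        # Steps as a recorder might capture them: navigate, type, click.
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("s3cret!")
        driver.find_element(By.ID, "login-button").click()

        # Verify the playback ended up where the human tester did.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()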

Advantages and Shortcomings of Automated QA

The main advantage of this approach to software testing is that once the test cases are recorded, a large suite of them can be executed in a short time. This is especially useful in regression testing, which focuses on parts of the software that haven’t changed (or at least weren’t supposed to change).

The recorded test cases can also be run with different inputs to perform negative testing—that is, deliberately making mistakes to ensure the software under test shows an error message instead of crashing or going into la-la land.
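For example, a negative test can feed the same steps deliberately invalid values and assert that each one is rejected gracefully. Here is a minimal data-driven sketch using pytest, where validate_age is a stand-in for the software under test:

    import pytest

    def validate_age(value: str) -> int:
        """Stand-in for the software under test: parse and range-check an age."""
        age = int(value)  # raises ValueError on non-numeric input
        if not 0 <= age <= 130:
            raise ValueError(f"age out of range: {age}")
        return age

    # Deliberately bad inputs: the software should raise a clean error,
    # not crash or go into la-la land.
    @pytest.mark.parametrize("bad_input", ["", "abc", "-5", "999", "12.5"])
    def test_rejects_bad_age(bad_input):
        with pytest.raises(ValueError):
            validate_age(bad_input)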

Automated testing is less useful for unstable parts of the software under test. If a window or dialog box undergoes a complete redesign, the test scripts for that interface need to be updated as well. Automated testing is also less effective at catching things like inconsistent color schemes, misaligned controls, misspelled labels, or cryptic error messages. Human testers still need to watch for these issues.

Thus, automated testing will never completely replace human QA testers, but it can free the humans up for the kinds of testing for which automated platforms are less suited. The result is more thoroughly tested software and a better product in the end users’ hands.
