Most software development teams want to automate repetitive acceptance testing so that QA can spend more time designing test cases and exercising complicated or edge-case scenarios. This becomes more important when a long-running project uses short iterations: the project accumulates use cases that must all be manually regression tested within each short QA cycle.
At the root of the problem of automating regression testing is the complex nature of testing itself. In practice, very few test steps are atomic; they almost always rely on state. Some automated regression testing tools do not factor state into their design, so state must be handled in the fixture classes that support the framework’s DSL. For example, “edit user” assumes that you have already clicked the “view user” link, and ensuring such prerequisites is difficult. An automated regression testing tool can therefore only be part of a solution.
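One way to make that dependency explicit is to track state inside the fixture itself, so a step fails fast when its prerequisite has not run. The sketch below is illustrative, not from any particular framework: `UserFixture`, `viewUser`, and `editUser` are assumed names, and the browser interaction is simulated.

```java
import java.util.Optional;

// Hypothetical fixture sketch: state between steps is tracked explicitly,
// so "edit user" fails fast if "view user" has not run first.
public class UserFixture {
    private Optional<String> viewedUser = Optional.empty();

    // Step 1: open the user's detail page (simulated here).
    public void viewUser(String username) {
        viewedUser = Optional.of(username);
    }

    // Step 2: edit the user; valid only after viewUser has established state.
    public void editUser(String newEmail) {
        String user = viewedUser.orElseThrow(() ->
            new IllegalStateException("editUser requires viewUser to run first"));
        System.out.println("Editing " + user + ": email -> " + newEmail);
    }

    public static void main(String[] args) {
        UserFixture fixture = new UserFixture();
        fixture.viewUser("alice");
        fixture.editUser("alice@example.com");
        try {
            new UserFixture().editUser("bob@example.com"); // no viewUser: rejected
        } catch (IllegalStateException e) {
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}
```

Encoding the prerequisite in the fixture turns a silent ordering assumption into an immediate, descriptive failure.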
The (Partial) Solution: Selenium
We looked into JBehave and FitNesse before settling on Selenium, which was the best fit for our needs. For web applications, we have used Selenium with great success: Selenium tests nicely encapsulate the state and inter-dependencies between steps. However, QA may not know which use cases have been automated, leading to duplication of effort. QA visibility into Selenium test coverage is limited because:
- Tests are written by developers.
- Tests are written in Java.
- There’s no link between stories and tests.
- Test coverage reports are based on code coverage, not UI paths.
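The missing link between stories and tests could be narrowed with a lightweight convention. As one possible sketch (the `@Story` annotation, class names, and story IDs below are all assumptions, not part of any existing tool), developers tag each automated test with the story it covers, and a reflection scan produces a story-level coverage list that QA can read:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.SortedSet;
import java.util.TreeSet;

// Hypothetical annotation tying an automated test to a story/ticket ID.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Story {
    String value();
}

public class StoryCoverageReport {
    // Example test class; in practice these would be real Selenium tests.
    static class UserTests {
        @Story("PROJ-101") public void editUserUpdatesEmail() {}
        @Story("PROJ-102") public void viewUserShowsProfile() {}
        public void helperMethod() {} // unannotated, so not reported
    }

    // Collect the story IDs claimed by annotated test methods.
    public static SortedSet<String> coveredStories(Class<?> testClass) {
        SortedSet<String> stories = new TreeSet<>();
        for (Method m : testClass.getDeclaredMethods()) {
            Story s = m.getAnnotation(Story.class);
            if (s != null) stories.add(s.value());
        }
        return stories;
    }

    public static void main(String[] args) {
        System.out.println("Automated stories: " + coveredStories(UserTests.class));
    }
}
```

A report generated this way speaks in terms QA already uses (stories), rather than code coverage, so it directly answers “which use cases are already automated?”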
The Full Solution: Tool + Process Change
Automated regression testing for web applications could be addressed by modifying the software development process to involve QA in writing regression tests. This could happen in the sprint following the story’s implementation, after QA has manually tested the story against a test plan. The QA test plan should then be updated to reflect which cases have been automated.