Rapid Software Testing Conference Blog

As QA analysts, we often struggle with how best to test a software product or component so that we find as many bugs as possible in the least amount of time, when we control neither the project schedule nor the quality of the software we receive.  It’s a dilemma we face repeatedly: because testing sits at the end of the project cycle, our test schedule is squeezed by every upstream delay, and the budget and time we have to test are rarely ours to set.

Over time I’ve learned that, as testers, we deal with these dilemmas by making hard choices and voicing the risks involved.  Given unlimited time and budget, we would test until the end of the world; but QA can neither change project deadlines nor decide when a project ships.

Thus we must devise ways to test such that we find the most critical and major bugs before the project deadline.  The problem is, given these constraints, what is the best approach?  Which tests are more important, or carry higher value?  We all know that bugs found early cost the least to fix, retest, and deliver.  And in an Agile environment, we’re able to start testing as early as possible: reading specifications, meeting with business reps or the project’s BA, devising a test strategy, and writing test cases, all before we even have working software to play with.

Yet when it comes time to test, we are so focused on following our pre-written test cases that we suffer from inattentional blindness.  Worse still, we lose sight of the objective: finding the most bugs in the least amount of time.  In the end, we still scramble to complete our testing by the project deadline because we didn’t find those critical and major bugs until the very end.  Then we wonder why they were missed, and feel a sense of guilt at not having done a better job of finding them earlier.

When an opportunity arose to attend a QA conference, the “Rapid Software Testing” session caught my attention.  Is it really possible to do “rapid” software testing AND ensure the quality of a software product?  I was skeptical but a little hopeful that the conference could teach me something that I didn’t already know from my years of QA experience.

The conference wasn’t radical in its teachings on how to do rapid software testing; in fact, much of the material reflected lessons learned from experience.  What was interesting was the emphasis that testing is NOT the verification of specifications.  Testing is actually about answering questions, and it should be exploratory in nature, so that creative thinking and observation happen throughout.

I found this concept very interesting because I have to agree that, as testers, we often fall prey to testing as mere specification verification; we treat it as what our job is about.  But why should testers verify specifications when the developers have already done so in their unit tests?  QA should focus on integration, performance, security, usability, installability, and maintainability testing.  Developers hardly touch those areas, and there is greater value in those types of testing than in specification verification (a.k.a. functional testing).

The critical and major bugs our customers report do not normally come from a feature that is broken outright, but from a feature that doesn’t integrate well with other components, especially when those components have bugs of their own.  Citing time and budget constraints, we often argue against “low value” items: logging and handling of errors, performance issues with no baseline or requirement, user-error scenarios, or invalid data input that “should not occur” in production.  But these issues do arise, and we’re forced to fix them when the customer finds them.  Because their consequences are severe, they end up filed as critical and major bugs; yet during development we were too focused on delivering the functionality and not focused enough on the error scenarios it can touch.  As a result, we miss many of these issues until it is too late.
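To make that concrete, here is a minimal sketch of the kind of error-scenario test that pure specification verification tends to skip.  The banking module, transfer_funds function, and InvalidAmountError are hypothetical names of my own, not something from the conference:

    import pytest
    from banking import transfer_funds, InvalidAmountError  # hypothetical module

    # Probe inputs that "should never occur" in production but inevitably do.
    @pytest.mark.parametrize("amount", [-1, 0, "abc", None, 10**12])
    def test_transfer_rejects_invalid_amounts(amount):
        # The system should fail cleanly with a specific error,
        # not crash or silently corrupt an account balance.
        with pytest.raises(InvalidAmountError):
            transfer_funds(source="A-100", target="B-200", amount=amount)

None of these cases appear in the spec, which is precisely why a spec-driven test plan never writes them down.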

By focusing less on functional testing and more on exploratory testing, QA can find these higher-value bugs sooner rather than later, and in turn help reduce the cost of a project.  Functional testing would still get done, but as a side effect of exploring during integration, performance, security, usability, installability, and maintainability testing.

What is exploratory testing? 

Exploratory software testing

• is a style of software testing
• that emphasizes the personal freedom and responsibility
• of the individual tester
• to continually optimize the value of her work
• by treating test-related learning, test design, and execution
• as mutually supportive activities that run in parallel
• throughout the project.¹

Exploratory testing does not mean we throw away test case writing; instead, we record what was tested during a session or after we’ve completed it, so that we are not dictated to by previously written test steps.  This gives testers the leeway to be creative and to think critically about different ways of doing something, which avoids inattentional blindness and better reflects how real users would use the application.  And because the testing doesn’t follow a fixed sequence, different areas of the functionality get exercised earlier, so the major and critical bugs are found earlier.
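As an illustration, here is one minimal way such a session record might be captured after the fact.  The structure and field names are my own invention, not a prescribed format:

    from dataclasses import dataclass, field

    @dataclass
    class SessionRecord:
        """Notes written during or after an exploratory session,
        rather than a script followed step by step."""
        charter: str                        # the mission guiding the session
        duration_minutes: int
        areas_touched: list = field(default_factory=list)
        bugs_found: list = field(default_factory=list)
        notes: str = ""

    session = SessionRecord(
        charter="Explore fund transfers under session timeouts and retries",
        duration_minutes=90,
        areas_touched=["transfer form", "audit log", "timeout handling"],
        bugs_found=["Balance not refreshed after a timeout mid-transfer"],
        notes="Audit log entries went missing when a retry forced a re-login.",
    )

The record preserves what was actually tried and learned, which is what future testers need, without dictating the steps in advance.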

Overall, I found the conference intriguing; it helped me think differently about testing and how I should approach it in the future.  Although I had heard of exploratory testing, and had used it during smoke testing, I had never thought of using it as a test strategy for finding bugs faster.

 

A shout-out to Matt Tammam, Johnny Park and Gina Chaves for reviewing and editing this blog.

¹ “Exploratory Testing: The State of the Art” by Michael Bolton

 
