Several Agile teams that we have recently coached have been struggling with sprint planning: their estimates don't reflect the actual amount of work required to complete their stories, and even after completing a number of sprints they aren't sure what their velocity is. The result is a loss of confidence in their sprint plans and in the process in general. One common characteristic we've observed in teams facing these challenges is that story completion gets bottlenecked in QA because developers aren't writing enough automated tests. We believe the Theory of Constraints can help explain the sprint planning problems that occur when stories are considered "development complete" without adequate automated test coverage.
The Theory of Constraints (ToC), a management philosophy, was first introduced by Eliyahu M. Goldratt in his 1984 book The Goal. The basic premise of the ToC is that the overall throughput of a system is limited by at least one constraint and is equal to the capacity of the constraint. As a result, all of the other steps in the system will inherently be underutilized and will have slack in their capacities. This is similar to the analogy of a chain only being as strong as its weakest link. As an example, imagine a system with multiple sequential steps with a constraint in the middle, as depicted below. Even though steps after the constraint may have higher capacities, they cannot operate any faster than the constraint. Likewise, steps ahead of the constraint can work at full capacity, but this is less than ideal because it will result in excess inventory building up which can lead to waste from having to manage that inventory.
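The core ToC claim above can be made concrete with a toy model: a pipeline's throughput equals the capacity of its slowest step, and every other step carries slack. This is a minimal sketch with hypothetical step names and capacities, not a measurement of any real team.

```python
# Toy model of a sequential pipeline (Theory of Constraints).
# Capacities are hypothetical units of work per sprint.
capacities = {
    "analysis": 12,
    "development": 6,   # the constraint
    "qa": 9,
    "deploy": 15,
}

# Overall throughput is capped by the constraint.
throughput = min(capacities.values())
constraint = min(capacities, key=capacities.get)

# Every other step has slack: capacity it cannot usefully apply,
# because work can only flow as fast as the constraint allows.
slack = {step: cap - throughput for step, cap in capacities.items()}

print(f"constraint: {constraint}, throughput: {throughput}")
print(f"slack: {slack}")
```

Note that pushing the "analysis" step to its full capacity of 12 would not raise throughput at all; it would only pile up inventory in front of the constraint.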
The main goal of an Agile team is to complete User Stories for end users. Incremental development requires that Stories go through a lifecycle that typically looks like this:
As part of sprint planning we assume that the development step, coding and testing stories, is the constraint. This is because:

- The ToC stipulates that a system whose constraint is not running at full capacity is not generating as much value as it could be
- Functioning, tested code is the primary deliverable, so coding and testing is where the most value is generated
So in order to make sure that a team is generating the most value possible, development (coding and testing) needs to run at full capacity with no slack. As per the ToC, other steps in the system, such as Business Analysis (BA) and Quality Assurance (QA), should be staffed so that they have some slack and can keep ahead of development.
Automated testing is a key Agile practice which supports a number of other practices. When developers write and maintain suites of automated unit and functional tests they are:
- Building a safety net that provides quick feedback that a change in one part of the system hasn’t broken something somewhere else
- Documenting the intended implementation behaviour in the test suite code, creating a clearer implementation intent for current and future developers
- Testing the logic in the system, which allows QA to focus more on behavioral and boundary case testing (which aligns better with their skill sets)
- Adding tests that can be used by automated Continuous Integration and Build processes
- Supporting downstream processes, such as automated deploys, that can leverage automated functional tests for regression testing
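To make the practices above concrete, here is a minimal sketch of the kind of automated unit test developers would write, using Python's `unittest` module. The `bulk_discount` function and its pricing rule are hypothetical; the point is that logic tests like these document intent and take logic testing off of QA's plate.

```python
import unittest

def bulk_discount(quantity: int) -> float:
    """Hypothetical pricing rule: 10% off for orders of 10 or more."""
    if quantity < 0:
        raise ValueError("quantity must be non-negative")
    return 0.10 if quantity >= 10 else 0.0

class BulkDiscountTest(unittest.TestCase):
    # Tests like these run on every CI build, forming the safety net
    # described above, and free QA to focus on behavioural and
    # boundary-case testing at the system level.
    def test_no_discount_below_threshold(self):
        self.assertEqual(bulk_discount(9), 0.0)

    def test_discount_at_threshold(self):
        self.assertEqual(bulk_discount(10), 0.10)

    def test_negative_quantity_rejected(self):
        with self.assertRaises(ValueError):
            bulk_discount(-1)

if __name__ == "__main__":
    unittest.main()
```

The test names themselves double as documentation of the intended behaviour, which is the "implementation intent" benefit noted above.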
So, when developers don't write enough automated tests, they are essentially passing responsibility for logic testing down to QA: work that is done much more effectively by the developers themselves through automated tests. This significantly increases QA's workload, potentially moving the planning constraint from development, where it should be, to QA. This can cause confusion in the sprint planning process because, in our experience, when a team tries to estimate and plan across multiple tracks (or roles, or steps), the relationship between estimates, velocity and sprint capacity becomes less predictable.
We recommend that sprint plans focus on a single track based on development effort and time. This means that estimates need to reflect development effort only, and sprint plans need to be based on developer capacity and velocity for the sprint. BA and QA analysts have important roles to play on the team in identifying what stories need to do and, after development, validating that stories behave as expected. But, because functional, tested code is the primary deliverable and development is where the most value is added, teams should consist primarily of developers, with sufficient numbers of BA and QA analysts added to keep the team productive. Generally we have found that 1 BA and 1 QA for every 4 to 6 developers works best. This may seem like a relatively small complement of QA analysts, but when developers write automated unit and functional tests this significantly reduces QA's workload by:
- Taking logic testing off of their plate (as noted above)
- Reducing the effort needed for regression testing (and all types of manual testing in general)
- Allowing QA to focus on behavioral and boundary case testing, which they are much more effective and efficient at doing
- Providing implementation intent documentation, in the form of test code, for QA and BAs alike which can act as a basis for constructive, detailed “this is how it should work” discussions
BA and QA analysts should attend sprint planning so that they have a say in the selection of stories and in where automated testing should focus, ensuring that they don't get overloaded. It should be assumed, however, that they will generally have slack available, will be able to keep up with development at current staffing levels, and don't need a detailed plan that commits all of their time for the sprint.
To conclude, we have observed that problems with sprint planning can occur when teams try to plan across multiple roles. It's like comparing apples to oranges: the roles and responsibilities of developers and QA analysts are very different, and trying to plan around both can quickly obscure the relationship between estimates, velocity and sprint capacity, especially when automated test coverage is inadequate or non-existent. Our recommendation in these situations is straightforward: have the developers write more tests, take logic testing off of QA's plate and have them focus on behaviour and boundary cases to move the planning constraint away from them, and base estimates, velocity and sprint plans strictly on developer effort and time.
In our next post we will address two questions about concepts presented in this one:
- Why do we think that coding and testing should be the constraint and not other areas of SDLC?
- If your stories are being bottlenecked in QA, why not just add more QA staff to the team?