5x Scrum Team Velocity: How Test-Case Reviews Boosted Efficiency

Breaking Down Biases in Software Development

Peer reviews of software can greatly benefit from constructive criticism, helping to overcome developer biases. As software engineers create architectural diagrams, write code, or design test cases, they inevitably bring their own biases to the table. To ensure effective reviews, a fresh perspective is often necessary, typically provided by someone other than the developer. Given that test cases are designed to exercise code, the code reviewer is well-suited to review test cases as well, bringing valuable insight into the code and effectively assuming the role of a white-box tester.

In this article, I'll share a recent experience in which a newly formed Scrum team grew and learned through its successes and setbacks. Test-case reviews played a twofold role. First, they built technical expertise on both sides: developers gained insight into testing, and the tester acquired programming skills. Second, and most importantly, test-case reviews served as a catalyst for strengthening bonds between team members. The Scrum team became a cohesive unit whose members shared and cared, with each individual pushing beyond their comfort zone for the team's benefit.

A Path to Enhanced Collaboration

In a Scrum team comprising five developers and a tester, we recently initiated test-case reviews. Initially, the tester was solely responsible for test-case design, development, and execution. Unit testing began later, when the team's efficiency and effectiveness became a concern. A pressing question emerged: Who should test what, when, and how, to maximize our team's velocity and minimize software bugs?

Forming a United Team

When the Scrum team was formed, everyone was eager to produce working code and help the team become one of the company's best. The goal was straightforward: Produce functional code as quickly as possible. If working code wasn't delivered promptly, there was a fear that the Scrum team would disband and members would join other teams.

Overcoming the QA Bottleneck

As all developers began coding, the majority of testing responsibility fell on the tester. Without hesitation, the tester gathered all the necessary information during Scrum ceremonies, so everything required for designing, developing, prioritizing, and executing test cases was readily available. When a user story was developed and ready for QA testing, its status was set to QA. By then, the tester had finished designing and developing the test cases and was ready to execute them.

Following the successful completion of QA testing for a user story, its status was updated to code merging and ultimately to done. This approach proved effective for the first two sprints, each lasting two weeks. By the third sprint, however, a significant obstacle emerged: the number of user stories in QA status had grown substantially, and the team's velocity depended heavily on how quickly the tester could work through them. Since Scrum velocity is directly tied to the number of user stories released per sprint, it became clear that QA testing was holding up releases.

Overcoming the Testing Bottleneck through Strategic Resource Allocation

In an effort to address this issue, additional testers were temporarily reassigned from other Scrum teams to assist with user-story testing. While this increased the team's velocity, it complicated coordination between teams: whenever a tester was reassigned to support us, their original team's velocity suffered. Although we managed to test more user stories over several sprints, we inadvertently created a significant problem for other teams. The gain of a single Scrum team came at the expense of the entire company, prompting us to discontinue tester sharing.

A Pivotal Realization

It soon became apparent that the way forward lay in collaborative testing. The tester trained the team on smoke testing user stories at the UI level. Discussions ensued about risk-based software testing [1-2], testing for rapid feedback [1-2], prioritizing critical test cases, and iteratively testing the remainder. For user stories lacking a UI component, unit testing [3-5] and API-level testing [1-2] were introduced. As unit testing matured and each developer performed a minimum set of smoke tests whenever applicable, the bottleneck was alleviated, though not entirely eliminated. Our velocity improved, but there remained considerable room for growth.
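
To make this concrete, here is a minimal sketch of the kind of API-level smoke test a developer might add for a user story with no UI component. The endpoint URL, port, and class names are illustrative assumptions rather than the team's actual service; JUnit 5 and Java's built-in HTTP client (Java 11+) are the assumed dependencies.

    // Hypothetical API-level smoke test for a user story without a UI component.
    // The endpoint URL, port, and class name are assumptions made for this sketch;
    // JUnit 5 and Java's built-in HTTP client (Java 11+) are the only dependencies.
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import org.junit.jupiter.api.Test;

    class OrderApiSmokeTest {

        private final HttpClient client = HttpClient.newHttpClient();

        @Test
        void orderServiceRespondsToItsHighestRiskEndpoint() throws Exception {
            // Smoke check: the service is up and the riskiest path is reachable.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/api/orders/health"))
                    .GET()
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            assertEquals(200, response.statusCode());
        }
    }

A test of this shape gives rapid feedback on whether the highest-risk path is even reachable, which is the kind of risk-based prioritization the tester was advocating.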

Reverting to Inefficient Practices

The team's mindset was that all members should contribute to testing to enhance velocity. Whole-team testing was viewed as a temporary solution to overcome our challenges, rather than a development best practice that should form the foundation for the team's growth and improvement. When the velocity issue began to improve, the team reverted to their old habits. Testing was once again primarily the responsibility of the tester. In sprints with fewer story points planned, the team either took on more user stories or performed bug fixing. The bottleneck of QA-based testing began to resurface.

Transforming Testing: From Bottleneck to Development Excellence

A profound shift in mindset occurred when the team came to regard testing as an indispensable best practice rather than a temporary fix. Following extensive training, education, and retrospective meetings, team members assumed long-term ownership of testing responsibilities. Testing was acknowledged as an integral part of every individual's role, albeit with varying levels of involvement and expertise, and the goal became to tailor testing activities so as to maximize both individual productivity and team performance.

Unit testing evolved from a desirable activity into a mandatory one, guided by the principles outlined in [3-5]. Writing unit tests to detect bugs early and improve overall software quality became the norm, and factors such as the code's internal quality [3] became regular topics of discussion. Code reviews also became standard practice, which led to one of the primary drivers of our team's growth: test-case reviews. These reviews could be conducted between two developers or between the tester and a developer.
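
As a rough illustration of those principles, a reviewed unit test might look like the sketch below: one behavior per test, an arrange-act-assert structure, and a descriptive name that reads like a specification. DiscountCalculator, its threshold, and its API are hypothetical stand-ins invented for this example.

    // A sketch of a focused unit test: one behavior, arrange-act-assert, and a
    // name that states the expected rule. DiscountCalculator and its assumed
    // 100.00 threshold are hypothetical, not part of the team's actual codebase.
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class DiscountCalculatorTest {

        @Test
        void ordersBelowTheThresholdReceiveNoDiscount() {
            // Arrange: a calculator and an order total just below the assumed threshold.
            DiscountCalculator calculator = new DiscountCalculator();

            // Act: compute the discount for the boundary-adjacent value.
            int discountPercent = calculator.discountFor(99.99);

            // Assert: boundary cases like this catch off-by-one rules early.
            assertEquals(0, discountPercent);
        }
    }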

Technical Breakthroughs

Initially, test-case reviews were introduced as a training exercise in which the tester guided developers on testing best practices. Key questions included: What are the different approaches to creating test cases for different testing objectives? How much detail should a test case contain, how many steps should it have, and what should its scope be?
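
One pattern that can answer part of the detail-and-scope question is to keep each test focused on a single rule and push repeated step variations into data rows. The sketch below assumes JUnit 5's parameterized tests; PasswordPolicy and its rules are hypothetical examples, not the team's actual code.

    // Keeping one rule per test and expressing variations as data rows instead of
    // extra test steps. PasswordPolicy and its validation rules are hypothetical.
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;

    class PasswordPolicyTest {

        @ParameterizedTest
        @CsvSource({
                "short1!, false",      // too few characters
                "longEnough1!, true",  // meets the assumed length and digit rules
                "noDigitsHere!, false" // missing a required digit
        })
        void validatesCandidatePasswordsAgainstThePolicy(String candidate, boolean expected) {
            assertEquals(expected, new PasswordPolicy().isValid(candidate));
        }
    }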

Developers began sharing their code with the tester, explaining the underlying principles and testing strategies. They also demonstrated how user-interface interactions could be translated into code-level interactions. Because the tester was responsible for reviewing test cases, she needed a high-level understanding of the code, which the developers provided by teaching her the fundamentals of the programming language.
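
That translation might look roughly like the following, where the same login behavior is exercised once through the browser and once directly against the service layer. The page locators, URL, credentials, and AuthService are assumptions made for this sketch; Selenium WebDriver and JUnit 5 are the assumed test dependencies.

    // The same login behavior exercised twice: once through the browser, once
    // directly against the service layer. Locators, URL, credentials, and
    // AuthService are illustrative assumptions.
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    class LoginTest {

        @Test
        void loginSucceedsThroughTheUi() {
            // UI-level version: drive the browser the way a manual tester would.
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("http://localhost:8080/login");
                driver.findElement(By.id("username")).sendKeys("demo-user");
                driver.findElement(By.id("password")).sendKeys("demo-pass");
                driver.findElement(By.id("submit")).click();
                assertTrue(driver.getPageSource().contains("Welcome"));
            } finally {
                driver.quit();
            }
        }

        @Test
        void loginSucceedsAtTheCodeLevel() {
            // Code-level version of the same interaction: call the service directly
            // for faster feedback, skipping the browser entirely.
            AuthService auth = new AuthService();
            assertTrue(auth.authenticate("demo-user", "demo-pass"));
        }
    }

Placing the two versions side by side makes the mapping between manual steps and code-level calls explicit, which is the insight the developers were demonstrating.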

Developers refined their testing skills, adopting best practices for efficient and effective testing. The tester, in turn, gained a deeper understanding of coding fundamentals, positioning her well for a software development engineer in test role.

Once developers were confident in writing and executing their test cases, they took over test-case reviews, which were incorporated into code reviews. Each user story included reviewed test cases, and the team eventually reached a point where test cases at any level (unit, API, UI) were reviewed by anyone in the team.

Fostering Collaborative Excellence

The introduction of test-case reviews had a profound impact on the team's interpersonal relationships, constructively challenging traditional software testing and development roles. This led to a reevaluation of the boundaries between these two disciplines, sparking questions about where software testing begins and ends, and where software development starts and concludes. Is it always beneficial to maintain a clear distinction between the two? The test-case review process sparked intriguing discussions and debates, fostering a deeper understanding of the commonalities and differences in perspectives among team members.

Conclusion

What began as an initiative to address the team's velocity bottleneck evolved into a best practice in software development. This transformation led to a collective growth and learning experience for the entire team, as individual members developed and improved through shared goals, challenges, and achievements. The team's mentality and bonding grew stronger, driven by a shared sense of purpose and motivation to learn from one another.

Test-case reviews played a pivotal role in this transformation, facilitating constructive interactions and knowledge sharing between developers and testers. By leveraging the test-case artifact and the need for collaborative testing, test-case reviews became the catalyst for improved team bonding and performance.

References

  1. Agile Testing: A Practical Guide for Testers and Agile Teams, Lisa Crispin and Janet Gregory, 2008
  2. More Agile Testing: Learning Journeys for the Whole Team, Janet Gregory and Lisa Crispin, 2014
  3. Test-Driven Development: By Example, Kent Beck, 2002
  4. The Art of Unit Testing, 2nd Edition, Roy Osherove, 2013
  5. Effective Unit Testing, Lasse Koskela, 2013

Ava Parker
