Testing is a joyful and creative exercise for some of us who are professional testers and/or test managers. We instinctively view an identified and confirmed issue as a victory of sorts, and strive to find more. To the non-professional testers on the team, however, finding issues can be disheartening. The greatest danger to the team and project arises when the disheartened are the end-users and/or customers fulfilling their part of the testing required on the solution (the user-testers). If “working software” is the goal of the development episodes in the project, then an “adopted solution” is the goal of the project once working software has been achieved.
Test managers in particular will have to be diligent about keeping end-user testers positive during the user acceptance testing stage. Issues discovered in this phase reflect the organization’s ability to adopt the solution, so they feel particularly severe to the testers no matter how quickly a fix arrives. Maintaining a positive atmosphere and avoiding finger-pointing between the customer and supplier organizations (even within the same company) demands creativity. The last thing a test manager wants to see is a rift between developers and testers.
Here are some ideas:
- Change “root cause analysis” to “resolution classification” so that user-testers see where the resolution of any identified issue will come from. One thing people learn quickly is to befriend problem solvers. On our last project, we took this so far as to change the title of the “root cause” field to “resolution category” in the issue log.
- Report positive headlines (passing tests) as frequently as negative headlines (failing tests). Some teams focus closely on issues and their timelines to resolution, and rightly so, but the few minutes it takes to also collect and highlight items that have passed is effort well worth spending.
- Iterations. Completing an iteration yields a sense of accomplishment, even when issues are discovered. The end-of-iteration retrospective is a chance to build hope and confidence for the next iteration. Don’t skip the retrospectives. We did on our last project and regretted it later, because we lost focus on our process improvement efforts afterwards.
- Continuous retrospective. Post a learning matrix (things to change, things to keep, things to research, things to praise) in the testing common area (you do have one, don’t you?) and encourage people to add items to it regularly. Follow up on every item, from every category. It’s easy to address only the “things to change”, but avoid that pitfall: address items from the other quadrants during the daily stand-up/triage, or create an event just for that purpose.
- Model the test logistics using a big visible model in any notation you desire. Find a way for people to mark up the model as they find things that bug them about the test process or any other logistics. We found that we got more consistent test results from people who shared a mental model of what we were up to.
- Use test missions and a hierarchical sense of what needs to be tested so that part-time testers can participate on an equal footing with full-time testers. If you’re using business testers, some of them will, for at least some of the time, be part-time. If you can provide them with an achievable “call sheet” or session sheet that describes their test mission, they will usually work to satisfy the mission rather than just putting in time. You’ll squeeze a bit more work out of them, and they will leave with a sense of accomplishment. You may have to coach the individuals who write the test missions; that’s not an easy task, and not everyone can do it.
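To make the “positive headlines” idea concrete: collecting a pass count alongside the failure count is only a few lines of scripting against whatever your test tool exports. The record shape and field names below are hypothetical, not any particular tool’s format.

```python
# Minimal sketch: summarize a day's test results so the headline leads
# with what passed, not just what failed. The "status" field and the
# list-of-dicts format are illustrative assumptions.
from collections import Counter

def summarize(results):
    """Return a one-line headline counting passed and failed tests."""
    counts = Counter(r["status"] for r in results)
    passed = counts.get("pass", 0)
    failed = counts.get("fail", 0)
    return f"Passed: {passed} | Failed: {failed}"

results = [
    {"id": "T-101", "status": "pass"},
    {"id": "T-102", "status": "pass"},
    {"id": "T-103", "status": "fail"},
]
print(summarize(results))  # → Passed: 2 | Failed: 1
```

Posting a summary like this daily costs minutes, and gives user-testers visible evidence that most of their work is confirming things that work.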
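A session sheet for the “test missions” idea can be as simple as a structured record that renders to a printable call sheet. The fields here (tester, mission, areas, timebox) are one plausible layout, not a standard.

```python
# Hypothetical "session sheet" for a part-time tester: a short,
# achievable mission instead of an open-ended shift. Field names
# are illustrative.
from dataclasses import dataclass

@dataclass
class SessionSheet:
    tester: str
    mission: str          # one-sentence charter for the session
    areas: list           # features or screens in scope
    timebox_minutes: int = 60

    def render(self):
        """Render the sheet as plain text suitable for printing."""
        lines = [
            f"Tester: {self.tester}",
            f"Mission: {self.mission}",
            f"Timebox: {self.timebox_minutes} minutes",
            "In scope:",
        ]
        lines += [f"  - {a}" for a in self.areas]
        return "\n".join(lines)

sheet = SessionSheet(
    tester="Pat",
    mission="Verify that invoice totals match the order summary",
    areas=["Order entry", "Invoice preview"],
)
print(sheet.render())
```

A sheet like this gives the part-time tester a definite finish line, which is where the sense of accomplishment comes from.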
I’m sure there are many more ways of encouraging testers during the intense periods of a project. Contact me on Twitter @testfirst if you have more.