When doing agile software development, our main challenge is having little enough process to go fast, but enough process to avoid crashing. The desire to go fast is all well and good, but we don't want blunt force trauma to be our only feedback mechanism for telling us we're going *too* fast.
I have no doubt that applying agile methods such as XP while embracing the freedoms they afford, but not the matching responsibilities, will end in tears. This is often the biggest challenge for developers new to XP. They love the freedom to refactor and design on the fly, but their enthusiasm sometimes wanes when confronted with the discipline of maintaining test coverage under schedule pressure.
So how do we know when we're going too fast? One problem with perceiving the team's speed (as opposed to velocity :P) is that different members of the team will have different comfort levels with the shared rate of progress. Several times recently I've found myself responding to feedback from team members who would like a little more process to avoid some issue or other, be it screen rework or defects found during QA. I try to be careful to gather enough feedback to work out whether what we have is just turbulence or the swelling sound of a wing shearing off the fuselage. Most times it's the former, and in those cases I find there's a recurring theme to my response: prevention isn't always better than cure.
For example, in our current system we have 500+ story cards. If about 10% of these result in a defect (which looks about right at the moment) and each costs a pair a day to fix (no metrics, but I think that's more than the reality), that's 50 defects x 2 person-days = 100 person-days of rework. If we were to put in place a process to prevent all these defects, it would probably cost the team at least two hours per story (that's only two people talking for an hour) to do the extra analysis and testing for EVERY story card. That's 1,000 hours of extra work, comfortably more than the 100 person-days (about 800 hours) of rework we have to do to cover off the defects. And that's on the unrealistic assumption that you actually prevent all the defects! It's just a numbers game: you win if quality is sufficient that the rework on the defects costs less than the effort to prevent them all, plus the extra effort you'd sink into the majority of cards that would never have had defects in the first place.
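The back-of-the-envelope comparison above can be sketched in a few lines. All the figures come from the post's own estimates; the 8-hour working day is my assumption for converting days to hours.

```python
# Rough comparison of "fix defects as they appear" versus
# "add preventive process to every story card".
# Figures are the post's estimates; HOURS_PER_DAY is an assumption.

STORIES = 500                    # story cards in the system
DEFECT_RATE = 0.10               # ~10% of stories produce a defect
FIX_PERSON_DAYS_PER_DEFECT = 2   # one day for a pair = 2 person-days
PREVENTION_HOURS_PER_STORY = 2   # two people talking for an hour
HOURS_PER_DAY = 8                # assumed working-day length

defects = STORIES * DEFECT_RATE
rework_hours = defects * FIX_PERSON_DAYS_PER_DEFECT * HOURS_PER_DAY
prevention_hours = STORIES * PREVENTION_HOURS_PER_STORY

print(f"rework: {rework_hours:.0f}h, prevention: {prevention_hours:.0f}h")
# Prevention only pays for itself if it costs less than the rework it removes.
print("prevention pays?", prevention_hours < rework_hours)
```

With these numbers, curing the 50 defects costs about 800 hours while preventing them costs 1,000, so blanket prevention loses even before accounting for the optimistic assumption that it catches everything.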
So I find I'm spending a bit of time reassuring the passengers that a little turbulence is normal, and preventing it is not only unnecessary, it's counter-productive. At the same time I try to maintain a healthy paranoia about the process. Of course, the danger lies in not being able to tell turbulence from a tailspin, so if anyone has ideas on how to do this, bring it on!
 
You're right - I do like it ;)
You can classify the bugs that come in. We have done this and only about 40% of them are "developer bugs". You can try different QA techniques and see what happens. In our judgement, pair programming will not pay for itself on our project. But many other techniques will.
So, I agree.