Thursday, December 13, 2007

Quality Success Criteria

What should the criteria be for declaring a release a “Quality Success”?

The most common and expected answer to this question is: “It depends on the number of bugs found during the release, fixed in the release, and delivered with the release.”

Of these three factors, the most crucial is the number of bugs delivered with the release. In most companies, I have seen a standard format for measuring this Quality Success. On a scale of 1-5, where 1 is the worst quality and 5 is the best, the following metric is frequently used to rate a release from a quality point of view.

  1. If the number of major + critical bugs is 0, the quality rating is 5.
  2. If the number of major + critical bugs is 2, the quality rating is 4.
  3. If the number of major + critical bugs is 5, the quality rating is 3.
  4. If the number of major + critical bugs is 10, the quality rating is 2.
  5. If the number of major + critical bugs is 15 or more, the quality rating is 1.
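The scale above can be sketched as a small function. This is a minimal sketch, not from the original article: the function name is mine, and the scale lists only single thresholds (0, 2, 5, 10, 15+), so treating them as inclusive upper bounds for in-between counts is an assumption.

```python
def quality_rating(major_plus_critical: int) -> int:
    """Map the count of major + critical bugs delivered with a release
    to a 1-5 quality rating, where 5 is the best quality.

    Assumption: the published scale gives single thresholds only, so
    each one is treated here as an inclusive upper bound for its band.
    """
    if major_plus_critical == 0:
        return 5
    if major_plus_critical <= 2:
        return 4
    if major_plus_critical <= 5:
        return 3
    if major_plus_critical <= 10:
        return 2
    return 1
```

For example, a release delivered with 7 major or critical bugs would be rated 2 under this reading of the scale.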

I believe that rating quality in terms of only the severity of bugs (major, critical) gives a misleading rating for a release. Ideally, a combination of severity and priority should be the criterion for assigning a quality rating to a release.

I believe that categorizing bugs into the two classes of Major and Critical will open a new point of friction between the development and testing teams, especially when this is the criterion for the “Quality Success Ranking”. Each and every Critical or Major bug will trigger a discussion (or rather a heated discussion) between the development team and the test team. No development team will willingly let the testing team file bugs as Major or Critical, because an increase in this number directly lowers the quality rating of the release, and the argument eventually lands on the performance of the development team. Even when there is a genuine critical or major bug in the release, the development team will try their best to lower the severity of the bug. I saw this happen at one of my previous companies, which is supposed to be the biggest IT giant in the world.

In my opinion, we can instead divide bugs into two classes, Must-fix and Non-must-fix, which are totally independent of this “fighting point” of Major versus Critical.

Basically, these two categories (Must-fix and Non-must-fix) are derived from the combination of two attributes of a bug: Severity and Priority.

Confusing? Let me explain. Please see the matrix below.

(Severity is Critical) + (Priority is Lower) = Non-must-fix bugs

(Severity is Trivial) + (Priority is Higher) = Must-fix bugs

(Severity is Critical) + (Priority is Higher) = Must-fix bugs

(Severity is Trivial) + (Priority is Lower) = Non-must-fix bugs

So, the takeaway for all the testers who have invested their time in reading this article is this: categorize bugs into the two classes of Must-fix and Non-must-fix, and then make the effort to convince management that these are the best basis for rating a release from a quality point of view. Otherwise, a quality rating based only on the severity of bugs is as good as nothing.

I have often said that quality is everybody’s job, and that’s true. But it must start with management. Management’s job is to lead people toward a goal, and quality is the only goal that matters.

-- Sanat Sharma


1 comment:

Anonymous said...

Must-fix and non-must-fix bugs is definitely a welcome suggestion, but there are other complications in this procedure. Still, everything is possible if you can convince your management of the same. Good thought.
Michael Sche.