Tuesday, November 18, 2008

Number of bugs vs. Performance of Tester

I will start this blog with an anecdote.

Start of anecdote.

Around four years back, I was working (or rather, testing) in a telecom lab when one of the Project Managers (PM) came to me and said there was an emergency meeting with the client and I needed to be there immediately. I rushed to the conference room and found it full of heated discussion; it was a conference call with the client sitting across the ocean in London. I joined, and the first question fired at me from the client was how many bugs I had found in the last release delivered to them. Although I knew the exact number, I said, "Around 25 – 30." The response from the PM sitting beside me was, "No Sanat, tell us the exact number." "What do you want to get from the exact number of bugs?" I whispered back, right there in front of the client. The PM's response was shocking: "We want to measure the performance of the testing team based on the exact number of bugs you have found." I was unable to answer (or react to) that at the time, but I gave him the exact number of bugs, asked to leave the room, and did so.

But the question that kept hitting me again and again was: why is a tester being rated on the number of bugs he or she has logged? I should have been a happy man at the time, since I had found almost 80% of the bugs in that particular product, which I had been testing (along with four other testers) for more than three years. I was still thinking about this very seriously when, one fine day, my manager (the Testing Manager) called a meeting with all the testers (we were five at that time, including me). He said we should log more and more bugs against the release, to improve the testing and to show the client that effective testing had been performed.

I followed his instructions, and that started creating an unhealthy environment between the testers and the programmers. All the testers (including me) started logging bugs in the database at a very high frequency, without even stopping to think whether something was a bug or not. And without giving the reports a second thought, the programmers started rejecting them. This panicked, uncontrolled situation continued for a week, and then a team meeting was called to analyze it. The meeting concluded on the note that the earlier decision was wrong, and that there must be some better way to measure the performance of the testing team.

End of anecdote.

To understand this topic better, we should first understand the common problems in the software development process that turn any software into buggy software. Some of the problems I have seen are:

  • Poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.
  • Unrealistic schedule - if too much work is crammed in too little time, problems are expected.
  • Inadequate testing - no one will know whether or not the program is any good until the customer complains or systems crash.
  • Poor planning - requests to pile on new features after development is underway are extremely common and, without re-planning, cause problems.
  • Miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems are guaranteed.

Now I come back to the points that need to be considered when analyzing any tester's performance. The text below reflects the viewpoint I have always taken on this concern.

A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is valuable. Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgment skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.

A good test engineer should:

  • be familiar with the software development process.
  • be able to maintain the enthusiasm of their team and promote a positive atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems).
  • be able to promote teamwork to increase productivity.
  • be able to promote cooperation between software, test, and QA engineers.
  • have the diplomatic skills needed to promote improvements in QA processes.
  • have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to.
  • have people judgment skills for hiring and keeping skilled personnel.
  • be able to communicate with technical and non-technical people, engineers, managers, and customers.
  • be able to run meetings and keep them focused.

I believe testing cannot be measured accurately. Although there are any number of formulas, which I too use to attach some number to an individual tester's performance, I still disagree with them. Those numbers (or formulas) are good for management, because management always believes in numbers. Testing cannot be measured; it can only be monitored. As I always say, "Testing never ends."
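
For illustration only, here is a minimal sketch (in Python) of the kind of bug-count formulas I mean. The metric names, signatures, and numbers below are my own assumptions for the example, not any standard; they simply show how a raw bug count and a ratio can tell very different stories about the same tester.

```python
# Minimal sketch (not from any standard): common bug-count metrics managers ask for.
# Metric names and example numbers are assumptions made for illustration.

def defect_removal_efficiency(found_in_testing: int, found_after_release: int) -> float:
    """Fraction of all known defects caught before release (often called DRE)."""
    total = found_in_testing + found_after_release
    return found_in_testing / total if total else 0.0

def valid_bug_ratio(bugs_logged: int, bugs_rejected: int) -> float:
    """Share of logged bugs that survive developer review (not rejected as invalid)."""
    return (bugs_logged - bugs_rejected) / bugs_logged if bugs_logged else 0.0

# A tester who logs 120 bugs, 45 of which get rejected, looks "productive"
# by raw count, but the ratio tells a different story.
print(valid_bug_ratio(bugs_logged=120, bugs_rejected=45))        # 0.625
print(defect_removal_efficiency(found_in_testing=75,
                                found_after_release=25))         # 0.75
```

The point is not these particular formulas: any such number can be gamed, which is exactly what happened in the anecdote above once bug counts became the target.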

I will conclude this blog with an amazing statement from Dr. Cem Kaner on this concern: "If you really need a simple number to use to rank your testers, use a random number generator. It is fairer than bug counting, it probably creates less political infighting, and it might be more accurate."

-- Sanat Sharma
