Thursday, July 12, 2007

Bug Reports - Five major issues

Throughout my career so far, I have seen a lot of heated negotiations between testers and developers, and often over bug reports. I have filed a lot of bugs in my career (professional and personal) and am still filing them. Some of my all-time favorite bugs are spectacular crashes. They are fun to watch and rewarding to find.

If there is a critical bug in released software, one of two things happened:

Either no one found the bug before release, or
someone found the bug but no one fixed it before release.

None of us likes it when bugs in the first category bite us, but it is the bugs in the second category that hurt the most. There are several reasons why this second category happens. In this article, I will try to summarize some points that should be taken care of before filing bug reports.

The test team is supposed to file (or, as I like to say, "fire") bugs in the bug tracking system. But I have often seen that the bug report submitted by the tester is not clear and does not give a clear picture of what went wrong during testing. There are some points that should be mentioned clearly in the bug report, and there are also some things we should avoid when reporting a bug against the product.

Here is a list of five major problems that are quite common in bug reports.

1. Testers do not describe how to reproduce the bug. Either no procedure is given, or the given procedure doesn't work. Either case will likely get the bug report shelved. There should be an exact set of test steps that clearly conveys to the development team how to reproduce the bug. Ideally, a tester should try to reproduce the issue three times and then write up a consolidated list of steps. That helps the development team understand the issue better and reproduce the bug without even needing help from the testing team. When I started my career as a tester, no one could understand the bug reports I was sending. But gradually I understood how important the clarity of the report is for the whole team.

2. They don't explain what went wrong. At what point in the procedure does the bug occur? What should happen there? What actually happened? These questions are closely related to the point above. While listing the steps to reproduce, the report should clearly state at which step the bug enters the test case execution. It is better to explain what should ideally happen at that particular step, or how the application should handle the scenario at that point. Expected output and actual output should both be clearly stated. As I said before, I also had this weakness in my first two years of testing experience.

3. They are not persuasive about the priority of the bug. Your job is to have the seriousness of the bug accurately assessed. There is a natural tendency for programmers and managers to rate bugs as less serious than they are. If you believe a bug is serious, explain why a customer would view it the way you do. If you found the bug with an odd case, take the time to reproduce it with a more obviously common or compelling case. As a tester, you should assign the fixing priority of the bug carefully.

4. They do not help the programmer in debugging. This is a simple cost/benefit tradeoff. A small amount of time spent simplifying the procedure for reproducing the bug, or exploring the various ways it could occur, may save a great deal of programmer time. Try to dig into the issue and concentrate on the Root Cause Analysis (RCA) of the bug. It is better to raise the issue and simultaneously try to analyze its root cause. If you have any observations along these lines, add them to the bug report; they will definitely help the developers resolve the issue quickly. I used to record detailed observations for each bug and dig into the issue as far as I could, and I always made sure those observations went into the bug report.

5. They are insulting, so they poison the relationship between developers and testers. A smooth working relationship between developers and testers is essential for efficient testing. As a test team member, you are always the bearer of bad news, so we should be polite and diplomatic while writing a bug report. In my six years of work experience in testing, I have seen that 80% of the time, the development team and the testing team are discussing bugs. Or, I could say, fighting over the issues.
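Putting the five points together, a report that avoids these problems might look like the sketch below. The application and the defect are hypothetical; what matters is the shape: numbered steps to reproduce, the exact step where things go wrong, expected versus actual results, a justified priority, and any debugging observations.

```text
Title:    Crash when saving a contact with an empty phone number
Priority: High (justified below)

Steps to reproduce (verified 3 times):
  1. Launch the application and open the Contacts module.
  2. Click "New Contact" and enter only a name, leaving the
     phone number field empty.
  3. Click "Save".

Expected result: The contact is saved, or a validation message
asks for a phone number.
Actual result:   The application crashes at step 3.

Why this priority: Saving a contact without a phone number is a
common customer action, and the crash loses unsaved data.

Observations (possible root cause): The crash does not occur if a
space is entered in the phone number field, which suggests an
unhandled empty-string case in the save routine.
```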

So take care of these points and try to develop a healthy working relationship between developers and testers. Clarifying the problem can take you a long way toward solving it.

-- Sanat Sharma

Monday, July 09, 2007

Winning Test Team
____________________________

According to the American Heritage Dictionary, testing means "a procedure for critical evaluation". That means that as testers, we need to be skilled at thinking critically.

Defining success for a test team is always difficult. In my opinion, testing never ends; it just gets transferred from the testing team to the customer. Every time your customer uses the program, a test is being conducted. We will always release software with bugs, and we will always wonder whether our test coverage was as thorough as it could have been. Yet if we are to accept these as part of our job, how do we know if we've been successful? It is quite difficult to measure how successful our testing team is, so claiming to have the most successful test team is a tricky and tough task.

So far, I have around six years of work experience in testing and quality assurance. I believe that as a tester, you have to look for all the negative aspects of the software. But sometimes you get some very surprising, dismissive answers that can kill your spirit for testing.

Take, for example, a test team I worked with a couple of years ago. A well-known telecom company had outsourced its testing to the company where I was working as a Test Lead. We had finished our task for one of the major releases, but I still had doubts that there were bugs left in the software. When I discussed this with the client's Project Manager, he said there was no such problem: we had done our work and it was fine. When I again expressed doubts about the software, he simply said that our task as an outsourced test team was complete, that we should forget about the release, and that they would take care of further processing.

If your test team is positive and happy, you can almost guarantee that you've got the wrong people. You need a team that is critical, judgmental and, well, negative. To be a good tester you've got to be pessimistic: you should always think that there are still some bugs in the software. If my testers are saying things like "I'm sure there aren't any bugs left," then as a Test Lead, I am not dealing with a successful test team. In that case, I will try to convince the team and change their mindset.

I've worked with a lot of talented testers, and I have seen that those testers are always vocally critical. As testers, they are always bearers of bad news. But I have also worked with a number of testers who perceive testing as a positive role. While they know that they can't find every bug, they also know that increasing the number of bugs found contributes greatly to the overall quality of the product. For developers, things come to an end after coding the module. But the problem for many testers – and I'm no exception – is the question of when enough is enough.

The testing team's goal is to find every feasible bug in a piece of software, even if someone says it isn't feasible. While development may meet its goal, we as testers should always think, "We haven't quite finished our task yet."

Besides finding bugs in the software and always acting as the bearer of bad news, I think we should also appreciate the work done by the development team. This will remove their impression that whenever we discuss anything with them, we have some bugs in mind. This approach is quite difficult because, as I have said before, testing is a procedure of critical evaluation. But in my opinion, we should try it, to soften the relationship between us and the development team. I am currently working on these points with my own development team and, frankly speaking, I have seen no positive outcome yet. But I am sure that, after some time, it will pay off. I believe that "efforts may fail, but don't fail to make efforts".

1. Highlight the developers' work and the good points in the code.

Make it a regular practice to comment every day on one or two positive points in the work of the developers you are working with. It takes a lot of effort for a tester who points out negatives day in and day out to lift those developers' spirits, so help your development team by highlighting positive points daily.
-------------------------------------------

2. Set clear goals and define your testing scope.

Setting clear goals is crucial for the test team and for each individual, including you. Testing sometimes seems to go on forever, and a team that feels there's no end in sight is likely to end up despondent. Define test areas, but make sure they have boundaries with clear completion criteria. Set the testing goals of your project and clearly define the scope of the testing that you and your team are going to execute. If there is not enough time to test a component fully, use risk-analysis methods to decide where to spend the testing effort, and prioritize the test cases accordingly. This gives the testers something to aim for and something to feel good about when they achieve it.
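One common way to do that risk analysis is to score each test area by how likely it is to fail and how badly a failure would hurt the customer, then test the highest-risk areas first. The sketch below illustrates the idea; the component names and 1-5 scores are made-up examples, not a prescribed formula.

```python
# Minimal sketch of risk-based test prioritization.
# Each test area gets a likelihood-of-failure score and a
# customer-impact score (both 1-5, chosen by the test team).
# risk = likelihood * impact; with limited time, cut test
# cases from the bottom of the sorted list.

test_areas = [
    # (area, likelihood, impact) -- hypothetical values
    ("billing calculation", 4, 5),
    ("login screen layout", 2, 2),
    ("call-record export",  3, 4),
    ("help-page typos",     1, 1),
]

def risk(area):
    """Risk score for one (name, likelihood, impact) tuple."""
    _, likelihood, impact = area
    return likelihood * impact

# Highest risk first: test these areas before the rest.
for name, likelihood, impact in sorted(test_areas, key=risk, reverse=True):
    print(f"{name}: risk {likelihood * impact}")
```

Even this crude ordering makes the scope discussion concrete: if the schedule shrinks, the team agrees in advance which low-risk areas are dropped first.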
---------------------------------------------

3. Encourage Developers.

Some of the best and most talented developers I've worked with have always been positive and grateful for the testers' efforts. Developers with the strength of mind to encourage and support testers in this scenario really help to maintain a tester's enthusiasm. If nothing else, it's in the developers' interest to help maintain the testers' enthusiasm, as it is likely to lead to more defects found before a product is released.
----------------------------------------------

4. Look for some positives in the software, too.

The number of failed test cases should not be our first criterion for evaluating the work done by the development team; you should also highlight the number of passed test cases. Finding bugs is good from the tester's point of view, but highlighting good points is really great from the developer's point of view. Instead of focusing only on the negatives, we should emphasize the positives as well. If development is nearing completion on a certain aspect of functionality and you've completed a test run over this functionality, highlight how well testing has gone. You'll be surprised how your spirits lift when you take the time to compliment others for something they've done well.
----------------------------------------------

To handle yourself, use your head; to handle others, use your heart. While testing may be mostly critical, don't overlook the importance of being positive.

-- Sanat Sharma