Monday, August 20, 2007

RISK DRIVEN TESTING

In most software companies, the time allotted to the test team to test a component is considerably less than what it actually needs. Since this situation is so widespread, the test team has to work within these constraints. The dilemma is that when the software cycle shrinks, management tries to cut down the testing cycle. Management believes that the testing phase is the only phase that can be condensed, so that the component can still be delivered within the scope of the reduced project plan. Testers are used to working on time-pressured projects, without requirements, using unfamiliar technology, in cultures that prefer to code first and plan second.

So how should one proceed in this type of scenario? The only technique that comes to my mind is Risk Driven Testing (RDT). Whenever there's too much to do and not enough time to do it, we have to prioritize so that at least the most important things get done. This approach to prioritization is called Risk Driven Testing.

What if there isn't enough time for thorough testing? The test team should use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience.

In one of my previous organizations, it was routine to shrink the testing cycle to fit the tight timelines of the project. Once I discussed this issue with my management. The simple question they asked me was, “How much time do you need to test this feature?” I said this is not a valid question to ask a tester. It is the same as HR asking you how much salary you want. The answer to both questions is the same: “Give me as much as you can.” Initially, the manager didn’t understand my interpretation. But gradually, he realized that he had asked the wrong question of the right guy.

It is true that testing never ends. I am still testing my Windows Imate cell phone, which has 5 open issues to date. But endless testing is not a solution under today's tight timelines. One should prioritize the testing tasks and perform them in order of priority. This is what is called “Risk Driven Testing”. As said above, RDT requires judgment skills, common sense and experience. Once you are completely familiar with the product/project/technology, you can perform RDT more effectively. Effective testing and clear communication of results are an integral part of RDT.

Following are the considerations that can be included in RDT:

Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?
Which functionality has the largest financial impact on users?
Which aspects of the application are most important to the customer?
Which aspects of the application can be tested early in the development cycle?
Which parts of the code are most complex, and thus most subject to errors?
Which parts of the application were developed in rush or panic mode?
Which aspects of similar/related previous projects caused problems?
Which aspects of similar/related previous projects had large maintenance expenses?
Which parts of the requirements and design are unclear or poorly thought out?
What do the developers think are the highest-risk aspects of the application?
What kinds of problems would cause the worst publicity?
What kinds of problems would cause the most customer service complaints?
What kinds of tests could easily cover multiple functionalities?
Which tests will have the best high-risk-coverage to time-required ratio?
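To make the idea concrete, here is a minimal sketch (in Python, not part of the original post) of how answers to the questions above might be turned into a prioritized test plan. It assumes a simple likelihood-times-impact scoring model; the feature names, scores, and the prioritize helper are hypothetical illustrations, not a standard tool or framework.

# A minimal sketch of risk-driven test prioritization, assuming a simple
# likelihood x impact scoring model. All names and numbers below are
# hypothetical examples.

from dataclasses import dataclass

@dataclass
class TestArea:
    name: str
    likelihood: int      # how likely this area is to contain defects (1-5)
    impact: int          # how severe a failure in this area would be (1-5)
    effort_hours: float  # estimated time needed to test this area

    @property
    def risk(self) -> int:
        # Classic risk exposure: probability of failure times its consequence.
        return self.likelihood * self.impact

def prioritize(areas, available_hours):
    """Order areas by risk (highest first) and keep only what fits the time budget."""
    planned, spent = [], 0.0
    for area in sorted(areas, key=lambda a: a.risk, reverse=True):
        if spent + area.effort_hours <= available_hours:
            planned.append(area)
            spent += area.effort_hours
    return planned

if __name__ == "__main__":
    areas = [
        TestArea("payment workflow", likelihood=4, impact=5, effort_hours=8),
        TestArea("report export", likelihood=2, impact=2, effort_hours=4),
        TestArea("login and sessions", likelihood=3, impact=5, effort_hours=6),
        TestArea("UI themes", likelihood=2, impact=1, effort_hours=3),
    ]
    for area in prioritize(areas, available_hours=16):
        print(f"{area.name}: risk={area.risk}, effort={area.effort_hours}h")

The point of the sketch is simply that once each area has a rough likelihood and impact score, the ordering and the cut-off fall out mechanically; the judgment, common sense, and experience mentioned above go into assigning the scores, not into the arithmetic.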

-- Sanat Sharma
