This blog is dedicated to knowledge about software testing.

Sunday, March 21, 2010

A small set of guidelines for testing in an Agile, multi-platform & multi-project model

Software testing is in itself quite a challenge. Aspects like changing requirements and fluctuating deployment deadlines, which keep growing in number, make testing more arduous. Thoroughly understanding the product, hunting for its weaknesses, and establishing its strengths by executing a huge set of validations and verifications make for a busy day for the tester.


In an agile model of testing, the focus is on testing iteratively against newly developed code until quality is achieved from the end customer's perspective. Testers have to adapt to rapid deployment cycles and changes in testing patterns. Continuous testing is the only way to ensure continuous progress. The tester has to work within a tight schedule, changing requirements, and shorter deployment deadlines, and plan testing for the product as it keeps growing, all with limited time.

Furthermore, when testing on multiple platforms, the tester has to juggle between the platforms, keeping their scope, differences, and similarities in mind.

Likewise, juggling between different products and different projects makes the task even more convoluted. Balancing different product requirements, end users' perspectives, and scope is definitely an energy drainer.

A combination of the above, "testing in an Agile, multi-platform & multi-project model", is the perfect adventure trip you could have asked for. It has a lot of challenges, obscurities, unexplored areas, unestimated intricacies, and limited resources such as time.

Here I have listed some simple guidelines which I have found very helpful for dealing with the situation stated above.

  • Be active, alert and inquisitive during the requirements discussion.
    • Think of all possible loopholes and bring them forward.
    • Keep asking questions to make the requirements clear.
    • Keep noting points which are to be captured as test cases.
    • Analyze the testability of the requirements and request new flows which would make the software easier to test. (E.g.: the software is built to be used on a 3G network, but as a tester you have access only to WiFi, so you have to request that it also work over WiFi, or else you won't be able to test it.)
  • After the requirements discussion, capture the test cases soon, while they are fresh in mind.
    • This helps avoid missing test cases.
    • This makes it easier to estimate the time and resources needed to execute the test cases.
    • Any flows which were missed in the requirements discussion and are figured out while the test cases are being documented can be reported, and the issues can be addressed earlier in the SDLC.
  • After documenting the test cases, get them reviewed:
    • By the Team Leads & Managers
    • By the developers who own the feature being built.
    • This helps in:
      • Getting informed about any changes which were made later and have not yet been communicated to the testers
      • Getting additional feedback which can enhance the test cases.
  • Before beginning the testing round for the feature/fixes:
    • Understand the delta coming from the development team that is to be tested.
    • Get a walkthrough if needed to understand any last-minute changes which have not yet been communicated.
    • Analyze the impact areas of the code changes to plan out regression.
      • Discuss them with the developer as well, to add to or update them.
    • Understand the test environment needed and set it up.
    • Understand the test data needed, analyze possible combinations of test data, and arrange for all possible permutations of the types of inputs (see the data-driven test sketch after this list).
      • (Example: numeric, alphanumeric, special characters, symbols, images, files, HTML text)
  • During test execution:
    • If you have not been kept in the loop about a change, confirm whether it is as per the latest requirements or whether it is an issue.
      • This minimizes logging of invalid bugs.
    • Report bugs ASAP through any form of communication, and then track them.
      • This helps reduce the time taken to get a build with the fixes.
    • Make a note of changes to be updated in the test cases so that the test cases remain reusable. Otherwise, they might become invalid and a dead investment.
  • After test execution:
    • Report all issues found
    • Report test case status
    • Report areas which need to be revisited in the test case document to update it.
      • These help in getting feedback at an earlier stage.
      • For example, some changes which were not documented may, per the latest requirements, have been removed/hidden/disabled for now due to other priorities or issues with that feature.
  • For deployment scenarios:
    • Run a smoke test checklist before deployment on the environment from which the code is to be deployed, and after deployment on the environment to which the code has been deployed (a minimal automated smoke-check sketch follows this list).
    • This is to make sure the core flows always work fine no matter what changes came in.
  • Analyze and keep planning how to make test cycles more efficient and how to achieve broader coverage.
    • Integrate automation wherever possible.
    • Plan test scenarios and testing themes for different days of the week, along with the daily items to be tested.
  • Plan for regular test case execution and review.
  • Keep building on the regression test cases.
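
As a concrete illustration of arranging test data permutations, here is a minimal sketch using Python and pytest. The function under test, validate_comment_field, and the expected outcomes are hypothetical placeholders; the point is simply that each input type from the guideline above becomes an explicit parametrized case, so none of them is forgotten during execution.

```python
import pytest

# Hypothetical system under test: a validator for a free-text field.
# Replace this with the real validation entry point of your product.
def validate_comment_field(value):
    return isinstance(value, str) and 0 < len(value) <= 255

# One parametrized case per input type listed in the guideline above.
@pytest.mark.parametrize(
    "label, value, expected",
    [
        ("numeric",            "1234567890",           True),
        ("alphanumeric",       "abc123",               True),
        ("special characters", "!@#$%^&*()",           True),
        ("symbols",            "\u00a9 \u00ae \u2122", True),
        ("HTML text",          "<b>bold</b>",          True),
        ("empty input",        "",                     False),
        ("overly long input",  "x" * 1000,             False),
    ],
)
def test_comment_field_handles_each_input_type(label, value, expected):
    assert validate_comment_field(value) == expected, f"failed for {label}"
```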
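
Likewise, the deployment smoke checklist lends itself well to automation. The sketch below assumes a web product and uses hypothetical endpoint paths and URLs; the same script would be pointed at the source environment before deployment and at the target environment after deployment, so the core flows are confirmed on both.

```python
import sys
import requests

# Hypothetical core flows; replace the paths with your product's real
# health-check and core-feature endpoints.
CORE_ENDPOINTS = ["/login", "/search", "/checkout"]

def smoke_check(base_url):
    """Hit each core endpoint once and collect any that do not return HTTP 200."""
    failures = []
    for path in CORE_ENDPOINTS:
        try:
            response = requests.get(base_url + path, timeout=10)
            if response.status_code != 200:
                failures.append(f"{path}: HTTP {response.status_code}")
        except requests.RequestException as exc:
            failures.append(f"{path}: {exc}")
    return failures

if __name__ == "__main__":
    # Run once against the environment from which the code is deployed (before),
    # and once against the environment to which it has been deployed (after).
    base_url = sys.argv[1] if len(sys.argv) > 1 else "http://staging.example.com"
    problems = smoke_check(base_url)
    if problems:
        print("Smoke test FAILED:")
        for problem in problems:
            print("  -", problem)
        sys.exit(1)
    print("Smoke test passed: all core flows responded.")
```

A script like this can also become part of the automation mentioned above and be run as part of every regular test cycle.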

Creative Commons License
Software Testing by Indira Pai is licensed under a Creative Commons Attribution 3.0 Unported License.