
Software Testing Junction

This blog is dedicated to knowledge about software testing.

Friday, May 25, 2012

Initiative on bug prioritization

When I update my resume, or look at other testers' resumes, I find that people mention bug reporting and tracking as a responsibility. But for a tester, is that enough?
I hear many testers state that their job involves only bug reporting. Why should they bother whether a bug gets fixed? If it is fixed, they verify the fix and run regression; if not, why bother, since it means less work. Other team mates encourage this attitude, as it helps reduce their own workload: developers are always busy and can do without fixing another bug, and managers have critical deployments to manage and can do without putting extra work on developers.
As the testing team, we took an initiative from our end. After every bug-tracking cycle, we used to meet our manager and discuss, and often debate, the list of bugs we felt had to be fixed before the next deployment. This was in addition to the bugs the manager had already prioritized himself. Initially our effort was not welcomed, but after a few deployment cycles our team mates understood its value: in the end, the aim was to make the software quality better with every deployment. As time went on, thanks to our continued effort, we received positive feedback from our customer base about the high quality of the software. Although it was a team effort, we testers were especially praised for our extra initiative.

Wednesday, January 5, 2011

Tester & Developer Teamwork

I always get different views in scenario-based discussions about the tester-developer relationship. Some feel that the two roles cannot work as a friendly team, as that would hamper the quality of the work and of the software. Others suggest a more moderate approach, stating that testers and developers should be neither friends nor foes.

How you work with developers depends on your organization's structure and policies. I work in a startup, in an Agile-based model, and hence have had the opportunity to work very closely with the developers.

Sharing my views, based on my experience...

  • How can a good tester-developer relationship result in better testing (or worse testing)?
A good relationship within a team always makes for smoother interactions, provided we accept that we work as a team towards a common goal: to release bug-free (or as near to that as possible) software.
Examples where it can be helpful:
  1. Test cases can also be reviewed by developers; their inputs on including certain flows or variations of test data always help build more confidence in the system.
  2. Developers can give feedback on testing cycles where improvements would benefit both testers and developers, such as flows which reduce testing time, or additional information in bug reports which helps them fix bugs more quickly, with a faster turnaround.
  3. Testers can give developers feedback on the basic testing (unit testing, system testing) done on their end before a build is handed over to testers. This ensures that the easy-to-find bugs are caught and worked on early, while the testing team digs out the other bugs it normally does not get time to look for; hence greater confidence in the testing cycles can be built.
  4. As a result of a good team relationship, developers can always help testers debug bugs which are non-reproducible or difficult to reproduce; since they have coded the software, they know the flows which can help reach that state.
  5. When the status of a bug is changed, say from a higher priority to a lower one, from valid to invalid, or from fixed to reopened, a discussion between the developer and the tester before doing so always helps both take the change positively, rather than fighting about it.
  6. The developer shares small tweaks, which are normally skipped in release notes, so that the tester can test them in detail.
  • How can a bad tester-developer relationship result in worse testing (or better testing)?
Some examples of a bad developer-tester relationship:
  1. The developer not accepting a bug found by the tester as a bug, and marking it invalid without discussing it first.
  2. The developer not accepting the priority/severity of a bug logged by the tester, and marking it lower rather than discussing it.
  3. The developer making changes in the code and not letting the tester know.
  4. The tester logging a bug without complete information, making it the developer's headache to debug it and work out the actual steps of the bug.
  5. The two not being on the same page with regard to requirements, which can lead to logging invalid bugs, or marking valid bugs as invalid, due to lack of communication between them.
  6. Ego issues, which result in lower quality work on both ends.
  7. Instead of working towards the common goal of releasing high quality, bug-free software, both ending up trying to put the other down.
I have personally benefited a lot from developing a healthy, positive relationship with the developer whose code I am testing. Developers and testers often encounter the Ugly Baby Syndrome, wherein the tester has to tell the developer that his baby is ugly (his code has bugs), which many developers find very difficult to take positively. Naturally, no one likes it when someone criticizes their hard work, so there is bound to be some discomfort between the developer and the tester. It is always better for both developers and testers to understand that the aim of testing is not to make the developers' lives difficult (or vice versa), but to work together towards releasing high quality software to clients.


Thursday, April 15, 2010

Types of Mobile Applications

Mobile applications are advancing rapidly in today's market. More and more products are being developed and launched in mobile versions, to extend their reach to this large set of users, who are active stakeholders in the mobile application domain. Mobile versions of a product differ in many ways from their counterparts on other platforms. This article deals with the different types of mobile applications commonly used by mobile users, and with their specific attributes.


Browser Based Applications


These are applications built for mobile browsing. They are accessed by entering the appropriate URL in the mobile browser; often the URL starts with 'm' (e.g., m.google.com). For this type of application, no install, uninstall, or even software upgrade is needed, as all the user has to do is access the site URL in his mobile browser.
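As an illustration of how such 'm.' URLs are typically served, here is a minimal sketch (in Python) of server-side user-agent detection. The keyword list and host names are assumptions made for the example; real sites rely on far more robust device-detection libraries.

    # Minimal sketch: route mobile browsers to the 'm.' host based on the
    # User-Agent header. Keywords and hosts are illustrative only.
    MOBILE_KEYWORDS = ("mobile", "android", "iphone", "blackberry", "symbian")

    def mobile_url(user_agent, host, path):
        # Return the 'm.'-prefixed URL for mobile user agents, else 'www.'.
        ua = user_agent.lower()
        if any(keyword in ua for keyword in MOBILE_KEYWORDS):
            return "http://m." + host + path      # e.g. http://m.example.com/mail
        return "http://www." + host + path

    print(mobile_url("Mozilla/5.0 (iPhone; CPU iPhone OS 5_0)", "example.com", "/mail"))
    print(mobile_url("Mozilla/5.0 (Windows NT 6.1)", "example.com", "/mail"))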

While building a mobile browser based website, certain aspects should be kept in mind, such as the layout design, the UI components, and the functions made available to the user. A mobile browser is normally a subset of a desktop web browser, and the screen on which the user views the mobile site is much smaller than that of a desktop or laptop system. Hence, cluttering the UI with lots of links, images, buttons and other components, and providing functionality which may not really be needed while the user is on the move, leads to a drop in usage of that browser based application. Such a design is suboptimal and not user friendly, as the user gets lost while browsing.

Additionally, since this is a browser based application, the local device database cannot be used to store much information, so a lot of time is consumed whenever the user accesses a page with many UI components. This again leads to reduced usage of the application. For example, if a user wants to access the mobile version of a financial site, he would definitely expect good performance; otherwise he may not be able to perform transactions at the desired price, since stock prices can change at any instant.


Pre Installed Applications



These are the mobile applications which are shipped as built-in software with the mobile device. Examples include the applications without which one cannot imagine a mobile phone: the native phone book, the SMS/email client, and so on.



There is another category, added onto the device to accelerate device sales. These can be specific applications built for a mobile device manufacturer, for a particular handset model (e.g., Yahoo Messenger is pre-shipped with many devices). In such cases, if the device with which these applications are to be shipped is not ready yet, a device prototype is used to develop and test them.

Normally the OS of the prototype is not very stable, which makes development and testing difficult. It is critical to make sure these applications work well and are of supreme quality before being shipped with handsets, as they can neither be installed nor uninstalled. They can be upgraded, but this is mostly an automatic upgrade. If such an application had to be deleted after shipping, the device ROM would need to be erased. That would make the handset painful for the user, and would affect handset sales negatively instead of accelerating them.

Installable Applications


These are applications whose executable files/packages can be downloaded or received over wireless or wired media. How these executable files reach the device can be platform or device specific. They can be installed on, and uninstalled from, the device. Upgrades are done based on the application/platform design.



The various sources from which the executable files can be received are:
1. Mobile App Store
2. Over The Air (OTA)
3. Transfer via wired media, like a USB cable from a computer.
4. Transfer via wireless media, like Bluetooth and infrared.

These applications can use the local device database to store information, which helps them execute faster. However, one must be careful with the file size of the application: if it consumes too much memory, it may lead to user dissatisfaction, and in turn to reduced usage of the application.
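As a small illustration of the file size concern, here is a sketch of a check that guards the installable package against a size budget. The file name and the 2 MB limit are assumptions for the example; realistic budgets depend on the target devices and the delivery channel.

    # Sketch: fail the build if the installable package exceeds a size budget.
    # PACKAGE and MAX_BYTES are hypothetical values for illustration.
    import os

    PACKAGE = "app-installer.jar"      # hypothetical installable package
    MAX_BYTES = 2 * 1024 * 1024        # assumed 2 MB budget

    size = os.path.getsize(PACKAGE)
    assert size <= MAX_BYTES, (
        "package is %d bytes, over the %d-byte budget" % (size, MAX_BYTES))
    print("package size OK: %d bytes" % size)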

*Please Note: Contents of this article were presented at the indicThreads Software Testing conference in March 2010.


Testing Techniques of Mobile Applications

When you use a mobile application, you may not realize the challenges which the team of developers and testers dealt with before packaging the product for launch. Developing and testing mobile applications is a complex and challenging task, involving quite a few brainstorms to deal with the many cases which are not normally encountered on other platforms like web and desktop.

The small screen, the compact device with lots of hidden mysteries in its hardware, the power supply, the network connection behavior, and many such aspects make mobile application computing an interesting undertaking. This article deals with a few mobile testing generics, which can be applied to any type of mobile application on any mobile platform.


Mobile Testing Generics:

Network Related cases
  • Testing in various Network Types
Some mobile applications which require a network connection operate on different network types on different handsets.
Examples of such applications are:
  • Search Based applications
  • Applications aiding financial transactions
  • Email/IM based applications.
Such applications should be tested on all the network types supported by the devices for which they are being built.

Some network types on which applications can be tested, across different devices, are:
  • 2G
    • GPRS
    • CDMA
    • EDGE
  • 3G
  • Wi-Fi
  • Different types of plans based on service providers.
  • Testing in various Network Strengths
Mobile applications which operate on network connections should be tested in different network strengths.

Various measures of network strengths would be:
  • No Network
  • Low
  • Medium
  • High
Additionally, testing during network strength changes should also be done. Some examples of such cases would be:
  • Change of network strength from No Network/Low Network to High Network.
  • Change of network strength from High Network to No Network/Low Network.
  • Testing in various Network Speeds
Network speed affects the rate of data transfer, and hence provides another important set of conditions in which the mobile application should be tested:
  • Low Speed
  • Medium Speed
  • High Speed
A change of network speed can also affect the rate of data transfer, and hence forms another set of test case criteria (a sketch of driving such conditions follows this list):
  • Low to high speed transition during data transfer.
  • High to low speed transition during data transfer.
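One way to drive such network conditions in an automated run, assuming the application is tested on the Android emulator, is through the emulator console's network commands (forwarded via adb emu). The profile names are the emulator's standard ones; run_transfer() is a hypothetical placeholder for the application specific data transfer step.

    # Sketch: step an Android emulator through network speed profiles during
    # a data-transfer test, using the console command "network speed <profile>".
    import subprocess
    import time

    SPEED_PROFILES = ["gsm", "edge", "umts", "full"]   # low to high speed

    def set_network_speed(profile):
        # Forwards the console command to the running emulator instance.
        subprocess.check_call(["adb", "emu", "network", "speed", profile])

    def run_transfer():
        pass   # placeholder: trigger the app's data transfer, record results

    # Test the transfer at each steady speed.
    for profile in SPEED_PROFILES:
        set_network_speed(profile)
        time.sleep(2)              # give the new profile time to take effect
        run_transfer()

    # Transition case: start on the slow profile, then switch to the fast one
    # (run_transfer would need to start the transfer asynchronously for this
    # to be a true mid-transfer switch).
    set_network_speed("gsm")
    run_transfer()
    set_network_speed("full")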

Memory Management Related cases
  • Monitoring Memory Usage patterns
Memory management and garbage collection have no specific test cases of their own; rather, they call for a lot of observation while performing actions which consume memory, such as creating objects. Multiple combinations of such actions, done in different sequences under continuous monitoring, are the key to testing the memory consumption patterns of an application. If the application crashes in such cases, the reason is normally an out-of-memory exception. Memory usage should be monitored at different times. One should observe the memory usage pattern as the application is:
  • Being launched
  • Running in the foreground
  • Running in the background
  • Exiting
  • Running continuously for a long time.
  • Monitor memory usage patterns for different numbers of third party applications installed on the device
The free memory available to an application also depends upon the other applications installed on the device. It may happen that the application manages memory very effectively when tested stand-alone on a device without many third party applications, but cannot do so when multiple other applications are installed. It therefore becomes an important test scenario to observe the application's memory usage when it runs on a device with multiple applications installed. Apart from the pre-installed applications, one should check memory usage patterns when:
  • No other applications are installed, so a lot of free memory is available on the device.
  • Some third party applications are installed, so less free memory is available.
  • Lots of third party applications are installed, so very little free memory is available.
  • Memory consumption patterns when applications are in different modes
One should also validate that when multiple applications run on the device, the application's memory consumption neither has issues of its own nor causes issues for the other applications running on the device. This leads to another set of test scenarios, where one should check the memory consumption pattern, with multiple applications running, while the application is (a monitoring sketch follows this list):
  • Being launched
  • Running in the foreground
  • Running in the background
  • Exiting
  • Running continuously for a long time.
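As one way to do such monitoring, here is a sketch that samples an application's memory footprint over time on an Android device using adb shell dumpsys meminfo. The package name is a hypothetical example, and since the meminfo output format varies between OS versions, the parsing is deliberately loose.

    # Sketch: sample the app's memory footprint once a minute while it is
    # exercised (manually or by a script), and print the readings.
    import subprocess
    import time

    PACKAGE = "com.example.app"    # hypothetical application under test

    def sample_meminfo(package):
        out = subprocess.check_output(
            ["adb", "shell", "dumpsys", "meminfo", package]).decode()
        for line in out.splitlines():
            if line.strip().startswith("TOTAL"):
                return line.strip()          # summary row, values in kB
        return "no meminfo found (is the app running?)"

    for _ in range(10):
        print(time.strftime("%H:%M:%S"), sample_meminfo(PACKAGE))
        time.sleep(60)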
Battery Related cases
  • Testing in various Battery Strengths
The running of mobile applications on a device is affected in quite a few ways by the state of the device battery. It may happen that in low battery mode the device automatically goes into silent mode, as per its settings. Similarly, while charging, the incoming call alert might change from vibrate to ring. Cases like these, and many more, need to be observed while the application runs in the foreground or background, for the following device battery strengths:
  • Critical
  • Low
  • During Charging
  • High
  • Monitoring Battery Consumption patterns
The battery consumption rate is a very critical test scenario for mobile applications. No matter how good the application is, if the device battery drains considerably while it is used, the user will surely reduce or stop using it. Therefore, the application's battery consumption rate must be observed while it runs in the foreground or background for a long time. Also, if other applications run in parallel with the application in question, this should not adversely affect its battery consumption rate. A sketch of stepping through battery levels with emulator tooling follows.
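On recent Android tooling, battery state can be faked for exactly these cases with dumpsys battery overrides; here is a sketch under that assumption (other platforms expose different controls). observe_app() stands in for the application specific checks described above.

    # Sketch: step the device through battery levels while the app runs,
    # using "adb shell dumpsys battery" overrides, then restore normal state.
    import subprocess
    import time

    def set_battery_level(percent):
        subprocess.check_call(["adb", "shell", "dumpsys", "battery", "unplug"])
        subprocess.check_call(
            ["adb", "shell", "dumpsys", "battery", "set", "level", str(percent)])

    def observe_app():
        pass   # placeholder: check alerts, silent-mode behavior, no crashes

    for level in (100, 50, 15, 5):    # high, medium, low, critical
        set_battery_level(level)
        time.sleep(5)
        observe_app()

    # Restore normal battery reporting when done.
    subprocess.check_call(["adb", "shell", "dumpsys", "battery", "reset"])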
Some Other cases
  • Interruptions
Interruptions are activities which can occur in parallel on the device while the application is being:
  • Installed
  • Launched
  • Run
  • Exited
  • Upgraded
  • Uninstalled/Deleted
The impact of the interruption on the application needs to be monitored. Some examples are:
  • Incoming call alert
  • Answering an incoming call
  • Receiving a message
  • Device shutdown
  • Battery removal
  • Camera activation
  • Losing network connectivity and then regaining it
It is important to validate that the application handles interruptions and does not crash, as interruptions can occur frequently in normal use while the application is running. A sketch of injecting such interruptions on an emulator follows.
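Assuming again that the application runs on the Android emulator, interruptions like incoming calls and messages can be injected from outside through the emulator console. The gsm and sms commands below are standard console commands; the phone number and the verify_app() check are illustrative assumptions.

    # Sketch: inject an incoming call and an incoming SMS while the app is
    # in the foreground, checking after each that the app survived.
    import subprocess
    import time

    def emu(*args):
        subprocess.check_call(["adb", "emu"] + list(args))

    def verify_app():
        pass   # placeholder: assert the app neither crashed nor lost state

    emu("gsm", "call", "5551234")     # incoming call alert
    time.sleep(5)
    verify_app()
    emu("gsm", "cancel", "5551234")   # caller hangs up

    emu("sms", "send", "5551234", "interruption test")
    time.sleep(5)
    verify_app()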
  • Debug Build
This is a very important asset for reproducing hard-to-reproduce bugs on a mobile device. A debug build is a build released in debug mode, with logging support included. Logging is enabled through a particular sequence of keys; once enabled, all events and actions in the application are recorded while it runs, which helps in retracing an issue. A platform-neutral sketch of the idea follows.
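Here is a minimal, platform-neutral sketch of that idea: logging stays dormant until a specific key sequence is entered, after which every event is recorded for later retracing. The key sequence and event names are made up for the illustration; a real debug build wires this into the platform's input handling and log storage.

    # Sketch: a logger that switches on only when the unlock key sequence
    # has been typed, as in a debug build.
    UNLOCK_SEQUENCE = ["#", "#", "3", "2", "4"]   # hypothetical key sequence

    class DebugLog:
        def __init__(self):
            self.enabled = False
            self.recent_keys = []
            self.events = []

        def on_key(self, key):
            self.recent_keys = (self.recent_keys + [key])[-len(UNLOCK_SEQUENCE):]
            if self.recent_keys == UNLOCK_SEQUENCE:
                self.enabled = True          # sequence matched: start logging

        def record(self, event):
            if self.enabled:
                self.events.append(event)

    log = DebugLog()
    for key in ["1", "#", "#", "3", "2", "4"]:
        log.on_key(key)
    log.record("screen=login action=submit")
    print(log.events)   # recorded only because the sequence was entered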

*Please Note: Contents of this article were presented at the indicThreads Software Testing conference in March 2010.



Sunday, March 21, 2010

A small set of guidelines for testing in an Agile, multi platform & multi project model

Software testing is in itself quite a challenge. Aspects like changing requirements and fluctuating deployment deadlines, which keep growing in number, make testing more arduous. Thoroughly understanding the product, hunting for its weaknesses, and establishing its strengths by executing a huge set of validations and verifications: these make for a busy day for the tester.


In an agile model of testing, the focus is on testing iteratively against newly developed code until quality is achieved from the end customer's perspective. Testers have to adapt to rapid deployment cycles and to changes in testing patterns. Continuous testing is the only way to ensure continuous progress. The tester has to work within tight schedules, changing requirements and shorter deployment deadlines, and plan testing for a product which keeps growing, with limited time.

Furthermore, when testing multiple platforms, the tester has to juggle between them, keeping their scope, differences and similarities in mind.

Likewise, juggling between testing different products and different projects makes the task more convoluted. Balancing different product requirements, end user perspectives and scopes is definitely an energy drainer.

A combination of the above, testing in an Agile, multi platform & multi project model, is the perfect adventure trip you could ask for. It has lots of challenges, obscurities, unexplored areas, unestimated intricacies, and limited resources, time in particular.

Herein, I have listed some simple guidelines which I have found very helpful in dealing with the situation stated above.

  • Be active, alert and inquisitive during the requirements discussion.
    • Think of all possible loopholes and bring them forward.
    • Keep asking questions to make the requirements clear.
    • Keep noting points which are to be captured as test cases.
    • Analyze the testability of the requirements and request new flows which would make the software easier to test. (E.g.: the software is built to be used on a 3G network, but as a tester you have access only to Wi-Fi, so you have to request that it also work on Wi-Fi, else you won't be able to test it.)
  • After the requirements discussion, capture the test cases soon, while they are fresh in mind.
    • This helps avoid missing test cases.
    • This makes it easy to estimate the time and resources needed to execute the test cases.
    • Any flows which were missed in the requirements discussion and are figured out while the test cases are being documented can be reported, and the issues addressed earlier in the SDLC.
  • After documenting the test cases, get them reviewed
    • By the Team Leads & Managers
    • By the developers who own building that feature.
    • This helps in:
      • Getting informed about any changes which were made later and have not yet been communicated to the testers
      • Getting additional feedback which can enhance the test cases.
  • Before beginning the round of testing the feature/fixes:
    • Understand the delta which is coming from the development team to be tested.
    • Get a walkthrough if needed, to understand last minute changes which have not yet been communicated.
    • Analyze the impact areas of the code changes to plan out regression.
      • Discuss them with the developer also to add/ update them.
    • Understand the test environment needed and set it up.
    • Understand the test data needed, analyze possible combinations of test data, and arrange for all possible permutations of the input types (see the sketch after this list).
      • Example: numeric, alphanumeric, special characters, symbols, images, files, HTML text.
  • During test execution:
    • Confirm whether changes you have not been kept in the loop about are as per the latest requirements, or whether they are issues.
      • This minimizes logging of invalid bugs.
    • Report bugs as soon as possible, through any form of communication, and then track them.
      • This helps reduce the time taken to get the build with the fixes.
    • Make a note of changes to be updated in the test cases so that they remain reusable. Else, they might become invalid and a dead investment.
  • After test execution:
    • Report all issues found
    • Report test case status
    • Report areas which need to be revisited in the test case document to update it.
      • These help in getting feedback at an earlier level.
      • For example, some undocumented changes may, per the latest requirements, have been removed, hidden or disabled for now, due to other priorities or to issues with that feature.
  • For deployment scenarios:
    • Run a smoke test checklist before deployment, on the environment from which the code is to be deployed, and after deployment, on the environment to which the code has been deployed.
    • This is to make sure the core flows always work fine, no matter what changes came in.
  • Analyze and keep planning how to make test cycles more efficient, with broader coverage.
    • Integrate Automation wherever possible
    • Plan test scenarios and testing themes for different days of the week, along with the daily items to be tested.
  • Plan for regular test case execution and review.
  • Keep building up the regression test cases.
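On the test data point above, here is a small sketch of generating combinations of input types for a pair of fields. The sample values are assumptions for the illustration; a real suite would pull representative data for each type.

    # Sketch: enumerate ordered pairs of input types, e.g. for a two-field form.
    import itertools

    SAMPLES = {
        "numeric": "12345",
        "alphanumeric": "abc123",
        "special characters": "!@#$%",
        "symbols": "(c)(tm)",
        "html text": "<b>bold</b>",
    }

    for (type_a, val_a), (type_b, val_b) in itertools.product(SAMPLES.items(), repeat=2):
        print("field1=%r (%s), field2=%r (%s)" % (val_a, type_a, val_b, type_b))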


Thursday, January 21, 2010

Multiple Platform Testing



Initially, desktop based applications were the ones mainly talked about with regard to testing. Then came the era of web based applications, and of web versions of desktop applications. Porting applications like messengers to the web platform (as Meebo did) caused a great rush towards the web platform, which enhanced the user experience.

As time passes, mobile users are increasing, and the demand for applications on mobile platforms is increasing with them. Applications which began as web based are not only moving towards desktop versions, but are also getting mobile application and WAP versions, to cover the maximum number of users and provide them maximum utility.

Now, launching applications across different platforms is becoming a trend, giving businesses a competitive edge. Hence a need for multi platform testing has arisen. Instead of hiring experts on each platform to test the same application, it is easier and simpler if the same team does the testing across platforms.

Testing across multiple platforms is a challenge in itself, and requires a great deal of skill from the tester, who has to balance testing across many factors, such as platform specifications and scope, application specifications and scope, and compatibility and end user experience across platforms, keeping their similarities and differences in mind.

The following identifies some key points for multi platform testing:

· Understand the specifications, limitations and usability prospects of the different platforms on which the tester is going to work.

Different platforms have different specifications, limitations and prospects, which can impact the requirements and design of the software specific to a platform. Platform version specific features also play a key role in defining the functioning of the application and its test cases.

For example, desktop platforms can deal with bulk data, but don't cater to continuous connectivity for users on the move, whereas mobile platforms help the user stay connected, but have the limitation that they can't deal with bulk amounts of data.

If a product is designed for web, desktop, mobile and WAP versions, then the tester has to be aware of platform specific parameters such as OS version, memory management, process management, etc. This is very important to know, as such information is very handy in the event of a software crash.


· Understand the end users particular to a platform.

The end users of a platform decide the key features of the different versions of the application. For example, a web user would get the complete master set of all the product's features, the desktop version user would get a smaller set with the considerable features, and the mobile application and WAP versions would see a further reduction, packaging certain primary functions. This is helpful in testing the usability of the product with respect to the platform.


· Understand the usability scope of the platform, which directly or indirectly affects the scope of the application to be tested/used on that platform.

Every platform has its usability scope. The specification of the product version for a platform should be designed in sync with the usability scope of that platform.

The usability scope of current mobile platforms is much smaller than that of desktop platforms, and the product specifications for these platforms are designed accordingly. Additionally, some flows in the application might arise from the various limitations of the specific platform.

Keep in mind the user experience of the different platforms, and visualize whether a feature or facility of the application on one platform would be a boon on another, thereby enhancing the usability of the application.


· Understand the GUI nitty-gritty specific to the platform.

The GUI of the application differs as per the platform for which it has been designed.

The web and desktop versions can handle bulk data, bulk transactions, and a lot of features and functions. On those platforms, having quite a few functions neatly organized in the layout does not affect the GUI adversely.

But the same design on the smaller screen of the mobile version would look disorganized and reduce user friendliness. Still, having a mobile version always helps, as it provides quick and easy access to information for users on the move.

Hence GUI design is a key component in ensuring the product's usability and its appeal to users.


· Identify and design test cases specific to a platform which impact application usability directly or indirectly.

Test Cases specific to a platform should be identified, designed and executed accordingly.

For example, for web and desktop applications, test cases for performance related issues should be considered, as those versions mainly deal with bulk data. For mobile applications, connectivity related cases and client-server response time cases should be considered.


· Identify and design test cases which will be consistent for the application across all platforms.

Some test cases, such as login and other basic application flows, are common to all platform specific versions of the application. Such cases should also be designed and executed, to test the consistency of these functions across the different platforms. A minimal sketch of parametrizing one such shared case follows.
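As one way to express this, here is a minimal sketch of a single shared flow (login) written once and parametrized over the platform versions, assuming a pytest-style runner. The endpoint URLs and the login() helper are hypothetical; a real suite would drive each platform with its own driver (browser, desktop UI, mobile agent).

    # Sketch: one consistency test for the login flow, run per platform.
    import pytest

    PLATFORM_ENDPOINTS = {
        "web": "http://www.example.com/login",
        "mobile_browser": "http://m.example.com/login",
        "wap": "http://wap.example.com/login",
    }

    def login(endpoint, user, password):
        return True   # placeholder for the platform-specific login step

    @pytest.mark.parametrize("platform", sorted(PLATFORM_ENDPOINTS))
    def test_login_consistent_across_platforms(platform):
        assert login(PLATFORM_ENDPOINTS[platform], "user", "secret")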


· Understand the test environment set up required as per the platform and as per the application.

Test environment setups will differ between platforms. Mobile platforms need test cases for different network types, device battery strengths and device configurations. Desktop machines need test cases around system process status variations, different memory status configurations, and different system configurations.

Different combinations of the above cases, and cases around the different system requirements of the different platform versions of the application, should be considered while setting up test environments.


· Use test data by keeping in mind the application's usability perspective with respect to the platform.

Test data should be designed according to the different platform versions. As stated earlier, since web and desktop applications deal with bulk data, the test data for those platform versions would be larger. On the other hand, mobile versions can't be tested with the same amount of test data, and need smaller subsets of it.

Creative Commons License
Software Testing by Indira Pai is licensed under a Creative Commons Attribution 3.0 Unported License.