May 17, 2016

We live surrounded by handheld devices running a variety of operating systems and platforms. The constant urge to plug into the world, anytime, anywhere, and from any device, requires mobile app vendors to rapidly roll out apps compatible with a multitude of hardware and operating system combinations. These apps must be maintained, supported, and improved through continuous upgrades that not only enhance functionality and interface, but also keep pace with change in the underlying technology.

Today's agile test environments are characterized by plummeting cycle times and changing market conditions. This need for continuous improvement and upgrade demands an effective, swift, and comprehensive quality assurance and testing strategy to mitigate inherent risks of the complex computing, diverse technology, and heterogeneous network landscape. But where does one begin?

Automation is a good start. Tools can accelerate and optimize test cycles, and identifying the right-fit automation tool is a critical success factor for a mobile testing strategy. Test automation, besides being critical and strategic, is also witnessing a shift in vision and focus: from increasing productivity and reducing costs to broader goals such as improving the quality and flexibility of the software testing process. This calls for a detailed and comprehensive analysis of the available tool repertoire. Given the variety of options, making the right choice is a challenging task, rife with complexities. Apart from basic factors such as scalability, portability, and reliability, other factors that influence tool selection are:

  • Automation cost: The cumulative sum of the tool's acquisition and maintenance costs, combined with associated costs such as script development and execution. In all cases, the cost of automation should not exceed the cost of manual testing; a break-even analysis can strengthen the go/no-go decision for tool adoption.
  • Customization effort: The effort required to adapt tool test scripts to multiple operating systems, operating system variations, and device models. Depending on the type of application under test (native, hybrid, or web), customization effort can swing a tool's adoption decision either way. While web applications may require little to no customization, native applications present a bigger ask because GUI and screen properties (order of screens, traversal across screens, and use of hardware buttons for screen transitions) vary across phone platforms.
  • Percentage of automation: For a given set of application features, this metric quantifies how much of the testing can be automated, which feeds into the return-on-investment calculation used to justify automation.
  • Content accuracy: This measures the tool's ability to verify application content (image, text, audio and video) accurately to ensure the quality of the final product. For example, the tool should recognize text in a selected screen irrespective of the color, background, theme, or font used.
  • Multitasking support: The tool's ability to connect to multiple devices and execute test scripts on them concurrently saves testing time and reduces the number of licenses required for parallel execution. In such scenarios, tool performance must be continuously tracked and monitored, because degraded performance under concurrent execution can negatively impact test productivity and effort.
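The break-even analysis mentioned under automation cost can be sketched as a simple cumulative-cost comparison: automation pays off once its one-time outlay is recouped by the per-run saving over manual execution. The cost figures below are hypothetical assumptions for illustration, not benchmarks.

```python
# Break-even analysis for a test-automation go/no-go decision.
# All cost figures are illustrative assumptions, not real benchmarks.

def breakeven_runs(tool_cost, script_dev_cost, auto_run_cost, manual_run_cost):
    """Return the number of test cycles after which automation becomes
    cheaper than manual testing, or None if it never breaks even."""
    fixed = tool_cost + script_dev_cost          # one-time automation outlay
    per_run_saving = manual_run_cost - auto_run_cost
    if per_run_saving <= 0:
        return None                              # automation never pays off
    # Smallest n such that fixed + n * auto_run_cost <= n * manual_run_cost
    return -(-fixed // per_run_saving)           # ceiling division

# Hypothetical figures: tool licence 5000, scripting 3000,
# 50 per automated cycle vs 450 per manual cycle.
runs = breakeven_runs(tool_cost=5000, script_dev_cost=3000,
                      auto_run_cost=50, manual_run_cost=450)
print(runs)  # regression cycles needed to recoup the investment
```

If the team expects fewer regression cycles than this break-even count over the app's lifetime, the manual approach remains the cheaper option.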

Given these factors, how does one craft a systematic approach to decide which automation tool best fits the project requirements? Here's a quick three-step approach for evaluating mobile testing tools:

  1. Requirements and feature mapping: Comprises typical requirements gathering activities, wherein business and technical requirements are articulated and documented to serve as a future reference checklist. Tool functionality can then be mapped and compared with the checklist items. The process helps filter out tools that do not meet the evaluation criteria.
  2. Tool feature score: Once tools are identified in line with the requirements analysis criteria, their features can be evaluated and compared to derive a feature grading score for each tool. The grading score helps narrow the field to a final shortlist. Trial versions of the shortlisted tools can then be downloaded for a pilot or Proof of Concept (PoC) exercise.
  3. Proof of Concept (PoC): An iterative PoC involving all downloaded tools is the final step of the selection process. Sample test scenarios with the most comprehensive coverage must be executed using the selected tools. Finally, the one tool that best suits the project requirements must be selected.
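The feature-grading step above can be sketched as a weighted scoring exercise: rate each candidate tool per feature, weight the features by importance, and shortlist the tools that clear a cut-off. The tool names, feature weights, ratings, and cut-off below are all hypothetical assumptions for illustration.

```python
# Weighted feature scoring to shortlist tools for the PoC round.
# Weights, ratings, tool names, and the cut-off are hypothetical.

WEIGHTS = {"os_coverage": 3, "script_reuse": 2,
           "content_checks": 2, "multi_device": 1}

def feature_score(ratings):
    """Weighted sum of per-feature ratings (each rated 0-5)."""
    return sum(WEIGHTS[feature] * rating for feature, rating in ratings.items())

candidates = {
    "ToolA": {"os_coverage": 4, "script_reuse": 3,
              "content_checks": 5, "multi_device": 2},
    "ToolB": {"os_coverage": 5, "script_reuse": 4,
              "content_checks": 2, "multi_device": 4},
}

# Keep only tools scoring at or above the cut-off, best first.
CUTOFF = 25
shortlist = sorted((t for t in candidates if feature_score(candidates[t]) >= CUTOFF),
                   key=lambda t: feature_score(candidates[t]), reverse=True)
print(shortlist)
```

The requirements checklist from step 1 supplies the feature list and weights; the PoC in step 3 then exercises only the tools that survive this scoring pass.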

While formulating their mobile testing strategies, testing teams must also evaluate functional and technical parameters and contextualize them in line with localized business requirements. Other factors to consider include possible modes of connectivity, configuration and response time, support for test management, integration with external tools, performance under varying network conditions, and reusability of scripts across operating systems and devices.

Creating such a strategy is both a science and an art. 'Science' focuses on the technology landscape – the spectrum of platforms and browsers, app runtimes, network options and related challenges, and hardware issues. The 'art' lies in collating the sheer variety of technical aspects through creative methods, and then applying knowledge and experience to compose what best suits the particular business context.

While this approach provides a framework to begin with, there are numerous business scenarios, where much depends on the tester’s skill, experience, and ability to adopt a holistic view, and strike the right balance between technical, management and business aspects. That is undoubtedly a fine art!

Simply put, discretionary knowledge of the particular business context is as important as the given framework. This 'science-art' balancing act, when coupled with the three-step approach outlined above, will help testing teams strike the right chord between technology and business requirements. For a more detailed discussion, you can read a TCS whitepaper on the science and art of selecting the right mobile testing tools for your enterprise.

Kanthi leads assets & innovation for the Mobility Assurance CoE in the Assurance Services unit, TCS. She has over 17 years of experience in pre-sales, mobile consultancy and test automation, program management, and development of new offerings. A voracious reader with a penchant for developing automation tools, she conceptualized and implemented "Remote Android Blackbox Instructive Tests", or 'RABBIT', for platform-level testing of Android devices. Before joining TCS, she worked as an automation consultant on the launch of phone models for popular device vendors.
