

Mobile App Usability - Techniques for Assuring Four Key Aspects

 
April 18, 2017

Widespread smartphone adoption has led customers to expect near-desktop functionality and quality on their mobile devices as well. This puts greater emphasis on application quality and responsive design than ever before. And with shrinking device dimensions, varying screen sizes, and a premium on device memory, assuring a high level of mobile app usability can be challenging. Enter usability testing: an applied assurance area comprising formal methods and techniques for assuring the responsiveness, affordance, and intuitiveness of mobile apps.

Early usability tests for mobile applications were conducted using the Think Aloud protocol, suggested by K. A. Ericsson and H. A. Simon (1980, 1984). As the name suggests, think aloud testing requires participants to think aloud and speak their minds as they perform tasks with the app. Speech and conversation analysis provide insights into users' cognitive processes, and the results yield key design inputs and requirements for improving applications. Another important technique, field testing, helps improve app performance, localization, and accessibility. Here, testers study application and user behavior in a natural context in the field, not in a lab setting. It is a useful alternative, especially when app functionality and performance depend on network reliability, device capability, and user movement across locations.
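To correlate field observations with real-world conditions, it helps to record the network and location context alongside each app event. The sketch below shows one way such a field-session log could look in Python; the logger class, event types, and field names are illustrative assumptions, not the output format of any particular tool.

```python
import json
import time

class FieldSessionLogger:
    """Minimal sketch of a field-test event logger (hypothetical design):
    each app event is recorded with the network and location context the
    tester observed, so app behavior can later be correlated with
    real-world conditions."""

    def __init__(self, session_id):
        self.session_id = session_id
        self.events = []

    def log(self, event, network_type, location, note=""):
        self.events.append({
            "timestamp": time.time(),   # when the event occurred
            "event": event,             # e.g. "screen_load", "task_complete"
            "network": network_type,    # e.g. "4G", "3G", "offline"
            "location": location,       # coarse label, e.g. "train", "office"
            "note": note,               # observer's free-form remark
        })

    def save(self, path):
        with open(path, "w") as f:
            json.dump({"session": self.session_id, "events": self.events},
                      f, indent=2)

# Example: one field session recorded on a commuter train
log = FieldSessionLogger("field-001")
log.log("app_launch", "4G", "platform")
log.log("search_timeout", "3G", "tunnel", note="spinner shown for 12s")
log.save("field-001.json")
```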

Heuristic evaluation, based on predefined guidelines, is another usability testing technique. Apps are evaluated against set guidelines for accuracy, emotional response, memorability, response time, errors, and overall experience. The findings are then prioritized by severity level and business need. In some business situations, design walkthroughs and working prototypes are best suited. Before embarking on prototype development, customer requirements must be accurately determined through detailed customer interviews, to avoid rework and optimize the cost of quality.
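As a simple illustration of that prioritization step, the sketch below ranks heuristic-evaluation findings by the product of severity and business need. The example issues, rating scales, and weighting scheme are illustrative assumptions, not a standard.

```python
# Minimal sketch: prioritizing heuristic-evaluation findings.
# severity: 1 (cosmetic) .. 5 (blocker); business_need: 1 (low) .. 5 (critical)
findings = [
    {"issue": "Error messages show internal codes", "severity": 3, "business_need": 2},
    {"issue": "Checkout button below the fold",     "severity": 4, "business_need": 5},
    {"issue": "Inconsistent icon for sharing",      "severity": 2, "business_need": 3},
]

for f in findings:
    # Simple multiplicative score; teams may weight these differently.
    f["priority"] = f["severity"] * f["business_need"]

# Highest-priority findings first
for f in sorted(findings, key=lambda f: f["priority"], reverse=True):
    print(f"{f['priority']:>2}  {f['issue']}")
```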

Specifically, these techniques measure user response to application events across four key aspects:

Ease of use is about assuring intuitiveness, or the ease with which users perform tasks with the app. Besides measuring user recall of screens, terms, and interface elements, along with steps, clicks, and task completion time, ease of use testing also measures the number of interface-induced user errors. It also assures app content, such as the quality of in-app text, images, audio, and video, and checks for logical information flow, which in turn enables users to quickly navigate their way through the app. Adopting a minimalist design from the outset, one that avoids repetitive user inputs for application flow and task completion, is a good way to pass the ease of use test.
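To make these measures concrete, the sketch below derives steps, interface-induced errors, and task completion time from a recorded session. The event format is a hypothetical example, not the real output of any session-recording tool.

```python
# Minimal sketch: ease-of-use metrics from one recorded task session.
session = [
    {"t": 0.0, "type": "task_start"},
    {"t": 2.1, "type": "tap"},
    {"t": 4.7, "type": "error", "cause": "ambiguous label"},  # interface-induced
    {"t": 6.0, "type": "tap"},
    {"t": 9.5, "type": "task_complete"},
]

taps   = sum(1 for e in session if e["type"] == "tap")
errors = sum(1 for e in session if e["type"] == "error")
duration = (next(e["t"] for e in session if e["type"] == "task_complete")
            - next(e["t"] for e in session if e["type"] == "task_start"))

print(f"steps: {taps}, interface-induced errors: {errors}, "
      f"task completion time: {duration:.1f}s")
```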

The second aspect, performance, is about improving user productivity with the app. Specifically, the test measures users' task performance time: how quickly they complete their tasks with the app. Task-oriented interfaces, designed to follow users' natural, intuitive workflows, improve results on this test.
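Task performance time is usually summarized across participants rather than read off a single session. The sketch below shows one way to do this; the participant labels and timings are made-up sample data.

```python
import statistics

# Minimal sketch: summarizing task performance time across participants.
# Times are seconds to complete the same task; values are sample data.
task_times = {"P1": 41.2, "P2": 55.8, "P3": 38.9, "P4": 62.4, "P5": 44.0}

times = list(task_times.values())
print(f"mean:   {statistics.mean(times):.1f}s")
print(f"median: {statistics.median(times):.1f}s")
print(f"slowest participant: {max(task_times, key=task_times.get)}")
```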

Next is adaptability, of users as well as devices. In addition to measuring how users with diverse skills adapt to and familiarize themselves with the app's interface, the adaptability test also measures the app's ability to operate under changing environmental conditions and varying network speeds. Here, device and interface design combine with visual experience (branding, graphical design, and layout) to engage users and gain their attention.
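One way to exercise the network-speed side of adaptability is to measure the same screen under several network profiles and check each result against a responsiveness budget. In the sketch below, the budget and the measured load times are made-up sample data; a real harness would collect them via network shaping on a device farm.

```python
# Minimal sketch: one screen's load time checked against a budget
# under different network profiles (all numbers are illustrative).
LOAD_BUDGET_S = 3.0  # assumed responsiveness budget, in seconds

measured_load_s = {"wifi": 0.9, "4g": 1.6, "3g": 3.4, "edge": 7.8}

for network, seconds in measured_load_s.items():
    verdict = "OK" if seconds <= LOAD_BUDGET_S else "FAIL"
    print(f"{network:<5} {seconds:>4.1f}s  {verdict}")
```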

The fourth factor, relevance, is a measure of user experience and satisfaction. The test checks content for relevance and flags irrelevant content that may lead to user frustration and loss of interest. Apps with high relevance are likely to be referred and recommended. With social media becoming the preferred referral mode, it is essential for mobile apps to provide easy social media sharing functionality.

To assure these factors, app testers must veer away from the traditional test scenario-test case and user action-system response approaches. Usability testing should instead be conducted in specialized testing labs comprising a monitoring area with computers connected to monitoring devices and loaded with specialized software that tracks eyeball and mouse movements and generates heat maps showing which parts of the interface receive the most attention. The laboratory environment lets users concentrate on tasks without distractions or interference. While the software tracks and records keyboard inputs and mouse movements, a video camera captures users' eye movements and facial expressions. A one-way mirror enables gesture and emotion tracking by observers on the other side.
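The heat maps themselves amount to binning fixation coordinates over a grid of the screen. The sketch below illustrates the idea with randomly generated gaze points standing in for an eye tracker's output; dedicated lab software does this (and much more) in practice.

```python
import numpy as np

# Minimal sketch: turning recorded gaze coordinates into a heat map grid.
# The gaze points are synthetic sample data, not real tracker output.
rng = np.random.default_rng(0)
width, height = 360, 640                    # assumed screen size in px
gaze_x = rng.normal(width * 0.5, 40, 500)   # fixations cluster mid-screen...
gaze_y = rng.normal(height * 0.3, 60, 500)  # ...toward the top third

heat, _, _ = np.histogram2d(
    gaze_x, gaze_y,
    bins=[9, 16],                           # coarse grid over the screen
    range=[[0, width], [0, height]],
)

# Cells with the highest counts are where attention concentrated.
hot = np.unravel_index(np.argmax(heat), heat.shape)
print(f"hottest cell (x-bin, y-bin): {hot}, fixations: {int(heat[hot])}")
```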

Tools such as Morae and Silverback measure task completion time, capture facial expressions, and track eye movement to gain usability insights. Usability testing labs can also be integrated with cloud-based, as-a-service usability testing platforms such as Appsee and Userlytics. Appsee, a mobile analytics platform, uses heat maps and session recordings to track real-time user-app interaction. Userlytics drives crowd-testing, involving usability testers from diverse backgrounds. It is like having a dedicated testing function without investing in one. Initially designed for desktop and web application testing, usability labs are now being customized and adapted for mobile testing.

Based on my experience, I recommend a combination of milestone-based, elaborate laboratory tests and minimal field tests, from the outset. Despite its lab-, software-, and tools-heavy approach, usability testing is clearly about user interactions, collaboration, and feedback. As Dan Saffer says in his book Microinteractions: "Speak human. Feedback is for human beings, so address them as such. Use the words your users use." In today's era of digital mobilization, where challenges are manifold, markets are fierce, and competition is just an app away, good user experience is the key to delighting and retaining customers. And usability testing, integrated into the mainstream testing lifecycle, assures that good experience.

Kanthi leads assets and innovation for the Mobility Assurance CoE in the Assurance Services unit of TCS. She has over 17 years of experience in pre-sales, mobile consultancy and test automation, program management, and the development of new offerings. A voracious reader with a penchant for developing automation tools, she has conceptualized and implemented "Remote Android Blackbox Instructive Tests" (RABBIT) for platform-level testing of Android devices. Prior to joining TCS, she worked as an automation consultant on the launch of phone models for popular device vendors.