

Big Data Driven Intelligent Utilities: An Assurance Framework

February 21, 2017

In the previous post, we focused on Big Data use cases for the Utilities sector. The challenges listed clearly point in one direction: the need for faster, more accurate data processing infrastructure. Thankfully, the digital forces, specifically the cloud, are making breakthroughs in high-speed, high-volume data processing with minimal upfront investment in technology. While technology is clearly an important component of Big Data, a holistic QA strategy is an essential part of any Big Data solution.

Here's a seven-step assurance framework that can adequately address Big Data's challenges:

Managing Data Validation: Big Data systems must be capable of reading data of any size, from any source, and at any speed. The framework must validate the structured and unstructured data drawn from spreadsheets, text files, database tables, audio, video, social media posts and tweets, and even data from watt meters and thermostats.
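
To make this concrete, here is a minimal sketch, in plain Python with no particular Big Data stack assumed, of routing payloads from different source formats through format-specific validators before they enter the pipeline; the field names meter_id and reading_kwh are illustrative assumptions.

```python
import csv
import json
from io import StringIO

# Minimal sketch: route raw payloads from different sources through
# format-specific validators before they enter the Big Data pipeline.
# Field names ("meter_id", "reading_kwh") are illustrative assumptions.

def validate_csv(payload: str) -> list[dict]:
    rows = list(csv.DictReader(StringIO(payload)))
    return [r for r in rows if r.get("meter_id") and r.get("reading_kwh")]

def validate_json(payload: str) -> list[dict]:
    records = json.loads(payload)
    return [r for r in records if "meter_id" in r and "reading_kwh" in r]

VALIDATORS = {"csv": validate_csv, "json": validate_json}

def ingest(payload: str, source_format: str) -> list[dict]:
    """Validate a payload of any supported format; reject unknown formats."""
    validator = VALIDATORS.get(source_format)
    if validator is None:
        raise ValueError(f"No validator registered for format: {source_format}")
    return validator(payload)

if __name__ == "__main__":
    sample = "meter_id,reading_kwh\nM-001,12.4\nM-002,\n"
    print(ingest(sample, "csv"))   # keeps only the complete record
```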

Real-time Data Processing: In contrast to the batch processing and latency of traditional data warehousing implementations, Big Data systems must analyze data clusters in real time. This is achieved by validating the incoming high-volume data, processing it in chunks, and storing it in nodes for further analysis and communication.
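
The chunk-then-store pattern described above can be sketched roughly as follows, again in plain Python; the node names, chunk size, and record fields are illustrative assumptions rather than a prescribed design.

```python
import hashlib
from itertools import islice

# Minimal sketch of the chunk-then-store pattern: validate a stream of
# readings, process it in fixed-size chunks, and assign each chunk to a
# storage node by hashing. Node names and chunk size are assumptions.

NODES = ["node-a", "node-b", "node-c"]
CHUNK_SIZE = 1000

def chunks(stream, size=CHUNK_SIZE):
    """Yield successive fixed-size chunks from an iterator of records."""
    it = iter(stream)
    while chunk := list(islice(it, size)):
        yield chunk

def assign_node(chunk_id: int) -> str:
    digest = hashlib.md5(str(chunk_id).encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

def process_stream(stream):
    for chunk_id, chunk in enumerate(chunks(stream)):
        valid = [r for r in chunk if r.get("reading_kwh") is not None]
        yield assign_node(chunk_id), valid

if __name__ == "__main__":
    readings = ({"meter_id": i, "reading_kwh": i * 0.5} for i in range(2500))
    for node, chunk in process_stream(readings):
        print(node, len(chunk))
```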

Processing Logic Validation: The processing logic of a Big Data solution must also include algorithms and procedures to tidy up and organize the mammoth volume of unstructured data into clusters, after which the enterprise data warehouse (EDW) comes into play. The system must include a mechanism for cross-checking incoming records against the outgoing data, inspecting both the number and the hygiene of records transferred. Besides ensuring the integrity of incoming data from devices, sensors, and smart meters, the cross-checking mechanism also prevents data loss.
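
A simplified sketch of such a cross-checking mechanism might reconcile record counts and per-record fingerprints between the incoming feed and the target store; the record structure here is an illustrative assumption.

```python
import hashlib

# Minimal sketch: compare record counts and per-record checksums between
# the incoming feed and the data that lands in the target store, so any
# dropped or altered record is flagged. Record fields are assumptions.

def record_fingerprint(record: dict) -> str:
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(canonical.encode()).hexdigest()

def reconcile(source_records, target_records):
    src = {record_fingerprint(r) for r in source_records}
    tgt = {record_fingerprint(r) for r in target_records}
    return {
        "source_count": len(src),
        "target_count": len(tgt),
        "missing_in_target": len(src - tgt),
        "unexpected_in_target": len(tgt - src),
    }

if __name__ == "__main__":
    source = [{"meter_id": "M-001", "kwh": 12.4}, {"meter_id": "M-002", "kwh": 9.1}]
    target = [{"meter_id": "M-001", "kwh": 12.4}]
    print(reconcile(source, target))  # reports one record missing in target
```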

Data Processing Speed Testing: For any digital data analysis, the speed of testing is vital for assuring smooth operations and safety compliance, and for preventing catastrophic events. Hence, automation has an important role. In the energy sector, sensors continuously collect performance data on multiple attributes such as current, pressure, temperature, and wind direction. This large data volume is then transmitted to control stations for real-time processing. To automate testing and identify bad data, Big Data systems must seamlessly integrate with third-party tools that validate data from source files and other databases, compare it with the target data, analyze the results, and generate reports that pinpoint differences. Given the critical need for continuously optimized operations, zero power failure incidents, and real-time grid monitoring, the energy sector, too, must consider the use of such tools as part of its Big Data systems landscape.
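
As a rough illustration of what an automated source-to-target comparison could look like, the sketch below joins records on a key, pinpoints field-level differences, and reports comparison throughput; the key and field names are assumptions, not a reference to any specific third-party tool.

```python
import time

# Minimal sketch of an automated source-to-target comparison: join on a
# key, pinpoint field-level differences, and report throughput. The key
# ("meter_id") and field names are illustrative assumptions.

def compare(source_rows, target_rows, key="meter_id"):
    start = time.perf_counter()
    target_by_key = {row[key]: row for row in target_rows}
    differences = []
    for row in source_rows:
        other = target_by_key.get(row[key])
        if other is None:
            differences.append((row[key], "missing in target"))
            continue
        for field, value in row.items():
            if other.get(field) != value:
                differences.append((row[key], f"{field}: {value!r} != {other.get(field)!r}"))
    elapsed = time.perf_counter() - start
    rate = len(source_rows) / elapsed if elapsed else float("inf")
    return differences, rate

if __name__ == "__main__":
    src = [{"meter_id": "M-001", "kwh": 12.4}, {"meter_id": "M-002", "kwh": 9.1}]
    tgt = [{"meter_id": "M-001", "kwh": 12.4}, {"meter_id": "M-002", "kwh": 9.0}]
    diffs, rows_per_sec = compare(src, tgt)
    print(diffs)                      # pinpoints the kwh mismatch on M-002
    print(f"{rows_per_sec:.0f} rows/s")
```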

Distributed File System Validation: As Big Data processing is distributed across clusters, the failure of a single node in a cluster not only impacts downstream processing, but also prevents full functional testing. Therefore, data validation and performance testing are key to success with Big Data. Starting with benchmarks for cluster environment data, the validation process helps determine data flow parameters such as the velocity, volume, and variety of data processed per node. It's also a good way to scientifically estimate the volume of data that will be generated by devices, grids, and meters, and to provision adequate infrastructure.
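
A back-of-the-envelope version of this capacity estimation might look like the sketch below; every figure in it (single-node throughput, reading size, reading frequency) is an illustrative assumption, not a measured benchmark.

```python
import math

# Minimal sketch of the capacity-planning step: start from an assumed
# single-node throughput benchmark, estimate the hourly data volume a
# fleet of smart meters will generate, and derive the number of nodes to
# provision. All figures are illustrative assumptions.

BENCHMARK_GB_PER_NODE_PER_HOUR = 50      # assumed single-node processing rate
READING_SIZE_KB = 2                      # assumed size of one meter reading
READINGS_PER_METER_PER_HOUR = 4          # assumed 15-minute reading interval

def required_nodes(meter_count: int, headroom: float = 1.3) -> int:
    """Estimate nodes needed for a meter fleet, with headroom for node failure."""
    gb_per_hour = (meter_count * READINGS_PER_METER_PER_HOUR * READING_SIZE_KB) / (1024 ** 2)
    return max(1, math.ceil(gb_per_hour * headroom / BENCHMARK_GB_PER_NODE_PER_HOUR))

if __name__ == "__main__":
    print(required_nodes(5_000_000))     # e.g. a five-million-meter deployment
```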

Job Profiling: Given the large volume of unstructured data distributed across file system clusters, missing out on even one algorithm could mean destabilized clusters and adverse business impact. Therefore, job profiling, the process of validating algorithms on a small chunk of data, helps reduce errors and the chances of failure.
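
One simple way to picture job profiling is to run the production algorithm on a small random sample and assert basic expectations before submitting the full job; the aggregation logic and checks below are illustrative assumptions.

```python
import random

# Minimal sketch of job profiling: before a job runs over the full
# distributed dataset, execute the same algorithm on a small random sample
# and check the output against basic expectations. The aggregation logic
# and thresholds are illustrative assumptions.

def aggregate_load(readings):
    """The algorithm under test: total consumption per meter."""
    totals = {}
    for r in readings:
        totals[r["meter_id"]] = totals.get(r["meter_id"], 0.0) + r["kwh"]
    return totals

def profile_job(algorithm, full_dataset, sample_size=100):
    sample = random.sample(full_dataset, min(sample_size, len(full_dataset)))
    result = algorithm(sample)
    assert result, "algorithm produced no output on the sample"
    assert all(v >= 0 for v in result.values()), "negative consumption in sample output"
    return result

if __name__ == "__main__":
    data = [{"meter_id": f"M-{i % 10}", "kwh": 0.5} for i in range(10_000)]
    profile_job(aggregate_load, data)     # validate on a chunk before the full run
    print("profiling passed; safe to submit the full job")
```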

Schema Validation: Migrating legacy databases to Big Data involves architectural makeovers, making solution-specific integration testing imperative. Field-to-field mapping and schema validation ensure data accuracy, and zero or minimal impact on critical data during migration.
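
A minimal sketch of field-to-field mapping validation might check that every legacy column is mapped, that every mapped target column exists, and that the types line up; the schemas and mapping below are hypothetical.

```python
# Minimal sketch of field-to-field mapping validation during migration:
# check that every legacy column is mapped, that every mapped target
# column exists, and that types are compatible. The schemas and the
# mapping are hypothetical, not an actual legacy model.

LEGACY_SCHEMA = {"CUST_ID": "int", "MTR_NO": "str", "READ_KWH": "float"}
TARGET_SCHEMA = {"customer_id": "int", "meter_id": "str", "reading_kwh": "float"}
FIELD_MAPPING = {"CUST_ID": "customer_id", "MTR_NO": "meter_id", "READ_KWH": "reading_kwh"}

def validate_mapping():
    issues = []
    for legacy_field in LEGACY_SCHEMA:
        if legacy_field not in FIELD_MAPPING:
            issues.append(f"unmapped legacy field: {legacy_field}")
    for legacy_field, target_field in FIELD_MAPPING.items():
        if target_field not in TARGET_SCHEMA:
            issues.append(f"target field missing: {target_field}")
        elif LEGACY_SCHEMA.get(legacy_field) != TARGET_SCHEMA[target_field]:
            issues.append(f"type mismatch on {legacy_field} -> {target_field}")
    return issues

if __name__ == "__main__":
    problems = validate_mapping()
    print("schema mapping OK" if not problems else problems)
```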

This framework equips Big Data assurance teams to avoid pitfalls, reduce errors, prevent technological obsolescence, minimize legacy maintenance, and enable swift ramp-ups. Armed with it, your Big Data investments will stand the test of time and ensure success with the use cases listed in the previous post, from smart grid maintenance to operational efficiency. Besides positively impacting your business's bottom line, the framework will also reduce your customers' utility bills. In its current or an adapted form, it could serve as a high-level blueprint for designing enterprise Big Data systems, or as a checklist for evaluating existing solutions. This matters because intelligent utilities require equally intelligent systems.

Vinod Upendra is a consultant at the Big Data and Analytics Assurance Center of Excellence (CoE) at TCS. The CoE focuses on R&D, test process definitions, test automation solution development, and competency development on Big Data technologies. He is responsible for creating testing solution approaches that maintain data quality and consistency, and that strengthen the credibility and performance of business-critical Big Data applications for customers. With over 14 years of experience in the IT and software development industry, he has taken on varied roles in the Retail, Finance, and Automotive sectors. He is a technology enthusiast and is passionate about applying Big Data analytical techniques to create business value.