BUSINESS AND TECHNOLOGY INSIGHTS

Demystifying DataOps: Are you ready for a change?

 
December 30, 2019

Data-driven strategy has become the new normal in today’s intensely competitive world. To make the most of opportunities, businesses need relevant, appropriate and sufficient data at the right time. But with data arriving in a wide variety of formats, from multiple sources and often in silos, and with business teams depending on IT for data amid sub-optimal collaboration between the two, the most pertinent question is: do businesses really get the data they need when they need it?

In most cases, the answer is likely to be ‘No’. An emerging concept called DataOps can change this ‘No’ to ‘Yes’. DataOps makes it possible to automate data delivery so that the right data arrives on time and efficiently.

DataOps being a nascent area, you might have some basic questions:

· What is DataOps?

· What are the typical use cases of DataOps?

· Is it important for organizations to adopt DataOps?

· What measures should an enterprise take for effective adoption of DataOps?

This blog explains what DataOps is and why organizations need it.

DataOps: An Introduction

DataOps is a framework comprising people, processes and tools to deliver high-quality, trustworthy and secure data on demand through an automated and collaborative approach. It defines and governs the way data producers and data consumers interact and collaborate with each other for faster, on-time delivery of relevant data.

Solution to Data Challenges

Businesses are striving to extract as many insights from their data as possible through rigorous analysis. This allows them to stay relevant by becoming more customer-centric and maximizing revenue. However, as mentioned above, there are many barriers to quick access to relevant data: inconsistent ways of handling and governing data, data trapped in silos, poor collaboration between teams, a lack of common policies, and the same data meaning different things in different departments. Addressing these challenges calls for an enterprise-wide framework for data operations built on increased collaboration and automation.

Typical Use Cases

A prime use case of DataOps is data analytics, which needs relevant data on demand. The growing adoption of DevOps practices has paved the way for agile and automated data processing and delivery, which leads to another DataOps use case: provisioning relevant test data on demand.

Key Components

Tools and Technologies: DataOps can be enabled by creating a “data pipeline”, a sequence of data handling steps such as request for data, validation of the request, data acquisition, consolidation, transformation, cleansing, security, provisioning, and data visualization, reporting and governance. Your organization might already have standalone software for such operations. These tools need to work in conjunction, forming a larger data processing pipeline, and manual steps should be eliminated wherever feasible to enable agile delivery of appropriate data. The overall solution could include intelligence to re-provision data that was already prepared for an earlier request, as well as mechanisms and metrics to continuously analyze pipeline performance, data leakage and timeliness of data delivery.
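As a rough illustration, here is a minimal Python sketch of how such standalone steps could be chained into one automated pipeline. The stage names mirror the steps listed above, but the DataPipeline class, the stage functions and the sample records are illustrative assumptions, not part of any particular DataOps product.

# A minimal sketch of a DataOps-style pipeline, assuming each stage is a
# function that takes a dataset (here, a list of records) and returns a new one.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Record = Dict[str, object]
Stage = Callable[[List[Record]], List[Record]]

@dataclass
class DataPipeline:
    """Runs a fixed sequence of data handling stages."""
    stages: List[Stage] = field(default_factory=list)

    def add_stage(self, stage: Stage) -> "DataPipeline":
        self.stages.append(stage)
        return self

    def run(self, records: List[Record]) -> List[Record]:
        for stage in self.stages:
            records = stage(records)  # each stage hands its output to the next
        return records

def cleanse(records: List[Record]) -> List[Record]:
    # Data cleansing: drop records with a missing customer identifier.
    return [r for r in records if r.get("customer_id")]

def mask_sensitive_fields(records: List[Record]) -> List[Record]:
    # Data security: mask email addresses before the data is provisioned.
    return [{**r, "email": "***masked***"} for r in records]

def provision(records: List[Record]) -> List[Record]:
    # Data provisioning: in a real pipeline this would publish to a data mart,
    # a reporting layer or a test environment.
    print(f"Provisioning {len(records)} records to consumers")
    return records

if __name__ == "__main__":
    raw = [
        {"customer_id": "C001", "email": "a@example.com", "spend": 120},
        {"customer_id": None, "email": "b@example.com", "spend": 75},
    ]
    pipeline = (DataPipeline()
                .add_stage(cleanse)
                .add_stage(mask_sensitive_fields)
                .add_stage(provision))
    curated = pipeline.run(raw)

A real implementation would replace these toy stages with the organization’s existing acquisition, quality and masking tools; the point is only that the hand-offs between them are automated rather than manual.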

Process: For DataOps to effectively automate the delivery of data, it is important to monitor every step of the data pipeline. Organizations should therefore implement appropriate governance controls: monitoring of service-level agreements (SLAs) for data requests, audit logs, dashboards, and alerts and notifications. To ensure data security, data access should be managed meticulously: unauthorized users should not get access to data, and authorized users should get access only to the data they need. In addition, an automated governance workflow should be used to maintain control whenever an existing data pipeline changes and when new pipelines are configured.
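To make these controls a little more concrete, the sketch below shows, in Python, one possible shape for role-based data access with audit logging and a simple SLA check. The roles, column entitlements, SLA threshold and log format are all assumptions made for illustration; a real deployment would integrate with the organization’s existing identity, logging and monitoring tooling.

# Illustrative governance controls: role-based column access, an audit log
# entry per request, and a simple SLA breach check. Roles, columns and the
# SLA threshold are assumptions for this example.
import logging
from datetime import datetime, timedelta

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit_log = logging.getLogger("dataops.audit")

# Least-privilege entitlements: each role sees only the columns it needs.
ROLE_COLUMNS = {
    "analyst": {"customer_id", "spend"},
    "marketing": {"customer_id", "email"},
}

SLA = timedelta(hours=4)  # assumed turnaround time agreed for a data request

def authorize(user: str, role: str, requested_columns: set) -> set:
    """Grant only the columns this role may access, and audit the decision."""
    allowed = ROLE_COLUMNS.get(role, set())
    granted = requested_columns & allowed
    denied = requested_columns - allowed
    audit_log.info("user=%s role=%s granted=%s denied=%s",
                   user, role, sorted(granted), sorted(denied))
    return granted

def check_sla(requested_at: datetime, delivered_at: datetime) -> bool:
    """Return True if the request met the SLA, and log a warning if not."""
    elapsed = delivered_at - requested_at
    if elapsed > SLA:
        audit_log.warning("SLA breached: request took %s", elapsed)
        return False
    return True

if __name__ == "__main__":
    authorize("jdoe", "analyst", {"customer_id", "email", "spend"})
    check_sla(datetime(2019, 12, 30, 9, 0), datetime(2019, 12, 30, 14, 30))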

People: The employees in your organization are critical stakeholders in adopting a DataOps culture. An effective deployment of DataOps requires a change in mindset, and people are naturally resistant to change. To get employees on board, organizations can cite the successes of DevOps-based application delivery and explain how the core principles of DataOps and DevOps are similar. Organizations should also identify DataOps champions who can act as catalysts for quicker and more effective DataOps adoption.

Leadership Role

DataOps is expected to gain increasing traction within organizations’ data management programs. For DataOps to succeed, the three key entities — people, process and technology — need to work in unison, with the required attention from the organization’s leadership.

Jayant Dani currently spearheads the conceptualization, architecture, design, and engineering of Tata Consultancy Services’ (TCS) proprietary integrated data management platform, TCS MasterCraft™ DataPlus. He also plays a leading role in setting up the Big Data practice at TCS. He has over 18 years of experience, and his areas of expertise include technical leadership of large solution teams, client relationships, and management of large-scale implementations. Dani is passionate about architecting technology solutions across industry verticals and has many publications to his credit, including a paper on Big Data management presented at an Institute of Electrical and Electronics Engineers (IEEE) conference.

Sameer Rane handles market analyst relations and marketing for TCS MasterCraft™ DataPlus, a data management product from TCS. Sameer has 15 years of experience in the IT industry and has played a leading role in the maintenance, deployment, and management of software products and solutions. His current areas of interest include Data Privacy and Data Quality Management.