Is HyperOps the new enterprise IT system?
Making operations and maintenance smarter
Operations technology (OT) at an enterprise supports a range of planning, operations, and maintenance activities in service of diverse business objectives. Under the Industry 4.0 framework, it covers the setup, control, monitoring, and tuning of sensors and devices in industrial settings, taking different forms and functions depending on the industry. Current Industry 4.0 developments are heavily focused on the manufacturing and utilities segments, leaving a great deal of opportunity to make operations and maintenance (O&M) smarter across a wider variety of industries. SmartOps, focused on targeted processes and automation, has led to siloed technology that addresses point problems. We introduce HyperOps, an end-to-end planning-to-sustenance concept that leverages artificial intelligence-powered resilient design to recover, rediscover, and respond to emerging business situations.
The information technology (IT) revolution has transformed our world over the last three decades. The first stage involved the computerization of business processes across industry verticals, with computers becoming ubiquitous in everyday life. Over the years, companies have undertaken digital transformation journeys under the aegis of a chief information officer (CIO). Augmenting digitalization, the age of artificial intelligence (AI) has begun to influence several IT-automation tasks, creating a smarter IT ecosystem. Examples of IT automation include recommendations for consumers (e-commerce), smart procurement solutions (any industry), automated workforce deployment solutions (utilities), automation of start- and end-of-day checks, order creation in customer relationship management systems (communications, media, and technology), and demand and production planning (life sciences), to name a few. These are fascinating developments indeed.
HyperOps and full stakeholder play
With this background, let us look at the role IT systems play in the larger industry ecosystem. On average, IT accounts for around 3.28% of the overall budget across industry segments. The IT budget typically spans computers, telecommunications, software, and audio-visual equipment, covering a large part of the data and information lifecycle. Depending on the industry, there is significant scope and opportunity to use AI-powered automation well beyond the CIO's traditional ambit, opening access to CxOs, including business owners, chief operating officers, and the chief technology officer, in the form of full stakeholder play.
Digitalization and technology adaptation in core business processes will deliver high value to clients by extending enterprise innovation and differentiation to the last mile (planning, operation, and optimization). This extends traditional IT services into operations under what we call HyperOps: AI-powered autonomous planning, operations, maintenance, and sustenance at hyper-scale, with resilience as a core purpose. This is far richer and wider than traditional operations technology, which involves the control and monitoring of devices in several industries; HyperOps goes beyond controlling devices, offering end-to-end benefits through an IP-driven service platform, and is a natural extension to DevOps in IT (Pal & Purushothaman, 2016). For example, in the utilities business, classical asset inspection OT does not address issues beyond SCADA and other monitoring systems, whereas HyperOps for asset inspection provides an operations and maintenance platform that can collect, manage, analyze, and interpret heterogeneous data using artificial intelligence.
Business operations in infrastructure-rich industries such as manufacturing, energy and resources, or travel and transportation are expert-centric. Although technology plays a role, it is often restricted to the digitization of information that is perceived and consumed by a human expert. Thanks to the multiple digital transformation journeys undertaken by industries over the past decade, there is now an excellent opportunity to bring automation (augmented, automated, and amplified by intelligence) into business operations wherever possible. The current technology ecosystem comprises advanced sensors, communication devices, and compute power as part of IoT or digital twin infrastructure, and it generates vast amounts of data. AI can help convert these digital systems into smart response systems that interpret the data with the speed and precision necessary to act.
Traditional IT systems versus HyperOps
There have been successful attempts at automating IT, but a larger opportunity remains to be unlocked, which builds the case for HyperOps. The core difference lies in the data: IT systems use a limited slice of it, while HyperOps systems can process far more. Traditional IT systems use structured data that is either computer-generated or entered manually in a methodical way with the help of software. HyperOps systems, on the other hand, rely on unstructured data, even opportunistically captured data, such as images, videos, and signals. Converting unstructured data into usable, useful data is a non-trivial activity. Advances in machine learning and AI over the past decade have made such conversion possible with consistent, reproducible results, turning the tide towards wider adoption of HyperOps.
Consider utility inspection as an example: a field engineer goes around manually inspecting assets. When an issue is observed, a handheld device is used to enter the asset ID, the asset type, the nature of the defect, and its severity, and a ticket is then (automatically) raised to address it. This is the current process, in which IT supports a routine inspection. Looking to the future, a drone could be employed to locate the asset and acquire the data; the data would then be stored in the cloud, and image analysis would be performed automatically to detect anomalies in each component of the asset and measure the severity of each defect. A secure, AI-driven HyperOps system built on next-generation technologies, playing a major role in planning, operations, maintenance, and sustenance, then becomes the new normal for the enterprise in the years ahead. This goes well beyond what the SCADA systems used by industry in operations and maintenance enable.
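The drone-based flow described above can be sketched as a small acquire-analyze-triage pipeline. The sketch below is a minimal illustration, not a production design: the defect detector is a stub standing in for a trained vision model, and the asset IDs, defect types, and severity thresholds are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical severity bands; a real system would calibrate these on labeled data.
SEVERITY_LEVELS = {"low": 0.3, "medium": 0.6, "high": 1.0}

@dataclass
class Defect:
    asset_id: str
    defect_type: str
    score: float  # model confidence / severity score in [0, 1]

    @property
    def severity(self) -> str:
        # Map the numeric score to the first band whose upper bound covers it.
        for level, upper_bound in SEVERITY_LEVELS.items():
            if self.score <= upper_bound:
                return level
        return "high"

@dataclass
class Ticket:
    asset_id: str
    description: str

def detect_defects(asset_id: str, image: bytes) -> list[Defect]:
    """Stand-in for image analysis over drone imagery.

    In practice this would run a trained vision model on the image; here we
    return one fixed finding so the pipeline is runnable end to end.
    """
    return [Defect(asset_id, "corrosion", 0.72)]

def run_inspection(asset_id: str, image: bytes, min_severity: str = "medium") -> list[Ticket]:
    """Acquire -> analyze -> triage: raise a ticket for each serious defect."""
    order = list(SEVERITY_LEVELS)  # low < medium < high
    tickets = []
    for defect in detect_defects(asset_id, image):
        if order.index(defect.severity) >= order.index(min_severity):
            tickets.append(Ticket(asset_id, f"{defect.defect_type} ({defect.severity})"))
    return tickets
```

Calling `run_inspection("TX-017", image_bytes)` yields a ticket for each defect at or above the configured severity, mirroring the handheld-device workflow without a human in the loop.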
A few challenges, and surmounting them
Highlighted below are a few challenges in the development and deployment of HyperOps systems:
End-users are specialists in their area but need help in interfacing with the system. A natural language interface is a necessity.
A domain-specific HyperOps knowledge base is critical to realization and must be developed and maintained.
A scalable, resilient, and trustworthy system will play a major role in the uptake of HyperOps.
Generation of insights through reasoning and cognitive systems will play a major role in driving the next generation of AI.
Current users are usually aware of the developments in AI but may not be capable of creating an AI-powered HyperOps system. Automated workflow generation and pipeline synthesis for AI systems for different tasks can address this challenge.
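To make the last challenge concrete, automated pipeline synthesis can be framed as a search over a registry of processing steps, chaining steps whose input and output types line up. The sketch below is a toy illustration of that framing; the step names and data types are invented for the example and do not correspond to any particular product.

```python
from collections import deque

# Each step declares the data type it consumes and the type it produces.
# All step names and types here are illustrative.
STEPS = {
    "decode_image":   ("raw_bytes", "image"),
    "detect_objects": ("image", "detections"),
    "rank_severity":  ("detections", "report"),
    "parse_signal":   ("raw_bytes", "signal"),
    "find_anomalies": ("signal", "detections"),
}

def synthesize(source_type: str, target_type: str) -> list[str]:
    """Breadth-first search over the registry for a type-correct pipeline."""
    queue = deque([(source_type, [])])
    seen = {source_type}
    while queue:
        current, path = queue.popleft()
        if current == target_type:
            return path
        for name, (in_type, out_type) in STEPS.items():
            if in_type == current and out_type not in seen:
                seen.add(out_type)
                queue.append((out_type, path + [name]))
    raise ValueError(f"no pipeline from {source_type} to {target_type}")
```

For instance, `synthesize("raw_bytes", "report")` assembles the image route automatically, sparing the field expert from wiring up AI components by hand.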
Many of these challenges can be addressed by adopting a Machine First™ Development Model (MFDM™) with associated accelerators and platforms that enable field experts to deploy the technology. There is a need to develop a variety of enabling platforms, such as a secure and intelligent edge, an always-on platform leveraging the cloud hyperscalers, natural human experience (NHE)-driven conversational platforms, and automation and AI platforms, among others.
If we succeed in addressing the above challenges, there is no doubt that HyperOps will be the new IT and will significantly drive growth and transformation stories across industry segments in the coming decade.
Mahesh Rangarajan is a TCS Distinguished Engineer and Head of Semantic Systems Accelerator at TCS Research, with over 23 years of industry experience.