With artificial intelligence (AI), machine learning (ML), and natural language processing (NLP) accelerating automation, customer interactions in the IT operations management space are undergoing a paradigm shift. Conversational bots and virtual assistants with agentic AI capabilities have recently emerged that can accurately interpret human inputs and generate contextually relevant responses.
Enterprises are increasingly leveraging next-gen application management services (AMS) in this area to adopt new business models for superlative customer experience and accelerated growth.
As part of their next-gen AMS strategy, organizations are adopting intelligent solutions with real-time data-driven insights for better decisions.
An intelligent application management system, with AI at the core, streamlines operations and enables data-driven, smarter decision-making across the enterprise. Let us look at a few scenarios where AI delivers value for AMS:
Closed incidents with proven resolutions are used as training data in a supervised approach to service desk ticket classification.
1. Data pre-processing
The historical data used for training should be critically analyzed to address bias, missing values, and noise.
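A minimal sketch of this pre-processing step, assuming the ticket dump is loaded into a pandas DataFrame (the column names below are illustrative, not from the source):

```python
import pandas as pd

# Hypothetical ticket dump; column names are illustrative.
tickets = pd.DataFrame({
    "short_description": ["Password reset", None, "VPN drops  ", "Password reset"],
    "assignment_group": ["ServiceDesk", "Network", "Network", "ServiceDesk"],
})

# Drop rows with missing ticket text (missing data).
tickets = tickets.dropna(subset=["short_description"])

# Normalize whitespace and case to reduce noise.
tickets["short_description"] = (
    tickets["short_description"].str.strip().str.lower()
)

# Remove exact duplicates so one frequent incident type
# does not bias the training set.
tickets = tickets.drop_duplicates()

print(len(tickets))
```

Class imbalance across assignment groups would also be checked at this stage, for example by inspecting value counts per category.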
2. Feature extraction
Feature extraction refers to the process of transforming raw data into numerical features that can be processed while preserving the information in the original data set. Before applying any machine learning algorithm, a historical ticket dump must be converted into numerical representations. Scikit-learn's CountVectorizer converts a collection of text documents into a vector of term/token counts.
To help with pattern recognition, an n-dimensional feature vector is created from ticket data using term frequency-inverse document frequency (TF-IDF) weighting. In this numerical representation, each element of the vector, tagged with a TF-IDF value, represents a distinctive word. TF-IDF diminishes the weight of terms that occur very frequently in the document set and increases the weight of terms that occur rarely.
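The two vectorization steps above can be sketched as follows, using a few hypothetical ticket descriptions:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# Hypothetical ticket descriptions.
docs = [
    "user cannot reset password",
    "password reset link not working",
    "vpn connection drops intermittently",
]

# Raw term/token counts.
count_vec = CountVectorizer()
counts = count_vec.fit_transform(docs)

# TF-IDF down-weights terms common across the corpus
# (e.g. "password") and up-weights rarer, distinctive terms.
tfidf_vec = TfidfVectorizer()
tfidf = tfidf_vec.fit_transform(docs)

print(counts.shape, tfidf.shape)
```

Both produce a documents-by-vocabulary matrix; the TF-IDF variant is the one typically fed to the classifiers described next.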
3. Classification models
Classification models such as logistic regression, random forest, multinomial naive Bayes, the multilayer perceptron (MLP), and support vector machines (including linear SVM) are used, along with ensemble techniques such as stacking.
The architecture of a stacking model involves two or more base models, often referred to as level-0 models, and a meta-model that combines the predictions of the base models, referred to as a level-1 model. Linear models are often used as the meta-model, such as logistic regression for classification tasks.
Stacking models, trained on the predictions made by the base models, can outperform any individual base model in the ensemble. A machine learning pipeline is used to automate the model-building process.
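A hedged sketch of the stacking architecture described above, using synthetic data as a stand-in for vectorized ticket features (the specific base models and sample sizes are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

# Synthetic stand-in for TF-IDF ticket features with 3 categories.
X, y = make_classification(n_samples=200, n_features=20, n_classes=3,
                           n_informative=5, random_state=42)

# Level-0 base models.
base_models = [
    ("rf", RandomForestClassifier(n_estimators=50, random_state=42)),
    ("svm", LinearSVC(random_state=42)),
]

# Level-1 meta-model: logistic regression combines base-model predictions.
stack = StackingClassifier(estimators=base_models,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X, y)
print(round(stack.score(X, y), 2))
```

StackingClassifier internally uses cross-validated predictions of the base models to train the meta-model, which is what lets the ensemble improve on its strongest member.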
4. Model evaluation
Stratified cross-validation provides train/test splits for evaluating each model. Finally, the model with the best accuracy is selected for deployment.
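The evaluation step can be sketched as below; the candidate models and synthetic data are illustrative stand-ins for the real ticket features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for vectorized ticket data.
X, y = make_classification(n_samples=300, n_features=20, n_classes=3,
                           n_informative=6, random_state=0)

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Stratified splits preserve the class balance in every fold,
# which matters for imbalanced ticket categories.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = {name: cross_val_score(model, X, y, cv=cv).mean()
          for name, model in candidates.items()}

# The model with the best mean accuracy is chosen for deployment.
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```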
K-Means clustering is an unsupervised learning algorithm that can categorize ticket data into K clusters.
This is done by assigning data points to clusters based on their distance from a cluster centroid, using a combined distance metric: Jaccard distance for the fixed (categorical) fields and cosine distance for the free-text fields of the incident data.
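One way to realize such a combined metric is to compute Jaccard distance over the categorical fields and cosine distance over the TF-IDF vector of the free text, then blend the two. This is a sketch, not the source's implementation; the blending weight `alpha` in particular is an assumption:

```python
import numpy as np

def jaccard_distance(a: set, b: set) -> float:
    """Jaccard distance between two sets of categorical field values."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def cosine_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine distance between two free-text TF-IDF vectors."""
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def combined_distance(fixed_a, fixed_b, text_a, text_b, alpha=0.5):
    """Blend Jaccard (fixed fields) and cosine (free text).

    `alpha` is an illustrative weighting, not from the source.
    """
    return (alpha * jaccard_distance(fixed_a, fixed_b)
            + (1 - alpha) * cosine_distance(text_a, text_b))

# Two hypothetical incidents: categorical fields plus TF-IDF vectors.
d = combined_distance(
    {"network", "p2"}, {"network", "p1"},
    np.array([0.9, 0.1, 0.0]), np.array([0.8, 0.2, 0.1]),
)
print(round(d, 3))
```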
Cluster labels are identified by mining and extracting representative item sets from each cluster and then performing semantic labeling.
In many scenarios, K is not known up front, and datasets must be analyzed iteratively, for example with the elbow method, to determine the optimal value of K.
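One common iterative approach is the elbow method: fit K-Means for a range of K values and look for the point where within-cluster inertia stops dropping sharply. A sketch on synthetic stand-in data with three natural groups:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic stand-in for vectorized ticket data with 3 natural groups.
X, _ = make_blobs(n_samples=150, centers=3, cluster_std=0.6, random_state=0)

# Inertia (within-cluster sum of squares) for each candidate K.
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(1, 7)}

# The "elbow" is where adding another cluster stops paying off:
# inertia falls sharply up to K=3, then flattens.
drops = {k: inertias[k - 1] - inertias[k] for k in range(2, 7)}
print({k: round(v, 1) for k, v in drops.items()})
```

Silhouette analysis is a common alternative when the elbow is ambiguous.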
GenAI will augment, accelerate, and simplify operational processes, always with humans in the loop. Based on experiential knowledge, we list relevant AMS use cases, target personas, and the potential benefits GenAI can bring to each.
GenAI offers a wide range of impactful outcomes. We explore two high-value use cases in the context of AMS operations.
Scenario 1 | Getting customer feedback and measuring satisfaction
Problem statement
Transformation with AI and GenAI
Potential benefits
Scenario 2 | AMS team onboarding for both transition and steady-state support
Problem statement
Transformation with agentic AI
Potential benefits
APPROACH
Enterprises can adopt a consulting-led approach to define agentic AI solutions trained on the KPIs, intent, and context of their business personas.
Increasingly, enterprises are adopting focused, role-based AI agents over horizontal technical LLMs. These agents help with faster execution of everyday tasks, with more strategic KPI-aligned data insights for faster decision-making. Advanced large language models (LLMs), augmented with enterprise context, refine recommendations over time, adapting to new patterns and evolving operational needs.
BENEFITS
The future of application management is intelligent, adaptive, and deeply aligned with business goals and KPIs – and AI is at its core.
As organizations evolve in a digital-first world, the integration of AI and GenAI into AMS operations is no longer optional. Organizations are harnessing AI-driven insights and automation to realize smarter operations, proactive issue resolution, business assurance, hyper-personalized user experiences, and greater agility.
To become perpetually adaptive, enterprises will need to have an intelligent AI core governing AMS operations – this will allow them to stay ahead of the competition and grow, despite turbulent and uncertain times.