
WHITE PAPER


Building Supervised AI Models with AI Assurance Framework 

Assuring safety of autonomous systems by standardizing AI algorithm testing 

AI is finding increased application across industries. However, it still struggles to process complex situations and make decisions, and it cannot reliably judge whether a task is appropriate or ethical. For AI systems to succeed, testers need to define the operational boundaries of AI and monitor them periodically to pre-empt any breaches. The proposed approach to AI assurance combines human expertise with technology-driven monitoring to drive superior AI performance. Key points to consider while deploying this approach include:

  • Pre-deployment phase - Choose a data set that closely resembles the production system, identify tools to capture feedback data, eliminate data biases, execute non-functional testing, and prioritize data sanity (see the first sketch below).
  • Post-deployment phase - Review output from continuous feedback, establish failure thresholds, use an AI-monitoring platform to identify code progressions, classify the required level of changes, and identify new data parameters (see the second sketch below).
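The pre-deployment checks above can be expressed as simple, automated gates in a test pipeline. The following is a minimal sketch, assuming a tabular data set with a hypothetical sensitive attribute "gender" and label "approved"; the column names, tolerance value, and helper functions are illustrative, not part of the framework itself.

```python
# Minimal sketch of pre-deployment data sanity and bias checks (illustrative only).
import pandas as pd


def check_schema(df: pd.DataFrame, expected_columns: list[str]) -> bool:
    """Basic data sanity: required columns are present and no column is fully empty."""
    missing = set(expected_columns) - set(df.columns)
    empty = [c for c in df.columns if df[c].isna().all()]
    if missing or empty:
        print(f"Sanity check failed: missing={missing}, empty={empty}")
        return False
    return True


def check_representation(df: pd.DataFrame, attribute: str, tolerance: float = 0.10) -> bool:
    """Flag the data set if any group's share deviates from parity by more than `tolerance`."""
    shares = df[attribute].value_counts(normalize=True)
    parity = 1.0 / len(shares)                       # equal share per group
    skewed = shares[(shares - parity).abs() > tolerance]
    if not skewed.empty:
        print(f"Possible bias in '{attribute}': {skewed.to_dict()}")
        return False
    return True


if __name__ == "__main__":
    sample = pd.DataFrame({
        "gender": ["F", "M", "M", "M", "M", "M", "M", "F"],
        "income": [52, 61, 48, 75, 66, 58, 70, 55],
        "approved": [1, 1, 0, 1, 1, 0, 1, 0],
    })
    check_schema(sample, ["gender", "income", "approved"])
    check_representation(sample, "gender", tolerance=0.10)
```

Checks like these would run before the model is promoted, so that biased or malformed training data is caught while it is still cheap to fix.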
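For the post-deployment phase, the failure-threshold idea can be sketched as a rolling monitor over continuous feedback. This is a minimal illustration under the assumption that the production system logs (prediction, ground truth) pairs from user feedback; the window size and threshold values are placeholders, not values prescribed by the framework.

```python
# Minimal sketch of post-deployment feedback monitoring with a failure threshold.
from collections import deque


class FeedbackMonitor:
    """Tracks a rolling window of feedback and flags a breach when the
    observed failure rate crosses the configured threshold."""

    def __init__(self, window_size: int = 100, failure_threshold: float = 0.15):
        self.window = deque(maxlen=window_size)
        self.failure_threshold = failure_threshold

    def record(self, prediction, ground_truth) -> None:
        # Store whether the model got this feedback item right.
        self.window.append(prediction == ground_truth)

    def failure_rate(self) -> float:
        if not self.window:
            return 0.0
        return 1.0 - (sum(self.window) / len(self.window))

    def breach(self) -> bool:
        """True when the model has drifted past its operational boundary
        and should be escalated for review or retraining."""
        return self.failure_rate() > self.failure_threshold


if __name__ == "__main__":
    monitor = FeedbackMonitor(window_size=10, failure_threshold=0.2)
    feedback = [("spam", "spam"), ("ham", "spam"), ("spam", "spam"),
                ("ham", "ham"), ("spam", "ham"), ("ham", "ham")]
    for pred, truth in feedback:
        monitor.record(pred, truth)
    print(f"Failure rate: {monitor.failure_rate():.2f}, breach: {monitor.breach()}")
```

In practice, a breach would route the case to human reviewers, who classify the required level of change and decide whether retraining or a boundary adjustment is needed.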
Sayantan Datta

Research Analyst, Tata Consultancy Services

Ushasi Sengupta

Research Analyst, Tata Consultancy Services

Gokulaparthiban

Innovation Evangelist, Tata Consultancy Services

Dr. Rahul Agarwal

Innovation Evangelist, Tata Consultancy Services
