The Life Sciences industry is excited about AI capabilities, and rightly so; however, there is a regulatory picture that we need to keep in mind. After all, an organization invests time, money and resources in AI adoption with a definite ROI in mind, and that investment comes to nothing if the product or service does not receive regulatory approval.

AI-based systems differ from traditional systems in that it is not always possible to explain how and why the system reached a particular decision. Most ML algorithms work as a black box, which poses an ethical issue, especially when dealing with Personal Information (PI) or Sensitive PI. This does not sit well with regulators, who need to understand how a certain conclusion was arrived at. A black box makes an AI solution less auditable and less transparent.
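
For illustration, one model-agnostic way to shed some light on a black-box model is to quantify which input features drive its predictions. The sketch below is a minimal example using scikit-learn's permutation importance on a hypothetical classifier; the synthetic dataset, model choice and feature indices are assumptions for the example, not anything described in this article.

```python
# Minimal sketch: probing a black-box classifier with permutation importance.
# The synthetic dataset and RandomForest model are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical/PI data (no real patient data is used).
X, y = make_classification(n_samples=1000, n_features=8, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Treat the trained model as a black box.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: {result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```

Feature-level importance of this kind does not fully open the black box, but it gives auditors a reproducible artefact to review alongside the model.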

It is imperative to evaluate current Risk Management practices and policies rationally, identify their shortcomings, and plug the gaps with robust technical and procedural controls.
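
As one illustration of such a technical control (my own example, not a control named in this article), the sketch below records each model decision in an append-only audit trail: model version, a hash of the input, the prediction and a timestamp, so that a reviewer can later reconstruct what the system did without storing raw PI.

```python
# Minimal sketch of a prediction audit trail; field names are illustrative.
import hashlib
import json
import time

def audit_record(model_version: str, features: dict, prediction,
                 log_path: str = "audit_log.jsonl") -> dict:
    """Append one auditable record per model decision (JSON Lines format)."""
    payload = json.dumps(features, sort_keys=True)
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        # Hash the input instead of storing raw PI/Sensitive PI.
        "input_sha256": hashlib.sha256(payload.encode("utf-8")).hexdigest(),
        "prediction": prediction,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example usage with hypothetical feature values.
audit_record("risk-model-1.2.0", {"age": 54, "biomarker_a": 0.81}, prediction="high_risk")
```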

Manish Malik

Associate Consultant
