The Life Sciences (LS) industry handles large amounts of health and safety data from Research & Development, Clinical Trials, Post-marketing, and Real-World Data sources. The volume of datasets and metadata has rapidly outgrown the capacity of humans to process, handle, and review them. Despite this, LS has been slower than other industries to adopt AI-based automation. Cognitive and intelligent platforms have enormous potential to drive efficiencies in safety case processing and, over time, to replace traditional pharmacovigilance (PV) systems.
Many factors and challenges contribute to this slow adoption rate, but in the present article we focus on transparency: the interpretability and explainability of AI algorithms and their determinations.
Transparency in AI models helps stakeholders understand the nature of errors, improve quality and efficiency, maintain human governance over machines, and take corrective action. We believe that truly transparent AI models can exponentially accelerate the adoption of AI/ML in pharmacovigilance.