In most industries, balance sheets and company financials are major contributors to a company's valuation. But in pharma, two other valuation parameters matter – patent count and protection periods. Leading pharma companies have seen revenue and bottom-line margins decline at the end of their patent protection periods. While in theory the 20-year patent life for drugs may seem long enough to cash in, in reality it's not, simply because the period begins on the day the drug is invented, not when it reaches the market. The clinical trial process eats into the first few patent-protected, cash-generating years. To derive maximum ROI from their R&D and patent investments, pharma companies need smart solutions that deliver quick results.
The complex drug development process, comprising several processes, applications, and approvals, generates large volumes of unquantified and unstructured data from multiple systems, in varied forms. This includes data from clinical systems (such as physician notes, prescriptions, laboratory records, insurance claims, and administrative records), electronic patient records, parameters from patient monitoring machines, press and media mentions, and – in this age of digital by default – even data from social networks. This Big Data must be processed and analyzed quickly to support scientific analytics, next-gen research, focused business outcomes, and accelerated go-to-market.
Big Data in healthcare is overwhelming, not only because of its volume but also because of the diversity of data types, the velocity at which it's generated, and the speed at which it must be managed.
Pharma's Big Data is characterized by four aspects:
(i) Volume – datasets exceeding terabytes, and running into petabytes, exabytes, and beyond;
(ii) Variety – data not just in structured form from the company's internal IT systems, but also in unstructured form from a variety of internal and external sources;
(iii) Velocity – data that reaches the company's processing environment and IT systems at break-neck speed; and
(iv) Veracity – the uncertainty or noise in the datasets.
Traditional IT infrastructure simply cannot address these challenges; this Big Data requires an equally capable technology solution. Speedy digitization, clearly, is a game changer for pharma success. Let's explore how.
The drive (and need) is to understand as much about a patient as possible, as early as possible, and to pick up warning signs of serious illness at a stage when treatment is far simpler (and less expensive). The rising population, coupled with an increase in average life span, has triggered dramatic changes in treatment delivery models. And these changes are data-driven. By digitizing, combining, and effectively using Big Data, not just pharma companies but also physicians, multi-provider groups, large hospital networks, and healthcare organizations can realize significant benefits. Diseases can be detected and treated early, fraud can be prevented, epidemic outbreaks can be predicted in time, preventable deaths can be reduced, and overall quality of life improved.
With a Big Data-driven approach, patients can be involved in treatment decisions. The data can also facilitate disease prevention through the right set of lifestyle choices and changes. With consistent and timely data, patients can be matched with the best-suited healthcare providers. With the same data, these healthcare providers can collaborate to improve the overall quality of treatment and patient care. Clinical trials and patient records can be quickly analyzed to discover adverse effects of drugs before they reach the market. The right set of statistical tools and algorithms can significantly improve clinical trial design and patient recruitment to better match treatments to individual patients, thus reducing trial failures and speeding new treatments to market. Predictive modeling can lower attrition and produce a leaner, faster, more targeted R&D pipeline for drugs and devices. And these use cases are just the tip of the iceberg. Big Data's repertoire for pharma has the potential to drive significant improvements, even revolutions, for the sector. Implementing them, though, can be fraught with challenges.
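To make the predictive-modeling use case concrete, here is a minimal, purely illustrative Python sketch: it scores trial candidates for dropout risk with a simple logistic model and screens out high-risk candidates before enrollment. The feature names, weights, and threshold are invented assumptions for illustration; a real pipeline would fit the model on historical trial data.

```python
import math

# Hypothetical, hand-set weights for dropout risk; in practice these would be
# fitted on historical trial data, not assumed as they are here.
WEIGHTS = {"age_over_65": 1.2, "distance_km": 0.03, "prior_dropouts": 0.9}
BIAS = -2.0

def dropout_risk(candidate):
    """Logistic score in (0, 1): higher means more likely to drop out."""
    z = BIAS + sum(WEIGHTS[k] * candidate.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def screen(candidates, threshold=0.5):
    """Retain candidates whose predicted dropout risk is below the threshold."""
    return [c for c in candidates if dropout_risk(c) < threshold]

candidates = [
    {"id": "P001", "age_over_65": 0, "distance_km": 5, "prior_dropouts": 0},
    {"id": "P002", "age_over_65": 1, "distance_km": 80, "prior_dropouts": 2},
]
retained = screen(candidates)
```

On this toy data, the nearby first-time participant is retained while the distant repeat dropout is flagged – the kind of attrition filtering that, at scale, produces the leaner trial pipeline described above.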
In addition to the four Vs – Volume, Variety, Velocity, and Veracity – there are other, equally important challenges that must be addressed. These include privacy and usability. From an assurance perspective, pharma QA units must go the extra mile and think beyond data warehousing. Applying the same standard approach to both data warehousing and Big Data will, for sure, produce the wrong outcomes. Big Data testing requires test strategies that address structured, semi-structured, and unstructured data. Other prerequisites include statistical tests on data sets, an optimal and scalable test environment, and the competency to work with non-relational databases. To summarize, we need an efficient assurance framework to address Big Data's challenges and realize its full potential. I'll propose one in my next post.
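As a small sketch of what such test strategies can look like in practice, the Python snippet below combines three of the checks mentioned above: a structured-data reconciliation (row counts match after ingestion), a semi-structured check (every JSON line parses and carries a key field), and a basic statistical test (means agree within a tolerance). The field names, sample values, and tolerance are illustrative assumptions, not part of any real assurance framework.

```python
import json
import statistics

def reconcile_counts(source_rows, target_rows):
    """Structured-data check: row counts must match after ingestion."""
    return len(source_rows) == len(target_rows)

def parse_semi_structured(lines):
    """Semi-structured check: every JSON line must parse and carry an 'id'."""
    records = [json.loads(line) for line in lines]
    return all("id" in r for r in records), records

def mean_drift_ok(source_vals, target_vals, tolerance=0.01):
    """Statistical check: source and target means agree within a relative tolerance."""
    src, tgt = statistics.mean(source_vals), statistics.mean(target_vals)
    return abs(src - tgt) <= tolerance * abs(src)

# Toy "source" extract and the same records as semi-structured JSON lines.
source = [{"id": 1, "dose_mg": 50.0}, {"id": 2, "dose_mg": 52.0}]
raw_lines = ['{"id": 1, "dose_mg": 50.0}', '{"id": 2, "dose_mg": 52.0}']

ids_ok, target = parse_semi_structured(raw_lines)
counts_ok = reconcile_counts(source, target)
drift_ok = mean_drift_ok([r["dose_mg"] for r in source],
                         [r["dose_mg"] for r in target])
```

A real framework would run checks like these at scale against non-relational stores; the point here is only that structured, semi-structured, and statistical validations need distinct test logic.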