- Generative AI has the potential to revolutionize the healthcare ecosystem by fostering richer human-machine interactions.
- Healthcare organizations can leverage Generative AI to enhance patient–member experience, streamline processes by empowering healthcare professionals, and expedite drug approval and launch.
- There are a few ethical and technological concerns that must be addressed by healthcare organizations before they embark on their Generative AI journey. Some key considerations include rethinking the data architecture, focusing on information quality management, and risk assessment.
The world has been taken by storm with the rapid evolution of Generative AI (Gen AI). Current models bring together coding, logic, language, visuals, and the power of artificial intelligence in a seamless manner. The culmination of cognition and intelligence brings Generative AI closer to human-like interactions and behavior.
For a field like healthcare, where many of the pervasive challenges can be attributed to ineffective human-machine interactions, Generative AI has the power to bridge that gap effectively and truly democratize healthcare. However, like any new technology, Generative AI promises tremendous potential but also requires an understanding of its deep impact on informatics and data practices, privacy concerns, and ethical considerations once the technology is put to use.
Let's explore the significant benefits of implementing Generative AI in the healthcare ecosystem as organizations transition from experimentation to a more structured approach.
Empowering healthcare professionals:
Electronic Medical Records (EMRs) have contributed towards safer and more longitudinal healthcare. However, they also add cognitive overload for the most skilled and scarce talent – physicians and nurses. These professionals must constantly navigate between their narrative-based understanding of patient histories and symptoms and the EMR's structured data presentation. Bridging this gap has often relied on time-consuming clinical note documentation, contributing to physician burnout, which in turn leads to impersonal interactions with patients. Generative AI can summarize patient history, incorporate organizational knowledge and global research, and alleviate the need for extensive browsing and EMR searches. It can also automate manual tasks, freeing up valuable time for healthcare professionals.
Improving patient journey and experience:
Payers, as the central orchestrators of the healthcare ecosystem, leverage workflow-driven systems to interact with members, providers, and pharma companies to ensure optimal care, cost, and experience for patients. Preauthorization workflows at the start of a patient's care journey involve human judgment on benefits, network contracts, formulary coverage, and more, carried out through human-readable exchanges such as requests, authorizations, denials, appeals, and grievances. Complementing traditional AI's processing capabilities with Generative AI's cognitive abilities can make the process more accurate and the interaction more compassionate.
Augmenting drug approval and launch:
Evidence generation is central to life sciences organizations seeking regulatory approval for their drugs. This involves large volumes of information, including clinical trial data, safety information with individual-level narratives, formulation details, manufacturing SOPs, and more, all of which need to be reviewed for quality and accuracy. Despite the availability of technology, compiling, creating, and reviewing this data remains a predominantly manual process. Generative AI can help speed up generation and quality control, augmenting the efforts of authors and reviewers and ultimately reducing the time to bring drugs to market.
These use cases represent the potential of democratized cognition to solve many existing healthcare challenges. That said, Generative AI also presents its own share of ethical and technological concerns, such as systemic biases, privacy and confidentiality, the impact of hallucinations, and so on. Here are a few considerations for healthcare organizations to keep in mind as they embark on their Generative AI journey:
Rethink organizational data architecture and management practices:
Generative AI’s summarization capabilities elevate the role of unstructured data in deriving actionable insights. This departure from the conventional practice of relying on structured datasets requires a significant overhaul of data architecture and management practices:
- Revamp characterization and classification of unstructured data assets based on authenticity, sensitivity, validity, and recency of the information captured. For example, a peer-reviewed article in a journal needs to be graded higher on authenticity in comparison to a blog or a news article. A recent or up-to-date document ought to be scored higher on a validity or quotability scale. The data policy should provide a granular level of control and access depending on the sensitivity of documents to avoid unauthorized or accidental access to patient data.
- Enhance document data capabilities of data warehouses and lakes to store, manage, curate, and retire unstructured data assets from enterprise document sources. With the aggregation of document assets, key data management processes can be implemented, including metadata and access management, as well as the creation and management of data for training LLMs.
- Develop a vector data management architecture. Large Language Models (LLMs) come with several limitations, including constraints on the size of documents they can ingest, information controls, and hallucinations. Implementing multiple vector databases, segregating information at the source, and enforcing access controls through a Retrieval-Augmented Generation (RAG) layer can serve end prompts with relevant data while filtering sensitive data sources out of end results.
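To make the last two points concrete, the sketch below shows one way a RAG retrieval layer could combine access-control filtering with the authenticity grading described above. It is a minimal illustration with toy embeddings and hypothetical field names (`sensitivity`, `authenticity`, `user_clearance` are assumptions, not a standard schema); a production system would use a real embedding model and vector database.

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    embedding: list[float]       # precomputed by an embedding model
    sensitivity: str = "public"  # e.g. "public", "internal", "phi"
    authenticity: float = 0.5    # graded score: peer-reviewed article > blog post

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_emb, docs, user_clearance, top_k=3):
    """Filter by access level first, then rank by similarity weighted by authenticity."""
    allowed = {"public": {"public"},
               "internal": {"public", "internal"},
               "phi": {"public", "internal", "phi"}}[user_clearance]
    candidates = [d for d in docs if d.sensitivity in allowed]
    ranked = sorted(candidates,
                    key=lambda d: cosine(query_emb, d.embedding) * d.authenticity,
                    reverse=True)
    return ranked[:top_k]
```

The key design choice is that sensitive documents are excluded before ranking, so patient data can never reach the prompt of a user without the right clearance.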
Transition from data quality management to information quality management:
The golden rule of garbage in, garbage out (GIGO) applies to Generative AI even more than it does to traditional AI, owing to the minimal exception-management capabilities of LLMs. Moving from experiments to large-scale use of Generative AI will require the curation of information across documents to provide clean input for acceptable output.
The initial information curation and quality controls may require manual effort, but eventually, organizations will need to develop a set of procedures and test cases to achieve the desired quality of input data. Generative AI’s summarizing capabilities, combined with a team of highly trained prompt experts, can identify information quality issues and enhance the information curation process.
Consider servitization of Generative AI with the right context injection:
As organizations start deploying Generative AI platforms as a general-purpose technology, end-user applications will require highly contextualized and controlled interactions. For example, an employee engagement application requires HR policies as its organizational context, whereas a patient engagement application requires clinical pathways and care plans. And you certainly do not want HR policy questions answered through patient engagement applications.
A potential solution is developing a services architecture that encapsulates Generative AI models into a set of services relevant to the business domain, each providing a defined scope of interaction with end users. This architecture enables flexibility in deployment and in selecting the right backend language model for the intended purpose, with the right level of data, training pipelines, and access controls.
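One minimal sketch of such a service wrapper is shown below. The `call_llm` function is a stand-in for whatever backend model a service is bound to, and the keyword-based `in_scope` check is a deliberate simplification; a real deployment would use a trained intent classifier and proper prompt assembly.

```python
def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Stand-in for the backend model bound to a given service."""
    return f"[model answer grounded in: {system_prompt[:40]}...]"

class DomainService:
    """Wraps a generative model with a fixed organizational context and scope."""
    def __init__(self, name, context, allowed_topics):
        self.name = name
        self.context = context              # e.g. HR policies, clinical pathways
        self.allowed_topics = allowed_topics

    def in_scope(self, prompt: str) -> bool:
        # Placeholder scope check; production systems would use a classifier.
        return any(t in prompt.lower() for t in self.allowed_topics)

    def ask(self, prompt: str) -> str:
        if not self.in_scope(prompt):
            return f"Out of scope for the {self.name} service."
        system = f"Answer using only this context: {self.context}"
        return call_llm(system, prompt)

hr = DomainService("employee engagement", "HR policy handbook",
                   ["leave", "benefits", "payroll"])
patient = DomainService("patient engagement", "clinical pathways and care plans",
                        ["care plan", "medication", "appointment"])
```

Because each service carries its own context and scope, an HR question routed to the patient engagement service is refused rather than answered from the wrong knowledge base.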
Include risk assessment in your processes:
With concerns around hallucinations, privacy, ethical use, and IP, organizations need to define and implement a risk-based approach to Generative AI use cases. Risks such as the exposure of organizational knowledge and IP, along with those pertaining to the ethical use of patient data, should be evaluated when scoping application functionality. Additionally, software development life cycles should be tuned so that the right stakeholders review these risks continuously, adjusting the scope of applications and contextual data to meet acceptable end-user and organizational risk levels.
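A risk-based approach can be as simple as a weighted checklist applied at each life-cycle review. The factors and weights below are purely illustrative assumptions, not an industry standard; each organization would calibrate its own.

```python
# Illustrative risk checklist; factors, weights, and threshold are assumptions.
RISK_FACTORS = {
    "handles_patient_data": 5,    # ethical use of patient data
    "exposes_org_ip": 4,          # organizational knowledge / IP exposure
    "user_facing_generation": 3,  # hallucination impact on end users
    "external_model_hosting": 2,  # data leaves organizational boundary
}

def risk_score(app_profile: dict) -> int:
    """Sum the weights of all risk factors present in the application profile."""
    return sum(w for f, w in RISK_FACTORS.items() if app_profile.get(f))

def review(app_profile: dict, threshold: int = 6) -> dict:
    """Route high-scoring applications to the risk stakeholders for deeper review."""
    score = risk_score(app_profile)
    action = ("escalate to risk stakeholders" if score >= threshold
              else "proceed with standard review")
    return {"score": score, "action": action}
```

Running such a review at every release, rather than once at project kickoff, is what keeps the application's scope aligned with the acceptable risk level as it evolves.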
Generative AI has the potential to revolutionize the healthcare industry by creating a cognitive layer between complex, voluminous, and varied data, patients, physicians, and researchers to humanize healthcare. Moreover, it also opens avenues to interoperate and converge with other emerging technologies, which can further transform healthcare. However, as the technology stands today, the risks associated with it are also real and consequential. So, as healthcare organizations foray into the fascinating world of Generative AI, it is imperative to take an inside-out view – to strengthen capabilities, processes, and systems to make the organization ready to scale with Generative AI.