Bridging Gaps or Creating Divides? The Role of Large Language Models in Healthcare
Artificial intelligence (AI) is being integrated into healthcare at an accelerating pace, with large language models (LLMs) emerging as particularly promising tools. These tools, such as ChatGPT, could reshape how healthcare providers, researchers, and patients interact with medical information. Realising that promise, however, requires carefully addressing their challenges to ensure equitable and ethical use.
What are LLMs and How Do They Work?
LLMs are advanced AI systems trained on extensive text datasets drawn from many sources. During training, the model learns statistical associations between words, absorbing patterns of language, grammar, and nuance. This allows it to predict the most likely next words in a sequence given the surrounding context, which is how it produces coherent and relevant responses to queries.
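For readers curious about what "predicting the next word" looks like in practice, here is a minimal Python sketch. It assumes the open-source Hugging Face transformers library and the small GPT-2 model, chosen purely for illustration; the models behind tools like ChatGPT are far larger, but the underlying idea is the same:

```python
# Minimal sketch of next-word prediction, the core mechanism described above.
# Assumes the Hugging Face `transformers` library and the small "gpt2" model,
# used only for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Regular physical activity can reduce the risk of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# Take the scores for the final position and convert them to probabilities:
# the model's guess at which word comes next, learned purely from text patterns.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {float(prob):.3f}")
```

Scaling this simple prediction step up to billions of parameters and vast amounts of training text is what gives modern LLMs their fluency.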
How are LLMs Being Used in Healthcare Now?
LLMs are already making significant contributions to various sectors of healthcare:
- Clinical Support: LLM assistants help clinicians streamline workflows by summarising patient records, analysing complex medical literature, and drafting clinical notes and emails to patients.
- Patient Interaction: LLM chatbots can also interact directly with patients, offering advice on healthy lifestyle changes, delivering health education, and providing mental health support.
- Error Checking: Clinicians have used LLMs to detect errors in prescriptions, such as incorrect dosages or harmful drug interactions, by analysing patient data and cross-referencing medications (a simplified sketch follows below).
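To make the error-checking idea concrete, the sketch below asks a general-purpose LLM to review an invented prescription using the openai Python client. The model name, prompt, and patient details are illustrative assumptions, not a description of any deployed clinical system; real-world use would require validated tools, clinician oversight, and strict data governance:

```python
# A simplified, hypothetical sketch of prompting an LLM to flag prescription issues.
# Assumes the `openai` Python client; the model name and patient details are
# invented for demonstration only. This is not a real clinical tool.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prescription = """
Patient: 78-year-old with impaired kidney function (eGFR 25 mL/min)
Current medication: warfarin 5 mg once daily
New prescription: ibuprofen 400 mg three times daily
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice, not a recommendation
    messages=[
        {
            "role": "system",
            "content": (
                "You are a medication-safety assistant. List any dosing concerns "
                "or potentially harmful drug interactions in the prescription, "
                "with a brief reason for each."
            ),
        },
        {"role": "user", "content": prescription},
    ],
)

# The model's output is advisory only; a pharmacist or clinician makes the final decision.
print(response.choices[0].message.content)
```

Even in this toy example, the value lies in surfacing possible issues quickly; the clinical judgement, and the responsibility, remain with the prescriber.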
Benefits and Limitations of LLMs in Healthcare
The use of LLMs in healthcare offers clear advantages:
- Efficiency: Automating administrative tasks reduces the burden on healthcare providers, allowing them to focus more on patient care.
- Scalability: LLMs can cater to large populations, making healthcare information more accessible, particularly for underserved communities.
- Cost Savings: LLMs can assist in handling routine consultations for minor issues and providing preventative health advice. This can alleviate pressure on healthcare systems such as the NHS, ultimately driving down costs.
However, the technology also has its limitations:
- Accuracy Concerns: LLMs can sometimes generate misleading or incorrect information, which can have serious consequences in healthcare settings.
- Privacy Risks: Handling sensitive patient data poses ethical challenges, particularly if systems are not securely designed.
- Dependence on Training Data: The quality of an LLM's output depends on the data it was trained on. If that data is outdated, incomplete, or biased, the model may produce inaccurate or harmful recommendations.
The Risk of Widening Inequality Gaps
While LLMs hold promise, deploying them carelessly risks exacerbating existing inequalities:
- Digital Divide: Access to LLM tools often requires reliable internet and digital literacy. This can hinder access for those with lower socioeconomic status and those unfamiliar with technology, such as older populations.
- Bias in AI Systems: LLMs are often trained on datasets skewed toward Western or otherwise privileged populations. This can lead to unrepresentative outputs, such as recommendations that are less relevant or inclusive, reinforcing health disparities.
- Cost of Implementation: The expense of LLM technology may give an edge to private healthcare providers, which can adopt these innovations more rapidly, widening the gap in care quality between public systems and better-funded private institutions.
How Can These Issues Be Resolved?
To harness the potential of LLMs without deepening inequities, several steps can be taken:
1. Improving Accessibility
- Governments and healthcare providers must prioritise expanding digital infrastructure in underserved areas.
- LLM-based tools should also be designed with intuitive interfaces, accommodating individuals with limited technological literacy.
2. Addressing Bias
- Datasets should include medical knowledge from various regions, cultures, and demographics, ensuring the models are representative.
- Collaboration with healthcare practitioners, community representatives, and policymakers can ensure that LLMs are designed with inclusivity in mind.
- Regularly reviewing and testing LLMs can identify biases in real-world applications, allowing developers to make adjustments.
3. Lowering Costs
- Open-source development can make LLMs more affordable and accessible for public healthcare systems. Additionally, partnerships between governments and private companies can subsidise implementation costs, ensuring that innovations reach public health providers like the NHS.
Holly Health’s Perspective on LLMs in Healthcare
LLMs are transforming healthcare by improving efficiency, widening access to medical information, and alleviating pressure on healthcare providers. These tools have the potential to make healthcare more cost-effective and prevention-focused. However, addressing challenges such as bias, accessibility, and ethical concerns is critical. With careful implementation, LLMs can bridge, not widen, healthcare gaps and make preventative healthcare more widely accessible.
At Holly Health, we believe in using innovative technologies like LLMs to provide accessible, preventative, and cost-effective support, whilst recognising that AI and LLMs are designed to aid, streamline, and enhance human support, not to replace it where it is needed. Currently, Holly Health is testing the inclusion of an LLM-powered chatbot feature to provide personalised, everyday behaviour change coaching. We are collaborating with experts to ensure these tools are safe, effective, and reliable, reflecting our commitment to delivering high-quality care.
Written by Alessandro Chincotta, MSc. A recent graduate in Health Psychology from UCL, Alessandro has a strong interest in health technology and its role in advancing healthcare accessibility. His thesis focused on the use of Large Language Models (LLMs) to promote bowel cancer screening.