communication to scheduling follow-up appointments, highlighting a thorough approach to patient care. Despite its advanced diagnostics, the current system presents several limitations, such as failing to detect potential bone loss, which warrants further research and development to enhance its effectiveness in dental diagnostics.

5.1. Challenges associated with dental care
The accuracy of LLMs like ChatGPT depends on the availability of high-quality, relevant dental data. A significant hurdle in designing and training LLMs for dental care is limited access to the dental records owned by private dental clinics, together with concerns over patient privacy, both of which hamper access to comprehensive and up-to-date datasets. The development and effective use of LLMs in dentistry must navigate these challenges, ensuring access to extensive, current information while addressing privacy and ownership issues to avoid biases and maintain data integrity.

The potential of LLMs in dental healthcare is promising: they could revolutionize how dental professionals diagnose, treat, and manage patient care today. LLMs could significantly improve diagnostic precision by leveraging the vast amounts of data available in patient records and imaging, allowing for early detection and intervention in dental conditions. Furthermore, the ability of LLMs to generate personalized treatment plans and educational materials tailored to individual patient needs could enhance the effectiveness of patient care. This personalization, combined with the model's ability to process and analyze data swiftly, could lead to more efficient and patient-centered dental healthcare practices. As LLMs continue to evolve, their integration into dental healthcare is expected to deepen, offering innovative solutions to longstanding challenges and improving patient outcomes worldwide.

6. Mental health (psychiatry and psychology)
Mental health disorders, which affect millions of people globally, significantly reduce the quality of life of individuals and their families. In the realm of psychiatry, LLMs have the potential to refine diagnostic precision, optimize treatment outcomes, and enable more tailored patient care, moving beyond traditional, subjective diagnostic approaches prone to inaccuracies. By leveraging AI to analyze extensive patient data, it is possible to uncover patterns not easily detectable by humans, thereby improving diagnosis.28,29

Galatzer-Levy et al.30 explored the potential role of LLMs in psychiatry. Their primary investigation tool was Med-PaLM 2, an LLM equipped with comprehensive medical knowledge. The model was trained and tested using a blend of clinical narratives and patient interview transcripts. The dataset encompassed expert evaluations using instruments such as the 8-item Patient Health Questionnaire (PHQ-8) and the post-traumatic stress disorder (PTSD) Checklist Civilian Version (PCL-C). The study intended to gauge the severity of PTSD using the PCL-C while employing the PHQ-8 to assess depression and anxiety levels. The evaluation process involved extracting from Med-PaLM 2 the clinical scores, the rationale behind those scores, and the model's confidence in its derived results. The gold standard for this evaluation was the DSM-5 (Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition). The researchers' rigorous testing process involved the analysis of 46 clinical case studies, 115 PTSD evaluations, and 145 depression instances, which were probed using prompts to identify diagnostic information and clinical scores. The assessment also saw Med-PaLM 2 fine-tuned on a wide range of natural language applications and a substantial textual database. Notably, research-quality clinical interview transcripts were employed as inputs when assessing the model's efficacy.
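To make the extraction step described above concrete, the sketch below shows one way a prompt-based clinical scoring pipeline of this kind could be structured in Python. It is a minimal illustration under stated assumptions, not the protocol reported by Galatzer-Levy et al.: the prompt wording, the hypothetical query_llm callable, and the JSON output contract are introduced here purely for clarity.

```python
# Illustrative sketch only: the prompt wording, the query_llm() helper, and the
# JSON response format are assumptions, not the protocol of the cited study.
import json


def build_scoring_prompt(transcript: str, instrument: str = "PHQ-8") -> str:
    """Ask the model for a clinical score, its rationale, and a self-reported
    confidence, mirroring the three outputs described in the text."""
    return (
        "You are assisting with research-grade psychiatric assessment.\n"
        f"Read the following clinical interview transcript and estimate the "
        f"{instrument} total score.\n\n"
        f"Transcript:\n{transcript}\n\n"
        "Respond as JSON with keys: score (integer), rationale (string), "
        "confidence (a float between 0 and 1)."
    )


def score_transcript(transcript: str, query_llm) -> dict:
    """query_llm is any callable that sends a prompt to an LLM endpoint
    (e.g., an institutionally hosted Med-PaLM 2) and returns its completion."""
    raw = query_llm(build_scoring_prompt(transcript))
    result = json.loads(raw)  # assumes the model honored the JSON instruction
    return {
        "score": int(result["score"]),
        "rationale": result["rationale"],
        "confidence": float(result["confidence"]),
    }
```

In a study setting, the score returned by such a pipeline would then be compared with the expert-rated PHQ-8 or PCL-C score for the same transcript, which is how accuracy figures of the kind reported below are obtained.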
Med-PaLM 2 demonstrated its prowess in evaluating psychiatric states across a range of conditions. Remarkably, when tasked with predicting psychiatric risk from clinician and patient narratives, the model showcased an impressive accuracy rate ranging between 80% and 84%.

Another study31 evaluated the performance of various LLMs, including Alpaca and its variants, FLAN-T5, GPT-3.5, and GPT-4, across different mental health prediction tasks over online text, such as detecting mental states (e.g., depression or stress) or the risk of actions like suicide. Through extensive experimentation with zero-shot prompting, few-shot prompting, and instruction fine-tuning, it was found that instruction fine-tuning notably enhances LLMs' effectiveness across all tasks. Notably, the fine-tuned models, Mental-Alpaca and Mental-FLAN-T5, outperformed larger models such as GPT-3.5 and GPT-4 and matched the accuracy of task-specific models.
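For readers unfamiliar with instruction fine-tuning, the sketch below outlines what such a setup can look like for a FLAN-T5 model on a mental health classification task, in the spirit of Mental-FLAN-T5. It is a hedged illustration only: the base checkpoint, instruction template, toy examples, and hyperparameters are assumptions made here and do not reproduce the cited study's configuration.

```python
# Minimal instruction fine-tuning sketch with Hugging Face Transformers.
# The instruction template, toy data, and hyperparameters are illustrative.
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Each example pairs a natural language instruction with the expected label text.
examples = [
    {
        "instruction": "Decide whether the author of this post shows signs of "
                       "depression. Answer 'yes' or 'no'.\nPost: I can't get out "
                       "of bed and nothing feels worth doing anymore.",
        "label": "yes",
    },
    {
        "instruction": "Decide whether the author of this post shows signs of "
                       "depression. Answer 'yes' or 'no'.\nPost: Had a great run "
                       "this morning and I'm excited about the weekend.",
        "label": "no",
    },
]


def tokenize(batch):
    # Encode the instruction as the input and the label text as the target.
    model_inputs = tokenizer(batch["instruction"], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["label"], truncation=True, max_length=8)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs


dataset = Dataset.from_list(examples).map(
    tokenize, batched=True, remove_columns=["instruction", "label"]
)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="mental-flan-t5-sketch",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

The same pattern scales to the larger, curated instruction datasets used in such studies; the key design choice is that the task is expressed as a natural language instruction rather than a bare classification label, which is what distinguishes instruction fine-tuning from conventional supervised fine-tuning.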
The use of conversational agents based on LLMs for mental well-being support is growing; yet, the effects of such applications are still not fully understood. A qualitative study by Ma et al.32 of 120 Reddit posts and 2,917 comments from a subreddit dedicated to mental health support apps like Replika reveals mixed outcomes. While Replika offers accessible, unbiased support that can enhance confidence and self-exploration, it may also exacerbate social isolation, owing to its content moderation, consistent interactions, memory retention, and users' increased dependence on the app.

Following the advancements with ChatGPT, research into automated therapy using AI's latest technologies

