FRIDAY, June 28, 2024 (HealthDay News) -- Large language model (LLM)-based classifiers can accurately detect guardian authorship of messages sent from an adolescent patient portal, according to a research letter published online June 25 in JAMA Network Open.
April S. Liang, M.D., from the Stanford University School of Medicine in Palo Alto, California, and colleagues examined the ability of an LLM to detect guardian authorship of messages originating from adolescent patient portals. Messages from adolescent patient portal accounts at Stanford Children's Health were sampled and manually reviewed for authorship. Two prompts were iteratively engineered on a random subset of 20 messages until perfect performance was achieved: one focused solely on identifying authorship (single task), while the other generated a response to the message and identified authorship (multitask). Both prompts were then tested on the remaining messages.
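The research letter does not publish its exact prompts or name the model used, so the sketch below is only an illustration of what a single-task authorship-classification prompt might look like. The prompt wording, the `gpt-4o` model string, and the `classify_authorship` helper are all assumptions for illustration, not the authors' implementation; a multitask variant would additionally ask the model to draft a reply alongside the authorship label.

```python
"""Minimal sketch of a single-task authorship classifier (illustrative only)."""
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical prompt; the study's actual prompts are not published here.
SINGLE_TASK_PROMPT = (
    "You will read a message sent from an adolescent's patient portal "
    "account. Decide whether it was written by the adolescent patient "
    "or by a parent/guardian. Answer with exactly one word: "
    "PATIENT or GUARDIAN."
)

def classify_authorship(message_text: str) -> str:
    """Return 'PATIENT' or 'GUARDIAN' for one portal message (hypothetical helper)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the study's model is not named in this summary
        messages=[
            {"role": "system", "content": SINGLE_TASK_PROMPT},
            {"role": "user", "content": message_text},
        ],
        temperature=0,  # deterministic output is preferable for classification
    )
    return response.choices[0].message.content.strip().upper()

# Example usage:
# label = classify_authorship("Hi, my daughter has had a fever since Tuesday...")
```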
Of the 2,088 test messages, 71.8 percent were labeled as parent- or guardian-authored and 28.2 percent as patient-authored. The researchers found that the single-task LLM achieved a sensitivity of 98.1 percent and a specificity of 88.4 percent, while the multitask LLM achieved a sensitivity of 98.3 percent and a specificity of 88.9 percent. For the multitask LLM, this corresponded to positive and negative predictive values above 95 percent. Performance of the single-task and multitask classifiers was statistically indistinguishable.
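The predictive values follow directly from the reported sensitivity, specificity, and the 71.8 percent prevalence of guardian-authored messages via Bayes' rule. A short sketch, assuming only the rounded percentages reported above, reproduces the "above 95 percent" claim:

```python
# Reproduce the predictive values implied by the reported metrics.
# All inputs come from the article; the arithmetic is standard
# Bayes' rule for PPV/NPV given prevalence.

prevalence = 0.718    # fraction of messages that were guardian-authored
sensitivity = 0.983   # multitask LLM; guardian-authored = positive class
specificity = 0.889

tp = sensitivity * prevalence              # true positive rate x prevalence
fp = (1 - specificity) * (1 - prevalence)  # false positives among patient-authored
fn = (1 - sensitivity) * prevalence        # missed guardian-authored messages
tn = specificity * (1 - prevalence)        # correctly labeled patient-authored

ppv = tp / (tp + fp)  # P(guardian-authored | flagged as guardian)
npv = tn / (tn + fn)  # P(patient-authored | flagged as patient)

print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}")  # -> PPV = 0.958, NPV = 0.954
```

Both values land between 95 and 96 percent, consistent with the letter's reported figures.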
"Ultimately, reliable identification of nonpatient-authored messages has implications beyond adolescent medicine. Among adults, care partners commonly access patient portals using the patient's credentials, especially relevant for geriatric patients or individuals with developmental differences," the authors write. "Our results found that this study's LLM has potential in improving safeguards for patient confidentiality."
One author disclosed ties to nference.