From text to code – Leveraging machine learning for neurology outpatient clinical coding
, Michael George 1, Matthew Stammers 1, 2, Christopher Kipps 1, 2
Abstract
Background
Most neurological care is delivered in outpatient settings without mandated clinical coding, so clinical records remain stored as unstructured text with inconsistent formatting. There is a significant opportunity to increase the value of these data through automated clinical coding utilising natural language processing (NLP). While existing models for full ICD-10 clinical coding lack sufficient accuracy for clinical use, 60% of neurology outpatient cases fall into just five diagnostic categories, suggesting that a simplified coding system could enhance feasibility and serve as a foundation for more complex coding schemes.
Objective
We propose a simplified coding system of 29 codes for neurology outpatient episodes. We evaluate several machine learning methods in a supervised single-label classification task on real-world outpatient care notes.
Methods
We collected outpatient care notes created between 15 November 2018 and 2 December 2022. The training dataset included 14,917 care notes, most of which were annotated with ICD-10 codes during routine care and subsequently mapped to 29 simplified diagnostic categories. An external validation set of 1,042 randomly selected encounters was retrospectively coded.
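To make the mapping step concrete, the sketch below shows how routinely assigned ICD-10 codes might be collapsed into simplified diagnostic categories. The paper does not publish its 29-category scheme, so the codes, category names, and the `simplify` helper are hypothetical illustrations only.

```python
# Illustrative sketch only: the 29-category scheme and its exact ICD-10
# mapping are not listed in the abstract, so these entries are hypothetical.

# Map granular ICD-10 codes to simplified diagnostic categories by prefix.
ICD10_TO_SIMPLIFIED = {
    "G40": "epilepsy",            # G40.x epilepsy codes
    "G43": "headache",            # G43.x migraine
    "G44": "headache",            # G44.x other headache syndromes
    "G35": "multiple_sclerosis",  # G35 multiple sclerosis
    "G20": "parkinsonism",        # G20 Parkinson's disease
}

def simplify(icd10_code: str) -> str:
    """Map a full ICD-10 code (e.g. 'G40.9') to a simplified category."""
    prefix = icd10_code.split(".")[0].upper()
    return ICD10_TO_SIMPLIFIED.get(prefix, "other")

assert simplify("G40.9") == "epilepsy"
assert simplify("R51") == "other"  # unmapped codes fall into a catch-all class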
Models included logistic regression, support vector machine, bidirectional LSTM, BERT-based models (DistilBERT, RoBERTa), and a generative large language model (LLM), Mistral 7B. All but the LLM were trained via 10-fold stratified cross-validation; final models were trained on the complete dataset.
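As a minimal sketch of this evaluation protocol, the example below runs 10-fold stratified cross-validation on one of the traditional baselines (TF-IDF features with logistic regression) using scikit-learn. The `notes` and `labels` variables, the macro-averaged F1, and all hyperparameters are assumptions rather than the paper's actual configuration.

```python
# Sketch of the non-LLM training protocol: 10-fold stratified CV, then a
# final refit on the complete dataset. `notes` (list of care-note strings)
# and `labels` (their simplified categories) are hypothetical variables.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline

pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000, class_weight="balanced"),
)

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
fold_f1 = []
for train_idx, test_idx in skf.split(notes, labels):
    pipeline.fit([notes[i] for i in train_idx], [labels[i] for i in train_idx])
    preds = pipeline.predict([notes[i] for i in test_idx])
    fold_f1.append(
        f1_score([labels[i] for i in test_idx], preds, average="macro")
    )

print(f"macro F1: {np.mean(fold_f1):.3f} ± {np.std(fold_f1):.3f}")

# As in the paper, the final model is refit on the complete dataset.
pipeline.fit(notes, labels)
```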
Results
DistilBERT and RoBERTa outperformed traditional models, with F1-scores of 81.73 (95% CI: 79.02–84.13) and 81.16 (95% CI: 78.84–83.76), respectively. A hybrid LLM–DistilBERT approach performed worse than all models except the BiLSTM and produced “medical hallucinations”, making it unsuitable for clinical use. The training data were highly imbalanced; nevertheless, BERT-based models performed strongly on high-frequency categories, with F1-scores over 85% for the top five classes. At a 0.85 confidence threshold, DistilBERT achieved 96% accuracy on the 64% of the external validation set it coded automatically.
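The confidence-threshold routing described above can be sketched as follows, assuming a fine-tuned DistilBERT checkpoint loaded with the Hugging Face `transformers` library. The model path `models/distilbert-neurology-29` and the `route` helper are hypothetical; the 0.85 threshold comes from the paper.

```python
# Sketch of the human-in-the-loop routing: notes the model codes with high
# confidence are accepted automatically; the rest go to a human coder.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_DIR = "models/distilbert-neurology-29"  # hypothetical local checkpoint
THRESHOLD = 0.85                              # confidence threshold from the paper

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_DIR)
model.eval()

def route(note: str) -> tuple[str, float]:
    """Return (category, confidence), or flag the note for manual coding."""
    inputs = tokenizer(note, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    confidence, idx = probs.max(dim=-1)
    if confidence.item() >= THRESHOLD:
        return model.config.id2label[idx.item()], confidence.item()
    return "MANUAL_CODING", confidence.item()  # low confidence -> human coder
```

Routing the low-confidence residue to human coders is what allows the high accuracy reported on the automatically coded subset while still covering every record.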
Conclusions
BERT-based NLP models perform well in classifying neurology outpatient clinic notes when a reduced set of diagnostic categories is used. In a human-in-the-loop workflow, such models can meaningfully reduce the manual coding workload while preserving accuracy. To our knowledge, this is the first applied study of automated clinical coding in neurology outpatient care.
Highlights
• Developed and evaluated open-source NLP models for automated clinical coding of neurology outpatient letters.
• Introduced a simplified coding system of 29 diagnostic categories, addressing data sparsity and class imbalance.
• Fine-tuned DistilBERT achieved an F1-score of 81.7%, with 82.4% accuracy in external validation.
• A human-in-the-loop workflow could reduce the manual coding workload while matching average human coding accuracy; for example, at a 0.85 confidence threshold, DistilBERT can automatically code 64% of records with 96% accuracy, while low-confidence cases route to manual coding.
• BERT-based models outperformed traditional methods (logistic regression, SVM, BiLSTM) for clinical text classification.
• Large language models (LLMs) such as Mistral 7B underperformed due to medical hallucinations and a lack of output structure.
• First applied example of automated clinical coding in neurology outpatient care, with potential for broader specialty adoption.
• Future work should explore multi-label classification and cross-institution validation to enhance generalisability.
Keywords: neurology, clinical coding, clinical text classification, natural language processing
