Original Article on Application of Artificial Intelligence to Radiology

Detecting insertion, substitution, and deletion errors in radiology reports using neural sequence-to-sequence models

John Zech, Jessica Forde, Joseph J. Titano, Deepak Kaji, Anthony Costa, Eric Karl Oermann


Background: Errors in grammar, spelling, and usage in radiology reports are common. To automatically detect inappropriate insertions, deletions, and substitutions of words in radiology reports, we proposed using a neural sequence-to-sequence (seq2seq) model.
Methods: Head CT and chest radiograph reports from Mount Sinai Hospital (MSH) (n=61,722 and 818,978, respectively), Mount Sinai Queens (MSQ) (n=30,145 and 194,309, respectively), and MIMIC-III (n=32,259 and 54,685, respectively) were converted into sentences. Insertions, substitutions, and deletions of words were randomly introduced. Seq2seq models were trained to take corrupted sentences as input and predict the original, uncorrupted sentences. Three models were trained: on head CTs from MSH, on chest radiographs from MSH, and on head CTs from all three collections. Model performance was assessed across sites and modalities. A sample of original, uncorrupted sentences was manually reviewed for errors in syntax, usage, or spelling to estimate the algorithm's real-world proofreading performance.
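The corruption step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the per-token corruption rate, and the toy vocabulary are our assumptions.

```python
import random

def corrupt(tokens, vocab, rng, p=0.05):
    """Randomly introduce insertions, substitutions, and deletions of words.

    Hypothetical sketch of the training-data corruption step; the actual
    rates and procedure used in the study are described in the paper.
    """
    out = []
    for tok in tokens:
        r = rng.random()
        if r < p:
            continue                     # deletion: drop this token
        elif r < 2 * p:
            out.append(rng.choice(vocab))  # substitution: random vocab word
        else:
            out.append(tok)              # keep the token unchanged
        if rng.random() < p:
            out.append(rng.choice(vocab))  # insertion: random vocab word
    return out

# Toy example: corrupt a typical head CT sentence.
vocab = ["no", "acute", "intracranial", "hemorrhage", "is", "identified"]
clean = "no acute intracranial hemorrhage".split()
noisy = corrupt(clean, vocab, random.Random(0))
```

The seq2seq model would then be trained on (noisy, clean) pairs, learning to reconstruct the uncorrupted sentence.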
Results: Seq2seq detected 90.3% and 88.2% of corrupted sentences with 97.7% and 98.8% specificity in same-site, same-modality test sets for head CTs and chest radiographs, respectively. Manual review of original, uncorrupted same-site, same-modality head CT sentences demonstrated a seq2seq positive predictive value (PPV) of 0.393 (157/400; 95% CI, 0.346–0.441) and negative predictive value (NPV) of 0.986 (789/800; 95% CI, 0.976–0.992) for detecting sentences containing real-world errors, with an estimated sensitivity of 0.389 (95% CI, 0.267–0.542) and specificity of 0.986 (95% CI, 0.985–0.987) over n=86,211 uncorrupted training examples.
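As a sanity check, the reported PPV and NPV follow directly from the manual-review counts given above (157 of 400 flagged sentences contained a real error; 789 of 800 unflagged sentences were error-free). A minimal sketch, with function names of our choosing:

```python
def ppv(tp, fp):
    """Positive predictive value: TP / (TP + FP)."""
    return tp / (tp + fp)

def npv(tn, fn):
    """Negative predictive value: TN / (TN + FN)."""
    return tn / (tn + fn)

# Counts from the manual review: TP=157, FP=243 (400 flagged sentences);
# TN=789, FN=11 (800 unflagged sentences).
print(round(ppv(157, 243), 3))  # PPV = 157/400
print(round(npv(789, 11), 3))   # NPV = 789/800
```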
Conclusions: Seq2seq models can be highly effective at detecting erroneous insertions, deletions, and substitutions of words in radiology reports. To achieve high performance, these models require site- and modality-specific training examples. Incorporating additional targeted training data could further improve performance in detecting real-world errors in reports.
