Article Data

  • Views 826
  • Downloads 170

Original Research

Open Access

Effect of an artificial-intelligent chest radiographs reporting system in an emergency department

  • Do Hyeok Yoon1
  • Sejin Heo1,2
  • Jae Yong Yu3
  • Se Uk Lee1
  • Sung Yeon Hwang1
  • Hee Yoon1
  • Tae Gun Shin1
  • Gun Tak Lee1
  • Jong Eun Park1
  • Hansol Chang1,2
  • Taerim Kim1
  • Won Chul Cha1,2,*

1Department of Emergency Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 06351 Seoul, Republic of Korea

2Department of Digital Health, Samsung Advanced Institute for Health Science & Technology (SAIHST), Sungkyunkwan University, 06355 Seoul, Republic of Korea

3Department of Biomedical Systems Informatics, Yonsei University College of Medicine, 03722 Seoul, Republic of Korea

DOI: 10.22514/sv.2023.108 Vol. 19, Issue 6, November 2023, pp. 144-151

Submitted: 09 March 2023 Accepted: 25 April 2023

Published: 08 November 2023

*Corresponding Author(s): Won Chul Cha E-mail:


Abstract

Although chest radiography is a first-line diagnostic tool in the emergency department (ED), its interpretation has a high error rate. We aimed to evaluate the usability and acceptability of deep learning-based computer-aided detection for chest radiography (DeepCADCR) in an ED environment. We conducted a single-institution survey of emergency physicians (EPs) who had used DeepCADCR (Lunit INSIGHT Chest X-ray (CXR)) as part of their ED workflow for at least three months. We developed 22 questions assessing the subscales of effectiveness, efficiency, safety, satisfaction, and reliability, with responses rated on a seven-point Likert agreement scale. A total of 23 EPs who completed the survey were enrolled in the study. When averaged by subscale, satisfaction scores were highest (mean 4.71, standard deviation (SD) 1.43) and safety scores were lowest (mean 4.30, SD 0.72). When scores were converted to acceptability, the total average acceptance of DeepCADCR was 86.0%, with ED residents scoring higher than ED specialists on all subscales. Use of DeepCADCR in the ED workflow was well accepted by EPs.


Keywords

Artificial intelligence; Deep learning; Chest radiography; Emergency department; Survey; Computer-aided detection

Cite and Share

Do Hyeok Yoon, Sejin Heo, Jae Yong Yu, Se Uk Lee, Sung Yeon Hwang, Hee Yoon, Tae Gun Shin, Gun Tak Lee, Jong Eun Park, Hansol Chang, Taerim Kim, Won Chul Cha. Effect of an artificial-intelligent chest radiographs reporting system in an emergency department. Signa Vitae. 2023; 19(6): 144-151.


References

[1] Aronchick J, Epstein D, Gefter WB, Miller WT. Evaluation of the chest radiograph in the emergency department patient. Emergency Medicine Clinics of North America. 1985; 3: 491–505.

[2] Chung JH, Duszak R, Hemingway J, Hughes DR, Rosenkrantz AB. Increasing utilization of chest imaging in US emergency departments from 1994 to 2015. Journal of the American College of Radiology. 2019; 16: 674–682.

[3] Donald JJ, Barnard SA. Common patterns in 558 diagnostic radiology errors. Journal of Medical Imaging and Radiation Oncology. 2012; 56: 173–178.

[4] Al aseri Z. Accuracy of chest radiograph interpretation by emergency physicians. Emergency Radiology. 2009; 16: 111–114.

[5] Kim JH, Kim JY, Kim GH, Kang D, Kim IJ, Seo J, et al. Clinical validation of a deep learning algorithm for detection of pneumonia on chest radiographs in emergency department patients with acute febrile respiratory illness. Journal of Clinical Medicine. 2020; 9: 1981.

[6] Hwang EJ, Nam JG, Lim WH, Park SJ, Jeong YS, Kang JH, et al. Deep learning for chest radiograph diagnosis in the emergency department. Radiology. 2019; 293: 573–580.

[7] Jin KN, Jae HJ, Shin CI, Chai JW, Chun SR, Shin SD, et al. Overnight preliminary interpretations of CT and MR images by radiology residents in ER: how accurate are they? Journal of The Korean Society of Emergency Medicine. 2008; 19: 205–210.

[8] Sellers A, Hillman BJ, Wintermark M. Survey of after-hours coverage of emergency department imaging studies by US academic radiology departments. Journal of the American College of Radiology. 2014; 11: 725–730.

[9] Hwang EJ, Park S, Jin K, Kim JI, Choi SY, Lee JH, et al. Development and validation of a deep learning-based automatic detection algorithm for active pulmonary tuberculosis on chest radiographs. Clinical Infectious Diseases. 2019; 69: 739–747.

[10] Nam JG, Park S, Hwang EJ, Lee JH, Jin K, Lim KY, et al. Development and validation of deep learning-based automatic detection algorithm for malignant pulmonary nodules on chest radiographs. Radiology. 2019; 290: 218–228.

[11] Nam JG, Kim M, Park J, Hwang EJ, Lee JH, Hong JH, et al. Development and validation of a deep learning algorithm detecting 10 common abnormalities on chest radiographs. The European Respiratory Journal. 2021; 57: 2003061.

[12] Kozuka T, Matsukubo Y, Kadoba T, Oda T, Suzuki A, Hyodo T, et al. Efficiency of a computer-aided diagnosis (CAD) system with deep learning in detection of pulmonary nodules on 1-mm-thick images of computed tomography. Japanese Journal of Radiology. 2020; 38: 1052–1061.

[13] Summers RM. Improving the accuracy of CTC interpretation: computer-aided detection. Gastrointestinal Endoscopy Clinics of North America. 2010; 20: 245–257.

[14] van Zelst JC, Tan T, Mann RM, Karssemeijer N. Validation of radiologists’ findings by computer-aided detection (CAD) software in breast cancer detection with automated 3D breast ultrasound: a concept study in implementation of artificial intelligence software. Acta Radiologica. 2020; 61: 312–320.

[15] Wani IM, Arora S. Computer-aided diagnosis systems for osteoporosis detection: a comprehensive survey. Medical & Biological Engineering & Computing. 2020; 58: 1873–1917.

[16] Fujita R, Iwasawa T, Aoki T, Iwao Y, Ogura T, Utsunomiya D. Detection of the usual interstitial pneumonia pattern in chest CT: effect of computer-aided diagnosis on radiologist diagnostic performance. To be published in Acta Radiologica. 2020. [Preprint].

[17] Seah JCY, Tang CHM, Buchlak QD, Holt XG, Wardman JB, Aimoldin A, et al. Effect of a comprehensive deep-learning model on the accuracy of chest x-ray interpretation by radiologists: a retrospective, multireader multicase study. The Lancet Digital Health. 2021; 3: e496–e506.

[18] Jones CM, Danaher L, Milne MR, Tang C, Seah J, Oakden-Rayner L, et al. Assessment of the effect of a comprehensive chest radiograph deep learning model on radiologist reports and patient outcomes: a real-world observational study. BMJ Open. 2021; 11: e052902.

[19] Kim JH, Han SG, Cho A, Shin HJ, Baek S. Effect of deep learning-based assistive technology use on chest radiograph interpretation by emergency department physicians: a prospective interventional simulation-based study. BMC Medical Informatics and Decision Making. 2021; 21: 311.

[20] Pinto A, Reginelli A, Pinto F, Lo Re G, Midiri F, Muzj C, et al. Errors in imaging patients in the emergency setting. The British Journal of Radiology. 2016; 89: 20150914.

[21] Thakkalpalli M. Reducing diagnostic errors in emergency department with the help of radiographers. Journal of Medical Radiation Sciences. 2019; 66: 152–153.

[22] Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digital Medicine. 2020; 3: 17.

[23] Yoo H, Kim EY, Kim H, Choi YR, Kim MY, Hwang SH, et al. Artificial intelligence-based identification of normal chest radiographs: a simulation study in a multicenter health screening cohort. Korean Journal of Radiology. 2022; 23: 1009–1018.

[24] Chabi M, Borget I, Ardiles R, Aboud G, Boussouar S, Vilar V, et al. Evaluation of the accuracy of a computer-aided diagnosis (CAD) system in breast ultrasound according to the radiologist’s experience. Academic Radiology. 2012; 19: 311–319.

[25] Seagull FJ, Bailey JE, Trout A, Cohan RH, Lypson ML. Residents’ ability to interpret radiology images. Academic Radiology. 2014; 21: 909–915.

[26] Civaner MM, Uncu Y, Bulut F, Chalil EG, Tatli A. Artificial intelligence in medical education: a cross-sectional needs assessment. BMC Medical Education. 2022; 22: 772.

[27] Stead WW, Haynes RB, Fuller S, Friedman CP, Travis LE, Beck JR, et al. Designing medical informatics research and library-resource projects to increase what is learned. Journal of the American Medical Informatics Association. 1994; 1: 28–33.

Abstracted / indexed in

Science Citation Index Expanded (SciSearch) Created as SCI in 1964, Science Citation Index Expanded now indexes over 9,200 of the world’s most impactful journals across 178 scientific disciplines. More than 53 million records and 1.18 billion cited references date back to 1900.

Journal Citation Reports/Science Edition Journal Citation Reports/Science Edition evaluates a journal’s value from multiple perspectives, including the journal impact factor and descriptive data about a journal’s open access content and contributing authors, and provides readers with transparent, publisher-neutral data and statistics about the journal.

Chemical Abstracts Service Source Index The CAS Source Index (CASSI) Search Tool is an online resource that can quickly identify or confirm journal titles and abbreviations for publications indexed by CAS since 1907, including serial and non-serial scientific and technical publications.

Index Copernicus The Index Copernicus International (ICI) Journals database is an international indexation database of scientific journals. It covers international scientific journals, recording general information, the contents of individual issues, detailed bibliography (reference) sections for every publication, and optionally the full texts of publications as attached files. More than 58,000 scientific journals are currently registered at ICI.

Geneva Foundation for Medical Education and Research The Geneva Foundation for Medical Education and Research (GFMER) is a non-profit organization established in 2002 that works in close collaboration with the World Health Organization (WHO). The overall objectives of the Foundation are to promote and develop health education and research programs.

Scopus: CiteScore 1.0 (2022) Scopus is Elsevier's abstract and citation database, launched in 2004. Scopus covers 36,377 titles (22,794 active and 13,583 inactive) from approximately 11,678 publishers, of which 34,346 are peer-reviewed journals in the top-level subject fields of life sciences, social sciences, physical sciences and health sciences.

Embase Embase (often styled EMBASE, for Excerpta Medica dataBASE), produced by Elsevier, is a biomedical and pharmacological database of published literature designed to support information managers and pharmacovigilance teams in complying with the regulatory requirements for licensed drugs.
