Rikelab Paper Search (リケラボ論文検索) is a paper search service that lets you search degree theses and faculty papers held in university repositories across Japan in a single query.


Computer-aided diagnosis of chest X-ray for COVID-19 diagnosis in external validation study by radiologists with and without deep learning system

Miyazaki, Aki; Ikejima, Kengo; Nishio, Mizuho; Yabuta, Minoru; Matsuo, Hidetoshi; Onoue, Koji; Matsunaga, Takaaki; Nishioka, Eiko; Kono, Atsushi; Yamada, Daisuke; Oba, Ken; Ishikura, Reiichi; Murakami, Takamichi (神戸大学)

2023.10.16

Abstract

This study aimed to evaluate the diagnostic performance of our deep learning (DL) model for COVID-19 and to investigate whether the diagnostic performance of radiologists improved when they referred to our model. Our datasets contained chest X-rays (CXRs) for the following three categories: normal (NORMAL), non-COVID-19 pneumonia (PNEUMONIA), and COVID-19 pneumonia (COVID). We used two public datasets and a private dataset collected from eight hospitals for the development and external validation of our DL model (26,393 CXRs). Eight radiologists performed two reading sessions: one with reference to the CXRs only, and the other with reference to both the CXRs and the results of the DL model. The evaluation metrics for the reading sessions were accuracy, sensitivity, specificity, and area under the curve (AUC). The accuracy of our DL model was 0.733, and that of the eight radiologists without DL was 0.696 ± 0.031. There was a significant difference in AUC between the radiologists with and without DL for COVID versus NORMAL or PNEUMONIA (p = 0.0038). Our DL model alone showed better diagnostic performance than most of the radiologists. In addition, our model significantly improved the diagnostic performance of radiologists for COVID versus NORMAL or PNEUMONIA.
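
For illustration only, the following is a minimal sketch, not the authors' code, of how the reading-session metrics named above (three-class accuracy, COVID-vs-rest sensitivity and specificity, and one-vs-rest AUC for COVID versus NORMAL or PNEUMONIA) could be computed. The label arrays and COVID scores are hypothetical placeholders, and scikit-learn is assumed to be available.

```python
# Minimal sketch (not the authors' implementation) of the evaluation metrics
# described in the abstract. All arrays below are hypothetical placeholders.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

CLASSES = ["NORMAL", "PNEUMONIA", "COVID"]  # the three categories used in the study

# Hypothetical ground-truth labels, predicted labels (indices into CLASSES),
# and a per-case COVID score (e.g. a model probability or reader confidence).
y_true = np.array([2, 0, 1, 2, 0, 1, 2, 0])
y_pred = np.array([2, 0, 1, 1, 0, 2, 2, 0])
covid_score = np.array([0.9, 0.1, 0.3, 0.4, 0.2, 0.7, 0.8, 0.1])

# Overall three-class accuracy (the paper reports 0.733 for the DL model alone).
accuracy = accuracy_score(y_true, y_pred)

# Sensitivity and specificity for COVID, pooling NORMAL and PNEUMONIA as "not COVID".
covid_idx = CLASSES.index("COVID")
is_covid_true = y_true == covid_idx
is_covid_pred = y_pred == covid_idx
sensitivity = (is_covid_true & is_covid_pred).sum() / is_covid_true.sum()
specificity = (~is_covid_true & ~is_covid_pred).sum() / (~is_covid_true).sum()

# AUC for COVID versus NORMAL or PNEUMONIA (one-vs-rest), the comparison for which
# the paper reports a significant improvement with DL assistance (p = 0.0038).
auc = roc_auc_score(is_covid_true, covid_score)

print(f"accuracy={accuracy:.3f} sensitivity={sensitivity:.3f} "
      f"specificity={specificity:.3f} AUC={auc:.3f}")
```

Note that the paper's significance testing compares readers with and without DL assistance across multiple readers and cases; the sketch above only shows how each individual metric is defined.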



Acknowledgements

We thank Yoshiaki Watanabe (Nishinomiya Watanabe Hospital) for his cooperation.

Author contributions

Conceptualization: M.N. Data curation: A.M., K.I., K.O., R.I. Formal analysis: A.M., M.N. Funding acquisition: M.N. Investigation: A.M., M.N. Methodology: A.M., M.N. Project administration: M.N. Resources: A.M., K.I., M.N., M.Y., T.M., E.N., A.K., D.Y., K.O. Software: M.N., H.M. Supervision: T.M. Validation: A.M., M.N., H.M. Visualization: A.M., M.N., H.M. Writing - original draft: A.M., M.N. Writing - review and editing: All authors.

Funding

This work was supported by JST Adaptable and Seamless Technology Transfer Program through Target-driven R&D (A-STEP) (Grant No.: JPMJTM20QL). In addition, this work was partly supported by JSPS KAKENHI (Grant Nos.: JP19K17232 and 22K07665).

Competing interests

The authors declare no competing interests.

Additional information

Supplementary Information The online version contains supplementary material available at https://doi.org/10.1038/s41598-023-44818-9.


Correspondence and requests for materials should be addressed to M.N.

Reprints and permissions information is available at www.nature.com/reprints.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

© The Author(s) 2023

