Location Fusion and Data Augmentation for Thoracic Abnormalities Detection in Chest X-Ray Images
Abstract
The application of deep learning to medical image diagnosis has been widely studied in recent years. Unlike general objects, thoracic abnormalities in chest X-ray radiographs are much harder for domain experts to label consistently. The difficulty of the problem and the inconsistency of the labels degrade the performance of otherwise robust deep learning models. This paper presents two methods to improve the accuracy of thoracic abnormality detection in chest X-ray images. The first fuses the locations of the same abnormality marked differently by different radiologists. The second applies mosaic data augmentation during training to enrich the training data. Experiments on the VinDr-CXR chest X-ray dataset show that combining the two methods improves predictive performance by up to 8% in F1-score and 9% in mean average precision (mAP).
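To make the two ideas concrete, the following is a minimal sketch of how overlapping boxes drawn by different radiologists for the same abnormality might be merged. It is an illustration only, not the paper's exact fusion procedure; the greedy IoU clustering, the 0.4 threshold, and the coordinate-wise mean rule are assumptions introduced here for clarity.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes in [x1, y1, x2, y2] format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_boxes(boxes, iou_thr=0.4):
    """Greedily cluster same-class boxes by IoU and replace each cluster
    with the coordinate-wise mean box (hypothetical fusion rule)."""
    boxes = [np.asarray(b, dtype=float) for b in boxes]
    fused, used = [], [False] * len(boxes)
    for i, b in enumerate(boxes):
        if used[i]:
            continue
        cluster, used[i] = [b], True
        for j in range(i + 1, len(boxes)):
            if not used[j] and iou(b, boxes[j]) >= iou_thr:
                cluster.append(boxes[j])
                used[j] = True
        fused.append(np.mean(cluster, axis=0))
    return fused

# Example: three radiologists mark roughly the same region; one fused box remains.
print(fuse_boxes([[100, 200, 400, 500],
                  [110, 190, 410, 490],
                  [105, 205, 395, 505]]))
```

The second method, mosaic augmentation, can likewise be sketched as tiling four training radiographs into one 2x2 image and shifting their boxes accordingly. The fixed tile size and fixed centre point below are simplifying assumptions; YOLOv4-style implementations typically jitter the centre and rescale the source images.

```python
import numpy as np

def mosaic(images, boxes_per_image, tile=512):
    """images: four tile x tile grayscale arrays.
    boxes_per_image: four lists of [x1, y1, x2, y2] boxes in image coordinates."""
    canvas = np.zeros((2 * tile, 2 * tile), dtype=images[0].dtype)
    offsets = [(0, 0), (tile, 0), (0, tile), (tile, tile)]  # (dx, dy) per quadrant
    all_boxes = []
    for img, boxes, (dx, dy) in zip(images, boxes_per_image, offsets):
        canvas[dy:dy + tile, dx:dx + tile] = img          # paste image into its quadrant
        for x1, y1, x2, y2 in boxes:                      # shift boxes by the same offset
            all_boxes.append([x1 + dx, y1 + dy, x2 + dx, y2 + dy])
    return canvas, all_boxes
```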