A Lightweight Model to Skin Disease Recognition

(Vietnamese title: "A lightweight model for skin disease recognition")

  • To Huu Nguyen
  • T. Thu Hong Ma
  • Thanh Mai Do
  • T. Thu Trang Phung
  • An Dang
  • Duc-Quang Vu
Keywords: Skin disease, skin lesion classification, lightweight network, MobileNet

Abstract

Skin disease has become increasingly prevalent, emerging as one of the most widespread health conditions. It significantly affects human health and can even lead to skin cancer and death. Many methods have therefore been proposed recently to address this issue, especially deep learning-based methods. However, these state-of-the-art methods tend to focus only on achieving better performance and ignore inference time. Specifically, deep learning-based models are usually built very deep, with large model sizes and high computational costs. As a result, it becomes very difficult to deploy these models on devices without GPU support. In this study, we introduce an efficient and lightweight model designed to address this issue, leveraging the MobileNet architecture. Our experimental findings demonstrate that the proposed network delivers performance comparable to contemporary cutting-edge techniques across diverse benchmark datasets, including HAM10000, International Skin Imaging Collaboration (ISIC) 2017, and ISIC 2019. Notably, our approach uses merely 0.2 million parameters and 0.3 GFLOPs for image classification. This attribute is of substantial importance for deploying the model on edge devices lacking GPU support.
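As a rough illustration of why a MobileNet-style design is so cheap (this is a generic sketch of depthwise-separable convolution costs, not the paper's exact architecture or layer sizes), the sketch below compares the parameter and multiply-add counts of a standard convolution against the depthwise + pointwise factorization that MobileNet is built on:

```python
# Illustrative cost comparison: standard conv vs. depthwise-separable conv.
# Channel/spatial sizes here are hypothetical, not taken from the paper.

def standard_conv_cost(c_in, c_out, k, h, w):
    """Params and multiply-adds for a k x k standard convolution."""
    params = k * k * c_in * c_out
    flops = params * h * w  # one multiply-add per weight per output position
    return params, flops

def depthwise_separable_cost(c_in, c_out, k, h, w):
    """Params and multiply-adds for depthwise k x k + pointwise 1 x 1."""
    dw_params = k * k * c_in   # depthwise: one k x k filter per input channel
    pw_params = c_in * c_out   # pointwise: 1 x 1 conv mixing channels
    params = dw_params + pw_params
    flops = params * h * w
    return params, flops

std_p, std_f = standard_conv_cost(128, 256, 3, 28, 28)
sep_p, sep_f = depthwise_separable_cost(128, 256, 3, 28, 28)
print(f"standard:  {std_p:,} params, {std_f:,} mult-adds")
print(f"separable: {sep_p:,} params, {sep_f:,} mult-adds")
print(f"reduction: {std_p / sep_p:.1f}x")  # roughly 8-9x for a 3x3 kernel
```

For a 3x3 kernel the factorization cuts both parameters and multiply-adds by roughly a factor of k^2 = 9, which is why stacking such blocks yields networks in the sub-million-parameter range.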

Author Biographies

To Huu Nguyen

To Huu Nguyen received a Bachelor's degree in Information Technology Education from Thai Nguyen University of Education in 2003 and a Master's degree in Computer Science from Thai Nguyen University in 2008. He received his PhD degree from the Academy of Science and Technology, Vietnam Academy of Sciences, in 2021. He has worked as a lecturer at the Faculty of Information Technology, School of Information and Communication Technology, Thai Nguyen University, since 2004. He is now a researcher at the Institute of Information Technology, Academic Institute of Science and Technology, Vietnam.
Email: thnguyen@ictu.edu.vn

T. Thu Hong Ma

T. Thu Hong Ma received a Master's degree in Computer Science in 2015 from the Thai Nguyen University of Information and Communication Technology. Her research interests include machine learning and deep learning.
Email:

Thanh Mai Do

Do Thanh Mai received a Bachelor of Education in Information Technology from Thai Nguyen University of Education in 2003 and a Master's degree in Computer Science from Thai Nguyen University in 2008. She has worked as a lecturer of Information Technology in the Basic Sciences Department at the School of Foreign Languages (SFL-TNU). Office address: School of Foreign Languages, Thai Nguyen University, Thai Nguyen, Vietnam.
Email: dothanhmai.sfl@tnu.edu.vn

T. Thu Trang Phung

Trang Phung T. Thu was born in Bac Ninh, Vietnam, in 1991. She received a B.S. degree in Information Technology Education from the Thai Nguyen University of Education, Vietnam, in 2013 and an M.S. degree from the Thai Nguyen University of Information and Communication Technology (ICTU) in 2015. Her research interests include machine learning, deep learning, computer vision, speech processing, and bioinformatics.
Email: phungthutrang.sfl@tnu.edu.vn

An Dang

An Dang received her Ph.D. in Computer Science and Information Engineering from National Central University, Taiwan, in 2022. She specializes in deep learning applications for image processing and audio/speech signal analysis.
Email: an.dangthithuy@phenikaa-uni.edu.vn



Published
2024-05-27