Abstract
Skin cancer is one of the most common cancers in humans. It is usually detected through visual inspection, beginning with a clinical screening and followed, where necessary, by a biopsy and laboratory tests. Automated systems that classify images of skin lesions, made possible by machine learning and artificial intelligence (AI), can improve the prediction of skin cancer and help determine whether a lesion is melanoma or benign. Because spotting skin cancer early can make a substantial difference to treatment and recovery, this chapter describes an approach to early skin cancer detection using Spark and a deep neural network. To identify the best-performing algorithm for skin cancer prediction, a comparative study of several existing algorithms was also conducted. Based on findings gathered over several iterations, skin lesion photographs were categorized using a convolutional neural network (CNN). Several transfer learning models were then employed for fine-tuning; this study focuses on advanced image recognition models, specifically ResNet50, InceptionV3, and Inception-ResNet, to improve the analysis of skin lesions. One of the key contributions of this work is the use of ESRGAN to enhance images before they are processed by these models; this preprocessing step was applied with the aim of boosting accuracy. We evaluated multiple models, including our customized model and a standard CNN, to compare their performance. The standard pre-trained models and our own produced comparable results, indicating that the approach is effective. The effectiveness of the method was demonstrated through simulations on the ISIC 2020 skin lesion dataset, a well-known benchmark for this kind of research, where our CNN model achieved an accuracy of 89.2%, showing the potential of the approach to improve skin lesion analysis.
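To make the described pipeline concrete, the following is a minimal sketch of transfer-learning fine-tuning for melanoma-versus-benign classification, assuming a TensorFlow/Keras environment. The directory path `isic2020/train`, the 224x224 input size, the classification head, and the training hyperparameters are illustrative assumptions, not values taken from the chapter; the images are assumed to have already been enhanced with ESRGAN in a separate preprocessing step.

```python
# Sketch: fine-tune an ImageNet-pretrained backbone (ResNet50 / InceptionV3 /
# Inception-ResNetV2) for binary melanoma-vs-benign classification.
# Assumes ESRGAN-enhanced ISIC 2020 images laid out as train/{benign,malignant}.
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications import ResNet50, InceptionV3, InceptionResNetV2

IMG_SIZE = (224, 224)          # assumed input size; the chapter does not specify one
BACKBONES = {
    "resnet50": ResNet50,
    "inception_v3": InceptionV3,
    "inception_resnet_v2": InceptionResNetV2,
}

def build_classifier(name: str) -> tf.keras.Model:
    """Attach a small classification head to a frozen ImageNet backbone."""
    base = BACKBONES[name](include_top=False, weights="imagenet",
                           input_shape=IMG_SIZE + (3,))
    base.trainable = False                       # freeze backbone for the first stage
    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    # Simple [-1, 1] scaling; in practice each backbone's own preprocess_input
    # function would be used instead.
    x = layers.Rescaling(1.0 / 127.5, offset=-1)(inputs)
    x = base(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)   # melanoma vs. benign
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC()])
    return model

# Hypothetical directory of ESRGAN-enhanced training images.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "isic2020/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

model = build_classifier("resnet50")
model.fit(train_ds, epochs=5)
```

The same `build_classifier` call can be repeated for each backbone name to reproduce the kind of model-by-model comparison the abstract describes; unfreezing the top layers of the backbone afterward would give a second, finer-grained fine-tuning stage.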