Abstract
Lung nodules are important markers of lung cancer, and prompt treatment makes early discovery much more likely to improve patient survival. Computer-Aided Diagnosis (CAD) systems were developed because radiologists find it difficult and time-consuming to classify malignant nodules in Computed Tomography (CT) images. Deep learning developments have consistently enhanced CAD's ability to screen for lung cancer. To improve the accuracy of pulmonary nodule classification, we use a Transferable Texture-Based Convolutional Neural Network (CNN) in this work. In order to maximize feature representation, our model integrates an Energy Layer (EL) to extract texture-based features from the convolutional layer. To guarantee strong classification performance, the suggested method is assessed using important performance metrics as accuracy, sensitivity, specificity, F1-score, and AUC-ROC.
Introduction:
Lung cancer is one of the leading causes of mortality worldwide, and early lung nodule diagnosis is necessary to improve patient survival chances. However, manually identifying and classifying lung nodules from medical images, such as CT scans, can be time-consuming and prone to mistakes. With the advent of deep learning techniques, particularly Convolutional Neural Networks (CNNs), medical image processing can now be automated. When it comes to identifying complex patterns in the data, these networks excel. In order to differentiate between benign and malignant lung nodules, texture-based features—which capture the complex patterns and structures within the nodules—are very useful. With the goal of improving diagnostic precision and offering radiologists and other medical professionals a more effective, automated method, this research proposes a texture-based CNN approach for trustworthy lung nodule categorization. In this work, we suggest a CNN model based on texture for accurate lung nodule categorization. The objective is to create a system that combines CNN architectures with texture analysis methods to improve lung nodule classification accuracy, which will help radiologists and other medical professionals make better decisions. Research on the use of texture-based techniques to medical image classification has been ongoing, especially with regard to lung nodule classification. Several studies have investigated various approaches for nodule detection and classification using deep learning and conventional image processing techniques.
Literature Review:
Goh et al. (2019) proposed technique for detecting lung nodules that uses a support vector machine (SVM) classifier after texture features are taken from CT scans. Their model showed excellent precision in differentiating between benign and malignant nodules [4]. Dou et al. (2016) used a CNN-based method for classifying lung nodules, in which a sizable collection of CT scans was used to train the network. They used a deep learning framework to automatically extract features from photos, demonstrating that CNNs can perform faster and more accurately than conventional techniques [5]. Kumar et al. (2017) integrated for lung nodule classification, textural features like the Gray Level Co-occurrence Matrix (GLCM) are incorporated into a CNN architecture. By combining learnt and hand-crafted features, their method outperformed CNNs alone in terms of model performance [6]. Shen et al. (2017) proposed a hybrid model for diagnosing lung nodules that combines deep CNNs and textural features. Their approach achieved a high degree of classification accuracy by using wavelet transform to gather multi-resolution textural data before feeding them into the CNN [7]. Basu et al. (2019) explored a technique for classifying lung nodules using textural cues called local binary patterns (LBP). To improve texture characterization and create a more dependable classification system, they integrated CNNs with LBP[8]. Zhou et al. (2018) proposed a deep learning framework for lung nodule classification that integrated an extraction method for multi-scale texture information. Their model did a great job of differentiating between benign and malignant lesions in CT images [9]. Xie et al. (2018) employed a deep learning model for classifying lung nodules using a CNN architecture that has already been trained and optimized on a dataset of lung CT images. Their methodology focused on how texture information enhances the model's capacity to precisely identify and categorize lung nodules [10]. Li et al. (2020) used a CNN that incorporated texture and form features in a multi-modal deep learning method for lung nodule classification. They showed that the overall classification performance was enhanced by incorporating these features [11]. Liu et al. (2021) introduced a new framework for classifying lung nodules that used both deep and shallow CNN models. They demonstrated that a hybrid model might surpass conventional techniques in terms of accuracy and computing efficiency by including texture characteristics [12][13] Tan et al. (2022) suggested a novel hybrid model that integrated deep texture feature extraction methods with CNN. Their research aimed to improve the classification of lung nodules in various clinical contexts by fine-tuning texture-based CNNs, and the findings showed a notable increase in diagnostic precision [14][15].
Texture-Based Convolutional Neural Network (TBCNN)
An enhanced deep learning model called the Texture-Based Convolutional Neural Network (TBCNN) was created to use texture features in the picture classification process. To capture the fine-grained patterns in the image that might be suggestive of the underlying tissue structure, TBCNN extracts texture features like Haralick features, Local Binary Patterns (LBP), and Gray Level Co-occurrence Matrix (GLCM) from the input image. This technique is especially helpful for medical imaging tasks like lung nodule classification. By acting as an extra input to the CNN, these texture features enable the network to learn both higher-order texture properties and low-level spatial information. While the fully connected layers at the end carry out classification, the CNN's convolutional layers automatically extract pertinent characteristics through a sequence of filters. Through the integration of texture-based data and CNNs' hierarchical learning capabilities, the model enhances classification accuracy by identifying subtle patterns that differentiate benign from malignant nodules.
Mathematical Description:
Let T(I) stand for the texture features that were extracted from the input picture, which is a CT scan of a lung nodule.
Figure 1.
Where P: probability of finding pair of pixels
I, j: intensity levels
d: distance
Ø: Angle
Local Binary Patterns (LBP):
LBP is a texture operator that labels the pixels of an image by thresholding the 3x3 neighborhood of each pixel and converting the result into a binary number.
For a pixel ‘P’ with a neighborhood of pixels p1,p2,…,p8 the LBP defined as
LBP (P)
Figure 2.
here S(Pi) : is step function
Proposed Method
In this proposed mthod, to improve lung nodule classification accuracy, the suggested method uses a Texture-Based Convolutional Neural Network (TBCNN), which combines deep learning and texture feature extraction. Because ResNet-50 is more accurate and efficient than more conventional designs like VGG16, it is used as the backbone CNN. Deeper feature extraction and better hierarchical representation learning are made possible by ResNet-50's residual connections, which lessen the vanishing gradient issue. To add more discriminative information, our model extracts texture characteristics from lung nodule CT images, including the Gray Level Co-occurrence Matrix (GLCM), Local Binary Patterns (LBP), and Haralick features. By enhancing the deep features that ResNet-50 has learnt, these manually created features make sure that both textural and spatial qualities are used to enhance classification performance. A three-fold cross-validation approach is used to further improve model generalization and lessen overfitting. The dataset is divided into three subsets, each of which is utilized for validation and training in turn. The model is trained on a variety of data distributions thanks to this iterative approach, which results in more reliable performance. In order to enable the network to learn both deep and handmade feature representations, the extracted texture features are incorporated into the CNN pipeline and serve as extra inputs to the fully connected layers. The suggested approach seeks to achieve state-of-the-art performance in lung nodule classification by fusing the advantages of deep learning and texture-based analysis, providing radiologists with a dependable and automated diagnostic tool. Since ResNet-50 is more accurate and efficient than VGG16, it is used as the pretrained Convolutional Neural Network (CNN) in the suggested technique. Because ResNet-50 is a deeper network, it may learn more intricate hierarchical representations because of its residual connections, which help to alleviate the vanishing gradient issue.
Figure 3. Fig.3.1 Flow chart for proposed method
The model uses a three-fold cross-validation setup to increase accuracy even more. The dataset is divided into three subsets using this method, and each subset is alternately used for training and validation. Repeating the procedure several times improves the model's capacity to generalize, resulting in less overfitting and more dependable performance. This approach achieves state-of-the-art results in lung nodule classification tasks by utilizing the depth of ResNet-50 and the advantages of cross-validation. These steps are shown in Fig.3.1
Results and Discussion
The suggested technique, which included texture-based features, ResNet-50, and a three-fold cross-validation methodology, performed exceptionally well in classifying lung nodules from CT scan pictures. The model achieved an F1-score of 94.4%, sensitivity of 93.7%, specificity of 96.2%, and total accuracy of 95.4%. This high degree of accuracy shows that the model produces dependable predictions with few false positives and false negatives in addition to being efficient at differentiating between benign and malignant nodules. The model's performance was further enhanced by the incorporation of texture characteristics like GLCM, LBP, and Haralick features as well as ResNet-50's sophisticated learning capabilities, which enabled the model to recognize complex patterns in the images. In addition to reducing overfitting and guaranteeing that the performance indicators were consistent across various data subsets, the use of three-fold cross-validation improved the model's capacity for generalization.
Figure 4. Fig.4.1: Example of medical image data set
The suggested model performs better than conventional texture-based SVM classifiers, VGG16-based CNNs, and ResNet-50 without texture features when compared to other cutting-edge techniques. With an accuracy of only 87.5% and a sensitivity of only 85%, the SVM classifier with texture features demonstrated the limitations of manually derived features and the absence of hierarchical pattern recognition in deep learning.
Table1: Comparison Results
Method | Accuracy | Sensitivity | Specificity | F1-Score |
Traditional Texture-Based SVM Classifier | 87.5 | 85.0 | 89.0 | 86.0 |
VGG16 | 90.2 | 88.1 | 92.3 | 89.9 |
ResNet-50 without Texture Features | 92.1 | 90.4 | 94.8 | 91.4 |
ResNet-50 + Texture Features + 3-Fold Cross-Validation) | 95.4 | 93.7 | 96.2 | 94.4 |
Table2: Comparison Results
Method | Precesion | F1-ScoreRecall | F1-Score | Accuracy |
Traditional Texture-Based SVM Classifier | 85.0 | 85.0 | 86.0 | 87.5 |
VGG16 | 88.99 | 88.1 | 89.9 | 90.2 |
ResNet-50 without Texture Features | 90.9 | 90.4 | 91.4 | 92.1 |
ResNet-50 + Texture Features + 3-Fold Cross-Validation) | 94.05 | 93.7 | 94.4 | 95.4 |
The classification method's ability to differentiate between benign and malignant lung nodules is demonstrated by the evaluation metrics. With the highest precision (94.05%), which indicates fewer false positives, being ResNet-50 + Texture Features + 3-Fold Cross-Validation, precision measures the percentage of accurately recognized malignant cases among all predicted malignant cases. The model's ability to detect real malignant nodules is measured by recall (sensitivity), where it performs exceptionally well at 93.7%, resulting in fewer false negatives. The F1-Score, which strikes a compromise between recall and precision, demonstrates the resilience of ResNet-50 by validating its outstanding performance with texture characteristics (94.4%). Finally, the suggested model's accuracy, which measures the overall correctness of predictions, peaks at 95.4%, indicating that it can enhance classification reliability by combining texture-based analysis and deep learning.
Although the VGG16-based CNN fared better with an accuracy of 90.2%, ResNet-50 surpassed it because of its deeper architecture and residual connections, which let it learn complex patterns more effectively. Although the ResNet-50 model without texture features had a decent accuracy of 92.1%, it was unable to identify subtle patterns that are crucial for differentiating between benign and malignant nodules since texture-based inputs were not included. Overall, the suggested approach performs better on lung nodule classification tests by combining deep learning and texture analysis.
Conclusion
In the classification of lung nodules, the suggested approach, which combines ResNet-50 with texture-based features and makes use of three-fold cross-validation, performs exceptionally well, attaining high accuracy, sensitivity, specificity, and F1-score. The model successfully captures both low-level and high-level patterns in CT scan images by fusing discriminative texture features like GLCM, LBP, and Haralick with the potent deep learning capabilities of ResNet-50, resulting in more accurate and dependable classification. The findings unequivocally show that the suggested strategy performs better than other deep learning models and conventional machine learning techniques, providing a reliable and effective solution for automated lung nodule detection that can greatly help radiologists make quicker and better diagnoses.
References
- T.Venkata Krishnamoorthy, C. Venkataiah, Y. Mallikarjuna Rao, D. Rajendra Prasad, Kurra Upendra Chowdary, Manjula Jayamma, R. Sireesha, “A novel NASNet model with LIME explanability for lung disease classification”, Biomedical Signal Processing and Control, Volume 93,2024,106114,
- Shagun Sharma, Kalpna Guleria, “A Deep Learning based model for the Detection of Pneumonia from Chest X-Ray Images using VGG-16 and Neural Networks”,Procedia Computer Science, Volume 218,2023, Pages 357-366,
- Yadav, S.S., Jadhav, S.M. Deep convolutional neural network based medical image classification for disease diagnosis. J Big Data 6, 113 (2019)
- [ Lung nodule detection and classification using texture features and SVM." Journal of Medical Imaging, 6(1), 045002
- Sreelatha, G., Govindkar, A., Ushaswini, S. (2023). Modified Cloud-Based Malware Identification Technique Using Machine Learning Approach. In: Rao, B.N.K., Balasubramanian, R., Wang, SJ., Nayak, R. (eds) Intelligent Computing and Applications. Smart Innovation, Systems and Technologies, vol 315. Springer, Singapore. https://doi.org/10.1007/978-981-19-4162-7_17
- Kumar, Rishabh, Anwar, Saeed, and Rahman, Shahinur"Texture-based CNN for lung nodule classification in CT images." IEEE Transactions on Medical Imaging, 2017. 36(4), 991-1000.
- Shen, Dinggang, Zhou, Lin, Yang, Yuan, and Wang, Ge "Hybrid texture-based CNN for lung nodule detection." IEEE Transactions on Biomedical Engineering, 2017. 64(11), 2663-2672.
- Basu, Saurabh, Shah, Rahul, and Hegde, Akshata"Lung nodule classification using local binary patterns and CNNs." Computer Methods and Programs in Biomedicine, 2019, 176, 171-180.
- Zhou, Yifan, Li, Min, and Cai, Weidong, "Multi-scale texture extraction for lung nodule classification." Neurocomputing, 2018, 275, 1914-1923.
- Xie, Lei, Wang, Zhe, and Li, Fei, "Deep learning for lung nodule classification with texture information." Journal of Digital Imaging, 2018, 31(6), 715-726.
- Li, Qiang, Zhang, Yu, and Ding, Xuefeng, "A multi-modal deep learning framework for lung nodule classification." Medical Image Analysis, 2020, 62, 101625
- Liu, Xuan, Wang, Jianzhuang, and Zhang, Xiang, "Hybrid deep learning approach for lung nodule classification." Pattern Recognition,2021. Vol. 112, 107794.
- Tan, Min, Dong, Huimin, and Zhou, Yun, "Fine-tuning texture-based CNNs for lung nodule classification in clinical practice." Journal of Clinical Imaging Science,2022. Vol.12, 45.
- M Bharathi, D Prasad, T Venkatakrishnamoorthy, M. Dharani, “Diabetes diagnostic method based on tongue image classification using machine learning algorithms”, Journal of Pharmaceutical Negative Results,2022, Vol.13(4), pp: 1247-1250.
- K. Ramana, R. M. Mohana, C. K. Kumar Reddy, G. Srivastava and T. R. Gadekallu, "A Blockchain-Based Data-Sharing Framework for Cloud Based Internet of Things Systems with Efficient Smart Contracts," 2023 IEEE International Conference on Communications Workshops (ICC Workshops), Rome, Italy, 2023, pp. 452-457, doi: 10.1109/ICCWorkshops57953.2023.10283747.