Abstract
Weather detection systems (WDS) are essential in enhancing decision-making for autonomous vehicles, specifically under challenging and adverse weather conditions. Autonomous systems can effectively classify outdoor weather scenarios using deep learning (DL) techniques, allowing seamless adaptation to dynamic environmental changes. This study introduces a robust DL-driven framework to classify diverse weather conditions and aid autonomous vehicle navigation in typical and extreme scenarios. The proposed framework utilizes advanced transfer learning methods alongside a high-performance Nvidia GPU to evaluate the efficiency of three convolutional neural networks (CNNs): MobileNetV2, DenseNet121, and VGG-16. The experiments were conducted using two comprehensive weather imaging datasets, DAWN2020 and MCWRD2018, combined to classify six distinct weather categories: cloudy, rainy, snowy, sandy, sunny, and sunrise. Experimental outcomes showed outstanding performance for all models, with the MobileNetV2-based system achieving the highest detection accuracy, precision, and sensitivity at 97.92%, 97.88%, and 97.95%, respectively. Furthermore, the framework achieved a rapid inference time, averaging 7 milliseconds per inference on the GPU. A comparative analysis with existing models demonstrates the effectiveness of the presented approach, with improvements in classification accuracy ranging from 0.3% to 19.8%. These outcomes demonstrate the framework's practicality for weather classification, facilitating reliable decision-making for autonomous vehicles in diverse conditions.
Introduction
The efficiency of vehicle detection plays a crucial role in traffic monitoring and intelligent surveillance, particularly in autonomous driving systems [1]. Recent developments in sensors, GPUs, Deep Learning (DL) algorithms, and artificial intelligence-based autonomous applications have made self-driving technology a game-changer in the modern automobile sector [2]. Autonomous vehicles must identify road objects accurately and instantly to enable sound decision-making and ensure safety [3]. For efficient object detection in their surroundings, these vehicles typically rely on a variety of sensors, including cameras and LiDAR.
However, the performance of these sensors and the quality of captured images can be significantly compromised under adverse weather conditions such as dense fog, heavy rain, snowstorms, dusty winds, and low-light scenarios [4]. These conditions increase the likelihood of traffic accidents by lowering visibility and making it challenging to detect vehicles precisely. Developing dependable image enhancement methods that improve the visual quality of the input data is therefore vital. Enhanced images can substantially improve the effectiveness of vehicle recognition systems, allowing autonomous cars to track and identify objects precisely even in difficult situations [5].
DL has emerged as a cornerstone of vehicle detection strategies in recent years, particularly for autonomous and intelligent surveillance systems [6]. Deep neural networks have demonstrated exceptional capabilities in object recognition tasks, outperforming traditional machine learning methods [7]. Convolutional neural networks (CNNs) have proven instrumental in delivering high detection accuracy for traffic objects [8]. Despite these advancements, achieving a balance between real-time detection and high precision in adverse weather remains a critical challenge. Existing solutions often fail to maintain this trade-off, limiting their application in scenarios requiring robust performance under extreme weather conditions.
This study introduces a DL-based detection system for autonomous vehicles operating in adverse weather environments. Leveraging the power of transfer learning and high-performance GPUs, we developed a framework to classify six distinct weather conditions: cloudy, rainy, snowy, sandy, sunny, and sunrise. The framework evaluates the performance of three state-of-the-art deep CNN architectures: MobileNetV2, DenseNet121, and VGG-16. These architectures were chosen for their efficiency, precision, and lightweight design, making them suitable for deployment in autonomous vehicles.
Our proposed model combines three key subsystems: data preparation, model training, and performance evaluation. The system's performance is evaluated using comprehensive metrics, including accuracy, precision, sensitivity, and F1-score. Experimental results show that the proposed framework delivers superior detection accuracy and faster processing speeds than existing approaches, even on medium-cost hardware. These attributes make the system highly adaptable for integration into autonomous vehicles, enhancing their ability to make informed decisions promptly and reliably in adverse weather conditions.
The remainder of this paper is structured as follows: Section 2 reviews relevant research and related work. Section 3 details the architecture of the proposed detection system. Section 4 discusses the evaluation methodology and experimental results. Finally, Section 5 concludes the paper and outlines future research directions.
Literature Review
The automatic identification of meteorological conditions from visual imagery has attracted much interest lately, thanks to developments in deep learning methods, especially CNNs. Numerous studies have investigated this field using a variety of approaches and datasets to demonstrate the potential of deep learning models in weather classification tasks.
Elhoseiny et al. [9] addressed the weather classification problem using CNNs, employing fine-tuning techniques on a pre-trained network to adapt it for weather classification. Their model comprised five convolutional and pooling layers, with three fully connected layers, including the final output layer designed to classify images into two weather categories: sunny and cloudy. Their approach, trained on a dataset of 10,000 images, achieved a classification accuracy of 82.2% using normalized metrics and 91.1% with standard metrics.
Chu et al. [10] proposed a model to estimate weather conditions using large-scale visual datasets and metadata. They developed a dataset of over 180,000 images labeled with weather-related features such as weather type, temperature, and humidity. To eliminate irrelevant images, they incorporated an SVM-based indoor/outdoor classifier with a reported accuracy of 98%. Additionally, geotags and timestamps associated with the images were utilized to retrieve weather information, and a random forest model was employed for weather classification. Their approach achieved a 58% classification accuracy across different weather types and was extended to a weather-aware landmark classification system.
In another study, Xia et al. [11] proposed ResNet-15, a simplified version of the ResNet-50 architecture, for weather recognition. By reducing the network's complexity, they efficiently classified weather conditions like foggy, rainy, snowy, and sunny. Their dataset, consisting of around 5,000 traffic-related weather images, demonstrated high classification accuracies of 96.4%, 97.3%, 94.7%, and 95.1% for the respective categories.
Ibrahim et al. [12] introduced a DL framework capable of extracting weather information from street-level images. Their approach employed multiple CNN models to detect visibility conditions (e.g., dawn/dusk, day, and night) and weather phenomena such as rain and snow. Recognition accuracies ranged between 91% and 95.6%, emphasizing the effectiveness of unified DL models in this domain.
Xiao et al. [13] developed MeteCNN, a deep CNN designed explicitly for weather classification tasks. Their dataset consisted of 6,877 images categorized into 11 weather types, including hail, rainbow, snow, and rain. Using visual features such as shapes and colors, the proposed model incorporated 13 convolutional layers, six pooling layers, and a SoftMax classifier, achieving an overall accuracy of approximately 92%.
Roser et al. [14] utilized histogram-based features and an SVM classifier to enhance driver assistance systems and classify images according to weather conditions such as clear weather, light rain, and heavy rain. Their handcrafted feature approach yielded competitive results, with around 95% classification accuracy when feature selection was optimized. Similarly, Kang et al. designed a weather recognition framework to mitigate unfavorable weather effects on vision-based driver assistance systems. They experimented with AlexNet and GoogLeNet, achieving superior results compared to handcrafted features for weather conditions such as haze, rain, and snow.
Guerra et al. [15] created a dataset for three weather categories (rain, snow, and fog) and proposed an algorithm using superpixel masks for data augmentation. Their study evaluated CNN models such as CaffeNet, PlacesCNN, ResNet-50, and VGGNet16, reporting classification accuracies between 68% and 81%, with ResNet-50 performing the best overall.
These works demonstrate the efficiency of CNNs and transfer learning in weather classification. They emphasize the crucial role of strong datasets, computational effectiveness, and model optimization in high-performance weather recognition systems.
Methods and Materials
This research aims to develop an effective system utilizing cutting-edge computational techniques to identify and classify weather situations. The system is intended for integration into driverless cars, allowing them to detect current weather conditions and adjust to their surroundings. It is organized into three main subsystems, as displayed in Figure 1. The Data Preparation (DP) subsystem manages the collection and preprocessing of weather-related image datasets, ensuring the data is well-structured and suitable for further analysis. The Learning Models (LM) subsystem trains, validates, and tests learning models on these datasets to identify distinct weather patterns. The Evaluation and Deployment (ED) subsystem employs various performance measures to evaluate the system's accuracy and efficiency. It utilizes a multiclass classification approach to guarantee precise classification of meteorological conditions and enable real-time applications in dynamic environments.
Figure 1. Graphical representation of overall research flow
Dataset Preparation and Preprocessing System
This research integrates two widely used weather condition datasets, DAWN2020 and MCWRD2018, to construct a consolidated dataset comprising 1,656 image samples [16]. These images are categorized into six distinct weather classes: cloudy (300 images), rainy (215 images), snowy (204 images), sandy (319 images), sunny (253 images), and sunrise (365 images). Figure 2 shows sample images from these classes. The images are organized systematically into six labelled folders, each representing a different weather category.
The dataset is then imported into MATLAB R2021b, where the platform's image datastore (IMD) is used to manage the images efficiently. To ensure consistency, all images undergo preprocessing before analysis. Initially, all images are converted to the JPG format and then resized into a standardized 3D matrix (RGB format) with dimensions 224 × 224 × 3. This resizing ensures compatibility with the input requirements of the selected deep convolutional neural networks: MobileNetV2, DenseNet121, and VGG-16.
Figure 2. Sample of the dataset
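As a concrete illustration of this import-and-standardize step, the following is a minimal MATLAB sketch; the folder name weather_data and the grayscale-to-RGB guard are our assumptions for illustration (and the JPG conversion is omitted), not details given in the paper.

```matlab
% Minimal sketch of the dataset import and resizing step (MATLAB R2021b,
% Deep Learning Toolbox). The folder name "weather_data" is illustrative.
imds = imageDatastore("weather_data", ...
    "IncludeSubfolders", true, ...
    "LabelSource", "foldernames");   % six class folders supply the labels

imds.ReadFcn = @readStandardized;    % standardize every image on read

function img = readStandardized(file)
    img = imread(file);
    if size(img, 3) == 1                 % promote grayscale to 3 channels
        img = cat(3, img, img, img);
    end
    img = imresize(img, [224 224]);      % 224 x 224 x 3 CNN input
end
```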
Once the image type and dimensions are standardized, data augmentation is applied to expand the training dataset. Augmentation operations include resizing, cropping, rotation, reflection, and distortion. Augmentation generated approximately 5,000 additional images, improving the dataset's diversity and richness. Figure 3 illustrates examples of augmented images.
Figure 3. Sample of the augmented images
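The augmentation stage could be expressed with MATLAB's imageDataAugmenter as sketched below; the numeric ranges are assumptions, since the paper names only the operation types, and this variant applies the transformations on the fly rather than storing the roughly 5,000 augmented copies described above.

```matlab
% Sketch of the augmentation operations; the numeric ranges are assumed
% for illustration, not taken from the paper.
augmenter = imageDataAugmenter( ...
    "RandRotation",     [-15 15], ...  % rotation (degrees)
    "RandXReflection",  true, ...      % horizontal reflection
    "RandScale",        [0.9 1.1], ... % mild zoom (distortion)
    "RandXTranslation", [-10 10], ...  % crop-like shift (pixels)
    "RandYTranslation", [-10 10]);

% Feeds randomly transformed 224 x 224 x 3 batches during training.
augTrain = augmentedImageDatastore([224 224 3], imdsTrain, ...
    "DataAugmentation", augmenter);
```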
After augmentation, all images (original and augmented) are stored in the IMD and shuffled randomly to prevent any biases arising from the order of images (e.g., clustering of similar weather conditions). The dataset is split into two subsets: 80% for training and 20% for testing. To further validate the models' reliability, a 5-fold cross-validation strategy is employed. In this approach, the dataset is divided into five folds; during each iteration, one fold serves as the test set, while the remaining four are used for training.
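A compact sketch of the hold-out split and the 5-fold protocol, assuming that cvpartition's stratified folds match the authors' procedure (variable names carried over from the sketches above):

```matlab
% 80/20 hold-out split after shuffling (variable names assumed).
[imdsTrain, imdsTest] = splitEachLabel(shuffle(imds), 0.8, "randomized");

% 5-fold cross-validation over the full datastore.
k  = 5;
cv = cvpartition(imds.Labels, "KFold", k);   % stratified by class label
for fold = 1:k
    imdsFoldTrain = subset(imds, find(training(cv, fold)));
    imdsFoldTest  = subset(imds, find(test(cv, fold)));
    % ... train one model on imdsFoldTrain, evaluate on imdsFoldTest ...
end
```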
Learning Model
This subsystem trains and validates models to classify weather conditions using advanced CNNs. Following the collection, labeling, and pre-processing of images, the data progresses through the feedforward layers of deep CNNs to generate classification outputs. Figure 4 illustrates the development of the learning models.
Figure 4. Development stages of the learning models
The key stages of the learning subsystem are described below:
· Input Layer: Preprocessed RGB images with dimensions 224 × 224 × 3, produced by the data preparation subsystem, enter through this layer. It ensures conformance with the input requirements of the selected CNN architectures.
· Processing Layer: This layer is responsible for feature extraction and classification. Transfer learning is used to improve efficiency and performance. This approach eliminates the need to train models from scratch by leveraging pre-trained weights from CNNs trained on the large-scale ImageNet dataset. Fine-tuning is applied to adapt the pre-trained models to the specific classification task.
For this study, we employ MobileNetV2, DenseNet121, and VGG-16 architectures. Transfer learning efficiently employs these networks' pre-trained features, cutting down on training time while maintaining accurate categorization.
· Output Layer: The output layer manages the trainable parameters (weights) from each CNN and adjusts them to this study's specific requirements. It establishes fully connected layers between the final feature maps of the pre-trained models and the six weather condition classes.
i. MobileNetV2: The final layer of MobileNetV2 (1280 features) is fully connected to the six output classes.
ii. DenseNet121: The last feature output (1024 features) is similarly connected to the six classes.
iii. VGG-16: The final output (4096 features) is reduced to six classes through a fully connected layer.
The output probabilities are generated using the SoftMax activation function, where the class with the highest probability represents the predicted weather condition. This systematic approach ensures precise and efficient classification results.
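In MATLAB, the head replacement for the MobileNetV2 branch might look as follows; the pretrained layer names ("Logits", "ClassificationLayer_Logits") match the network shipped with the Deep Learning Toolbox but should be confirmed with analyzeNetwork, and the training hyperparameters are illustrative assumptions rather than the paper's settings. DenseNet121 (1024 features) and VGG-16 (4096 features) follow the same pattern with their own final-layer names.

```matlab
% Transfer-learning sketch for the MobileNetV2 branch.
net    = mobilenetv2;          % ImageNet-pretrained weights
lgraph = layerGraph(net);

% Re-map the 1280-feature embedding to the six weather classes; the
% existing SoftMax layer downstream then yields class probabilities.
lgraph = replaceLayer(lgraph, "Logits", ...
    fullyConnectedLayer(6, "Name", "fc_weather"));
lgraph = replaceLayer(lgraph, "ClassificationLayer_Logits", ...
    classificationLayer("Name", "weather_output"));

% Fine-tuning options (values assumed, not the paper's exact settings).
opts = trainingOptions("sgdm", ...
    "InitialLearnRate",     1e-4, ...        % small rate: fine-tune only
    "MiniBatchSize",        32, ...
    "MaxEpochs",            10, ...
    "Shuffle",              "every-epoch", ...
    "ExecutionEnvironment", "gpu");          % Nvidia GPU, as in the paper

trainedNet = trainNetwork(augTrain, lgraph, opts);
```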
Results and Discussion
This section discusses the performance metrics employed to evaluate MobileNetV2, DenseNet121, and VGG-16 for weather classification, as shown in Table 1. The primary evaluation metrics include accuracy, precision, sensitivity (recall), and F1-score. These metrics offer valuable insights into how well each model identifies the six weather categories: cloudy, rainy, snowy, sandy, sunny, and sunrise.
Table 1: Performance Metrics Comparison
Model | Accuracy (%) | Precision (%) | Sensitivity (%) | F1-Score (%) |
MobileNetV2 | 97.92 | 97.88 | 97.95 | 97.91 |
DenseNet121 | 97.45 | 97.41 | 97.50 | 97.46 |
VGG-16 | 96.89 | 96.82 | 96.90 | 96.85 |
MobileNetV2 demonstrated the highest overall performance across all metrics, making it the most effective model for weather classification. With an accuracy of 97.92%, it outperformed DenseNet121 (97.45%) and VGG-16 (96.89%), mainly due to its efficient depthwise separable convolutions, which enhance feature extraction while maintaining computational efficiency.

In terms of precision, MobileNetV2 again led with 97.88%, meaning it had the lowest false positive rate and was more reliable in distinguishing visually similar weather classes such as "Cloudy" and "Rainy." DenseNet121 followed with 97.41%, though it exhibited some misclassifications in classes like "Sandy" and "Snowy," whereas VGG-16 had the lowest precision (96.82%), indicating a slightly higher rate of false positives.

When evaluating sensitivity (recall), MobileNetV2 achieved the highest score at 97.95%, showcasing its strong capability to identify weather conditions correctly without missing relevant instances. VGG-16 had the lowest recall (96.90%), underscoring its limits in identifying subtle weather variances, whereas DenseNet121, at 97.50%, struggled slightly with texturally similar weather types.

Finally, considering the F1-score, which balances recall and precision, MobileNetV2 held its lead with 97.91%, demonstrating its classification robustness. DenseNet121 came in second with 97.46%, while VGG-16 trailed with 96.85%, indicating that it was less adept at managing minute variations in texture and illumination. Overall, MobileNetV2 is the best model, with the highest accuracy, precision, recall, and F1-score, making it ideal for real-time weather categorization, even in resource-constrained contexts.
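The figures in Table 1 correspond to standard macro-averaged metrics; the sketch below shows how they can be computed from the test-set predictions (variable names carried over from the earlier sketches):

```matlab
% Macro-averaged metrics from the 6 x 6 confusion matrix.
predLabels = classify(trainedNet, imdsTest);  % test-set predictions
trueLabels = imdsTest.Labels;

C  = confusionmat(trueLabels, predLabels);
tp = diag(C);
precision = tp ./ sum(C, 1)';   % per class: TP / (TP + FP)
recall    = tp ./ sum(C, 2);    % per class: TP / (TP + FN)
f1        = 2 * precision .* recall ./ (precision + recall);

accuracy  = sum(tp) / sum(C, "all");
macro     = [mean(precision), mean(recall), mean(f1)];
```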
According to the evaluation, MobileNetV2 is the top-performing model, with better accuracy, precision, recall, and F1-score. Because of its lightweight design, it is well suited to real-time applications, particularly in automated weather forecasting, environmental monitoring, and meteorology, and it is recommended for deployment in resource-constrained contexts, including edge devices, because of its ability to balance speed and accuracy. DenseNet121 is a strong substitute, especially where greater feature reuse is advantageous, although it demands additional processing power. Although competitive, VGG-16 is less efficient than DenseNet121 and MobileNetV2; it may be suitable for applications where interpretability is more critical than computational efficiency. This study highlights the advantages of using MobileNetV2, DenseNet121, and VGG-16 for weather classification, with MobileNetV2 achieving the best overall performance, as depicted in Figure 5. Future work could explore further optimizations using hybrid models or attention mechanisms to improve classification in challenging weather conditions.

Figure 5. Performance of the models
Figure 6. Confusion matrices of the learning models
The confusion matrices in Figure 6 illustrate model performance by showing where misclassifications occurred. MobileNetV2 demonstrated the fewest misclassifications, particularly excelling at distinguishing "Snowy" from "Sandy," which was a common challenge for the other models. DenseNet121 showed slight confusion between "Sandy" and "Snowy," likely due to overlapping color and texture features in specific images, but still maintained substantial classification accuracy. VGG-16 exhibited more frequent confusion between "Cloudy" and "Rainy" conditions, as well as some misclassifications between "Sunny" and "Sunrise." This suggests that its feature extraction process is not as effective as MobileNetV2's in handling complex weather variations.
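Charts like those in Figure 6 can be reproduced per model with MATLAB's confusionchart, shown here for MobileNetV2 (variable names as above):

```matlab
% Confusion chart for one model, mirroring Figure 6.
figure;
confusionchart(trueLabels, predLabels, ...
    "Title",      "MobileNetV2 weather classification", ...
    "RowSummary", "row-normalized");   % adds per-class recall margins
```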
Table 2: Comparison with existing works
Reference | Models | Number of classes | Accuracy |
[17] | ResNet-18 | 4 classes (sunny, cloudy, shine, and sunrise) | 98.20% |
[18] | ResNet-101 and DenseNet121 | 9 classes (including rain, dew, snow, frost, fog, ice, and hail) | 81.25% |
[19] | Stacked ensemble | 4 classes (cloudy, rainy, shine, and sunrise) | 86.00% |
[13] | Deep MeteCNN | 11 classes (including hail, rainbow, snow, rain, shine, and sunrise) | 92.00% |
Ours | MobileNetV2 | 6 classes (cloudy, rainy, snowy, sandy, sunny, and sunrise) | 97.92% |
Our proposed MobileNetV2-based model, designed to classify six distinct weather conditions (cloudy, rainy, snowy, sandy, sunny, and sunrise), achieved an impressive 97.92% accuracy, outperforming several state-of-the-art models, as exhibited in Table 2.
The ResNet-18 model [17] classified four weather categories (sunny, cloudy, shine, and sunrise) and achieved 98.20% accuracy. While this result is marginally higher than ours, it is essential to note that their model handled fewer weather classes, making classification inherently more straightforward. Higher accuracy in a four-class system does not necessarily imply superior performance on a more complex six-class problem, where distinguishing between rainy and cloudy or snowy and sandy presents more significant challenges.
In contrast, [18] implemented ResNet-101 and DenseNet121 for a more diverse nine-class weather dataset, which included conditions such as rain, dew, snow, frost, fog, ice, and hail, achieving 81.25% accuracy. While the model attempted to classify a broader spectrum of weather conditions, the lower accuracy suggests difficulties in distinguishing between closely related phenomena, such as fog and frost or rain and ice, which share similar visual characteristics.
Another study [19] used a stacked ensemble model for a four-class classification task (cloudy, rainy, shine, and sunrise), achieving 86.00% accuracy. Despite ensemble models typically offering performance gains, the model struggled with class separability, likely due to overlapping weather features and dataset limitations.
Deep MeteCNN, proposed by [13], handled the most complex dataset, classifying 11 weather conditions with 92.00% accuracy. While the model performed well, it still fell short of MobileNetV2's performance. The inclusion of fine-grained weather distinctions, such as dew, frost, and fog, likely introduced classification challenges that contributed to the slight accuracy drop.
While several weather classification models have reported favorable results, our MobileNetV2 model offers a unique blend of high accuracy, computational efficiency, and robust generalization. Compared to existing models, it outperforms deeper architectures while maintaining a lightweight structure, making it ideal for real-time and edge-based weather prediction systems. By effectively balancing accuracy with efficiency, MobileNetV2 stands as a strong contender in modern weather classification tasks.
Conclusion
In this study, we proposed, developed, and evaluated an intelligent deep learning-based weather detection system to support autonomous vehicle decision-making. Our approach leveraged three convolutional neural networks—MobileNetV2, DenseNet121, and VGG-16—to classify six weather conditions: cloudy, rainy, snowy, sandy, sunny, and sunrise. The system was trained and tested on a combined dataset from MCWRD2018 and DAWN2020, ensuring diverse weather representations and enhanced generalization. Extensive experiments demonstrated that our MobileNetV2-based model achieved superior accuracy, precision, and recall of 97.92%, 97.88%, and 97.95%, respectively. Its efficient depthwise separable convolutions allow excellent performance with minimal computing overhead, making the model well suited for real-time deployment in autonomous systems. MobileNetV2 proved the most dependable and effective model, outperforming DenseNet121 and VGG-16 and producing impressive results even for visually similar weather situations. Compared with earlier research concentrating on smaller datasets or fewer weather categories, our method incorporates a larger dataset, offering a more thorough and scalable solution for practical applications. To improve detection accuracy and resilience, future research will investigate refining the model for edge computing devices, adding more meteorological phenomena to the dataset, and integrating multi-modal sensory inputs.
References
- M. Bakirci, “Enhancing vehicle detection in intelligent transportation systems via autonomous uav platform and yolov8 integration,” Applied Soft Computing, vol. 164, p. 112015, 2024.
- S. Gulati, “Application of artificial intelligence in wastewater treatment,” 2024.
- H. Yang, Y. Zhou, J. Wu, H. Liu, L. Yang, and C. Lv, “Human-guided continual learning for personalized decision-making of autonomous driving,” IEEE Transactions on Intelligent Transportation Systems, 2025.
- M. Z. Uddin, “Enhancing road infrastructure monitoring: Integrating drones for weather-aware pothole detection,” 2024.
- J. Yu, “Object detection techniques in autonomous driving,” in ITM Web of Conferences, vol. 70. EDP Sciences, 2025, p. 01019.
- S. Abbasi and A. M. Rahmani, “Artificial intelligence and software modeling approaches in autonomous vehicles for safety management: a systematic review,” Information, vol. 14, no. 10, p. 555, 2023.
- R. Sasirekha, V. Surya, P. Nandhini, T. Bhanushree, G. Hanithamet al., “Ensemble of fast r-cnn with bi-lstm for object detection,” in 2025 6th International Conference on Mobile Computing and Sustainable Informatics (ICMCSI). IEEE, 2025, pp. 1200–1206.
- N. Triki, M. Karray, and M. Ksantini, “A real-time traffic sign recognition method using a new attention-based deep convolutional neural network for smart vehicles,” Applied Sciences, vol. 13, no. 8, p. 4793, 2023.
- M. Elhoseiny, S. Huang, and A. Elgammal, “Weather classification with deep convolutional neural networks,” in 2015 IEEE international conference on image processing (ICIP). IEEE, 2015, pp. 3349–3353.
- W.-T. Chu, X.-Y. Zheng, and D.-S. Ding, “Camera as weather sensor: Estimating weather information from single images,” Journal of Visual Communication and Image Representation, vol. 46, pp. 233–249, 2017.
- J. Xia, D. Xuan, L. Tan, and L. Xing, “Resnet15: weather recognition on traffic road with deep convolutional neural network,” Advances in Meteorology, vol. 2020, no. 1, p. 6972826, 2020.
- M. R. Ibrahim, J. Haworth, and T. Cheng, “Weathernet: Recognising weather and visual conditions from street-level images using deep residual learning,” ISPRS International Journal of Geo-Information, vol. 8, no. 12, p. 549, 2019.
- H. Xiao, F. Zhang, Z. Shen, K. Wu, and J. Zhang, “Classification of weather phenomenon from images by using deep convolutional neural network,” Earth and Space Science, vol. 8, no. 5, p. e2020EA001604, 2021.
- M. Roser and F. Moosmann, “Classification of weather situations on single color images,” in 2008 IEEE intelligent vehicles symposium. IEEE, 2008, pp. 798–803.
- J. C. V. Guerra, Z. Khanam, S. Ehsan, R. Stolkin, and K. McDonald-Maier, “Weather classification: A new multi-class dataset, data augmentation approach and comprehensive evaluations of convolutional neural networks,” in 2018 NASA/ESA Conference on Adaptive Hardware and Systems (AHS). IEEE, 2018, pp. 305–310.
- Q. A. Al-Haija, M. Gharaibeh, and A. Odeh, “Detection in adverse weather conditions for autonomous vehicles via deep learning,” AI, vol. 3, no. 2, pp. 303–317, 2022.
- Q. A. Al-Haija, M. A. Smadi, and S. Zein-Sabatto, “Multi-class weather classification using resnet-18 cnn for autonomous iot and cps applications,” in 2020 International Conference on Computational Science and Computational Intelligence (CSCI). IEEE, 2020, pp. 1586–1591.
- Y. Wang and Y. Li, “Research on multi-class weather classification algorithm based on multi-model fusion,” in 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), vol. 1. IEEE, 2020, pp. 2251–2255.
- C. K. K. Reddy, P. R. Anisha, and R. M. Mohana, “Assessing wear out of tyre using OpenCV & convolutional neural networks,” Journal of Physics: Conference Series, vol. 2089, no. 1, p. 012001, 2021.
- A. G. Oluwafemi and W. Zenghui, “Multi-class weather classification from still image using said ensemble method,” in 2019 Southern African universities power engineering conference/robotics and mechatronics/pattern recognition association of South Africa (SAUPEC/RobMech/PRASA). IEEE, 2019.
- G. Sreelatha, A. Vinaya Babu, and D. Midhunchakkarvarthy, “Extended equilibrium-based transfer learning for improved security in cloud environment,” in Inventive Systems and Control, Lecture Notes in Networks and Systems, vol. 204. Springer, Singapore, 2021.