Implementation of Ensemble Deep Learning for Brain MRI Classification in Tumor Detection
Abstract
Introduction: Brain tumor detection from MRI images is critical for early diagnosis and treatment planning. While individual deep learning models have shown high accuracy in medical image classification, combining multiple models can potentially enhance performance. This study aims to develop an ensemble deep learning framework using ResNet18 and DenseNet121 to improve the accuracy of brain tumor classification.
Methods: A dataset of 7,023 brain MRI images categorized into four classes—glioma, meningioma, no tumor, and pituitary tumor—was used. Pre-processing included resizing to 224×224 pixels, normalization, and augmentation (random flipping and rotation). ResNet18 and DenseNet121 were fine-tuned separately using the Adam optimizer with a learning rate of 0.001. The ensemble was implemented by averaging the softmax outputs of both models to generate the final predictions.
Results: Evaluated individually, ResNet18 and DenseNet121 achieved validation accuracies of 97.72% and 97.79%, respectively. The ensemble model outperformed both, reaching a validation accuracy of 99.36%. This result indicates that integrating the two architectures reduces misclassification and enhances overall robustness. Confusion matrix analysis confirmed high classification accuracy across all four categories.
Conclusions: The proposed ensemble deep learning approach leverages the complementary strengths of ResNet18 and DenseNet121, achieving superior classification accuracy for brain tumor detection in MRI images. The method holds promise as a reliable tool in clinical diagnostic workflows. Future research should focus on integrating additional architectures, advanced augmentation strategies, and hyperparameter optimization to further improve performance.
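The score-level fusion described in the abstract—averaging the softmax outputs of the two fine-tuned networks and taking the argmax over the four classes—can be sketched as follows. This is a minimal, framework-agnostic illustration in NumPy; the class order, array shapes, and function names are assumptions for demonstration, not the authors' actual implementation.

```python
import numpy as np

# Illustrative class order; the paper uses these four categories.
CLASSES = ["glioma", "meningioma", "no_tumor", "pituitary"]

def softmax(logits: np.ndarray) -> np.ndarray:
    """Row-wise softmax with the usual max-shift for numerical stability."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)

def ensemble_predict(logits_resnet: np.ndarray,
                     logits_densenet: np.ndarray) -> np.ndarray:
    """Average the two models' softmax distributions, then take the argmax."""
    probs = (softmax(logits_resnet) + softmax(logits_densenet)) / 2.0
    return probs.argmax(axis=1)

# Toy batch of two images (hypothetical logits): the models disagree on the
# first sample, and the averaged distribution decides the final label.
resnet_logits = np.array([[2.0, 1.9, 0.1, 0.0],    # weakly favors glioma
                          [0.0, 0.1, 3.0, 0.2]])   # confident no_tumor
densenet_logits = np.array([[1.0, 3.0, 0.2, 0.1],  # confident meningioma
                            [0.1, 0.0, 2.5, 0.3]])
preds = ensemble_predict(resnet_logits, densenet_logits)
print([CLASSES[i] for i in preds])  # ['meningioma', 'no_tumor']
```

Because the more confident model contributes a sharper distribution to the average, this fusion tends to resolve disagreements in favor of the model with higher certainty on that image, which is consistent with the reduced misclassification the abstract reports.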

Copyright (c) 2025 Indonesian Journal of Data and Science

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
License and Copyright Agreement
By submitting a manuscript to the Indonesian Journal of Data and Science (IJODAS), the author(s) confirm and agree to the following:
- All co-authors have given their consent to enter into this agreement.
- The submitted manuscript has not been formally published elsewhere, except as an abstract, thesis, or in the context of a lecture, review, or overlay journal.
- The manuscript is not currently under review or consideration by another journal or publisher.
- All authors have approved the manuscript and its submission to IJODAS, and where applicable, have received institutional approval (tacit or explicit) from affiliated organizations.
- The authors have secured appropriate permissions to reproduce any third-party material included in the manuscript that may be under copyright.
- The authors agree to abide by the licensing and copyright terms outlined below.
Copyright Policy
Authors who publish in IJODAS retain the copyright to their work and grant the journal the right of first publication. The published work is simultaneously licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0), which permits others to share and adapt the work for non-commercial purposes, with proper attribution to the authors and the initial publication in this journal.
Reuse and Distribution
- Authors may enter into separate, additional contractual arrangements for non-exclusive distribution of the journal-published version of the article (e.g., institutional repositories, book chapters), provided there is proper acknowledgment of its initial publication in IJODAS.
- Prior to and during the submission process, we encourage authors to archive preprints and accepted versions of their work on personal websites or institutional repositories. This practice supports scholarly communication, visibility, and early citation.
For more details on the terms of the Creative Commons license used by IJODAS, please visit the official license page.