International Journal of Computer Trends and Technology

Research Article | Open Access
Volume 73 | Issue 10 | Year 2025 | Article Id. IJCTT-V73I10P103 | DOI: https://doi.org/10.14445/22312803/IJCTT-V73I10P103

Comparative Study on Supervised Machine Learning Algorithms Using RapidMiner and the Weka Tool


Puneet Kour, Rakshit Khajuria, Jewan Jot

Received: 22 Aug 2025 | Revised: 26 Sep 2025 | Accepted: 13 Oct 2025 | Published: 29 Nov 2025

Citation:

Puneet Kour, Rakshit Khajuria, Jewan Jot, "Comparative Study on Supervised Machine Learning Algorithms Using RapidMiner and the Weka Tool," International Journal of Computer Trends and Technology (IJCTT), vol. 73, no. 10, pp. 15-24, 2025. Crossref, https://doi.org/10.14445/22312803/IJCTT-V73I10P103

Abstract

This paper compares the performance of several supervised machine learning approaches using visualization tools, including Orange, Weka, and RapidMiner. Machine learning methods such as logistic regression, decision trees, support vector machines, linear regression, and Naïve Bayes classification are applied to bacterial cell data to predict the outcome of bacterial cell detection on an agar plate. The RapidMiner tool is then used to examine the outputs of the various classifiers and determine which performs best. With an 80:20 train-test split, the decision tree achieves the highest accuracy at 92%, outperforming the other methods.
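For illustration only, a minimal Python/scikit-learn sketch of this kind of comparison under an 80:20 split is given below. The paper itself performs the comparison in the RapidMiner and Weka GUIs rather than in code, and the dataset file and column names here are hypothetical placeholders, not the authors' data.

# Minimal sketch (not the authors' pipeline): compare the same classifier
# families under an 80:20 train-test split and report test accuracy.
# "bacterial_cells.csv" and the "detected" label column are hypothetical.
# Linear regression is omitted because it is a regression, not a
# classification, method.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

data = pd.read_csv("bacterial_cells.csv")      # hypothetical dataset file
X = data.drop(columns=["detected"])            # features
y = data["detected"]                           # hypothetical label column

# 80:20 split, mirroring the ratio reported in the abstract
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

classifiers = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
}

# Fit each model on the training split and score it on the held-out 20%
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: {acc:.2%}")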

Keywords

Machine Learning, Classification, RapidMiner, Artificial Intelligence, Supervised Learning.
