Predictive Ensemble Modelling - Experimental Comparison of Boosting Implementation Methods

V.F. Adegoke, D. Chen, S. Banissi, E. Banissi

Research output: Contribution to Book/Report › Chapter (peer-reviewed)

Abstract

This paper presents an empirical comparison of boosting implemented by reweighting and by resampling, with the goal of determining which of the two methods performs better. In the study, we used four algorithms, namely Decision Stump, Neural Network, Random Forest and Support Vector Machine, as base classifiers, and AdaBoost as the technique for developing the various ensemble models. We applied 10-fold cross-validation to measure and evaluate the performance metrics of the models. The results show that with both methods the average rates of correctly and incorrectly classified instances are roughly the same. Similarly, the average RMSE values of the two methods do not differ significantly. The results further show that the two methods are independent of the datasets and the base classifier used. Additionally, we found that the complexity of the chosen ensemble technique and boosting method does not necessarily lead to better performance.
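
The abstract describes the experimental setup only at a high level. The following is a minimal, illustrative sketch (not the authors' code) of how boosting by reweighting and boosting by resampling could be compared. It assumes scikit-learn (1.2 or later) and NumPy, uses the breast-cancer dataset bundled with scikit-learn purely as a stand-in for the paper's datasets, and uses a Decision Stump as the base classifier; the ada_resample helper and all parameter choices are hypothetical.

# Illustrative sketch only; dataset and parameters are placeholders,
# not those used in the paper.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # stand-in binary dataset
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

# Boosting by reweighting: AdaBoost passes the updated sample weights
# directly to the base learner at each round (scikit-learn's implementation).
reweighted = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),  # decision stump
    n_estimators=50,
    random_state=0,
)
print("Reweighting, mean 10-fold CV accuracy:",
      cross_val_score(reweighted, X, y, cv=cv).mean())

# Boosting by resampling (hand-rolled, hypothetical helper): each round draws
# a bootstrap sample with probabilities proportional to the current weights
# and fits an unweighted stump on it; the weight update follows AdaBoost.M1.
def ada_resample(X_tr, y_tr, rounds=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y_tr)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(rounds):
        idx = rng.choice(n, size=n, replace=True, p=w)
        h = DecisionTreeClassifier(max_depth=1).fit(X_tr[idx], y_tr[idx])
        wrong = h.predict(X_tr) != y_tr
        err = np.sum(w[wrong])
        if err == 0 or err >= 0.5:  # stop if the stump is perfect or too weak
            break
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(np.where(wrong, alpha, -alpha))  # up-weight mistakes
        w /= w.sum()
        learners.append(h)
        alphas.append(alpha)
    return learners, alphas

def ada_resample_predict(learners, alphas, X_te):
    # Weighted vote over {-1, +1} predictions (class labels assumed {0, 1}).
    score = sum(a * (2 * h.predict(X_te) - 1) for h, a in zip(learners, alphas))
    return (score > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
learners, alphas = ada_resample(X_tr, y_tr)
acc = (ada_resample_predict(learners, alphas, X_te) == y_te).mean()
print("Resampling, hold-out accuracy:", acc)

To mirror the evaluation described in the abstract, the resampling procedure would be repeated inside each cross-validation fold and the accuracy and RMSE averaged across folds for each base classifier.
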
Original language: English
Title of host publication: 2017 European Modelling Symposium on Computer Modelling and Simulation (EMS)
Publisher: Institute of Electrical and Electronics Engineers
Pages: 11-16
Number of pages: 6
Volume: 2017
ISBN (Electronic): 978-1-5386-1410-5
ISBN (Print): 978-1-5386-1411-2
DOIs:
Publication status: Published - 21 Nov 2017

Publication series

Name: European Modelling Symposium (EMS)

Bibliographical note

Copyright © 2017, IEEE

Keywords

  • AdaBoost
  • ensemble based system
  • machine learning
  • resampling
  • reweighting
