What is Ensemble Learning?

November 19, 2017 Author: virendra

Ensemble learning typically refers to methods that generate several models and combine them to make a prediction, in either classification or regression problems. This approach has been the object of a significant amount of research in recent years, and good results have been reported. This section introduces the basics of ensemble learning for classification.

Ensemble Learning: Overview





Ensemble learning is a machine learning paradigm in which multiple learners are trained to solve the same problem. In contrast to ordinary machine learning approaches, which try to learn one hypothesis from the training data, ensemble methods construct a set of hypotheses and combine them for use. An ensemble contains a number of learners, usually called base learners, and the generalization ability of an ensemble is usually much stronger than that of its base learners. Ensemble learning is appealing because it can boost weak learners, which are only slightly better than random guessing, into strong learners that make very accurate predictions; for this reason, base learners are also referred to as "weak learners". It is noteworthy, however, that although most theoretical analyses work with weak learners, the base learners used in practice are not necessarily weak, since using not-so-weak base learners often results in better performance.

Ensemble learning is the process by which multiple models, such as classifiers or experts, are strategically generated and combined to solve a particular computational intelligence problem. It is primarily used to improve the performance of a model (in classification, prediction, function approximation, etc.), or to reduce the likelihood of an unfortunate selection of a poor one.
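To make the intuition of boosting weak learners into a strong learner concrete, the short sketch below simulates an ensemble of base classifiers that are each correct only 60% of the time and combines them by majority vote. The classifier count, the 60% accuracy, and the assumption that errors are independent are illustrative choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples = 10_000   # number of test cases (illustrative)
n_learners = 25      # number of weak base learners (illustrative)
p_correct = 0.60     # each weak learner is right 60% of the time

# 1 = learner predicts the true class, 0 = learner is wrong,
# with errors assumed independent across learners.
votes = rng.random((n_learners, n_samples)) < p_correct

# Majority vote: the ensemble is correct when more than half
# of the base learners are correct on a given sample.
ensemble_correct = votes.sum(axis=0) > n_learners / 2

print(f"single weak learner accuracy: {p_correct:.2f}")
print(f"majority-vote ensemble accuracy: {ensemble_correct.mean():.3f}")
```

Under the independence assumption, 25 learners at 60% individual accuracy already give an ensemble accuracy above 80%; in practice the errors of base learners are correlated, which is why the diversity of the ensemble matters so much.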

The Ensemble Framework





A typical ensemble method for classification tasks contains the following building blocks (a small code sketch illustrating them follows the list):

  • Training set— a labeled dataset used for ensemble training. The training set can be described in a variety of representation languages; most frequently, the instances are described as attribute-value vectors. We use the notation A to denote the set of n input attributes, \( A = \{a_1, \ldots, a_i, \ldots, a_n\} \), and y to represent the class variable or target attribute.
  • Base Inducer— an induction algorithm that takes a training set and forms a classifier representing the generalized relationship between the input attributes and the target attribute. Let I represent an inducer; we use the notation M = I(S) for a classifier M induced by inducer I on a training set S.
  • Diversity Generator— this component is responsible for generating diverse classifiers.
  • Combiner— the combiner is responsible for combining the classifications of the various classifiers.
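A minimal sketch of these building blocks, assuming scikit-learn is available: a decision tree plays the role of the base inducer I, bootstrap resampling acts as the diversity generator, and majority voting is the combiner. All of these concrete choices, and the synthetic dataset, are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Training set S: attribute-value vectors X (attributes A) and target y.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

def inducer(S_X, S_y):
    """Base inducer I: returns a classifier M = I(S)."""
    return DecisionTreeClassifier(max_depth=3).fit(S_X, S_y)

# Diversity generator: bootstrap resampling of the training set.
ensemble = []
for _ in range(15):
    idx = rng.integers(0, len(X), size=len(X))
    ensemble.append(inducer(X[idx], y[idx]))

def combiner(classifiers, X_new):
    """Combiner: majority vote over the classifiers' predictions."""
    preds = np.stack([m.predict(X_new) for m in classifiers])
    # For binary {0, 1} labels, the rounded mean is the majority vote.
    return (preds.mean(axis=0) > 0.5).astype(int)

print("ensemble training accuracy:", (combiner(ensemble, X) == y).mean())
```

This particular combination is essentially bagging; other diversity generators (feature subsampling, varying hyperparameters) and other combiners (weighted voting, stacking) fit the same framework.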

Figure 1: An Ensemble Learning Classification

It is useful to distinguish between dependent and independent frameworks for building ensembles. In a dependent framework, the output of one classifier is used in the construction of the next, so it is possible to take advantage of knowledge generated in previous iterations to guide learning in later iterations. In an independent framework, by contrast, each classifier is built independently and their outputs are combined in some fashion.
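As a rough illustration of the two styles, reusing the hypothetical inducer, X, y, and rng from the sketch above: an independent framework builds each classifier from its own sample in isolation, while a dependent framework modifies the training distribution using the previous classifier's errors. The reweighting rule below is a deliberately simplified stand-in for a real boosting update such as AdaBoost's.

```python
# Independent framework (bagging-style): each classifier is built in
# isolation from its own bootstrap sample; they could even be trained
# in parallel.
independent = []
for _ in range(15):
    idx = rng.integers(0, len(X), size=len(X))
    independent.append(inducer(X[idx], y[idx]))

# Dependent framework (boosting-style skeleton): the errors of the
# current classifier increase the sampling weight of the examples it
# got wrong, guiding the next iteration.
weights = np.full(len(X), 1.0 / len(X))
dependent = []
for _ in range(15):
    idx = rng.choice(len(X), size=len(X), p=weights)
    model = inducer(X[idx], y[idx])
    dependent.append(model)
    wrong = model.predict(X) != y
    weights = np.where(wrong, weights * 2.0, weights)  # simplistic reweighting
    weights /= weights.sum()
```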

Figure 2: Ensemble Learning Model





A classifier ensemble is a set of learning machines whose decisions are combined to improve the performance of the pattern recognition system. Much of the research effort in classifier combination focuses on improving accuracy on difficult problems, balancing the weaknesses and strengths of each model so that the final decision takes all ensemble members into account. The combination of multiple classifiers has been demonstrated to be effective, under some conditions, for several pattern recognition applications.
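One common combiner, sketched below as a continuation of the earlier example and assuming each base classifier exposes predicted class probabilities (as the scikit-learn trees above do), averages the posterior probabilities, optionally weighting each classifier by an estimate of its accuracy.

```python
def soft_combiner(classifiers, X_new, weights=None):
    """Combine classifiers by (optionally weighted) averaging of their
    predicted class probabilities, then pick the most probable class."""
    probas = np.stack([m.predict_proba(X_new) for m in classifiers])
    if weights is None:
        weights = np.ones(len(classifiers))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    avg = np.tensordot(weights, probas, axes=1)  # weighted average over classifiers
    return avg.argmax(axis=1)

# Example: weight each member of the bagged ensemble by its training accuracy.
acc = [(m.predict(X) == y).mean() for m in ensemble]
print("soft-vote ensemble accuracy:", (soft_combiner(ensemble, X, acc) == y).mean())
```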


 
