Multiple classifiers
JERZY STEFANOWSKI
Institute of Computing Sciences
Poznań University of Technology
Classes for TPD - ZED 2009
Based on the lecture for the
Doctoral School, Catania-Troina, April 2008
Outline of the presentation
1. Introduction
2. Why do multiple classifiers work?
3. Stacked generalization – combiner.
4. Bagging approach
5. Boosting
6. Feature ensemble
7. Pairwise coupling – n(n−1)/2 binary classifiers for multi-class problems
Machine Learning and Classification
Classification – assigning a decision class label to objects described by a set of attributes
[Diagram: a learning set S of labeled examples <x, y> is input to a learning algorithm LA, which induces a classifier C; given a new example <x, ?>, C predicts its label, producing <x, y>.]
Set of learning examples S = {<x_1, y_1>, <x_2, y_2>, …, <x_n, y_n>} for some unknown classification function f: y = f(x), where x_i = <x_i1, x_i2, …, x_im> is an example described by m attributes and y is a class label whose value is drawn from a discrete set of classes {Y_1, …, Y_K}.
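To make this setup concrete, here is a minimal sketch of the learn-then-classify loop in Python with scikit-learn; the synthetic dataset, the attribute and class counts, and the choice of a decision tree as the learning algorithm LA are illustrative assumptions, not part of the lecture.

```python
# Minimal sketch: learning set S -> learning algorithm LA -> classifier C.
# The synthetic data and the decision-tree learner are illustrative choices.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# S = {<x_1, y_1>, ..., <x_n, y_n>}: n examples, each described by m attributes.
X, y = make_classification(n_samples=500, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# LA: the learning algorithm; C: the induced classifier approximating f.
C = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Classification: for a new example <x, ?>, predict the label y = f(x).
print(C.predict(X_test[:5]))      # predicted labels for five unseen examples
print(C.score(X_test, y_test))    # accuracy on the held-out set
```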
Approaches to learn single classifiers
• Decision Trees
• Rule Approaches
• Logical statements (ILP)
• Bayesian Classifiers
• Neural Networks
• Discriminant Analysis
• Support Vector Machines
• k-nearest neighbor classifiers
• Logistic regression
• Genetic Classifiers
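Several of the approaches above are available off the shelf. The sketch below, an illustrative assumption rather than part of the lecture, trains a few of them on the same synthetic data to show that their accuracies differ, which is exactly the observation motivating the next slide.

```python
# Illustrative sketch: several single-classifier approaches on one dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

learners = {
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "k-NN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Logistic regression": LogisticRegression(max_iter=1000),
}
for name, clf in learners.items():
    # 5-fold cross-validated accuracy; no single learner wins everywhere.
    print(f"{name}: {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```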
Why should we integrate classifiers?
• Typical research: create and evaluate a single learning algorithm; compare the performance of several algorithms.
• Empirical observations and applications show that a given algorithm may outperform all others only on a specific subset of problems.
• A complex problem can be decomposed into multiple sub-problems that are easier to solve.
• There is growing research interest in combining a set of learning algorithms / classifiers into one system (see the sketch below).
„Multiple learning systems try to exploit the local different behavior of the base learners to enhance the accuracy of the overall learning system” - G. Valentini, F. Masulli
• There is no one algorithm achieving the best accuracy for all situations! [No free lunch!]
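As a first taste of such a multiple-classifier system, the sketch below (an illustrative assumption, not the lecture's own code) combines three base learners with different biases by majority voting, the simplest way to exploit their locally different behavior.

```python
# Illustrative sketch: a simple multiple-classifier system via majority voting.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Three base learners with different biases, combined by hard (majority) vote.
ensemble = VotingClassifier(
    estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                ("nb", GaussianNB()),
                ("knn", KNeighborsClassifier())],
    voting="hard",
)
print("ensemble:", cross_val_score(ensemble, X, y, cv=5).mean().round(3))
```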