Random Forest Classifier: Alexandru-Florin Gavril

Random forest classifiers are an ensemble learning method that constructs a collection of decision trees and uses a voting system to improve classification accuracy. It works by splitting a training set into subsets, constructing decision trees on each subset using randomly selected features each time, then having the trees vote on the correct class for new samples. This combines the ideas of random selection of features and bagging to create a model with low variance and reduced error compared to a single decision tree.


RANDOM FOREST CLASSIFIER
ALEXANDRU-FLORIN GAVRIL
INTRODUCTION

Random forests are an ensemble learning method for classification.

They construct a large collection of decorrelated decision trees and use a voting system on the trees' outputs.

Created by Tin Kam Ho, then extended by Leo Breiman and Adele Cutler, who combined Ho's original random feature selection idea with Breiman's bagging idea.
Rationale

Combining learning models increases classification accuracy.

Averaging many noisy but approximately unbiased models produces a single model with low variance.
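The variance-reduction rationale can be illustrated numerically: averaging many independent, unbiased but noisy estimators shrinks the variance of the combined estimate roughly in proportion to the ensemble size. A minimal sketch (the constants and the helper `noisy_model` are illustrative, not from the slides):

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 1.0   # quantity each noisy model tries to estimate
NOISE = 0.5        # standard deviation of each model's error
N_MODELS = 25      # size of the ensemble
N_TRIALS = 2000    # repetitions used to estimate the variances

def noisy_model() -> float:
    """One unbiased but noisy model: the truth plus zero-mean Gaussian noise."""
    return TRUE_VALUE + random.gauss(0.0, NOISE)

# Compare the spread of a single model with the spread of an ensemble average.
single = [noisy_model() for _ in range(N_TRIALS)]
ensemble = [statistics.fmean(noisy_model() for _ in range(N_MODELS))
            for _ in range(N_TRIALS)]

var_single = statistics.pvariance(single)
var_ensemble = statistics.pvariance(ensemble)

print(f"single-model variance: {var_single:.4f}")
print(f"ensemble variance:     {var_ensemble:.4f}")
```

With fully independent models the ensemble variance is about `NOISE**2 / N_MODELS`; real random-forest trees are only partially decorrelated, which is why the random feature selection matters.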
How it works


Let x_1, x_2, ..., x_n be the training set, with x_i = [x_i1, x_i2, ..., x_ik] a sample with k features, each sample having a response (class) y_i.

Split the training set into M subsets of training samples.

Construct a decision tree on each of the M subsets, choosing a random attribute at every split.

Each decision tree classifies a new sample and provides a response; the responses enter a voting process, and the majority class is chosen.
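The steps above can be sketched end to end with depth-1 "stump" trees and a synthetic data set, kept deliberately small: bootstrap subsets stand in for the M subsets, each tree splits on one randomly chosen feature, and a majority vote picks the class. All names, constants, and the toy labelling rule are illustrative, not from the slides:

```python
import random
from collections import Counter

random.seed(1)

# Toy training set x_1..x_n with k = 2 features; class 1 when the
# feature sum exceeds 1 (an illustrative rule chosen for this demo).
X = [[random.random(), random.random()] for _ in range(300)]
y = [1 if xi[0] + xi[1] > 1.0 else 0 for xi in X]

def train_stump(X, y):
    """Depth-1 decision tree on one randomly chosen feature.

    Scans a coarse threshold grid and keeps the split with the fewest
    misclassifications, mimicking 'choose a random attribute'."""
    feature = random.randrange(len(X[0]))
    best = None
    for t in (i / 20 for i in range(1, 20)):
        left = [yi for xi, yi in zip(X, y) if xi[feature] <= t]
        right = [yi for xi, yi in zip(X, y) if xi[feature] > t]
        if not left or not right:
            continue
        l_lab = Counter(left).most_common(1)[0][0]
        r_lab = Counter(right).most_common(1)[0][0]
        errors = sum(yi != l_lab for yi in left) + sum(yi != r_lab for yi in right)
        if best is None or errors < best[0]:
            best = (errors, feature, t, l_lab, r_lab)
    _, f, t, l_lab, r_lab = best
    return lambda xi: l_lab if xi[f] <= t else r_lab

def train_forest(X, y, n_trees=25):
    """Bagging: each tree is trained on a bootstrap subset of the data."""
    forest = []
    for _ in range(n_trees):
        idx = [random.randrange(len(X)) for _ in range(len(X))]
        forest.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def predict(forest, xi):
    """Each tree classifies the sample; the majority vote wins."""
    votes = Counter(tree(xi) for tree in forest)
    return votes.most_common(1)[0][0]

forest = train_forest(X, y)
accuracy = sum(predict(forest, xi) == yi for xi, yi in zip(X, y)) / len(X)
print(f"training accuracy of the voted ensemble: {accuracy:.2f}")
```

A production forest would grow deep trees and sample a random feature subset at every split rather than one feature per tree, but the bootstrap-train-vote structure is the same.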