Random Forest
Random Forest is an ensemble learning method that trains a collection of decision trees on bootstrap samples from the training set, in parallel. The results of these base learners are then combined, by majority vote for classification or by averaging for regression, to produce the final prediction. (This contrasts with boosting methods, which fit base learners sequentially so that each base model depends on the previously fitted ones.) Random Forest works in two phases: the first is to create the forest by combining N decision trees, and the second is to make predictions with each tree and aggregate their results.
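The two phases can be sketched with scikit-learn's `RandomForestClassifier` (a minimal illustration on synthetic data, assuming scikit-learn is available; the dataset and parameter values are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Arbitrary synthetic binary-classification data for illustration.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

# Phase 1: fit() grows N (= n_estimators) decision trees,
# each on a bootstrap sample of the training set.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)

# Phase 2: predict() collects each tree's output and
# returns the aggregated (majority) class per sample.
print(forest.predict(X[:5]))
```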
The working process can be explained in the steps below:
Step-1: Select K random data points from the training set.
Step-2: Build the decision trees associated with the selected data points
(subsets).
Step-3: Choose the number N of decision trees that you want to build.
Step-4: Repeat Step-1 and Step-2 until N trees have been built.
Step-5: For new data points, find the predictions of each decision tree, and
assign the new data points to the category that wins the majority of votes.
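The steps above can be sketched from scratch (an illustrative sketch, not a library API; the function and variable names are my own, and only the individual trees come from scikit-learn):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

N_TREES = 25  # Step 3: the number N of decision trees to build

# Arbitrary synthetic data for illustration.
X, y = make_classification(n_samples=300, n_features=6, random_state=1)
rng = np.random.default_rng(1)

trees = []
for _ in range(N_TREES):                       # Step 4: repeat Steps 1-2 N times
    idx = rng.integers(0, len(X), size=len(X))  # Step 1: random data points (bootstrap subset)
    tree = DecisionTreeClassifier(max_features="sqrt", random_state=0)
    tree.fit(X[idx], y[idx])                    # Step 2: build a tree on the subset
    trees.append(tree)

def forest_predict(X_new):
    # Step 5: gather each tree's prediction and take the majority vote per sample.
    votes = np.stack([t.predict(X_new) for t in trees])  # shape (N_TREES, n_samples)
    return np.apply_along_axis(
        lambda v: np.bincount(v.astype(int)).argmax(), 0, votes
    )

print(forest_predict(X[:5]))
```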
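As a concrete illustration of the majority-vote step (assuming scikit-learn; note that scikit-learn's forest actually averages class probabilities, which for a clear-cut sample coincides with the hard majority vote shown here):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=51, random_state=0).fit(X, y)

sample = X[:1]  # one flower to classify

# Ask every individual tree for its vote, then count the votes.
votes = [int(tree.predict(sample)[0]) for tree in forest.estimators_]
majority = np.bincount(votes).argmax()

print(majority, forest.predict(sample)[0])
```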
Disadvantages:
• Slower for real-time predictions.
• Complex and difficult to interpret.
• Requires more memory and computation.
Applications of Random Forest
• Medical Diagnosis (e.g., disease prediction).
• Fraud Detection in banking.
• Customer Segmentation in marketing.
• Product Recommendation Systems.
• Credit Scoring and Risk Analysis.