LSTM


Deep learning

LONG SHORT TERM MEMORY NETWORKS (LSTMS)

LSTMs are a type of Recurrent Neural Network (RNN) that can learn and memorize long-term
dependencies: recalling past information over long periods is their default behavior. Because
they retain information over time and remember previous inputs, they are well suited to
time-series prediction. LSTMs have a chain-like structure in which four interacting layers
communicate in a unique way. Besides time-series prediction, LSTMs are typically used for
speech recognition, music composition, and pharmaceutical development.

How do LSTMs work? At each time step:
* First, they forget irrelevant parts of the previous state.
* Next, they selectively update the cell-state values.
* Finally, they output certain parts of the cell state.
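The three steps above can be sketched as one time step of an LSTM cell. Below is a minimal pure-Python sketch with hidden size 1; the parameter layout (a dict of per-gate weights) and the toy weight values are illustrative choices for this example, not a standard API.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, p):
    """One step of a scalar LSTM cell (hidden size 1).

    p maps each gate name in {"f", "i", "g", "o"} to a tuple
    (input weight, recurrent weight, bias). Names are illustrative.
    """
    f = sigmoid(p["f"][0] * x + p["f"][1] * h_prev + p["f"][2])    # forget gate: drop irrelevant parts of the old state
    i = sigmoid(p["i"][0] * x + p["i"][1] * h_prev + p["i"][2])    # input gate: decide which values to update
    g = math.tanh(p["g"][0] * x + p["g"][1] * h_prev + p["g"][2])  # candidate values for the cell state
    o = sigmoid(p["o"][0] * x + p["o"][1] * h_prev + p["o"][2])    # output gate: expose parts of the cell state
    c = f * c_prev + i * g       # selectively update the cell state
    h = o * math.tanh(c)         # output certain parts of the cell state
    return h, c

# Run a short toy sequence through the cell with arbitrary fixed weights.
params = {k: (0.5, 0.5, 0.0) for k in ("f", "i", "g", "o")}
h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.25]:
    h, c = lstm_step(x, h, c, params)
```

Because the output is a sigmoid times a tanh, the hidden state h is always bounded in (-1, 1), while the cell state c can grow over time, which is what lets the cell carry information across many steps.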

Machine learning

RANDOM FOREST

 Random forest is an ensemble classifier (a method that generates many classifiers and
aggregates their results). It consists of many decision trees and outputs the class that
is the mode of the classes output by the individual trees.
 The term comes from "random decision forests," first proposed by Tin Kam Ho of
Bell Labs in 1995.
 The method combines Breiman's "bagging" idea (randomly draw datasets with
replacement from the training data, each sample the same size as the original training
set) with random selection of features.
 It is very user friendly, as it has only two parameters: the number of variables and
the number of trees.
 ALGORITHM:
 Each tree is constructed using the following algorithm:
* Let the number of training cases be N, and the number of variables in the classifier
be M.
* The number m of input variables used to determine the decision at each node of the
tree is given; m should be much less than M.
* Choose the training set for this tree by drawing N times with replacement from all N
available training cases (i.e. take a bootstrap sample).
* At each node of the tree, randomly choose m variables on which to base the decision
at that node, and calculate the best split of the training set based on these m
variables.
* Each tree is fully grown and not pruned (as may be done when constructing a normal
tree classifier).
For prediction, a new sample is pushed down each tree and assigned the label of the
terminal node it ends up in. This procedure is iterated over all trees in the
ensemble, and the majority vote of all trees is reported as the random forest
prediction.
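The algorithm above can be sketched in a few dozen lines. This is a simplified illustration, not a full implementation: for brevity each "tree" is a one-level decision stump that splits one feature at its mean, rather than a fully grown, unpruned tree. The ensemble mechanics it demonstrates are the ones described above: a bootstrap sample per tree, a random subset of m variables at the split, and a majority vote at prediction time. All function names here are made up for this sketch.

```python
import random
from collections import Counter

def bootstrap_sample(data, labels, rng):
    """Draw N cases with replacement from the N training cases."""
    n = len(data)
    idx = [rng.randrange(n) for _ in range(n)]
    return [data[i] for i in idx], [labels[i] for i in idx]

def train_stump(data, labels, m, rng):
    """Stand-in for growing a tree: pick m random variables, split each at
    its mean, and keep the split with the best training accuracy."""
    feats = rng.sample(range(len(data[0])), m)
    best = None
    for f in feats:
        thr = sum(row[f] for row in data) / len(data)   # crude split point
        left = [y for row, y in zip(data, labels) if row[f] <= thr]
        right = [y for row, y in zip(data, labels) if row[f] > thr]
        if not left or not right:
            continue
        l_lab = Counter(left).most_common(1)[0][0]       # majority class on each side
        r_lab = Counter(right).most_common(1)[0][0]
        acc = (sum(y == l_lab for y in left) + sum(y == r_lab for y in right)) / len(data)
        if best is None or acc > best[0]:
            best = (acc, f, thr, l_lab, r_lab)
    return best

def predict_stump(stump, row):
    _, f, thr, l_lab, r_lab = stump
    return l_lab if row[f] <= thr else r_lab

def random_forest(data, labels, n_trees, m, seed=0):
    """Only two parameters matter: the number of trees and m variables per split."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        bd, bl = bootstrap_sample(data, labels, rng)    # bagging step
        stump = train_stump(bd, bl, m, rng)             # random feature selection step
        if stump:
            forest.append(stump)
    return forest

def predict(forest, row):
    """The predicted class is the mode of the individual trees' votes."""
    votes = Counter(predict_stump(s, row) for s in forest)
    return votes.most_common(1)[0][0]

# Toy, linearly separable data: both features run 0..19, class flips at 10.
data = [[i, i] for i in range(20)]
labels = [0] * 10 + [1] * 10
forest = random_forest(data, labels, n_trees=25, m=1, seed=0)
```

Note that m=1 < M=2 here, matching the requirement above that m be much less than M; each stump sees only one randomly chosen variable, and the vote across 25 bootstrapped stumps recovers the right decision boundary.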
