Unit 5 Part 2
Word2vec
• Word2vec is a technique in Natural Language Processing (NLP) for obtaining
vector representations of words. These vectors capture information about a
word's meaning based on the surrounding words.
• The distance between vectors indicates the level of semantic similarity
between the words: for example, the vectors for walked and ran are nearby, as
are those for but and however, and for Berlin and Germany.
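A minimal sketch of how this "nearby" relation is usually measured, via cosine similarity between vectors (the 3-dimensional vectors below are invented for illustration, not real word2vec embeddings):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings" (illustrative only).
walked = np.array([0.9, 0.1, 0.0])
ran    = np.array([0.8, 0.2, 0.1])
berlin = np.array([0.0, 0.9, 0.4])

print(cosine_similarity(walked, ran))     # high: similar meaning
print(cosine_similarity(walked, berlin))  # low: unrelated words
```

Real word2vec embeddings typically have 100 to 300 dimensions, but the comparison works the same way.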
Word2vec
• Word2Vec is a group of machine learning architectures that find words
with similar contexts and group them together. Here 'context' refers to the
group of words that surrounds a given word in a sentence or a document.
• Word2Vec comes in two model architectures, the continuous bag-of-words
(CBOW) model and the continuous skip-gram model. Both aim to reduce the
dimensionality of the data and create dense word vectors, but they approach
the problem differently.
Word2vec
Continuous bag-of-words model
• The CBOW model predicts the target word from its surrounding context
words. In other words, it uses the surrounding words to predict the word in
the middle. This model takes all the context words, aggregates them, and uses
the resultant vector to predict the target word.
• For example, in the sentence “The cat sat on the mat,” if we use “cat” as our
target word, the CBOW model will take “The”, “sat”, “on”, “the”, “mat” as
context and predict the word “cat”. This model is beneficial when we have a
small dataset, and it’s faster than the Skip-Gram model.
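The context/target split described above can be sketched as a small pair-generating function (a simplified illustration; here a symmetric context window is assumed rather than the whole sentence):

```python
def cbow_pairs(tokens, window=2):
    # For each position, yield (context words, target word).
    pairs = []
    for i, target in enumerate(tokens):
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        pairs.append((context, target))
    return pairs

sentence = "The cat sat on the mat".split()
for context, target in cbow_pairs(sentence):
    print(context, "->", target)
# e.g. with window=2, the pair for "cat" is (['The', 'sat', 'on'], 'cat')
```

The CBOW model then aggregates (typically averages) the embeddings of the context words and uses the result to predict the target.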
Continuous skip-gram model:
• The Skip-Gram model predicts the surrounding context words from a target
word. In other words, it uses a single word to predict its surrounding context.
For example, if we again take the sentence "The cat sat on the mat," the
Skip-Gram model will take "cat" as the input and predict "The", "sat", "on",
"the", "mat".
• This model works well with a large dataset and with rare words. It is
computationally more expensive than the CBOW model because it must predict
multiple context words. However, it offers several advantages, including a
better ability to capture semantic relationships, better handling of rare
words, and greater flexibility across linguistic contexts.
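A minimal, self-contained sketch of skip-gram training with a full softmax over a toy vocabulary (real implementations such as gensim's Word2Vec use negative sampling or hierarchical softmax for efficiency; sizes and learning rate here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = "the cat sat on the mat".split()
vocab = sorted(set(tokens))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8                  # vocabulary size, embedding dimension

W_in = rng.normal(scale=0.1, size=(V, D))   # input (target) embeddings
W_out = rng.normal(scale=0.1, size=(V, D))  # output (context) embeddings

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(target, context, lr=0.1):
    # One skip-gram update: the target word predicts one context word.
    h = W_in[idx[target]].copy()          # hidden layer = target embedding
    p = softmax(W_out @ h)                # predicted distribution over vocab
    loss = -np.log(p[idx[context]])
    grad = p.copy()
    grad[idx[context]] -= 1.0             # gradient of loss w.r.t. logits
    W_in[idx[target]] -= lr * (W_out.T @ grad)
    W_out[:] -= lr * np.outer(grad, h)
    return loss

# (target, context) pairs with a window of 1.
pairs = [(tokens[i], tokens[j]) for i in range(len(tokens))
         for j in (i - 1, i + 1) if 0 <= j < len(tokens)]
losses = [sum(train_step(t, c) for t, c in pairs) for _ in range(50)]
print(losses[0], "->", losses[-1])        # total loss should decrease
```

After training, the rows of `W_in` are the word vectors; words appearing in similar contexts end up with similar rows.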
Applications of Deep Learning Networks:
Joint Detection
• Deep Learning is widely used in biomedicine and bioinformatics. Deep
Learning algorithms can detect fractures or anatomical changes in human
bones and joints, enabling early prediction of diseases such as arthritis,
which in turn supports earlier treatment.
• Knee osteoarthritis (OA) is a very common joint disease that affects many
people, especially those over 60. The severity of pain caused by knee OA is
the most important predictor of disability. The burden of osteoarthritis on
health care and public health systems is still increasing.
Joint Detection
• Thermography captures an image of the heat intensity emitted from a
particular region of the body. The colours in a thermograph vary with the
patient's pressure points.
• Red regions denote higher-pressure locations and yellow regions denote
lower-pressure locations. From the thermogram we can therefore identify the
effects of joint or bone wear, tear, or damage at a particular spot.
Joint Detection
• Ordinary neural networks fail here because of errors in the image
segmentation and feature extraction stages. To avoid this, we can build a
convolution-based model.
• The convolution filter is moved over the image. The stride value
considered for this case study is 1. Max pooling is used, and the fully
connected layer ends in a softmax classifier.
Joint Detection
Steps
• Provide the input image to the convolution layer.
• Choose parameters, apply filters with strides, and add padding if required.
Perform convolution on the image and apply ReLU activation to the resulting
matrix.
• Perform pooling to reduce the dimensionality.
• Add as many convolutional layers as needed.
• Flatten the output and feed it into a fully connected layer (FC layer).
• Output the class using an activation function (softmax with a suitable cost
function) to classify the image.
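The steps above can be sketched end-to-end in plain NumPy (a toy forward pass with one random filter, stride 1, 2x2 max pooling, and a softmax output; the weights are random, so this illustrates only the data flow, not a trained classifier):

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(img, kernel, stride=1):
    # Valid convolution (cross-correlation) with the given stride.
    kh, kw = kernel.shape
    oh = (img.shape[0] - kh) // stride + 1
    ow = (img.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = img[i * stride:i * stride + kh,
                        j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

def max_pool(x, size=2):
    oh, ow = x.shape[0] // size, x.shape[1] // size
    return x[:oh * size, :ow * size].reshape(oh, size, ow, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

image = rng.random((8, 8))                      # input image
feat = conv2d(image, rng.normal(size=(3, 3)))   # convolution, stride 1
feat = np.maximum(feat, 0)                      # ReLU activation
feat = max_pool(feat)                           # pooling: 6x6 -> 3x3
flat = feat.ravel()                             # flatten
W = rng.normal(size=(2, flat.size))             # FC layer for 2 classes
probs = softmax(W @ flat)                       # class probabilities
print(probs)                                    # sums to 1
```

A real model would stack several such conv/pool layers and learn the filter and FC weights by backpropagation.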
Other Applications:
• Similarly, for other applications such as facial recognition and scene
matching, appropriate deep-learning-based architectures such as AlexNet,
VGG, Inception, and ResNet, or recurrent models such as LSTMs and RNNs,
can be used.
WaveNet
• WaveNet is a deep generative model of raw audio waveforms. WaveNets are
able to generate speech which mimics any human voice and which sounds more
natural than the best existing text-to-speech systems, reducing the gap with
human performance by over 50%.
• Allowing people to converse with machines is a long-standing dream of
human-computer interaction. The ability of computers to understand natural
speech has been revolutionised in the last few years by the application of
deep neural networks. However, generating speech with computers, a process
usually referred to as speech synthesis or text-to-speech (TTS), has remained
much harder.
WaveNet
• Existing parametric models typically generate audio signals by passing their
outputs through signal processing algorithms known as vocoders.
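Unlike vocoder-based parametric models, WaveNet models the raw waveform sample by sample; its core building block (from the WaveNet paper, not detailed in the text above) is the dilated causal convolution, sketched here in NumPy:

```python
import numpy as np

def causal_dilated_conv(signal, kernel, dilation=1):
    # Each output sample depends only on the current and PAST samples,
    # spaced `dilation` steps apart -- never on future samples.
    k = len(kernel)
    pad = (k - 1) * dilation
    padded = np.concatenate([np.zeros(pad), signal])  # left-pad with zeros
    return np.array([
        sum(kernel[j] * padded[t + pad - j * dilation] for j in range(k))
        for t in range(len(signal))
    ])

# Stacking layers with doubling dilations (1, 2, 4, ...) grows the
# receptive field exponentially with depth, which is what lets WaveNet
# model long-range structure in raw audio.
x = np.arange(8, dtype=float)
y = causal_dilated_conv(x, np.array([1.0, 1.0]), dilation=2)
print(y)  # y[t] = x[t] + x[t-2], with zeros before the start
```

Causality matters because WaveNet is autoregressive: each audio sample is predicted from previously generated samples only.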