Unit - 2
w⋅x + b = 0
where w is the weight vector, x is the feature vector, and b is the bias.
The objective is to maximize the margin defined as:
Margin = 2 / ∥w∥
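To make the margin concrete, here is a minimal Python sketch (the toy data and the use of scikit-learn's SVC are illustration choices, not part of these notes) that fits a linear SVM and computes 2/∥w∥ from the learned weights:

```python
# A minimal sketch: fit a linear SVM on assumed toy data and evaluate the
# margin 2 / ||w|| from the learned weight vector of the hyperplane w.x + b = 0.
import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1, 1, -1, -1])

clf = SVC(kernel="linear", C=1e6)  # large C approximates a hard margin
clf.fit(X, y)

w = clf.coef_[0]          # weight vector w
b = clf.intercept_[0]     # bias b
print("margin =", 2 / np.linalg.norm(w))
```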
3. Image Classification
In image processing, SVMs improve accuracy in classifying images
compared to traditional methods. They are used for object detection
and image retrieval, significantly enhancing search results in visual
databases.
4. Bioinformatics
SVMs play a crucial role in biological data analysis, including protein
classification and cancer diagnosis. They help identify gene
expressions and classify patients based on genetic information,
aiding in personalized medicine.
5. Handwriting Recognition
This application involves recognizing handwritten characters and is
widely used in postal services and document digitization. SVMs
analyze character features to enable accurate transcription of
handwritten text.
6. Spam Detection
In natural language processing (NLP), SVMs are effective for filtering
spam emails by classifying messages based on their content,
improving spam filtering in email systems such as Gmail.
7. Financial Forecasting
SVMs are applied in the financial sector for stock market analysis
and fraud detection. Their ability to handle high-dimensional data
makes them suitable for predicting market trends and identifying
unusual patterns indicative of fraudulent activities.
8. Medical Diagnosis
Beyond cancer detection, SVMs assist in diagnosing various
diseases by analyzing complex medical datasets, helping healthcare
professionals make informed decisions based on predictive analytics.
9. Remote Homology Detection
In computational biology, SVMs are used to detect similarities
between protein structures, which is essential for understanding
biological functions and evolutionary relationships.
Solving this problem directly is difficult, so we can convert it into another
form that is easier to solve. Look at the inside of the previous equation, the
part inside the curly braces. Optimizing a product can be nasty, so instead we
hold one part fixed and maximize the other. If we set label*(wᵀx+b) to be 1 for
the support vectors, then we can maximize ∥w∥⁻¹ (equivalently, minimize ∥w∥)
and we'll have a solution. Not all values of label*(wᵀx+b) will be equal to 1;
this holds only for the points closest to the separating hyperplane. For points
farther away from the hyperplane, this product will be larger.
The optimization problem we now have is a constrained optimization problem,
because we must find the best values subject to some constraints. Here, our
constraint is that label*(wᵀx+b) must be 1.0 or greater. There's a well-known
method for solving these types of constrained optimization problems, using
something called Lagrange multipliers. Using Lagrange multipliers, we can
write the problem in terms of our constraints. Because our constraints are our
data points, we can write the values of our hyperplane in terms of our data
points. The optimization function turns out to be

max over α:  Σᵢ αᵢ − (1/2) Σᵢ Σⱼ labelᵢ · labelⱼ · αᵢ · αⱼ · ⟨xᵢ, xⱼ⟩

subject to the constraints αᵢ ≥ 0 and Σᵢ αᵢ · labelᵢ = 0.
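To see the dual form in action, here is a minimal Python sketch (the toy data and the use of a generic scipy optimizer are assumptions for illustration; practical SVM solvers use specialized methods such as SMO) that maximizes the dual objective under the two constraints and then recovers w and b from the multipliers:

```python
# A minimal sketch: solve the SVM dual on assumed toy data with a generic
# constrained optimizer, then recover the hyperplane from the alphas.
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data, labels in {+1, -1}.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

K = (X @ X.T) * np.outer(y, y)  # Gram matrix scaled by label products

def neg_dual(alpha):
    # Negative dual objective: sum(alpha) - 1/2 * alpha^T K alpha
    return -(alpha.sum() - 0.5 * alpha @ K @ alpha)

cons = {"type": "eq", "fun": lambda a: a @ y}  # sum_i alpha_i * label_i = 0
bnds = [(0, None)] * len(y)                    # alpha_i >= 0
res = minimize(neg_dual, x0=np.zeros(len(y)), bounds=bnds, constraints=cons)

alpha = res.x
w = (alpha * y) @ X             # w = sum_i alpha_i * label_i * x_i
sv = alpha > 1e-6               # support vectors have nonzero alpha
b = np.mean(y[sv] - X[sv] @ w)  # b from the support-vector constraints
print("w =", w, "b =", b)
```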
Instance-based learning
Machine Learning systems categorized as instance-based learning are systems
that learn the training examples by heart and then generalize to new instances
based on some similarity measure. It is called instance-based because it builds
its hypotheses from the training instances. It is also known as memory-based
learning or lazy learning (because processing is delayed until a new instance
must be classified). The time complexity of this algorithm depends on the size
of the training data. Each time a new query is encountered, the previously
stored data is examined and a target function value is assigned to the new
instance.
The worst-case time complexity of this algorithm is O(n), where n is the number
of training instances. For example, if we were to create a spam filter with an
instance-based learning algorithm, instead of flagging only emails already
marked as spam, our spam filter would also flag emails that are very similar to
them. This requires a measure of resemblance between two emails. A similarity
measure between two emails could be having the same sender, the repetitive use
of the same keywords, or something else.
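As one possible keyword-based measure of this kind, here is a minimal sketch (the email_similarity helper and the sample strings are hypothetical, not from these notes) using Jaccard overlap of word sets:

```python
# A minimal sketch of a similarity measure between two emails:
# Jaccard overlap of their word sets.
def email_similarity(text_a: str, text_b: str) -> float:
    words_a = set(text_a.lower().split())
    words_b = set(text_b.lower().split())
    if not words_a or not words_b:
        return 0.0
    # Shared words divided by total distinct words across both emails.
    return len(words_a & words_b) / len(words_a | words_b)

known_spam = "win a free prize claim your free reward now"
incoming = "claim your free prize now"
print(email_similarity(known_spam, incoming))  # high overlap -> likely spam
```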
Advantages:
1. Instead of estimating the target function over the entire instance space,
local approximations can be made to the target function.
2. This algorithm can adapt easily to new data, which can be collected as we
go.
Disadvantages:
1. Classification costs are high.
2. A large amount of memory is required to store the data, and each query
involves building a local model from scratch.
Some of the instance-based learning algorithms are:
1. K Nearest Neighbor (KNN)
2. Self-Organizing Map (SOM)
3. Learning Vector Quantization (LVQ)
4. Locally Weighted Learning (LWL)
5. Case-Based Reasoning
k-Nearest Neighbor Learning
o K-Nearest Neighbour is one of the simplest Machine Learning algorithms,
based on the Supervised Learning technique.
o The K-NN algorithm assumes similarity between the new case/data and the
available cases and puts the new case into the category that is most similar
to the available categories.
o The K-NN algorithm stores all the available data and classifies a new data
point based on similarity. This means that when new data appears, it can be
easily classified into a well-suited category using the K-NN algorithm.
o The K-NN algorithm can be used for Regression as well as for Classification,
but it is mostly used for Classification problems.
o K-NN is a non-parametric algorithm, which means it does not make any
assumption about the underlying data.
o It is also called a lazy learner algorithm because it does not learn from
the training set immediately; instead, it stores the dataset and performs an
action on it at the time of classification.
o In the training phase, the KNN algorithm just stores the dataset; when it
gets new data, it classifies that data into the category most similar to the
new data (see the sketch after this list).
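A minimal sketch of this behavior, using scikit-learn's KNeighborsClassifier on assumed toy data (both the library choice and the data are illustrations, not from these notes):

```python
# A minimal sketch: K-NN stores the training set and classifies a new point
# by the majority category among its k nearest neighbors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X_train = np.array([[1, 2], [2, 3], [3, 1], [6, 5], [7, 7], [8, 6]])
y_train = np.array(["A", "A", "A", "B", "B", "B"])  # two categories

knn = KNeighborsClassifier(n_neighbors=5)  # k = 5, Euclidean distance by default
knn.fit(X_train, y_train)                  # "training" just stores the data

new_point = np.array([[5, 5]])
print(knn.predict(new_point))              # majority category among 5 neighbors
```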
Suppose we have a new data point and we need to put it in the required
category. Consider the below image:
o Firstly, we will choose the number of neighbors; here we choose k = 5.
o Next, we will calculate the Euclidean distance between the data points.
The Euclidean distance is the distance between two points, which we have
already studied in geometry. For two points (x1, y1) and (x2, y2) it can be
calculated as:
d = √((x2 − x1)² + (y2 − y1)²)
o By calculating the Euclidean distance, we obtained the nearest neighbors:
three nearest neighbors in category A and two nearest neighbors in category B.
Consider the below image:
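The same steps can also be written out directly. Here is a from-scratch sketch on assumed toy data (chosen so that, as in the example above, three of the five nearest neighbors fall in category A and two in category B):

```python
# A from-scratch sketch of the steps above: compute Euclidean distances to
# every stored point, take the k = 5 nearest, and vote by category.
import numpy as np
from collections import Counter

X_train = np.array([[1, 2], [2, 3], [3, 1], [6, 5], [7, 7], [8, 6]])
y_train = np.array(["A", "A", "A", "B", "B", "B"])

def knn_predict(x_new, k=5):
    distances = np.linalg.norm(X_train - x_new, axis=1)  # Euclidean distances
    nearest = np.argsort(distances)[:k]                  # indices of k closest
    votes = Counter(y_train[nearest])                    # count each category
    return votes.most_common(1)[0][0]                    # majority category

print(knn_predict(np.array([4, 3])))  # 3 neighbors in A, 2 in B -> "A"
```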