
CHAPTER 06

Why PageRank?

PageRank is a value assigned to a web page as a measure of its popularity or importance, used to determine the order in which search engine results are presented. PageRank is important because it helps Google determine the value of a page relative to other similar pages on the web. Among other factors, pages with higher PageRank have a higher chance of ranking well.

PageRank was created by Google's founders Larry Page and Sergey Brin to rank web pages, treating the internet as a directed graph. The goal is to identify the most central or interesting node within this graph, based on the intuition that a node is important if it's connected to other important nodes.

Introduction

PageRank is a topic much discussed by Search Engine Optimisation (SEO) experts. At the heart of PageRank is a mathematical formula that seems scary to look at but is actually fairly simple to understand.

Despite this, many people seem to get it wrong! In particular “Chris Ridings of
www.searchenginesystems.net” has written a paper entitled “PageRank Explained:
Everything you’ve always wanted to know about PageRank”, pointed to by many
people, that contains a fundamental mistake early on in the explanation!
Unfortunately this means some of the recommendations in the paper are not quite
accurate.

By showing code to correctly calculate real PageRank I hope to achieve several
things in this response:

1. Clearly explain how PageRank is calculated.


2. Go through every example in Chris’ paper, and add some more of my own,
showing the correct PageRank for each diagram. By showing the code used
to calculate each diagram I’ve opened myself up to peer review - mostly in
an effort to make sure the examples are correct, but also because the code
can help explain the PageRank calculations.
3. Describe some principles and observations on website design based on these
correctly calculated examples.

Any good web designer should take the time to fully understand how PageRank
really works - if you don’t then your site’s layout could be seriously hurting your
Google listings!

[Note: I have nothing in particular against Chris. If I find any other papers on the
subject I’ll try to comment evenly]

How is PageRank Used?

PageRank is one of the methods Google uses to determine a page's relevance or importance. It is only one part of the story when it comes to the Google listing, but the other aspects are discussed elsewhere (and are ever changing) and PageRank is interesting enough to deserve a paper of its own.

PageRank is also displayed on the toolbar of your browser if you've installed the Google toolbar (http://toolbar.google.com/). But the Toolbar PageRank only goes from 0 to 10 and seems to be something like a logarithmic scale:

Toolbar PageRank (log base 10)    Real PageRank
0                                 0 - 10
1                                 10 - 100
2                                 100 - 1,000
3                                 1,000 - 10,000
4                                 10,000 - 100,000
and so on...

We can’t know the exact details of the scale because, as we’ll see later, the
maximum PR of all pages on the web changes every month when Google does its
re-indexing! If we presume the scale is logarithmic (although there is only
anecdotal evidence for this at the time of writing) then Google could simply give
the highest actual PR page a toolbar PR of 10 and scale the rest appropriately.
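
If the scale really is logarithmic, a rough sketch of the mapping might look like the Python below. The choice of a pure log-base-10 scale and the max_pr value are assumptions on my part, not anything Google has published:

import math

def toolbar_pr(real_pr, max_pr):
    # Map a real PageRank onto the 0-10 toolbar scale, assuming the page with
    # the highest real PR in the index gets a toolbar PR of 10.
    if real_pr <= 1:
        return 0
    return min(10, int(round(10 * math.log10(real_pr) / math.log10(max_pr))))

print(toolbar_pr(50_000, max_pr=1_000_000_000))   # roughly 5 on the toolbar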

Also the toolbar sometimes guesses! The toolbar often shows me a Toolbar PR for
pages I've only just uploaded that cannot possibly be in the index yet!
What seems to be happening is that the toolbar looks at the URL of the page the browser is displaying and strips off everything down to the last “/” (i.e. it goes to the “parent” page in URL terms). If Google has a Toolbar PR for that parent then it subtracts 1 and shows that as the Toolbar PR for this page. If there's no PR for the parent it goes to the parent's parent's page, but subtracting 2, and so on all the way up to the root of your site. If it can't find a Toolbar PR to display in this way, that is if it doesn't find a page with a real calculated PR, then the bar is greyed out.

Note that if the Toolbar is guessing in this way, the Actual PR of the page is 0 -
though its PR will be calculated shortly after the Google spider first sees it.
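
To make that guessing behaviour concrete, here is a purely speculative Python sketch of it. The lookup_toolbar_pr function is hypothetical - it stands in for "Google has a Toolbar PR for this URL" and returns None when it doesn't:

from urllib.parse import urlsplit, urlunsplit

def guess_toolbar_pr(url, lookup_toolbar_pr):
    scheme, netloc, path, _, _ = urlsplit(url)
    penalty = 0
    while True:
        pr = lookup_toolbar_pr(urlunsplit((scheme, netloc, path or "/", "", "")))
        if pr is not None:
            return max(pr - penalty, 0)   # subtract 1 for each level walked up
        if path in ("", "/"):
            return None                   # nothing found up to the root: grey the bar out
        # strip everything after the last "/" to get the "parent" page
        path = path.rstrip("/")
        path = path[: path.rfind("/") + 1]
        penalty += 1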

PageRank says nothing about the content or size of a page, the language it’s
written in, or the text used in the anchor of a link!

Definitions

I’ve started to use some technical terms and shorthand in this paper. Now’s as good
a time as any to define all the terms I’ll use:

PR:         Shorthand for PageRank: the actual, real page rank for each page as calculated by Google. As we'll see later this can range from 0.15 to billions.
Toolbar PR: The PageRank displayed in the Google toolbar in your browser. This ranges from 0 to 10.
Backlink:   If page A links out to page B, then page B is said to have a “backlink” from page A.

That’s enough of that, let’s get back to the meat…

So what is PageRank?

In short PageRank is a “vote”, by all the other pages on the Web, about how
important a page is. A link to a page counts as a vote of support. If there’s no link
there’s no support (but it’s an abstention from voting rather than a vote against the
page).

Quoting from the original Google paper, PageRank is defined like this:

We assume page A has pages T1...Tn which point to it (i.e., are citations).
The parameter d is a damping factor which can be set between 0 and 1. We
usually set d to 0.85. There are more details about d in the next section.
Also C(A) is defined as the number of links going out of page A. The
PageRank of a page A is given as follows:

PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))

Note that the PageRanks form a probability distribution over web pages, so
the sum of all web pages' PageRanks will be one.

PageRank or PR(A) can be calculated using a simple iterative algorithm,
and corresponds to the principal eigenvector of the normalized link matrix
of the web.

but that's not too helpful so let's break it down into sections.

1. PR(Tn) - Each page has a notion of its own self-importance. That's “PR(T1)” for the first page in the web all the way up to “PR(Tn)” for the last page.
2. C(Tn) - Each page spreads its vote out evenly amongst all of its outgoing links. The count, or number, of outgoing links for page 1 is “C(T1)”, “C(Tn)” for page n, and so on for all pages.
3. PR(Tn)/C(Tn) - so if our page (page A) has a backlink from page “n” the share of the vote page A will get is “PR(Tn)/C(Tn)”.
4. d(... - All these fractions of votes are added together but, to stop the other pages having too much influence, this total vote is “damped down” by multiplying it by 0.85 (the factor “d”).
5. (1 - d) - The (1 - d) bit at the beginning is a bit of probability math magic so the “sum of all web pages' PageRanks will be one”: it adds in the bit lost by the d(.... It also means that if a page has no links to it (no backlinks) it will still get a small PR of 0.15 (i.e. 1 - 0.85). (Aside: the Google paper says “the sum of all pages” but they mean “the normalised sum” - otherwise known as “the average” to you and me.)
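
To make the formula concrete, here is a tiny Python sketch (with made-up numbers) of a single application of it: given the PR and outgoing-link count of each page T1...Tn that links to page A, it returns PR(A):

def pagerank_of_page(backlinks, d=0.85):
    # backlinks is a list of (PR(Tn), C(Tn)) pairs for the pages linking to A
    return (1 - d) + d * sum(pr / count for pr, count in backlinks)

# e.g. backlinks from a PR 4 page with 10 outgoing links and a PR 2 page
# with 4 outgoing links (illustrative values only):
print(pagerank_of_page([(4, 10), (2, 4)]))   # 0.15 + 0.85 * (0.4 + 0.5) = 0.915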

How is PageRank Calculated?

This is where it gets tricky. The PR of each page depends on the PR of the pages
pointing to it. But we won’t know what PR those pages have until the pages
pointing to them have their PR calculated and so on… And when you consider that
page links can form circles it seems impossible to do this calculation!

But actually it’s not that bad. Remember this bit of the Google paper:

PageRank or PR(A) can be calculated using a simple iterative algorithm,
and corresponds to the principal eigenvector of the normalized link matrix
of the web.

What that means to us is that we can just go ahead and calculate a page’s
PR without knowing the final value of the PR of the other pages. That seems
strange but, basically, each time we run the calculation we’re getting a closer
estimate of the final value. So all we need to do is remember each value we calculate and repeat the calculations lots of times until the numbers stop changing much.

Let's take the simplest example network: two pages, each pointing to the other:

Each page has one outgoing link (the outgoing count is 1, i.e. C(A) = 1 and C(B) =
1).

Guess 1

We don’t know what their PR should be to begin with, so let’s take a guess at 1.0
and do some calculations:

d= 0.85
PR(A)= (1 – d) + d(PR(B)/1)
PR(B) = (1 – d) + d(PR(A)/1)

i.e.

PR(A) = 0.15 + 0.85 * 1 = 1
PR(B) = 0.15 + 0.85 * 1 = 1

Hmm, the numbers aren’t changing at all! So it looks like we started out with a
lucky guess!!!

Guess 2

No, that’s too easy, maybe I got it wrong (and it wouldn’t be the first time). Ok,
let’s start the guess at 0 instead and re-calculate:

PR(A) = 0.15 + 0.85 * 0 = 0.15
PR(B) = 0.15 + 0.85 * 0.15 = 0.2775   (NB. we've already calculated a “next best guess” at PR(A) so we use it here)

And again:

PR(A) = 0.15 + 0.85 * 0.2775 = 0.385875
PR(B) = 0.15 + 0.85 * 0.385875 = 0.47799375

And again

PR(A) = 0.15 + 0.85 * 0.47799375 = 0.5562946875
PR(B) = 0.15 + 0.85 * 0.5562946875 = 0.622850484375

and so on. The numbers just keep going up. But will the numbers stop increasing
when they get to 1.0? What if a calculation over-shoots and goes above 1.0?

Guess 3

Well let’s see. Let’s start the guess at 40 each and do a few cycles:

PR(A) = 40
PR(B) = 40

First calculation

PR(A) = 0.15 + 0.85 * 40 = 34.15
PR(B) = 0.15 + 0.85 * 34.15 = 29.1775

And again

PR(A) = 0.15 + 0.85 * 29.1775 = 24.950875
PR(B) = 0.15 + 0.85 * 24.950875 = 21.35824375

Yup, those numbers are heading down alright! It sure looks like the numbers will get to
1.0 and stop.

Here's the code used to calculate this example, starting the guess at 0.
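
(What follows is a minimal Python sketch of that calculation, my own version rather than the original program.)

d = 0.85
pr_a = pr_b = 0.0                        # starting guess of 0 for both pages
for i in range(40):
    pr_a = (1 - d) + d * (pr_b / 1)      # page B has one outgoing link, so C(B) = 1
    pr_b = (1 - d) + d * (pr_a / 1)      # use the freshly updated PR(A), as in the text
    print(f"iteration {i + 1}: PR(A) = {pr_a:.6f}, PR(B) = {pr_b:.6f}")

After a few dozen iterations both values settle at 1.0, which matches the principle below.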

 Principle: it doesn’t matter where you start your guess, once the PageRank
calculations have settled down, the “normalized probability distribution”
(the average PageRank for all pages) will be 1.0


Getting the answer quicker

How many times do we need to repeat the calculation for big networks? That’s a
difficult question; for a network as large as the World Wide Web it can be many
millions of iterations! The “damping factor” is quite subtle. If it’s too high then it
takes ages for the numbers to settle, if it’s too low then you get repeated over-
shoot, both above and below the average - the numbers just swing about the
average like a pendulum and never settle down.

Also choosing the order of calculations can help. The answer will always come out
the same no matter which order you choose, but some orders will get you there
quicker than others.

I'm sure there have been several Master's theses on how to make this calculation as
efficient as possible but, in the examples below, I've used very simple code for
clarity, and roughly 20 to 40 iterations were needed!
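
For what it's worth, here is a rough sketch of the same iteration for an arbitrary network - my own deliberately naive Python version, where "links" maps each page to the pages it links out to:

def pagerank(links, d=0.85, tolerance=1e-6, max_iterations=100):
    pr = {page: 1.0 for page in links}                 # starting guess (any value works)
    out_count = {page: len(targets) for page, targets in links.items()}
    for _ in range(max_iterations):
        new_pr = {}
        for page in links:
            backlinks = [p for p, targets in links.items() if page in targets]
            new_pr[page] = (1 - d) + d * sum(pr[p] / out_count[p] for p in backlinks)
        if all(abs(new_pr[p] - pr[p]) < tolerance for p in pr):
            return new_pr
        pr = new_pr
    return pr

print(pagerank({"A": ["B"], "B": ["A"]}))              # the two-page example above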

K-Nearest Neighbor

Introduction

K-nearest neighbors (KNN) is a type of supervised learning algorithm used for both regression and classification. KNN tries to predict the correct class for the test data by calculating the distance between the test data and all the training points, and then selecting the K points that are closest to the test data. The algorithm then estimates the probability of the test data belonging to each of the classes of the K training points, and the class with the highest probability is selected. In the case of regression, the predicted value is the mean of the K selected training points.

Let's look at the example below for a better understanding.

Suppose we have an image of a creature that looks similar to both a cat and a dog, and we want to know whether it is a cat or a dog. For this identification, we can use the KNN algorithm, as it works on a similarity measure. Our KNN model will find the features of the new data set that are most similar to the cat and dog images, and based on those most similar features it will put the image in either the cat or the dog category.

Why do we need a K-NN Algorithm?

Suppose there are two categories, i.e., Category A and Category B, and we have a new data point x1. Which of these categories will this data point fall into? To solve this type of problem, we need a K-NN algorithm. With the help of K-NN, we can easily identify the category or class of a particular data point.

How does K-NN work?

The K-NN working can be explained on the basis of the algorithm below:

 Step-1: Select the number K of neighbors.

 Step-2: Calculate the Euclidean distance from the new data point to each training point.

 Step-3: Take the K nearest neighbors as per the calculated Euclidean distance.

 Step-4: Among these K neighbors, count the number of data points in each category.

 Step-5: Assign the new data point to the category for which the number of neighbors is maximum.

 Step-6: Our model is ready.

Suppose we have a new data point and we need to put it in the required category.

 Firstly, we will choose the number of neighbors: we will choose k = 5.

 Next, we will calculate the Euclidean distance between the data points. The Euclidean distance is the distance between two points, which we have already studied in geometry. It can be calculated as d = √((x2 − x1)² + (y2 − y1)²).

 By calculating the Euclidean distance we get the nearest neighbors: three nearest neighbors in category A and two nearest neighbors in category B.

 As we can see, 3 of the 5 nearest neighbors are from category A, hence this new data point must belong to category A.

How to choose a K value?

The K value indicates the count of nearest neighbors. We have to compute the distance between the test point and every labeled training point. Computing these distances afresh for every prediction is computationally expensive, and this deferral of all work to prediction time is why KNN is called a lazy learning algorithm.

 For example, if we proceed with K=3 we may predict that the test input belongs to class B, and if we continue with K=7 we may predict that it belongs to class A.

 That shows how powerful an effect the K value has on KNN performance.

So how do we select the optimal K value?

 There are no pre-defined statistical methods to find the most favorable value of K.

 Initialize a random K value and start computing.

 Choosing a small value of K leads to unstable decision boundaries.

 A larger value of K is generally better for classification, as it leads to smoother decision boundaries.

 Plot the error rate against K for values in a defined range, then choose the K value with the minimum error rate.

You will get the idea of choosing the optimal K value by implementing the model.
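
As a sketch of that idea, the snippet below uses scikit-learn and matplotlib to plot the error rate for a range of K values so you can pick the one with the lowest error. X_train, y_train, X_test, and y_test are placeholders for your own split data:

import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier

def plot_error_rate(X_train, y_train, X_test, y_test, k_range=range(1, 31)):
    error_rates = []
    for k in k_range:
        model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
        error_rates.append(1 - model.score(X_test, y_test))   # misclassification rate
    plt.plot(list(k_range), error_rates, marker="o")
    plt.xlabel("K")
    plt.ylabel("Error rate")
    plt.title("Error rate vs. K")
    plt.show()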

Calculating distance:

The first step is to calculate the distance between the new point and each training point. There are various methods for calculating this distance, of which the most commonly known are Euclidean, Manhattan (for continuous variables) and Hamming distance (for categorical variables).

Euclidean Distance: Euclidean distance is calculated as the square root of the sum of the squared differences between a new point (x) and an existing point (y).

Manhattan Distance: This is the distance between real vectors, calculated as the sum of their absolute differences.

Hamming Distance: It is used for categorical variables. If the value (x) and the value (y) are the same, the distance D will be equal to 0. Otherwise D = 1.
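
To make these three measures concrete, here are simple illustrative Python implementations (the function names are my own):

import math

def euclidean(x, y):
    # square root of the sum of squared differences
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def manhattan(x, y):
    # sum of absolute differences
    return sum(abs(xi - yi) for xi, yi in zip(x, y))

def hamming(x, y):
    # count of positions where the categorical values differ
    return sum(0 if xi == yi else 1 for xi, yi in zip(x, y))

print(euclidean((5, 7), (4, 6)))   # 1.414..., the same value that appears in the tables further below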

What is the KNN Classification
Algorithm?
KNN (K-Nearest Neighbors) is a simple, non-parametric method for
classification. Given a set of labeled data points, the KNN
classification algorithm finds the k data points in the training set
that are closest to the point to be classified. Then, it assigns the
label that is most common among those k data points. Here, we
need to specify the number of nearest neighbors, k, which is a user-
specified parameter. The basic idea behind the KNN algorithm is
that similar data points will have similar labels.

KNN classification is a type of instance-based and non-parametric learning.

 It is termed instance-based because the model doesn't learn an explicit mapping from inputs to outputs. Instead, it memorizes the training instances and compares new, unseen instances to the ones it has seen before.

 We call the KNN classification algorithm non-parametric because it does not make any assumptions about the underlying distribution of the data.

K-Nearest Neighbor Classification Algorithm

KNN classification follows a simple algorithm. The algorithm works as follows:

The inputs to the algorithm are:

 The dataset with labeled data points.

 The number k, i.e. the number of nearest neighbors that we use to find the class of any new instance of data.

 The new data point.

Using the above inputs, we follow the steps below to classify any data point.

1. First, we choose the number k and a distance metric. You can take any distance metric such as Euclidean, Minkowski, or Manhattan distance for numerical attributes in the dataset. You can also specify your own distance metric if you have datasets with categorical or mixed attributes.
2. For a new data point P, calculate its distance to all the existing data points.
3. Select the k nearest data points, where k is a user-specified parameter.
4. Among the k nearest neighbors, count the number of data points in each class. We do this to select the class label with a majority of data points among the k neighbors that we select.
5. Assign the new data point to the class with the majority class label among the k nearest neighbors.
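
The steps above translate almost directly into code. Here is a minimal from-scratch Python sketch (Euclidean distance plus a majority vote), not a production implementation:

from collections import Counter
import math

def knn_classify(dataset, new_point, k=3):
    # dataset is a list of (coordinates, class_label) pairs
    distances = sorted(
        (math.dist(coords, new_point), label) for coords, label in dataset
    )
    k_nearest_labels = [label for _, label in distances[:k]]   # labels of the k closest points
    return Counter(k_nearest_labels).most_common(1)[0][0]      # majority class label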

Now that we have discussed the basic intuition and the algorithm for
KNN classification, let us discuss a KNN classification numerical
example using a small dataset.

KNN Classification Numerical Example

To solve a numerical example on the K-nearest neighbor, i.e. KNN classification algorithm, we will use the following dataset.

Point   Coordinates   Class Label
A1      (2, 10)       C2
A2      (2, 6)        C1
A3      (11, 11)      C3
A4      (6, 9)        C2
A5      (6, 5)        C1
A6      (1, 2)        C1
A7      (5, 10)       C2
A8      (4, 9)        C2
A9      (10, 12)      C3
A10     (7, 5)        C1
A11     (9, 11)       C3
A12     (4, 6)        C1
A13     (3, 10)       C2
A14     (3, 8)        C2
A15     (6, 11)       C2

KNN classification dataset
In the above dataset, we have fifteen data points with three class
labels. Now, suppose that we have to find the class label of the point
P= (5, 7).

For this, we will first specify the number of nearest neighbors, i.e. k.
Let us take k to be 3. Now, we will find the distance of P to each data
point in the dataset. For this KNN classification numerical example,
we will use the Euclidean distance metric. The following table shows
the Euclidean distance of P to each data point in the dataset.

Point   Coordinates   Distance from P (5, 7)
A1      (2, 10)       4.24
A2      (2, 6)        3.16
A3      (11, 11)      7.21
A4      (6, 9)        2.23
A5      (6, 5)        2.23
A6      (1, 2)        6.40
A7      (5, 10)       3.00
A8      (4, 9)        2.23
A9      (10, 12)      7.07
A10     (7, 5)        2.82
A11     (9, 11)       5.65
A12     (4, 6)        1.41
A13     (3, 10)       3.60
A14     (3, 8)        2.23
A15     (6, 11)       4.12

Distance of each point from the new point

After finding the distance of each point in the dataset to P, we will
sort the above points according to their distance from P (5, 7). After
sorting, we get the following table.

Point   Coordinates   Distance from P (5, 7)
A12     (4, 6)        1.41
A4      (6, 9)        2.23
A5      (6, 5)        2.23
A8      (4, 9)        2.23
A14     (3, 8)        2.23
A10     (7, 5)        2.82
A7      (5, 10)       3.00
A2      (2, 6)        3.16
A13     (3, 10)       3.60
A15     (6, 11)       4.12
A1      (2, 10)       4.24
A11     (9, 11)       5.65
A6      (1, 2)        6.40
A9      (10, 12)      7.07
A3      (11, 11)      7.21

Sorted points according to distance
As we have taken k=3, we will now consider the class labels of the
three points in the dataset nearest to point P in order to classify P.
In the above table, A12, A4, and A5 are the 3 closest neighbors of
point P (A8 and A14 lie at the same distance as A4 and A5; here we
break the tie by simply taking the first three rows of the sorted
table). Hence, we will use the class labels of points A12, A4, and A5
to decide the class label for P.

Now, points A12, A4, and A5 have the class labels C1, C2, and C1
respectively. Among these points, the majority class label is C1.
Therefore, we will assign the class label of point P = (5, 7) as C1.
Hence, we have successfully used KNN classification to classify point
P according to the given dataset.
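
If you want to check the worked example, the knn_classify sketch from earlier reproduces it. Note that A4, A5, A8, and A14 are all tied at distance 2.23, so the exact neighbors chosen depend on tie-breaking, but with this sketch the majority label still comes out as C1:

dataset = [
    ((2, 10), "C2"), ((2, 6), "C1"), ((11, 11), "C3"), ((6, 9), "C2"),
    ((6, 5), "C1"), ((1, 2), "C1"), ((5, 10), "C2"), ((4, 9), "C2"),
    ((10, 12), "C3"), ((7, 5), "C1"), ((9, 11), "C3"), ((4, 6), "C1"),
    ((3, 10), "C2"), ((3, 8), "C2"), ((6, 11), "C2"),
]
print(knn_classify(dataset, (5, 7), k=3))   # C1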

By studying the above KNN classification numerical example, you
can see that the algorithm is pretty straightforward and doesn’t
require any specific mathematical skills apart from distance
calculation and majority selection.

To implement K-Nearest Neighbors classification in practice, you can use the sklearn module in Python.
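
Here is a brief sketch of what that looks like with scikit-learn's KNeighborsClassifier, using the same dataset and query point. Because several points are tied at the third-nearest distance, the neighbors scikit-learn picks may differ from the worked example:

from sklearn.neighbors import KNeighborsClassifier

X = [(2, 10), (2, 6), (11, 11), (6, 9), (6, 5), (1, 2), (5, 10), (4, 9),
     (10, 12), (7, 5), (9, 11), (4, 6), (3, 10), (3, 8), (6, 11)]
y = ["C2", "C1", "C3", "C2", "C1", "C1", "C2", "C2", "C3", "C1", "C3", "C1",
     "C2", "C2", "C2"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X, y)
print(model.predict([(5, 7)]))   # the majority class among the 3 nearest neighbors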

Now, we will discuss the advantages and disadvantages of the K-nearest neighbors classification algorithm.

Advantages of the KNN Classification Algorithm
1. Simple to implement: KNN is a simple and easy-to-
implement classification algorithm that requires no training.
2. Versatility: KNN can be used for both classification and
regression problems. Whether you need to perform binary
classification or multi-class classification, the K-nearest
neighbor algorithm works well. By defining distance metrics for
mixed and categorical data, you can also use KNN for the
classification of categorical and mixed data types.
3. Non-parametric: The KNN algorithm does not make any
assumptions about the underlying data distribution, so it is
well-suited for problems where the decision boundary is non-
linear.
4. Adaptive: KNN can adapt to changes in the training data, making it
   suitable for dynamic or evolving systems. It doesn't memorize a model
   of a previous version of the dataset; at classification time it
   computes the distances on the fly and then produces the result. Hence,
   for dynamic systems with changing datasets, KNN will work well.
5. Multi-class: You can use KNN classification to implement multi-
class classification tasks. It works in a similar manner as binary
classification and no extra calculations are required.
6. No Training: We don’t need to train the KNN classifier. It only
stores the data points and uses them for classifying new
instances of data.
7. Handling missing values: KNN is less sensitive to missing
values because the missing values can simply be ignored when
calculating the distance.
8. Handling noisy data: KNN is relatively robust to noisy data because it
   focuses on the k closest neighbors rather than any single point. While
   calculating the class label for a data point, it uses the majority of
   the class labels of the k nearest neighbors, so isolated noisy points
   end up in the minority and do not affect the classification.
9. Handling outliers: KNN can be robust to outliers since the decision is
   based on the majority class among the k-nearest neighbors.

Note that the effectiveness of the KNN algorithm greatly depends on the value of k and the distance metric chosen.

Disadvantages of the KNN Classification Algorithm
1. Computationally expensive: KNN has a high computation cost
during the prediction stage, especially when dealing with large
datasets. The algorithm needs to calculate the distance
between the new data point and all stored data points for each
classification task. This can be slow and resource-intensive.
2. Memory-intensive: KNN stores all the training instances. This
can require a large amount of memory while dealing with large
datasets.
3. Sensitive to irrelevant features: KNN is sensitive to irrelevant
or redundant features since it uses all the input features to
calculate the distance between instances. Therefore, you first
need to perform data preprocessing and keep only relevant
features in the dataset.
4. Quality of distance function: The effectiveness of the KNN
algorithm greatly depends on the choice of the distance
function. The algorithm may not work well for all types of data.
For instance, we generally use dissimilarity measures for
categorical data. In such a case, finding the majority class will
become difficult as many of the data points will lie at the same
distance from the new data point.
5. High dimensionality: KNN may not work well when the number
of features is high. The curse of dimensionality can make it
hard to find meaningful patterns in the data.
6. Need to determine the value of k: The value of k is a user-specified
   parameter that needs to be determined through trial and error or using
   techniques such as cross-validation, which can be time-consuming. A
   small value of k can lead to overfitting, while a large value of k can
   lead to underfitting.
7. Not good with categorical variables: KNN does not work well out of the
   box when categorical variables are involved; it works best when the
   variables are numerical. However, you can define distance measures for
   categorical and mixed data types to perform KNN classification.
8. Slow prediction: KNN is slow in prediction as it needs to
calculate the distance of the new point from each stored point.
This is a slow process and computationally expensive.

