
UNIT-4 INFORMATION RETRIEVAL NOTES

What is Vector Space Classification?


Vector space classification is a method used to classify text (documents, queries, or sentences) by representing them
as vectors in a multidimensional space. Each dimension in this space corresponds to a unique term or feature derived
from the text. Classification is then performed by comparing the similarity between vectors or by assigning categories
based on their positions in the space.
Steps in Vector Space Classification
1. Text Preprocessing
Before converting text into vectors, it needs to be processed:
 Tokenization: Breaking text into individual words or terms.
 Stopword Removal: Removing common words like "the", "is", "and" that don’t carry significant meaning.
 Stemming/Lemmatization: Reducing words to their base form (e.g., "running" → "run").
 Building Vocabulary: Create a list of unique words across all documents.
2. Feature Representation
Each document or text is represented as a vector:
 Binary Representation: A value of 1 if the term is present in the document, otherwise 0.
 Term Frequency (TF): The number of times a term appears in a document.
 TF-IDF (Term Frequency-Inverse Document Frequency): Weights terms based on how important they are to
a document relative to the entire collection.
3. Measuring Similarity
To classify a document or query, we calculate its similarity with known categories or labeled documents.
Cosine Similarity
A common similarity metric used in the vector space model. For a document vector d and a query vector q, it measures the cosine of the angle between them:

cos(d, q) = (d · q) / (||d|| × ||q||) = Σ_i d_i q_i / (√(Σ_i d_i²) × √(Σ_i q_i²))

A value of 1 means the two vectors point in the same direction (maximum similarity); a value of 0 means they share no terms.
Example of Vector Space Classification


Dataset
Suppose we have the following labeled documents:
 Category A (Technology):
o Document 1: "Artificial intelligence and robotics."
o Document 2: "Machine learning advances."
 Category B (Sports):
o Document 3: "Football and basketball leagues."
o Document 4: "Olympic games and sports events."
Query
"AI in robotics."
Vocabulary
{artificial, intelligence, robotics, machine, learning, football, basketball, olympic, sports}
Term-Document Matrix
Each document and the query are represented as binary vectors over the vocabulary (assuming the query term "AI" is expanded to "artificial" and "intelligence" during preprocessing):

Term           Doc 1  Doc 2  Doc 3  Doc 4  Query
artificial       1      0      0      0      1
intelligence     1      0      0      0      1
robotics         1      0      0      0      1
machine          0      1      0      0      0
learning         0      1      0      0      0
football         0      0      1      0      0
basketball       0      0      1      0      0
olympic          0      0      0      1      0
sports           0      0      0      1      0

Cosine similarity between the query and Document 1 is 1.0, while its similarity with Documents 2-4 is 0.
Result: The query "AI in robotics" is most similar to Document 1 and is classified into Category A (Technology).
Diagram
Vector Space Representation:
A simplified 3D diagram with three terms (artificial, intelligence, robotics) as axes:
 Document 1 vector points towards all three axes.
 Document 2 vector has no component along these axes; its terms ("machine", "learning") lie on other dimensions of the full space.
 Query vector aligns perfectly with Document 1.
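
As a rough illustration of the calculation above, the following Python sketch computes the cosine similarity between the query vector and two of the document vectors from the example (NumPy is an assumed tool choice here; the notes do not prescribe any library):

import numpy as np

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Binary vectors over the vocabulary
# {artificial, intelligence, robotics, machine, learning, football, basketball, olympic, sports}
doc1  = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0])   # "Artificial intelligence and robotics."
doc3  = np.array([0, 0, 0, 0, 0, 1, 1, 0, 0])   # "Football and basketball leagues."
query = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0])   # "AI in robotics." (AI expanded to artificial, intelligence)

print(cosine_similarity(query, doc1))   # 1.0 -> Category A (Technology)
print(cosine_similarity(query, doc3))   # 0.0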
Advantages of Vector Space Classification
1. Simple and Intuitive:
o Easy to understand and implement.
2. Flexible:
o Supports various weighting schemes (TF-IDF, binary).
3. Effective:
o Handles large text datasets well.
Limitations
1. High Dimensionality:
o For large vocabularies, the vector space can become sparse.
2. No Semantic Understanding:
o Ignores the context and relationships between words.
3. Sensitivity to Weighting:
o Performance depends on how terms are weighted.
Applications
 Email spam detection.
 Sentiment analysis.
 Topic categorization.
 Document retrieval systems.
Conclusion
Vector space classification is a powerful technique in text analysis and IR. It represents documents and queries in a
geometric space and classifies them based on similarity metrics like cosine similarity. While simple and effective, its
limitations in handling semantics can be mitigated by combining it with advanced approaches like word embeddings
or neural networks.

Support Vector Machines (SVM) and Machine Learning on Documents


Support Vector Machines (SVM) is a supervised machine learning algorithm widely used for text classification and
document analysis. It is particularly effective for high-dimensional data, like text, and works by finding the best
boundary (or hyperplane) that separates data into classes.
How SVM Works
1. Input Data: Represented as vectors in a high-dimensional space. In text analysis, this typically involves
features like term frequency (TF) or TF-IDF values for words in a document.
2. Hyperplane: A decision boundary that separates data points into different classes.
o For two classes, SVM tries to find the hyperplane that maximizes the margin (distance) between the
closest points (called support vectors) of each class.
3. Margin Maximization: SVM ensures the separation between classes is as large as possible, leading to better
generalization on unseen data.
Steps for Applying SVM to Document Classification
1. Preprocessing Text Data
 Tokenization: Split text into words or terms.
 Stopword Removal: Remove common, non-informative words (e.g., "is", "and").
 Stemming/Lemmatization: Reduce words to their root form (e.g., "running" → "run").
 Feature Extraction: Represent documents as vectors using:
o Binary (presence/absence of terms)
o Term Frequency (TF)
o TF-IDF
2. Training an SVM Classifier
 Input: Labeled documents as feature vectors (e.g., documents labeled as "Technology" or "Sports").
 Output: A hyperplane that separates classes.
 Kernel Trick: If data isn’t linearly separable, SVM can use kernels (e.g., polynomial, radial basis function) to
transform data into a higher-dimensional space where a linear boundary can be found.
3. Predicting Document Class
 A new document is vectorized and positioned in the same feature space.
 SVM determines on which side of the hyperplane the document lies, assigning it to the corresponding class.
Example of SVM in Document Classification
Dataset:
 Class 1 (Technology):
o Document 1: "Artificial intelligence and robotics."
o Document 2: "Machine learning applications."
 Class 2 (Sports):
o Document 3: "Football leagues and tournaments."
o Document 4: "Olympic games and events."
Steps:
1. Vocabulary:
{artificial, intelligence, robotics, machine, learning, football, olympic, games}
2. Feature Representation: each document is converted into a TF-IDF vector over this vocabulary.
3. Training and Prediction: the SVM learns a hyperplane that places the Technology vectors on one side and the Sports vectors on the other; a new document such as "Robotics and machine learning" lands on the Technology side and is classified accordingly.
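
The workflow above can be sketched in a few lines with scikit-learn (an assumed library choice for this illustration): TfidfVectorizer builds the feature vectors and LinearSVC learns the separating hyperplane.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

docs = ["Artificial intelligence and robotics.",
        "Machine learning applications.",
        "Football leagues and tournaments.",
        "Olympic games and events."]
labels = ["Technology", "Technology", "Sports", "Sports"]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)        # documents -> TF-IDF feature vectors

clf = LinearSVC()                         # linear SVM: finds the max-margin hyperplane
clf.fit(X, labels)

new_doc = ["Robotics and machine learning"]
print(clf.predict(vectorizer.transform(new_doc)))   # expected: ['Technology']

If the classes were not linearly separable, SVC with a kernel (e.g., kernel="rbf") would replace LinearSVC, as described under the kernel trick above.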
Advantages of SVM in Document Classification
1. Effective for High-Dimensional Data: Works well with text data, which has many features.
2. Robust to Overfitting: Particularly when there is a clear margin between classes.
3. Flexibility with Kernels: Can classify non-linear data using kernel functions.
Challenges
1. Scalability: SVM can be computationally intensive for very large datasets.
2. Feature Selection: Requires careful preprocessing and feature engineering for best results.
3. Parameter Tuning: Performance depends on hyperparameters like the kernel type and regularization
parameter.
Machine Learning on Documents
Beyond SVM
Other machine learning techniques for document classification include:
1. Naive Bayes: Assumes feature independence and calculates probabilities for each class.
2. Decision Trees: Splits data into categories based on feature values.
3. Deep Learning Models: Use neural networks to capture complex patterns in text.
Example Comparison

 SVM:

o Suitable for smaller, high-dimensional datasets.

o Example: Categorizing emails as "Spam" or "Not Spam."

 Deep Learning:

o Better for large datasets with complex patterns.

o Example: Sentiment analysis using recurrent neural networks (RNNs).

Conclusion

Support Vector Machines are a robust and effective method for text and document classification, particularly when
combined with well-prepared feature representations like TF-IDF. While SVM excels in high-dimensional spaces,
modern alternatives like deep learning are also gaining popularity for more nuanced text analysis tasks.

Flat Clustering in Information Retrieval

Flat clustering is a method in information retrieval where documents are grouped into clusters based on their
similarities, without imposing any hierarchical structure. All clusters are treated as equals, and the clustering process
focuses on dividing the dataset into a predetermined number of groups.

Key Characteristics of Flat Clustering

1. Non-Hierarchical: Clusters are not organized into a tree-like structure.

2. Predefined Number of Clusters: The number of clusters (k) is usually specified in advance.
3. Similarity-Based Grouping: Documents are grouped based on their similarity in the feature space.

4. Iterative Refinement: Many flat clustering algorithms iteratively adjust the cluster assignments to improve
the quality.

Common Flat Clustering Algorithms

1. K-Means Clustering

K-means is the most popular flat clustering algorithm. It divides n documents into k clusters based on minimizing
the variance within clusters.

Steps of K-Means:

1. Initialize Cluster Centers: Randomly select k initial centroids (one for each cluster).

2. Assign Documents to Clusters: Assign each document to the nearest cluster based on a similarity measure
(e.g., cosine similarity or Euclidean distance).

3. Recalculate Centroids: Compute the centroid of each cluster based on the assigned documents.

4. Iterate: Repeat steps 2 and 3 until cluster assignments stabilize or a stopping criterion is met.

2. Expectation-Maximization (EM) for Gaussian Mixture Models

 In EM-based clustering, each document is assigned a probability of belonging to a cluster instead of a hard
assignment.
 The algorithm iteratively refines these probabilities and estimates the cluster parameters.
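
A minimal sketch of this soft assignment, using scikit-learn's GaussianMixture on a handful of made-up 2-D points (an assumption purely for brevity; in IR the rows would be document vectors instead):

import numpy as np
from sklearn.mixture import GaussianMixture

# Six 2-D points forming two loose groups (illustrative data only)
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
              [5.0, 5.2], [5.1, 4.9], [4.8, 5.0]])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Soft assignment: each row gives the probability of belonging to each cluster
print(gmm.predict_proba(X).round(3))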

Advantages of Flat Clustering

1. Simplicity: Easy to implement and understand.


2. Scalability: Works well with large datasets.
3. Flexibility: Can handle various data types with different similarity measures.

Example of Flat Clustering

Dataset:

Documents:

1. "Artificial intelligence and robotics."


2. "Machine learning advances."
3. "Football leagues and tournaments."
4. "Olympic sports events."

Steps in K-Means Clustering:

1. Initialization:
 Select two initial centroids randomly (e.g., C1 and C2).
2. Assign to Clusters:
 Compute the similarity of each document to C1 and C2, assigning documents to the closest
centroid.
3. Update Centroids:
 Recalculate the centroids based on the mean position of documents in each cluster.
4. Repeat:
 Reassign documents and update centroids until convergence.

Final Clusters:

 Cluster 1 (Technology):
 "Artificial intelligence and robotics."
 "Machine learning advances."
 Cluster 2 (Sports):
 "Football leagues and tournaments."
 "Olympic sports events.
Applications in Information Retrieval

1. Topic Clustering:

o Group similar documents to identify topics.

2. Search Result Organization:

o Cluster search results to provide categorized outputs.

3. Recommender Systems:

o Group users or items for personalized recommendations.

Limitations of Flat Clustering

1. Requires k: The number of clusters must be known or estimated beforehand.

2. Sensitive to Initialization: Random initialization can lead to different results.

3. Difficulty with Overlapping Clusters: Flat clustering assumes distinct, non-overlapping clusters, which may
not always align with real-world data.

Conclusion

Flat clustering, especially with algorithms like K-means, is a fundamental approach in information retrieval for
grouping documents into similar categories. While simple and effective, it may require careful tuning and
preprocessing for optimal results.

Hierarchical Clustering in Simple Terms

Hierarchical clustering is a method of grouping data (like documents or items) into a tree-like structure, called a
dendrogram, which shows how similar or different the data points are. Instead of just dividing data into a fixed
number of groups like flat clustering, hierarchical clustering creates clusters at multiple levels.

Key Features

1. Tree Structure (Dendrogram): The results are represented as a tree, where each branch represents a cluster.

2. Two Main Types:

o Agglomerative (Bottom-Up): Start with each item as its own cluster and merge them step by step.
o Divisive (Top-Down): Start with all items in one cluster and split them step by step.

How Hierarchical Clustering Works

1. Agglomerative Clustering (Bottom-Up Approach):

 Step 1: Treat each data point (document) as its own cluster.

 Step 2: Find the two most similar clusters and merge them.

 Step 3: Repeat until all data points are merged into one large cluster.

2. Divisive Clustering (Top-Down Approach):

 Step 1: Treat all data points as one large cluster.

 Step 2: Split the cluster into smaller clusters based on their dissimilarity.

 Step 3: Repeat until each data point becomes its own cluster.

Similarity Measurement

The algorithm calculates how similar or different clusters are using distance measures like:

1. Euclidean Distance: The straight-line distance between two points.

2. Cosine Similarity: Commonly used for text data to measure the angle between vectors.

Example of Hierarchical Clustering

Dataset:

 Document 1: "Artificial intelligence and robotics."

 Document 2: "Machine learning advances."

 Document 3: "Football is a popular sport."

 Document 4: "Olympics include many sports."

Steps in Agglomerative Clustering:

1. Treat each document as its own cluster:

o Cluster 1: Doc 1

o Cluster 2: Doc 2

o Cluster 3: Doc 3

o Cluster 4: Doc 4

2. Find the two most similar documents and merge them:

o Doc 1 and Doc 2 (both about technology) → Cluster A

o Doc 3 and Doc 4 (both about sports) → Cluster B

3. Merge remaining clusters until one large cluster forms:

o Merge Cluster A (Tech) and Cluster B (Sports).

Result:

The dendrogram shows:


 At the lowest level, each document is its own cluster.

 At the next level, similar documents merge (e.g., Tech and Sports).

 At the top level, all documents form one large cluster.

Diagram: Dendrogram

A dendrogram is a tree-like diagram:

 The base shows individual items (documents).

 Branches show how clusters are formed.

 Height represents dissimilarity.

All Documents

/ \

Sports Technology

/ \ / \

Doc 3 Doc 4 Doc 1 Doc 2
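
The agglomerative procedure can be reproduced with SciPy's hierarchy module together with scikit-learn for the vectors (both are assumed tool choices). linkage builds the merge tree and fcluster cuts it into a chosen number of clusters; with four tiny documents the exact grouping depends on ties in the distances, so treat the output as illustrative.

from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["Artificial intelligence and robotics.",
        "Machine learning advances.",
        "Football is a popular sport.",
        "Olympics include many sports."]

X = TfidfVectorizer(stop_words="english").fit_transform(docs).toarray()

# Agglomerative (bottom-up) clustering: cosine distance, average linkage
Z = linkage(X, method="average", metric="cosine")

# Cut the dendrogram into two flat clusters
print(fcluster(Z, t=2, criterion="maxclust"))   # one cluster id per document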

Advantages of Hierarchical Clustering

1. No Need to Specify Clusters in Advance: Unlike flat clustering, you don’t need to decide the number of
clusters.

2. Hierarchy Provides Insights: You can see relationships at different levels (e.g., documents about sports versus
technology).

3. Flexible with Different Data Types: Works well with numerical or text data.

Disadvantages

1. Scalability: It can be slow for very large datasets.

2. Hard to Adjust: Merges and splits are greedy; once made, they cannot be undone.

3. Sensitive to Noise: Outliers can affect the results.

Applications

1. Organizing Documents: Grouping news articles by topic or subtopic.

2. Biology: Creating phylogenetic trees for species relationships.

3. Customer Segmentation: Grouping customers based on purchasing behavior.

Matrix Decomposition and Latent Semantic Indexing (LSI)

Matrix decomposition is a mathematical technique for breaking down a matrix into simpler components, making it
easier to process and analyze data. Latent Semantic Indexing (LSI) is an application of matrix decomposition in
information retrieval to uncover hidden (latent) relationships between terms and documents.

Matrix Decomposition
Matrix decomposition refers to breaking a complex matrix into simpler, interpretable components. In information
retrieval, we often use the term-document matrix A, where:

 Rows represent terms (words).

 Columns represent documents.

 Each cell contains the frequency or weight of a term in a document (e.g., TF-IDF score).

Common Matrix Decomposition Technique: Singular Value Decomposition (SVD)

 SVD decomposes A into three matrices: A = U Σ V^T

o U: Represents the relationship between terms and concepts.

o Σ: A diagonal matrix containing singular values (importance of each concept).

o V^T: Represents the relationship between documents and concepts.

Purpose of Matrix Decomposition in IR:

 Simplifies large, sparse matrices (many zeros) into smaller, dense ones.

 Captures the essential structure of the data while reducing noise.

Example of Matrix Decomposition

Term-Document Matrix (A)

Terms Doc 1 Doc 2 Doc 3


Machine 1 1 0
Learning 1 1 0
Artificial 1 0 0
Intelligence 1 0 0
Sports 0 0 1

After Applying SVD:

 U: A matrix representing terms and their relationships with concepts.


 Σ: Singular values highlighting the importance of concepts.
 V^T: A matrix representing documents and their relationships with concepts.

Result: The original A can be approximated with fewer dimensions by keeping only the top k singular values in Σ,
reducing noise and redundancy.
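
The decomposition itself is a one-line call in NumPy (an assumed tool; the notes do not specify one). The sketch below factorizes the matrix A above and rebuilds its rank-2 approximation:

import numpy as np

# Term-document matrix A (rows: machine, learning, artificial, intelligence, sports)
A = np.array([[1, 1, 0],
              [1, 1, 0],
              [1, 0, 0],
              [1, 0, 0],
              [0, 0, 1]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U Σ V^T

k = 2                                              # keep the top k singular values
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]        # rank-k approximation of A

print(s.round(3))      # singular values in decreasing order
print(A_k.round(2))    # low-rank reconstruction of A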

Latent Semantic Indexing (LSI)

LSI uses SVD to enhance information retrieval. By reducing the dimensionality of the term-document matrix, it
identifies hidden (latent) semantic relationships between terms and documents. This is especially helpful for dealing
with synonyms and polysemy.

Steps in LSI:

1. Create the Term-Document Matrix (A):


 Each term’s occurrence or weight (e.g., TF-IDF) in each document is recorded.
2. Apply SVD to Decompose A:
 Decompose A into U Σ V^T.
3. Reduce Dimensionality:
 Keep only the top k singular values and their corresponding vectors in U and V^T. This reduces the
matrix size and focuses on the most important patterns.
4. Query Mapping:
 Transform a user’s query into the reduced semantic space and retrieve relevant documents based
on their semantic similarity.

Example of Latent Semantic Indexing

Dataset:

1. Doc 1: "Machine learning improves technology."


2. Doc 2: "Artificial intelligence powers machine learning."
3. Doc 3: "Sports involve teamwork and strategy."

Term-Document Matrix (A):

Terms Doc 1 Doc 2 Doc 3


Machine 1 1 0
Learning 1 1 0
Artificial 0 1 0
Intelligence 0 1 0
Sports 0 0 1

SVD and Dimensionality Reduction:

 After applying SVD, we keep the top 2 singular values (k=2) for simplicity.
 The matrix now represents the key semantic concepts:
 Concept 1: Technology (Machine Learning, AI, etc.).
 Concept 2: Sports.

Query Example:

 Query: "Artificial learning."


 Transform the query into the reduced space.
 The reduced space shows that "Artificial learning" is most closely related to Concept 1 (Technology), so Doc 1
and Doc 2 are retrieved.
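
A sketch of this query-folding step with scikit-learn's TruncatedSVD (an assumed library; the matrix is laid out with documents as rows, i.e., the transpose of the table above):

import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Document-term matrix (rows: Doc 1-3; columns: machine, learning, artificial, intelligence, sports)
X = np.array([[1, 1, 0, 0, 0],
              [1, 1, 1, 1, 0],
              [0, 0, 0, 0, 1]], dtype=float)

svd = TruncatedSVD(n_components=2, random_state=0)
docs_2d = svd.fit_transform(X)            # documents in the 2-concept space

q = np.array([[0, 1, 1, 0, 0]])           # query "Artificial learning" -> learning, artificial
q_2d = svd.transform(q)                   # fold the query into the same concept space

print(cosine_similarity(q_2d, docs_2d))   # Doc 1 and Doc 2 score high, Doc 3 near zero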

Advantages of LSI

1. Handles Synonymy:
 Captures relationships between similar terms (e.g., "AI" and "Artificial Intelligence").
2. Improves Retrieval:
 Finds semantically similar documents even if exact words don’t match.
3. Reduces Noise:
 Focuses on key patterns and removes irrelevant details.
Limitations of LSI

1. Computational Complexity:
 SVD can be slow for large datasets.
2. Fixed Dimensionality:
 Requires retraining if new documents are added.
3. Ambiguity:
 May struggle with polysemy (words with multiple meanings).

Introduction to Web Search Basics

Web search is the process of retrieving relevant information from the vast collection of resources on the internet in
response to a user’s query. Search engines like Google, Bing, and Yahoo are examples of systems that use web
search principles.

Key Components of Web Search

1. Crawling

Crawling is the process where search engines discover new and updated content on the web by following links using
automated programs called web crawlers (or bots).

 Example:
A crawler starts at a webpage (e.g., www.example.com) and follows all the links on that page to discover
other pages, and so on.

2. Indexing

Once pages are discovered, the information is processed and stored in a database called an index. This involves
analyzing the content, identifying keywords, and storing metadata.

 Example:
A page about "machine learning" is indexed under terms like "machine," "learning," "AI," and "technology."

3. Query Processing

When a user submits a search query, the search engine processes it to understand the intent and find the most
relevant documents in the index.

 Example:
Query: "Best laptops under $1000"
The search engine understands the keywords "best," "laptops," and "under $1000" to find relevant results.

4. Ranking

Search engines rank the results based on their relevance to the query and other factors, such as page quality,
authority, and user engagement.

 Example:
Results are ranked so that the most relevant and credible pages appear at the top.

5. Retrieval and Display


The final step involves retrieving the ranked results and displaying them to the user in an organized manner.

 Example:
A search result page includes titles, descriptions (snippets), and URLs.

Key Concepts in Web Search

1. Relevance

Relevance refers to how well a document matches the user’s query.

 Example:
For the query "How to bake a cake," a page containing a cake recipe is more relevant than one about the
history of cakes.

2. Precision and Recall

These metrics measure the effectiveness of a search engine.

 Precision: The fraction of retrieved documents that are relevant.


 Recall: The fraction of relevant documents that are retrieved.
 Example:
If 10 documents are retrieved and 7 of them are relevant, precision = 7/10 = 0.7.
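
A tiny worked sketch of both metrics in Python. The total number of relevant documents in the collection (10 here) is an assumed figure, since the notes only give the retrieved counts:

def precision_recall(retrieved, relevant):
    hits = retrieved & relevant                    # relevant documents that were retrieved
    return len(hits) / len(retrieved), len(hits) / len(relevant)

retrieved = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}        # 10 documents returned by the engine
relevant  = {1, 2, 3, 4, 5, 6, 7, 12, 15, 20}      # 10 relevant documents in the collection (assumed)

print(precision_recall(retrieved, relevant))       # (0.7, 0.7)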

3. Query Types

Users may submit different types of queries:

 Navigational Query: To find a specific site (e.g., "Facebook login").


 Informational Query: To learn about a topic (e.g., "Benefits of meditation").
 Transactional Query: To perform an action (e.g., "Buy headphones online").

Example of Web Search Workflow

Scenario: Searching for "Top 10 programming languages in 2025"

1. Crawling:
The search engine’s crawler has already visited and indexed pages about programming languages.
2. Indexing:
Pages with titles and content about programming languages are stored in the search engine’s index.
3. Query Processing:
The search engine identifies keywords like "top 10," "programming languages," and "2025." It understands
the user wants a ranked list of popular programming languages.
4. Ranking:
 Pages are ranked based on factors like freshness (e.g., recently updated pages), popularity, and
relevance to the query.
 A blog post from January 2025 is likely to rank higher than a 2020 article.
5. Retrieval and Display:
The top results include titles like:
 "Top 10 Programming Languages to Learn in 2025"
 "The Most Popular Coding Languages in 2025"

Challenges in Web Search


1. Scale:
The web is massive, with billions of pages constantly changing.
2. Spam and Quality Control:
Search engines must filter out low-quality or spam content.
3. Diversity in Queries:
Users may submit ambiguous or unclear queries, requiring advanced natural language understanding.
4. Personalization:
Tailoring search results to individual users based on location, search history, and preferences.

Web Crawling, Indexes, and Link Analysis

Web search engines rely on three key processes to provide efficient and accurate results: web crawling, indexing,
and link analysis. These components work together to discover, organize, and rank web content.

1. Web Crawling

What is Web Crawling?

Web crawling is the process of systematically browsing the web to discover and collect data from websites. The tool
used for this task is called a web crawler, spider, or bot.

How It Works:

1. Seed URLs:
The crawler starts with a list of initial URLs (seed URLs).
 Example: Start with "https://example.com".
2. Follow Links:
The crawler visits the seed URLs, extracts hyperlinks, and follows them to discover more pages.
 Example: A page on "example.com" links to "https://example.com/page1" and
"https://anotherexample.com".
3. Content Extraction:
The crawler retrieves the content of each page (text, metadata, links, etc.).
4. Repeat:
The process continues recursively until a stopping condition (e.g., time, number of pages) is met.

Challenges in Web Crawling:

1. Scale:
Billions of pages exist on the web.
2. Dynamic Content:
Some pages are generated dynamically and may not be accessible.
3. Politeness:
Crawlers must respect the website’s resources and the robots.txt file (which specifies what parts of the site
can be crawled).
4. Duplicate Content:
Crawlers need to detect and avoid indexing duplicate pages.
Example of Web Crawling Workflow:
1. Seed URL: "https://example.com".
2. The page contains links to:
o "https://example.com/about"
o "https://example.com/contact"
o "https://anotherexample.com".
3. The crawler fetches and processes each page, following links on each page to discover more content.
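
A bare-bones crawler along these lines can be written with Python's standard library alone (the seed URL and page limit below are placeholders; a real crawler must also honour robots.txt and rate-limit its requests, as discussed above):

from urllib.request import urlopen
from urllib.parse import urljoin
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=10):
    frontier, seen = [seed], set()
    while frontier and len(seen) < max_pages:
        url = frontier.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="ignore")
        except Exception:
            continue                                  # skip pages that fail to load
        parser = LinkExtractor()
        parser.feed(html)
        frontier.extend(urljoin(url, link) for link in parser.links)   # follow discovered links
    return seen

print(crawl("https://example.com", max_pages=5))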

2. Indexes

What is an Index?

An index is a structured database that stores information about web pages to facilitate fast and accurate retrieval.

How Indexing Works:

1. Content Processing:
The crawler sends the retrieved pages to the indexing system. The system processes the text, metadata, and
structure of the pages.
2. Term Extraction:
Important words (terms) and their frequencies are extracted.
3. Storage:
The extracted information is stored in an inverted index, where:
 Terms are the keys.
 Document IDs (pages where the terms appear) are the values.

Example of an Inverted Index:

Input Documents:

 Doc 1: "Web crawling and indexing are key."


 Doc 2: "Crawling involves following links."

Inverted Index:

Term → Documents
Web → Doc 1
Crawling → Doc 1, Doc 2
Indexing → Doc 1
Links → Doc 2

When a query is made (e.g., "Crawling"), the index quickly retrieves the relevant documents (Doc 1 and Doc 2).
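
A dictionary-based sketch of this inverted index in Python (the tokenization here is deliberately naive: lowercasing and splitting on whitespace):

from collections import defaultdict

docs = {
    1: "Web crawling and indexing are key.",
    2: "Crawling involves following links.",
}

# Inverted index: term -> set of document IDs containing it
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().rstrip(".").split():
        index[term].add(doc_id)

print(sorted(index["crawling"]))   # [1, 2]
print(sorted(index["links"]))      # [2]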

3. Link Analysis

What is Link Analysis?

Link analysis evaluates the relationships between web pages using hyperlinks to determine their importance and
relevance.

Key Link Analysis Techniques:

1. PageRank (Used by Google):


 Assigns a score to each page based on the number and quality of links pointing to it.
 Pages with more high-quality inbound links have higher scores.
 Formula: PR(A) = (1 − d) + d · Σ_{i=1..N} PR(L_i) / C(L_i)
 PR(A): PageRank of page A.
 d: Damping factor (usually 0.85).
 L_i: The i-th page linking to A (out of N such pages).
 C(L_i): Number of outbound links on L_i.
2. HITS Algorithm:
 Identifies two types of pages:
 Hubs: Pages that link to many authoritative pages.
 Authorities: Pages linked by many hubs.

Example of Link Analysis:

Web Graph:

 Page A links to Page B and Page C.


 Page B links to Page C.
 Page C links back to Page A.

PageRank Distribution:

1. Page C receives the highest PageRank, because both Page A and Page B link to it.

2. Page A also scores well: Page C's only outbound link points to A, so A receives C's entire damped score.

3. Page B scores lowest, since its only inbound link comes from Page A, which splits its score between B and C.
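
A short iterative sketch of the PageRank formula on this three-page graph (plain Python, no external library). Repeatedly applying the update converges to the ranking described above:

# Link graph from the example: A -> B, A -> C, B -> C, C -> A
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
pages = list(links)

d = 0.85                                   # damping factor
pr = {p: 1.0 for p in pages}               # initial PageRank values

for _ in range(50):                        # iterate until the values settle
    pr = {p: (1 - d) + d * sum(pr[q] / len(links[q])
                               for q in pages if p in links[q])
          for p in pages}

print({p: round(v, 3) for p, v in pr.items()})   # C highest, A close behind, B lowest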

HITS Results:

1. Page A: A hub because it links to B and C.


2. Page C: An authority because it is linked by A and B.
Summary

 Web Crawling: Discover web pages by following links. Example: a crawler starts at "https://example.com" and follows links to "https://example.com/about" and others.
 Indexing: Organize the collected data into a searchable database. Example: an inverted index maps terms like "crawling" to the documents where they appear.
 Link Analysis: Evaluate the importance of pages based on links (e.g., PageRank, HITS). Example: Page C is an authority because multiple pages link to it.

Together, these processes make modern web search engines efficient and powerful tools for finding relevant information.
