Chapter 3: Crawls and Feeds

The document discusses the functioning of web crawlers, which automatically find and download web pages for search engines. It covers various aspects of crawling, including the importance of freshness, the challenges of duplicate detection, and the use of sitemaps and document feeds. Additionally, it highlights the role of distributed crawling and storage systems like BigTable in managing large collections of web data.


Search Engines

Information Retrieval in Practice

All slides ©Addison Wesley, 2008


Web Crawler
• Finds and downloads web pages automatically
– provides the collection for searching
• Web is huge and constantly growing
• Web is not under the control of search engine providers
• Web pages are constantly changing
• Crawlers also used for other types of data
Retrieving Web Pages
• Every page has a unique uniform resource locator (URL)
• Web pages are stored on web servers that use HTTP to
exchange information with client software
• e.g., the URL http://www.example.com/index.html identifies a page stored on the web server at www.example.com
Retrieving Web Pages
• Web crawler client program connects to a domain name system
(DNS) server
• DNS server translates the hostname into an internet protocol (IP)
address
• Crawler then attempts to connect to server host using a specific
port
• After connection, the crawler sends an HTTP request to the web server
to request a page
– usually a GET request
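
A minimal sketch of these steps using Python's standard library (the host and page are hypothetical): resolve the hostname with DNS, connect to the server on port 80, and send an HTTP GET request for the page.

import socket

host = "www.example.com"     # hypothetical host
path = "/index.html"         # hypothetical page

ip_address = socket.gethostbyname(host)             # DNS: hostname -> IP address
connection = socket.create_connection((ip_address, 80), timeout=10)   # HTTP port
request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
connection.sendall(request.encode("ascii"))

response = b""
while chunk := connection.recv(4096):                # read until the server closes
    response += chunk
connection.close()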
Web Crawler
• Starts with a set of seeds, which are a set of URLs given to it as
parameters
• Seeds are added to a URL request queue
• Crawler starts fetching pages from the request queue
• Downloaded pages are parsed to find link tags that might contain
other useful URLs to fetch
• New URLs added to the crawler’s request queue, or frontier
• Continue until no more new URLs or disk full
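
A minimal, single-threaded sketch of this crawl loop using only the Python standard library (illustrative only; a real crawler adds politeness delays, robots.txt checks, proper HTML parsing, and more).

from collections import deque
from urllib.parse import urljoin
from urllib.request import urlopen
import re

def crawl(seeds, max_pages=100):
    frontier = deque(seeds)            # URL request queue (the "frontier")
    seen = set(seeds)
    while frontier and len(seen) < max_pages:
        url = frontier.popleft()
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue                   # skip pages that fail to download
        # find link tags that might contain other useful URLs to fetch
        for href in re.findall(r'href="([^"]+)"', html):
            new_url = urljoin(url, href)
            if new_url not in seen:
                seen.add(new_url)
                frontier.append(new_url)
    return seen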
Web Crawling
• Web crawlers spend a lot of time waiting for responses to requests
• To reduce this inefficiency, web crawlers use threads and fetch
hundreds of pages at once
• Crawlers could potentially flood sites with requests for pages
• To avoid this problem, web crawlers use politeness policies
– e.g., delay between requests to same web server
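
A hedged sketch of such a politeness policy (my own illustration, not the book's code): remember when each host was last contacted and wait so that requests to the same web server are at least a fixed number of seconds apart.

import time
from urllib.parse import urlparse

last_request = {}          # host -> time of the last request to that host
POLITENESS_DELAY = 5.0     # assumed delay in seconds between requests to one server

def wait_for_politeness(url):
    host = urlparse(url).netloc
    elapsed = time.time() - last_request.get(host, 0.0)
    if elapsed < POLITENESS_DELAY:
        time.sleep(POLITENESS_DELAY - elapsed)
    last_request[host] = time.time()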
Controlling Crawling
• Even crawling a site slowly will anger some web server
administrators, who object to any copying of their data
• Robots.txt file can be used to control crawlers
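
A short sketch of honoring robots.txt with Python's standard urllib.robotparser module (the URLs and user agent name are hypothetical):

from urllib.robotparser import RobotFileParser

robots = RobotFileParser("http://www.example.com/robots.txt")
robots.read()                         # download and parse the robots.txt file
if robots.can_fetch("MyCrawler", "http://www.example.com/private/report.html"):
    print("allowed to fetch this URL")
else:
    print("robots.txt forbids fetching this URL")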
Simple Crawler Thread
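The original slide's figure is not reproduced here; below is a hedged sketch of what a simple crawler thread might look like. Each thread repeatedly takes a URL from a shared frontier queue, fetches and parses it, and queues any newly discovered links. The fetch and parse_links helpers are hypothetical stand-ins.

import queue
import threading

def crawler_thread(frontier, fetch, parse_links):
    # fetch(url) -> page text and parse_links(text) -> list of URLs are
    # assumed helper functions, not part of any particular library
    while True:
        url = frontier.get()
        if url is None:               # sentinel value: no more work
            frontier.task_done()
            break
        try:
            text = fetch(url)
            for new_url in parse_links(text):
                frontier.put(new_url)
        finally:
            frontier.task_done()

# Usage sketch: share one frontier queue among many threads.
# frontier = queue.Queue()
# for seed in seeds:
#     frontier.put(seed)
# for _ in range(100):
#     threading.Thread(target=crawler_thread,
#                      args=(frontier, fetch, parse_links)).start()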
Freshness
• Web pages are constantly being added, deleted, and modified
• Web crawler must continually revisit pages it has already
crawled to see if they have changed in order to maintain the
freshness of the document collection
– stale copies no longer reflect the real contents of the web pages
Freshness
• HTTP protocol has a special request type called HEAD that
makes it easy to check for page changes
– returns information about page, not page itself
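
A hedged sketch of checking for changes with a HEAD request, using the standard library (the URL is hypothetical; the comparison against a stored value is my own illustration):

import urllib.request

request = urllib.request.Request("http://www.example.com/index.html", method="HEAD")
with urllib.request.urlopen(request, timeout=10) as response:
    last_modified = response.headers.get("Last-Modified")

print(last_modified)   # compare with the value recorded when the page was last crawled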
Freshness
• Not possible to constantly check all pages
– must check important pages and pages that change frequently
• Freshness is the proportion of fresh pages
• Optimizing for this metric can lead to bad decisions, such as
not crawling popular sites
• Age is a better metric
Freshness vs. Age
Age
• Expected age of a page t days after it was last crawled:

Age(λ, t) = ∫₀ᵗ P(page changed at time x) (t − x) dx

• Web page updates follow the Poisson distribution on average
– time until the next update is governed by an exponential distribution, so
for a page with mean change frequency λ:

Age(λ, t) = ∫₀ᵗ λ e^(−λx) (t − x) dx
Age
• Older a page gets, the more it costs not to crawl it
– e.g., expected age with mean change frequency λ = 1/7 (one change
per week)
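
As a hedged illustration of the formula above, the snippet below approximates the expected age numerically for λ = 1/7 (one change per week); the growth with t shows why old pages become increasingly costly to leave uncrawled.

import math

def expected_age(lam, t, steps=10_000):
    # Age(λ, t) = ∫₀ᵗ λ·e^(−λx)·(t − x) dx, approximated with the midpoint rule
    dx = t / steps
    return sum(lam * math.exp(-lam * (i + 0.5) * dx) * (t - (i + 0.5) * dx) * dx
               for i in range(steps))

print(expected_age(1 / 7, 7))    # expected age one week after the last crawl
print(expected_age(1 / 7, 30))   # a month without crawling costs much more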
Focused Crawling
• Attempts to download only those pages that are about a
particular topic
– used by vertical search applications
• Rely on the fact that pages about a topic tend to have links to
other pages on the same topic
– popular pages for a topic are typically used as seeds
• Crawler uses text classifier to decide whether a page is on topic
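
A hedged sketch of that idea (fetch, parse_links, and classifier are hypothetical stand-ins; the classifier could be any trained text classifier):

def focused_crawl_step(url, frontier, fetch, parse_links, classifier):
    text = fetch(url)
    if classifier.is_on_topic(text):
        for new_url in parse_links(text):   # links from on-topic pages tend
            frontier.append(new_url)        # to lead to more on-topic pages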
Deep Web
• Sites that are difficult for a crawler to find are collectively
referred to as the deep (or hidden) Web
– much larger than conventional Web
• Three broad categories:
– private sites
• no incoming links, or may require log in with a valid account
– form results
• sites that can be reached only after entering some data into a form
– scripted pages
• pages that use JavaScript, Flash, or another client-side language to generate
links
Sitemaps
• Sitemaps contain lists of URLs and data about those URLs, such
as modification time and modification frequency
• Generated by web server administrators
• Tells crawler about pages it might not otherwise find
• Gives crawler a hint about when to check a page for changes
Sitemap Example
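The slide's original example is not reproduced here; a hedged sketch of a sitemap in the sitemaps.org XML format, with a hypothetical URL, might look like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/products/index.html</loc>
    <lastmod>2008-02-15</lastmod>       <!-- modification time -->
    <changefreq>weekly</changefreq>     <!-- modification frequency -->
    <priority>0.8</priority>
  </url>
</urlset>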
Distributed Crawling
• Three reasons to use multiple computers for crawling
– Helps to put the crawler closer to the sites it crawls
– Reduces the number of sites the crawler has to remember
– Reduces computing resources required
• Distributed crawler uses a hash function to assign URLs to
crawling computers
– hash function should be computed on the host part of each URL
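
A hedged sketch of that assignment: hash the host part of the URL so that all pages from one site are handled by the same crawling computer (the URL is hypothetical).

import hashlib
from urllib.parse import urlparse

def assign_crawler(url, num_crawlers):
    host = urlparse(url).netloc                        # host part of the URL
    digest = hashlib.md5(host.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_crawlers              # id of the responsible crawler

# All URLs from the same host map to the same crawling computer:
# assign_crawler("http://www.example.com/a.html", 16)
# assign_crawler("http://www.example.com/b/c.html", 16)   -> same value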
Desktop Crawls
• Used for desktop search and enterprise search
• Differences from web crawling:
– Much easier to find the data
– Responding quickly to updates is more important
– Must be conservative in terms of disk and CPU usage
– Many different document formats
– Data privacy very important
Document Feeds
• Many documents are published
– created at a fixed time and rarely updated again
– e.g., news articles, blog posts, press releases, email
• Published documents from a single source can be ordered in a
sequence called a document feed
– new documents found by examining the end of the feed
Document Feeds
• Two types:
– A push feed alerts the subscriber to new documents
– A pull feed requires the subscriber to check periodically for new
documents
• Most common format for pull feeds is called RSS
– Really Simple Syndication, RDF Site Summary, Rich Site Summary,
or ...
RSS Example
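The original example is not reproduced here; a hedged sketch of a minimal RSS 2.0 feed, with hypothetical URLs and content, might look like the following (note the ttl element discussed on the next slide).

<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example News Feed</title>
    <link>http://www.example.com/news</link>
    <description>Hypothetical feed used for illustration</description>
    <ttl>60</ttl>      <!-- cache the feed contents for 60 minutes -->
    <item>
      <title>New page added to the example site</title>
      <link>http://www.example.com/news/0001.html</link>
      <pubDate>Thu, 19 Jun 2008 05:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>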
RSS
• ttl tag (time to live)
– amount of time (in minutes) contents should be cached
• RSS feeds are accessed like web pages
– using HTTP GET requests to web servers that host them
• Easy for crawlers to parse
• Easy to find new information
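
A hedged sketch of those last two points using the Python standard library: fetch the feed with an ordinary HTTP GET request and list the documents it describes (the feed URL is hypothetical).

import urllib.request
import xml.etree.ElementTree as ET

with urllib.request.urlopen("http://www.example.com/news/rss.xml", timeout=10) as resp:
    root = ET.fromstring(resp.read())

for item in root.iter("item"):        # one <item> element per document
    print(item.findtext("title"), item.findtext("link"))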
Storing the Documents
• Many reasons to store converted document text
– saves crawling time when page is not updated
– provides efficient access to text for snippet generation, information
extraction, etc.
• Database systems can provide document storage for some
applications
– web search engines use customized document storage systems
Storing the Documents
• Requirements for document storage system:
– Random access
• request the content of a document based on its URL
• hash function based on URL is typical
– Compression and large files
• reducing storage requirements and efficient access
– Update
• handling large volumes of new and modified documents
• adding new anchor text
Large Files
• Store many documents in large files, rather than each
document in a file
– avoids overhead in opening and closing files
– reduces seek time relative to read time
• Compound documents formats
– used to store multiple documents in a file
– e.g., TREC Web
Compression
• Text is highly redundant (or predictable)
• Compression techniques exploit this redundancy to make files
smaller without losing any of the content
• Compression of indexes covered later
• Popular algorithms can compress HTML and XML text by 80%
– e.g., DEFLATE (zip, gzip) and LZW (UNIX compress, PDF)
– may compress large files in blocks to make access faster
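
A small hedged illustration with zlib (DEFLATE): repetitive HTML text compresses to a fraction of its original size, and decompression recovers the content exactly.

import zlib

html = b"<html><body>" + b"<p>web page text is highly redundant</p>" * 200 + b"</body></html>"
compressed = zlib.compress(html, 9)              # DEFLATE, maximum compression
print(len(html), len(compressed))                # compressed size is a small fraction
assert zlib.decompress(compressed) == html       # lossless: the content is unchanged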
BigTable
• Google’s document storage system
– Customized for storing, finding, and updating web pages
– Handles large collection sizes using inexpensive computers
BigTable
• No query language, no complex queries to optimize
• Only row-level transactions
• Tablets are stored in a replicated file system that is accessible by all
BigTable servers
• Any changes to a BigTable tablet are recorded to a transaction log,
which is also stored in a shared file system
• If any tablet server crashes, another server can immediately read the
tablet data and transaction log from the file system and take over
BigTable
• Logically organized into rows
• A row stores data for a single web page
• A combination of a row key, a column key, and a timestamp points to a
single cell in the row
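
As a hedged, toy illustration of that data model (not BigTable's actual implementation), a cell can be thought of as addressed by a row key, a column key, and a timestamp; the row and column names below are hypothetical.

from collections import defaultdict

table = defaultdict(dict)     # row key -> { (column key, timestamp) : cell value }

def put(row_key, column_key, timestamp, value):
    table[row_key][(column_key, timestamp)] = value

def get(row_key, column_key, timestamp):
    return table[row_key].get((column_key, timestamp))

# e.g., store the contents and some anchor text for one web page
put("example.com/index.html", "contents", 20080601, "<html>...</html>")
put("example.com/index.html", "anchor:other-site.com", 20080601, "example link")
print(get("example.com/index.html", "contents", 20080601))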
BigTable
• BigTable can have a huge number of columns per row
– all rows have the same column groups
– not all rows have the same columns
– important for reducing disk reads to access document data
• Rows are partitioned into tablets based on their row keys
– simplifies determining which server is appropriate
Detecting Duplicates
• Duplicate and near-duplicate documents occur in many situations
– Copies, versions, plagiarism, spam, mirror sites
– 30% of the web pages in a large crawl are exact or near duplicates of
pages in the other 70%
• Duplicates consume significant resources during crawling,
indexing, and search
– Little value to most users
Duplicate Detection
• Exact duplicate detection is relatively easy
• Checksum techniques
– A checksum is a value that is computed based on the content of the
document
• e.g., sum of the bytes in the document file
– Possible for files with different text to have the same checksum
• Functions such as a cyclic redundancy check (CRC) have been developed
that consider the positions of the bytes
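
A hedged illustration of the difference: the two documents below (hypothetical text) contain exactly the same bytes in a different order, so a simple byte-sum checksum cannot tell them apart, while a CRC also reflects the positions of the bytes.

import zlib

doc1 = b"tropical fish include fish found in tropical environments"
doc2 = b"fish include tropical found fish in environments tropical"   # same bytes, reordered

print(sum(doc1), sum(doc2))                 # byte-sum checksums are identical
print(zlib.crc32(doc1), zlib.crc32(doc2))   # the CRC values differ because positions matter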
Near-Duplicate Detection
• More challenging task
– Are web pages with the same text content but different advertising or
format near-duplicates?
• A near-duplicate document is defined using a threshold value
for some similarity measure between pairs of documents
– e.g., document D1 is a near-duplicate of document D2 if more than
90% of the words in the documents are the same
Near-Duplicate Detection
• Search:
– find near-duplicates of a document D
– O(N) comparisons required
• Discovery:
– find all pairs of near-duplicate documents in the collection
– O(N²) comparisons
• IR techniques are effective for search scenario
• For discovery, other techniques used to generate compact
representations
Fingerprints
Fingerprint Example
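The content of the original fingerprint slides is not reproduced above; below is a hedged sketch of a common fingerprinting approach for the discovery scenario: break each document into word n-grams, hash the n-grams, and keep a selected subset of the hash values (here, those equal to 0 modulo p) as a compact representation that can be compared across documents. The parameter choices are illustrative.

import zlib

def fingerprint(text, n=3, p=4):
    words = text.lower().split()
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    hashes = [zlib.crc32(g.encode("utf-8")) for g in ngrams]
    return {h for h in hashes if h % p == 0}      # selected hashes form the fingerprint

def similarity(fp1, fp2):
    # overlap between two fingerprints (Jaccard-style measure)
    return len(fp1 & fp2) / max(1, len(fp1 | fp2))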
