LECTURE NOTES
ON
Information Retrieval Systems

JNTU World (www.alljntuworld.in)
UNIT-1
The definition of item varies by how a specific source treats information. A complete
document, such as a book, newspaper or magazine could be an item. At other times each
chapter or article may be defined as an item.
The efficiency of an information system lies in its ability to minimize the overhead for a user in finding needed information. Overhead, from a user's perspective, is the time required to find the information needed, excluding the time for actually reading the relevant data. Thus search composition, search execution, and reading non-relevant items are all aspects of information retrieval overhead.
1.2 Objectives
Query execution
Scanning the results of a query to select items to read
1.3 Functional Overview:
Item Normalization: The first step in any integrated system is to normalize the incoming
items to a standard format. In addition to translating multiple external formats that might be
received into a single consistent data structure that can be manipulated by the functional
processes, item normalization provides logical restructuring of the item.
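A minimal sketch of such a normalization step (the field names and cleanup rules here are illustrative, not from a specific system):

```python
import re
import unicodedata

# Sketch of item normalization: converting incoming items in different
# external formats into one consistent internal structure.
def normalize_item(raw_text, source_format):
    """Produce a standard internal representation of an incoming item."""
    text = unicodedata.normalize("NFKC", raw_text)   # consistent characters
    text = re.sub(r"<[^>]+>", " ", text)             # strip markup, e.g. HTML
    text = re.sub(r"\s+", " ", text).strip()         # collapse whitespace
    return {"format": source_format, "body": text, "zones": {"main": text}}

item = normalize_item("<p>Oil   refineries in <b>Mexico</b></p>", "html")
# item["body"] -> "Oil refineries in Mexico"
```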
An Information Retrieval System is software that has the features and functions required to manipulate information items, whereas a DBMS is optimized to handle structured data. All three systems (IRS, Digital Libraries & Data Warehouses) are repositories of information, and their primary goal is to satisfy user information needs.
The conversion of existing hardcopy text, images and analog data, and the storage and retrieval of the digital version, is a major concern for Digital Libraries, whereas an IRS concentrates on already digitized data.
With direct electronic access available to users the social aspects of congregating in a library
and learning from librarians, friends and colleagues will be lost and new electronic
collaboration equivalencies will come into existence.
Information storage and retrieval technology has addressed a small subset of the issues associated with digital libraries.
The term Data Warehouse comes more from the commercial sector than academic sources.
Its goal is to provide decision makers with the critical information needed to answer future direction questions.
A data warehouse consists of the following:
The objective of the search capability is to allow for a mapping between a user's specified need and the items in the information database that will answer that need.
The major search functions are:
Boolean Logic: Allows a user to logically relate multiple concepts together to define what information is needed.
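The Boolean relations can be sketched with simple set operations over term-to-document-ID lists (the index contents and document IDs below are invented for illustration):

```python
# Minimal sketch of Boolean query logic: each term maps to the set of
# documents containing it; AND/OR/NOT become set operations.
index = {
    "oil":      {1, 2, 4},
    "mexico":   {2, 3},
    "refinery": {2, 4, 5},
}
all_docs = {1, 2, 3, 4, 5}

def AND(a, b): return a & b            # intersection
def OR(a, b):  return a | b            # union
def NOT(a):    return all_docs - a     # complement over the collection

# Query: oil AND (mexico OR refinery)
hits = AND(index["oil"], OR(index["mexico"], index["refinery"]))
# hits -> {2, 4}
```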
Proximity: Used to restrict the distance allowed within an item between two search terms. The semantic concept is that the closer two terms are found in a text, the more likely they are related in the description of a particular concept.
Contiguous Word Phrases (CWP): two or more words that are treated as a single semantic unit.
Fuzzy Searches: provide the capability to locate spellings of words that are similar to the entered search term. Primarily used to compensate for errors in the spelling of words. Fuzzy searching increases recall at the expense of decreasing precision.
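A fuzzy search can be sketched with an edit-distance-style similarity measure; here difflib's ratio stands in for whatever algorithm a real system uses, and the vocabulary is invented:

```python
import difflib

# Sketch of fuzzy matching: find vocabulary words whose spelling is
# close to the (possibly misspelled) search term, best matches first.
vocabulary = ["computer", "computation", "commuter", "compiler"]

def fuzzy_matches(term, vocab, cutoff=0.75):
    """Return vocabulary words similar to the term, ordered best-first."""
    return difflib.get_close_matches(term, vocab, n=5, cutoff=cutoff)

matches = fuzzy_matches("compter", vocabulary)   # misspelled 'computer'
# matches[0] -> 'computer'
```

Lowering the cutoff raises recall (more candidate spellings) while lowering precision, mirroring the trade-off described above.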
Term Masking: The ability to expand a query term by masking a portion of the term and accepting as valid any processing token that maps to the unmasked portion of the term.
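Term masking can be sketched with wildcard patterns; fnmatch syntax ('*' masks any characters) is used here purely for illustration, and the token list is invented:

```python
import fnmatch

# Sketch of term masking: accept any processing token that matches the
# unmasked portion of the query term.
tokens = ["multinational", "national", "nationally", "rational"]

def mask_search(pattern, processing_tokens):
    """Return the tokens that map to the unmasked portions of the term."""
    return [t for t in processing_tokens if fnmatch.fnmatch(t, pattern)]

result = mask_search("*nation*", tokens)   # prefix and suffix masking
# result -> ['multinational', 'national', 'nationally']
```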
Numeric and Date Ranges: Allows for specialized numeric or date range processing
Concept/Thesaurus Expansion: a thesaurus expands a search term to other terms that are similar in meaning, whereas a concept class is a tree structure that expands each meaning of a word into potential concepts that are related to the initial term.
Natural Language Queries: Allow a user to enter a prose statement that describes the information that the user wants to find. The longer the prose, the more accurate the results returned.
Multimedia Queries: The user interface becomes far more complex with the introduction of the availability of multimedia items. All of the previous discussions still apply for search of the textual portions of a multimedia database. But in addition, the user has to be able to specify search terms for the other modalities.
1.6 Browse:
Once the search is complete, browse capabilities provide the user with the capability to
determine which items are of interest and select those to be displayed.
There are two ways of displaying a summary of the items that are associated with a query:
Line Item Status and Data Visualization.
Different browsing capabilities are:
With the introduction of ranking based upon predicted relevance values, the status summary displays the relevance score associated with each item, along with a brief descriptor of the item.
Zoning: The user wants to see the minimum information needed to determine if the item is relevant. Once the determination is made that an item is possibly relevant, the user wants to display the complete item for detailed review.
Highlighting: Another display aid is an indication of why the item was selected. This indication, frequently implemented as highlighting, lets the user quickly focus on the potentially relevant parts of the text to scan for item relevance. Different strengths of highlighting indicate how strongly the highlighted word participated in the selection of the item.
1.7 Miscellaneous:
There are many additional functions that facilitate the user's ability to input queries, reduce the time it takes to generate queries, and reduce a priori the probability of entering a poor query.
Vocabulary Browse: This provides the capability to display, in alphabetically sorted order, words from the document database. Logically, all unique words (processing tokens) in the database are kept in sorted order, along with a count of the number of unique items in which each word is found.
Iterative Search and Search History Log: Frequently a search returns a Hit file containing many more items than the user wants to review. Rather than typing in a complete new query, the results of the previous search can be used as a constraining list to create a new query that is applied against it.
Canned Query : The capability to name a query and store it to be retrieved and executed
during a later user session is called canned or stored queries.
Multimedia: Once a list of potential items that satisfy the query is discovered, the techniques for displaying them when they are multimedia introduce new challenges. To display more aggregate data, textual interfaces sometimes allow for clustering of the hits and then use of a graphical display to show a higher-level view of the information. Neither of these techniques lends itself well to information that is multimodal. The textual aspect of the multimedia can be used to apply all of the techniques described.
UNIT-2
Searching can be a direct search of the document database or an indirect search through index files. Concept-based representation: instead of transforming the input into a directly searchable format, some systems transform the input into a different, concept-based representation; search is then performed against that representation, and the matching items are returned.
Indexing, originally called cataloguing, is the oldest technique for identifying the contents of items to assist in their retrieval. Items overlap between full item indexing, public indexing, and private indexing of files. Objectives: the public file indexer needs to consider the information needs of all users of the library system.
Users may use public index files as part of search criteria to increase recall. They can constrain their search by private index files. The primary objective is to represent the concepts within an item so as to facilitate users finding relevant information.
Fig: Indexing process
1. Decide the scope of indexing and the level of detail to be provided, based on the usage scenarios of users.
2. The second decision is whether to link index terms together in a single index for a particular concept.
TEXT PROCESSING
1. Document Parsing. Documents come in all sorts of languages, character sets, and formats;
often, the same document may contain multiple languages or formats, e.g., a French email
with Portuguese PDF attachments. Document parsing deals with the recognition and
breaking down of the document structure into individual components. In this pre
processing phase, unit documents are created; e.g., emails with attachments are split into one
document representing the email and as many documents as there are attachments.
2. Lexical Analysis. After parsing, lexical analysis tokenizes a document, seen as an input
stream, into words. Issues related to lexical analysis include the correct identification of
accents, abbreviations, dates, and cases. The difficulty of this operation depends much on the
language at hand: for example, the English language has neither diacritics nor cases, French
has diacritics but no cases, German has both diacritics and cases. The recognition of
abbreviations and, in particular, of time expressions would deserve a separate chapter due to its complexity and the extensive literature in the field.
3. Stop-Word Removal. A subsequent step optionally applied to the results of lexical analysis
is stop-word removal, i.e., the removal of high-frequency words. For example, given the sentence 'search engines are the most visible information retrieval applications' and a classic stop-word set such as the one adopted by the Snowball stemmer, the effect of stop-word removal would be: 'search engine most visible information retrieval applications'.
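The example above can be reproduced with a tiny illustrative stop list (not the actual Snowball list):

```python
# Sketch of stop-word removal: tokenize on whitespace and drop
# high-frequency function words. The stop list here is a minimal
# illustrative one, not the real Snowball stop-word set.
STOP_WORDS = {"are", "the", "a", "an", "of", "and", "is", "in", "to"}

def remove_stop_words(text):
    """Return the tokens of the text with stop words removed."""
    return [t for t in text.lower().split() if t not in STOP_WORDS]

tokens = remove_stop_words(
    "search engines are the most visible information retrieval applications")
# tokens -> ['search', 'engines', 'most', 'visible',
#            'information', 'retrieval', 'applications']
```

(The source example also shows 'engines' reduced to 'engine'; that reduction is stemming, a separate step.)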
4. Phrase Detection. This step captures text meaning beyond what is possible with pure bag-of-words approaches, thanks to the identification of noun groups and other phrases. Phrase detection may be approached in several ways, including rules (e.g., retaining terms that are not separated by punctuation marks), morphological analysis, syntactic analysis, and combinations thereof. For example, scanning our example sentence 'search engines are the most visible information retrieval applications' for noun phrases would probably result in identifying 'search engines' and 'information retrieval'.
6. Weighting. The final phase of text pre-processing deals with term weighting. As
previously mentioned, words in a text have different descriptive power; hence, index terms
can be weighted differently to account for their significance within a document and/or a
document collection. Such a weighting can be binary, e.g., assigning 0 for term absence and
1 for presence.
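The two simplest schemes, binary presence/absence and raw term frequency, can be sketched as:

```python
from collections import Counter

# Sketch of two elementary term-weighting schemes: binary weighting
# (0/1 for absence/presence) and raw term-frequency weighting.
def binary_weights(tokens):
    """Weight 1 for every term that is present."""
    return {t: 1 for t in set(tokens)}

def tf_weights(tokens):
    """Weight each term by its frequency of occurrence in the item."""
    return dict(Counter(tokens))

doc = ["oil", "mexico", "oil", "refinery", "oil"]
binary_weights(doc)   # {'oil': 1, 'mexico': 1, 'refinery': 1}
tf_weights(doc)       # {'oil': 3, 'mexico': 1, 'refinery': 1}
```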
SCOPE OF INDEXING
When indexing is performed manually, problems arise from two sources: the author and the indexer.
Exhaustivity of indexing is the extent to which the different concepts in the item are indexed.
For example, whether the term 'processor', the term 'microcomputer', or the term 'Pentium' should be used in the index of an item is based upon the specificity decision.
Indexing an item only on the most important concepts in it and using general index terms yields low exhaustivity and low specificity.
Another decision in the linkage process is whether linkages are available between index terms for an item. Linkages are used to correlate attributes associated with concepts discussed in an item; this process is called precoordination. When index terms are not coordinated at index time, the coordination occurs at search time. This is called postcoordination, implemented by ANDing index terms. A factor that must be determined in the linkage process is the number of terms that can be related.
Ex., an item discusses the drilling of oil wells in Mexico by CITGO and the introduction of
oil refineries in Peru by the U.S.
AUTOMATIC INDEXING
Automatic indexing requires only a few seconds, depending on the processor and the complexity of the algorithms used to generate the indexes.
Indexes resulting from automated indexing fall into two classes: weighted and unweighted.
Unweighted indexing system: the existence of an index term in a document, and sometimes its word location, are kept as part of the searchable data structure.
Weighted indexing system: an attempt is made to place a value on the index term associated with the concept in the document, based on the frequency of occurrence of the term in the item. The results are presented to the user in order of rank value, from highest to lowest.
Indexing by Term
The terms (vocabulary) of the original item are used as the basis of the index process.
There are two major techniques for creation of the index: statistical and natural language.
Statistical techniques can be based upon vector models and probabilistic models, with a special case being the Bayesian model (accounting for the uncertainty inherent in the model selection process). They are called statistical because their calculation of weights uses information such as the frequency of occurrence of words.
Natural language techniques also use some statistical information, but perform more complex parsing to define the final set of index concepts.
The system emphasizes weights as a foundation for information detection and stores these weights in a vector form. Each vector represents a document, and each position in a vector represents a unique word (processing token) in the database. The value assigned to each position is the weight of that term in the document.
Search is accomplished by calculating the distance between the query vector and the document vectors.
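Cosine similarity is a common choice for comparing query and document vectors; the sketch below is illustrative (the notes do not fix a specific measure), with vector positions following the three-token example used later (Petroleum, Mexico, Oil):

```python
import math

# Sketch of vector-space search: score each document by the cosine of
# the angle between its vector and the query vector.
def cosine(query, doc):
    dot = sum(q * d for q, d in zip(query, doc))
    norm = (math.sqrt(sum(q * q for q in query))
            * math.sqrt(sum(d * d for d in doc)))
    return dot / norm if norm else 0.0

# Positions: [petroleum, mexico, oil]
query = [1.0, 1.0, 0.0]
doc_a = [2.0, 3.0, 0.0]   # shares concepts with the query
doc_b = [0.0, 0.0, 4.0]   # no shared concepts

cosine(query, doc_a)   # high similarity
cosine(query, doc_b)   # 0.0
```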
This could be applied as part of index term weighting, but it is usually applied as part of the retrieval process, by calculating the relationship between an item and a specific query. In the graphic representation, each node represents a random variable, and arcs between the nodes represent probabilistic dependencies between a node and its parents.
Indexing by Concept
Concept indexing determines a canonical set of concepts based upon a test set of terms and uses them as the basis for indexing all items; one such approach is called latent semantic indexing.
Another approach uses neural networks: the strengths of word relationships (synonyms) in the system are used in generating context vectors. Two neural networks are used, one to generate stem context vectors and another to perform the query.
Multimedia indexing:
INFORMATION EXTRACTION
There are two processes associated with information extraction: determining facts to go into structured fields in a database, and extracting text that can be used to summarize an item.
The process of extracting facts to go into indexes is called Automatic File Build.
In establishing metrics to compare information extraction, precision and recall are applied with slight modifications.
Recall refers to how much information was extracted from an item versus how much should
have been extracted from the item.
It shows the amount of correct and relevant data extracted versus the correct and relevant
data in the item.
Precision refers to how much information was extracted accurately versus the total
information extracted.
This could be caused by templates filled on topics that are not intended to be extracted or
slots that get filled with non-relevant data.
Fallout measures how often a system assigns incorrect slot fillers.
These measures are applicable to both human and automated extraction processes.
Rather than trying to determine specific facts, the goal of document summarization is to
extract a summary of an item maintaining the most important ideas while significantly
reducing the size.
Examples of summaries that are often part of any item are titles, table of contents, and
abstracts with the abstract being the closest.
The abstract can be used to represent the item for search purposes or as a way for a user to
determine the utility of an item without having to read the complete item.
DATA STRUCTURES
Stemming Algorithms
Data structure: knowledge of the data structures used gives an insight into the capabilities available to the system.
1. One structure stores and manages received items in their normalized form; it is called the document manager.
Fig: Item normalization feeds the original document file, which is managed by the document manager with document file creation support.
2. The other data structure contains the processing tokens and associated data needed to support search.
Fig: The search manager operates on the processing token search file.
Results of a search are references to the items that satisfy the search statement; these are passed to the document manager for retrieval.
Focus: on the data structures that support the search function.
Stemming is a transformation often applied to data before placing it in the searchable data structure. Stemming algorithms are used to improve the efficiency of the IRS and to improve recall.
Conflation (the process or result of fusing items into one entity; fusion; amalgamation) is the term used to refer to mapping multiple morphological variants to a single representation (the stem). The stem carries the meaning of the concept associated with the word, and the affixes (endings) introduce subtle (slight) modifications of that concept.
Terms with a common stem will usually have similar meanings, for example:
CONNECT
CONNECTED
CONNECTING
CONNECTION
CONNECTIONS
Frequently, the performance of an IR system will be improved if term groups such as this are conflated into a single term. This may be done by removal of the various suffixes -ED, -ING, -ION, -IONS to leave the single term CONNECT.
In addition, the suffix stripping process will reduce the total number of terms in the IR system, and hence reduce the size and complexity of the data in the system, which is always advantageous.
It is important for a system to categorize a word prior to making the decision to stem it.
Proper names and acronyms (a word formed from the initial letters of a name, say IARE) should not have stemming applied.
Stemming can also cause problems for natural language processing (NLP) systems by causing a loss of information.
PORTER STEMMING ALGORITHM
Based on a set of conditions on the stem:
3. *v* - the stem contains a vowel.
4. *d - the stem ends with a double consonant.
5. *o - the stem ends in a consonant-vowel-consonant sequence where the final consonant is not w, x or y (e.g. -WIL, -HOP).
Step  Condition    Suffix  Replacement    Example
1a    Null         sses    ss
1b    *v*          ing     Null
1b1   Null         at      ate            inflat(ed) -> inflate
1c    *v*          y       i              happy -> happi
2     m>0          aliti   al             formaliti -> formal
3     m>0          icate   ic             duplicate -> duplic
4     m>1          able    Null
5a    m>1          e       Null           inflate -> inflat
5b    m>1 and *d   Null    single letter
Kstem is a dictionary-based stemmer that conflates word variants to a root form. Kstem uses 6 major data files to control and limit the stemming process:
3. An exception list of words that should retain an 'e' at the end (e.g., 'suites' to 'suite' but 'suited' to 'suit').
4. Direct_conflation - word pairs that override the stemming algorithm.
5. Country_nationality_conflation ('British' maps to 'Britain').
6. Proper nouns that should not be stemmed.
New words that are not special forms (e.g., dates, phone numbers) are located in the
dictionary to determine simpler forms by stripping off suffixes and respelling plurals as
defined in the dictionary.
3. Successor stemmers: based on the lengths of prefixes and the variety of their successors. This information is used to divide a word into segments, and one of the segments is selected as the stem. The successor variety of words is used to segment a word by applying one of the following four methods:
2. Peak and plateau: a segment break is made after a character whose successor variety exceeds that of the characters immediately preceding and following it.
3. Complete word method: break on boundaries of complete words.
4. Entropy method: uses the distribution of successor variety letters.
1. Let |Dak| be the number of words beginning with the k-length sequence of letters a.
2. Let |Dakj| be the number of words in Dak with successor j.
3. The probability that a member of Dak has the successor j is given as |Dakj| / |Dak|.
The entropy of Dak is then the sum, over all successors j, of -(|Dakj| / |Dak|) * log2(|Dakj| / |Dak|).
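The successor variety counts underlying these formulas can be sketched over a small invented corpus:

```python
# Sketch of successor variety: for a given prefix, count the distinct
# letters that follow it among the words of the corpus.
corpus = ["able", "axle", "accident", "ape", "about"]

def successor_variety(prefix, words):
    """Number of distinct letters that follow the prefix in the corpus."""
    successors = {w[len(prefix)] for w in words
                  if w.startswith(prefix) and len(w) > len(prefix)}
    return len(successors)

successor_variety("a", corpus)    # followers: b, x, c, p -> 4
successor_variety("ab", corpus)   # followers: l, o       -> 2
```

Segment breaks are then placed where these counts peak, as in the peak-and-plateau method above.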
After a word has been segmented, the segment to be used as the stem must be selected. Hafer and Weiss selected the following rule:
For each word, a list of the documents in which the word is found is stored (the inversion of documents). Each document is given a unique numerical identifier, which is stored in the inversion list. A dictionary is used to locate the inversion list for a particular word; it is a sorted list of all unique words (processing tokens) in the system, each with a pointer to its inversion list. The dictionary can also store other information used in query optimization, such as the lengths of inversion lists, to increase precision.
Ex: an entry 'bit 1(10), 1(12), 1(18)' indicates that the word 'bit' occurs in document #1 at word positions 10, 12 and 18.
When a search is performed, the inversion lists for the terms in the query are located and the appropriate logic is applied between the inversion lists.
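A minimal sketch of a dictionary with positional inversion lists, matching the 'bit 1(10), 1(12), 1(18)' style of entry (the two sample documents are invented):

```python
from collections import defaultdict

# Sketch of building positional inversion (posting) lists:
# word -> [(document_id, word_position), ...]
def build_inversion_lists(docs):
    index = defaultdict(list)
    for doc_id, text in docs.items():
        for pos, word in enumerate(text.lower().split(), start=1):
            index[word].append((doc_id, pos))
    return index

docs = {1: "the bit and the byte", 2: "bit by bit"}
index = build_inversion_lists(docs)
index["bit"]   # [(1, 2), (2, 1), (2, 3)]
```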
Words with special characteristics can be stored in their own dictionary, e.g., dates, which require date ranging, and numbers. In systems that support ranking, the lists are re-organized in ranked order.
B-trees can also be used for the inversion lists instead of a dictionary. The inversion lists may be stored at the leaf level or referenced from higher-level pointers.
A B-tree of order m is defined as:
All leaves are at the same level or differ by at most one level.
N-Grams can be viewed as a special technique for conflation (stemming) and as a unique
data structure in information systems.
Unlike stemming that generally tries to determine the stem of a word that represents the
semantic meaning of the word, n-grams do not care about semantics.
The searchable data structure is transformed into overlapping n-grams, which are then used
to create the searchable database.
Examples of bigrams, trigrams and pentagrams for the word phrase sea colony.
TU
JN
The symbol # is used to represent the interword symbol, which is any one of a set of symbols (e.g., blank, period, semicolon, colon, etc.). Each of the n-grams created becomes a separate processing token and is searchable. It is possible that the same n-gram can be created multiple times from a single word.
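The n-gram generation with '#' as the interword symbol can be sketched as:

```python
# Sketch of overlapping n-gram generation, using '#' as the interword
# symbol, applied to the 'sea colony' example above.
def ngrams(text, n):
    """Return all overlapping n-grams of the text."""
    s = text.replace(" ", "#")
    return [s[i:i + n] for i in range(len(s) - n + 1)]

ngrams("sea colony", 3)
# ['sea', 'ea#', 'a#c', '#co', 'col', 'olo', 'lon', 'ony']
```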
Uses:
Example: misspellings of 'computer' such as compuuter, compter, compiter, comptuer.
Zamora showed trigram analysis provided a viable data structure for identifying misspellings
and transposed characters.
This impacts information systems as a possible basis for identifying potential input errors for
correction as a procedure within the normalization process.
Frequency of occurrence of n-gram patterns can also be used for identifying the language of
an item.
Trigrams have been used for text compression and to manipulate the length of index terms.
Advantage: n-grams place a finite limit on the number of searchable tokens. The maximum number of unique n-grams that can be generated is MaxSeg(n) = (λ)^n, where n is the length of the n-grams and λ is the number of distinct processable symbols.
PAT structure (PAT tree or PAT array): a continuous-text input data structure (string-oriented, like the n-gram data structure). The input stream is transformed into a searchable data structure consisting of substrings, where all substrings are unique.
In the creation of PAT trees, each position in the input string is the anchor point for a sub-string (sistring) that starts at that point and includes all text up to the end of the input. A PAT tree is a binary tree, the most common class for prefix search; but PAT trees are sorted logically, which facilitates range searches, and they are more accurate than inversion files. PAT trees provide an alternative structure for supporting string search.
Examples of sistrings for a text input are shown below.
The key values are stored at the leaf nodes (bottom nodes) in the PAT Tree.
For a text input of size n there are n leaf nodes and at most n-1 higher-level nodes.
INPUT: 100110001101
sistring 1: 1001...
sistring 2: 001100...
sistring 3: 01100...
sistring 4: 11...
sistring 5: 1000...
sistring 6: 000...
sistring 7: 001101
sistring 8: 01101
The values in the intermediate nodes (indicated by rectangles) are the number of bits to skip until the next bit to compare, i.e., the bit that distinguishes between similar terms.
Partitioning of words is done by block size, where a block is simply a set of words; the code length is 16 bits. Search is accomplished by template matching on the bit positions. To avoid signatures becoming too dense with 1s, a maximum number of words per block is specified and an item is partitioned into blocks of that size. In the example below, the block size is set at five words, the code length is 16 bits and the number of bits allowed to be 1 for each word is five.
TEXT: Computer Science graduate students study (assume block size is five words)

Fig: word signatures for 'computer', 'Science', 'graduate', 'students', 'study', and the resulting block signature formed by superimposed coding.
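Superimposed coding can be sketched as follows; the hash function and bit assignments are illustrative, not those of any particular system:

```python
import hashlib

# Sketch of superimposed coding for signature files: each word hashes
# to a few positions in a 16-bit code, and the block signature is the
# bitwise OR of the word signatures.
CODE_LENGTH, BITS_PER_WORD = 16, 5

def word_signature(word):
    sig = 0
    for i in range(BITS_PER_WORD):
        digest = hashlib.md5(f"{word}:{i}".encode()).digest()
        sig |= 1 << (digest[0] % CODE_LENGTH)   # set one of the 16 bits
    return sig                                   # at most 5 bits set

def block_signature(words):
    sig = 0
    for w in words:
        sig |= word_signature(w)                 # superimpose (OR) signatures
    return sig

block = block_signature("computer science graduate students study".split())
# A query word may match the block only if all of its bits are in it
# (template matching on the bit positions):
maybe_match = (word_signature("computer") & block) == word_signature("computer")
```

Note that superimposed coding admits false positives: an absent word may still pass the bit test, so candidate blocks must be verified.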
Applications/Advantages
Signature files provide a practical solution for storing and locating information in a number of different situations. They have been applied to medium-size databases, databases with a low frequency of terms, WORM devices, parallel processing machines, and distributed environments.
HYPERTEXT AND XML DATA STRUCTURES
The advent of the Internet and its exponential growth and wide acceptance as a new global
information network has introduced new mechanisms for representing information.
This structure is called hypertext and differs from traditional information storage data
structures in format and use.
Hypertext is stored in HTML and XML formats. Both of these languages provide detailed descriptions for subsets of text, similar to zoning. Hypertext allows one item to reference another item via an embedded pointer. HTML defines the internal structure for information exchange over the WWW on the internet.
UNIT-3
Statistical Indexing
Natural Language
Concept Indexing
Hypertext Linkages
AUTOMATIC INDEXING

CLASSES OF AUTOMATIC INDEXING
This process is associated with the generation of the searchable data structures associated with an item. The left-hand side of the figure (identifying processing tokens, applying the stop list algorithm, characterizing tokens, applying stemming, and creating the searchable data structure) is part of the indexing process.
All systems go through the initial stage of zoning and identifying the processing tokens used to create the index. Some systems automatically divide the document up into fixed-length passages or localities, which become the item unit that is indexed. Filters, such as stop list and stemming algorithms, are used to reduce the processing tokens.
The statistical strategy covers the broadest range of indexing techniques and is the most common in commercial systems.
The statistics applied to the event data are probabilistic, Bayesian, vector space, and neural net. The simplest statistical approach stores a single statistic, such as how often each word occurs in an item, which is used for generating relevance scores after a Boolean search. Probabilistic indexing stores the information used in calculating a probability that a particular item satisfies (i.e., is relevant to) a particular query.
Bayesian and vector space approaches store the information used in generating a relative confidence level of an item's relevancy to a query. Neural networks are dynamic learning structures, discussed under concept indexing, where they are used to determine concept classes.
Concept indexing uses the words within an item to correlate to the concepts discussed in the item.
When generating the concept classes automatically, there may not be a name applicable to
the concept but just a statistical significance.
Finally, a special class of indexing can be defined by creation of hypertext linkages.
These linkages provide virtual threads of concepts between items versus directly defining the
concept within an item.
Each technique has its own strengths and weaknesses.
STATISTICAL INDEXING
The documents are found by a normal Boolean search, and then statistical calculations are performed on the hit file, ranking the output (e.g., the term-frequency algorithm).
1. Probability Weighting
2. Vector Weighting
4. Discrimination Value
Probabilistic systems attempt to calculate a probability value that should be invariant to both the calculation method and the text corpora (large collections of written/spoken texts). This allows easy integration of the final results when searches are performed across multiple databases that use different search algorithms.
The use of probability theory is a natural choice as the basis of evidential reasoning (drawing conclusions from evidence).
In an IRS, the relevance of an item is a continuous function from non-relevant to absolutely useful. Sources of problems in the application of probability theory come from a lack of accurate data and the simplified assumptions applied in the mathematical modeling. These cause the results of probabilistic approaches in ranking items to be less accurate than other approaches.
An advantage of the probabilistic approach is that it can identify its weak assumptions and work to strengthen them.
Ex: logistic regression.
The logistic reference model uses a random sample of query-document-term triples for which a binary relevance judgment has been made.
o The logarithm O is the logarithm of the odds of relevance for term Tk, which is present in document Dj and query Qi.
o The log odds that the ith query is relevant to the jth document is the sum of the log odds for all terms.
o The coefficients of the equation for the log odds are derived for a particular database using a random sample of query-document-term-relevance quadruples, and are used to predict the odds of relevance for other query-document pairs.
o Additional attributes of relative frequency in the query (QRF), relative frequency in the document (DRF) and relative frequency of the term in all the documents (RFAD) were included, producing an extended log odds formula.
o QRF = QAF / (total number of terms in the query), DRF = DAF / (total number of words in the document), and RFAD = (total number of occurrences of the term in the database) / (total number of all words in the database), where QAF and DAF are the absolute frequencies of the term in the query and in the document, respectively.
o Logs are used to reduce the impact of frequency information and to smooth out skewed distributions.
o The coefficients and log(O(R)) were calculated, creating the final ranking formula for a query vector Q which contains q terms.
The logistic inference method was applied to the test database along with the Cornell
SMART vector system, inverse document frequency and cosine relevance weighting
formulas.
2. Vector Weighting
A vector is a one-dimensional set of values, where the order/position is fixed and represents a particular domain.
In a weighted vector the domain is a set of positive values; the value of each processing token represents its relative importance in the item. A binary vector requires a decision process to determine if the degree to which a particular token represents the semantics of an item is sufficient to include it in the vector.
Ex: a five-page item may have had only one sentence like 'Standard taxation of the shipment of the oil to refineries is enforced.'
For the binary vector, the concepts of 'Tax' and 'Shipment' are below the threshold of importance (e.g., assume the threshold is 1.0) and they are not included in the vector.
A weighted vector acts the same as a binary vector but provides a range of values that
accommodates variance in the relative importance of processing tokens in
representing the item.
The use of weights also provides a basis for determining the rank of an item.
The vector approach allows for mathematical and physical representation using a
vector space model.
Each processing token can be considered another dimension in an item representation space.
A 3D vector representation can be drawn assuming there are only three processing tokens:
Petroleum, Mexico and Oil.
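The binary/weighted distinction above can be sketched in a few lines. The token set and the per-token importance values below are assumed for illustration, not drawn from any real collection:

```python
# Sketch: binary vs. weighted vector for one item over a fixed token order.
TOKENS = ["petroleum", "mexico", "oil"]  # the fixed dimensions of the space

def binary_vector(token_weights, threshold=1.0):
    """1 if the token's importance reaches the threshold, else 0."""
    return [1 if token_weights.get(t, 0.0) >= threshold else 0 for t in TOKENS]

def weighted_vector(token_weights):
    """Keep the raw importance value in each dimension."""
    return [token_weights.get(t, 0.0) for t in TOKENS]

item = {"petroleum": 2.8, "mexico": 1.6, "oil": 0.3}  # assumed importances
print(binary_vector(item))    # "oil" falls below the threshold and is dropped
print(weighted_vector(item))  # the weighted form preserves the full range
```

The weighted form retains the information the binary thresholding discards, which is what later makes ranking possible.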
If the word "computer" occurs 15 times within an item, it has a weight of 15.
The term frequency weighting formula is:
(1 + log(TF)) / (1 + log(average TF))
and the pivoted unique normalization is:
(1 - slope) * pivot + slope * (number of unique terms)
where slope was set at 0.2 and the pivot was set to the average number of unique terms occurring in
the collection.
This leads to the final algorithm that weights each term by the term frequency formula divided by the
pivoted normalization.
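The combined formula can be sketched as a single function. The input values in the example call are illustrative only:

```python
import math

def pivoted_weight(tf, avg_tf, unique_terms, pivot, slope=0.2):
    """Term weight = log-damped TF divided by the pivoted unique normalization.
    tf:           frequency of the term in the document
    avg_tf:       average term frequency in the document
    unique_terms: number of unique terms in the document
    pivot:        average number of unique terms across the collection
    """
    damped = (1 + math.log(tf)) / (1 + math.log(avg_tf))
    norm = (1 - slope) * pivot + slope * unique_terms
    return damped / norm

# Illustrative values only:
w = pivoted_weight(tf=15, avg_tf=3.0, unique_terms=120, pivot=100)
```

Note that a document whose number of unique terms equals the pivot gets a normalization equal to the pivot itself, which is the point of the scheme: documents longer than average are penalized, shorter ones are boosted.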
The term "computer" represents a concept used in an item, but it does not help a user find the
specific information being sought, since a search on it returns much of the complete database.
This leads to the general statement enhancing weighting algorithms: the weight assigned
to a term should be inversely proportional to the number of items in the
database that contain that term.
In the inverse document frequency (IDF) approach the weight is:
WEIGHTij = TFij * (log2(n / IFj) + 1)
where TFij (term frequency) is the frequency of term j in item i, n is the number of items in
the database and
IFj (item frequency or document frequency) is the number of items in the database that have
term j in them.
Ex., assume that the term "oil" is found in 128 items, "Mexico" is found in 16 items and
"refinery" is found in 1024 items.
If a new item arrives with all three terms in it, "oil" found 4 times, "Mexico" found 8 times,
and "refinery" found 10 times, and there are 2048 items in the total database,
the resultant inverse document frequency item vector is (20, 64, 20).
The values of n and IF vary as items are added to and deleted from the database.
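The worked example can be checked directly. This sketch uses the weighting form consistent with the (20, 64, 20) result quoted above:

```python
import math

def tf_idf(tf, n, item_freq):
    """WEIGHT = TF * (log2(n / IF) + 1), the weighting used in the example above."""
    return tf * (math.log2(n / item_freq) + 1)

n = 2048  # items in the database
vector = [tf_idf(4, n, 128),    # "oil":      4 * (log2(16)   + 1) = 20
          tf_idf(8, n, 16),     # "mexico":   8 * (log2(128)  + 1) = 64
          tf_idf(10, n, 1024)]  # "refinery": 10 * (log2(2)   + 1) = 20
print(vector)  # [20.0, 64.0, 20.0]
```

Note how "Mexico", the rarest term, ends up with the dominant weight even though its raw frequency is middling.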
3. Signal weighting
IDF adjusts the weight of a processing token for an item based upon the number of items that
contain the term in the existing database.
What it does not account for is the term frequency distribution of the processing token in the items
that contain the term, which can affect the ability to rank items.
For example, assume the terms SAW and DRILL are found in 5 items with the following
frequencies:
An instance of an event that occurs all the time has less information value than an instance of
a seldom occurring event.
If there are many independently occurring events, then the calculation for the average
information value across the events is:
AVE_INFO = SUMk pk * log2(1 / pk)
where pk is the probability of occurrence of event k; AVE_INFO is maximized when the events
are equally likely (a uniform distribution).
Treating a term's occurrences across the items that contain it as the events, a formula for
calculating the weighting factor called Signal (Dennis-67) can be defined:
SIGNALj = log2(TOTFj) - AVE_INFOj
where TOTFj is the total frequency of term j in the items that contain it and pk = TFjk / TOTFj.
The weighting factor for the term DRILL, which does not have a uniform distribution, is larger
than that for the term SAW and gives it a higher weight.
This technique could be used by itself or in combination with inverse document frequency or
other algorithms.
The overhead of the additional data needed in an index and the calculations required to get
the values have not been demonstrated to produce better results.
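The signal calculation can be sketched as follows. The original frequency table for SAW and DRILL is not reproduced here, so the per-item frequencies below are assumed (uniform vs. skewed, same total):

```python
import math

def signal(freqs):
    """SIGNAL = log2(TOTF) - AVE_INFO, where AVE_INFO is the entropy of the
    term's frequency distribution over the items that contain it."""
    totf = sum(freqs)
    ave_info = sum((f / totf) * math.log2(totf / f) for f in freqs)
    return math.log2(totf) - ave_info

# Assumed per-item frequencies across 5 items:
saw   = [10, 10, 10, 10, 10]   # uniform distribution
drill = [2, 2, 18, 10, 18]     # skewed distribution, same total of 50
print(signal(saw) < signal(drill))  # the skewed term gets the higher weight
```

A uniform distribution maximizes the entropy term, so the uniform term SAW gets the minimum possible signal for its total frequency.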
4. Discrimination value
The more all items appear the same, the harder it is to identify those that are needed.
Salton and Yang proposed a weighting algorithm that takes into consideration the ability of
a search term to discriminate among items, based upon a discrimination value for each term i:
DISCRIMi = AVESIMi - AVESIM
where AVESIM is the average similarity between every item in the database and AVESIMi
is the same calculation except that term i is removed from all items.
A positive value indicates that removal of term i has increased the similarity between items.
In this case, leaving the term in the database assists in discriminating between items and is of
value.
A value close to zero implies that the term's removal or inclusion does not change the
similarity between items.
If the value is negative, the term's effect on the database is to make the items appear more
similar, since their average similarity decreased with its removal.
Once the value of DISCRIMi is normalized as a positive number, it can be used in place of the
inverse document frequency factor in the standard weighting formula as:
WEIGHTki = TFki * DISCRIMi
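The discrimination value can be sketched directly from its definition. Cosine is used as the similarity measure and the toy item vectors are assumed, purely to show the sign behavior described above:

```python
from itertools import combinations
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def ave_sim(items):
    """Average pairwise similarity over every item pair in the collection."""
    pairs = list(combinations(items, 2))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

def discrim(items, i):
    """DISCRIM_i = AVESIM_i - AVESIM: change in average inter-item similarity
    when term (dimension) i is removed from every item."""
    removed = [v[:i] + v[i + 1:] for v in items]
    return ave_sim(removed) - ave_sim(items)

# Toy collection in which term 0 separates the items into two groups:
items = [[5, 1, 1], [4, 1, 1], [0, 1, 1], [0, 1, 1]]
print(discrim(items, 0) > 0)  # removing term 0 makes items more alike: keep it
```

Removing the separating term collapses every item onto the same vector, so the average similarity rises and DISCRIM comes out positive, exactly the "good discriminator" case described above.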
Often weighting schemes use information that is based upon processing token distributions
across the database.
Information databases tend to be dynamic with new items always being added and to a lesser
degree old items being changed or deleted.
Thus these factors are changing dynamically.
There are a number of approaches to compensate for the constant changing values.
a. Ignore the variances and calculate weights based upon current values, with the factors
changing over time. Periodically rebuild the complete search database.
b. Use a fixed value while monitoring changes in the factors. When the changes reach a
certain threshold, start using the new value and update all existing vectors with the new
value.
c. Store the invariant variables (e.g., term frequency within an item) and at search time
calculate the latest weights for processing tokens in items needed for search terms.
In the first approach, the database is periodically rebuilt and all term weights recalculated based upon the
most recent updates to the database.
For large databases in the millions of items, the overhead of rebuilding the database can be
significant.
In the second approach, there is a recognition that for the most frequently occurring items, the
aggregate values are large.
As such, minor changes in the values have negligible effect on the final weight calculation.
Thus, on a term basis, updates to the aggregate values are only made when enough changes
have accumulated that not using the current value would have an effect on the final weights and the
search/ranking process.
This process also distributes the update process over time by only updating a subset of terms
at any instance in time.
The third approach is the most accurate. The weighted values in the database only matter when
they are being used to determine items to return from a query or the rank order to return the
items.
This has more overhead in that database vector term weights must be calculated dynamically
for every query term.
If the system is using an inverted file search structure, this overhead is very minor.
The best environment would allow a user to run a query against multiple different time periods
and different databases that potentially use different weighting algorithms, and have the
system integrate the results into a single ranked Hit file.
Problems With the Vector Model
A major problem comes in the vector model when there are multiple topics being discussed in
a particular item.
For example, assume that an item has an in-depth discussion of oil in Mexico and also
coal in Pennsylvania. The vector model does not have a mechanism to associate each
energy source with its particular geographic area.
There is no way to associate correlation factors between terms since each dimension in a
vector is independent of the other dimensions.
Thus the item results in a high value in a search for coal in Mexico.
Bayesian Model
The concept of a vector space allows only one scalar value to be associated with each
processing token for each item.
One way of overcoming the restrictions inherent in a vector model is to use a Bayesian
approach to maintaining information on processing tokens.
The Bayesian model provides a conceptually simple yet complete model for information
systems.
The Bayesian approach is based upon conditional probabilities (e.g., Probability of Event 1
given Event 2 occurred).
This general concept can be applied to the search function as well as to creating the index to
the database.
Thus the general case, using the Bayesian formula, is P(REL | DOCi, Queryj), which is
interpreted as the probability of relevance (REL) to a search statement given a particular
document and query.
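The conditional-probability view can be sketched numerically. This toy example combines per-term evidence multiplicatively in odds space under a term-independence assumption; the probabilities are assumed values, not estimates from any real collection:

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def relevance_odds(prior_rel, term_probs):
    """O(REL | doc, query) = O(REL) * product over query terms of
    P(term | REL) / P(term | NONREL), assuming the terms are independent."""
    o = odds(prior_rel)
    for p_rel, p_nonrel in term_probs:
        o *= p_rel / p_nonrel
    return o

# Two query terms present in the document, with assumed probabilities:
o = relevance_odds(0.1, [(0.8, 0.3), (0.6, 0.4)])
p = o / (1 + o)  # convert odds back to a probability of relevance
```

Each term whose presence is more likely under relevance than non-relevance pulls the odds up; terms with ratio below 1 would pull them down.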
In addition to search, Bayesian formulas can be used in determining the weights associated
with a particular processing token in an item.
The objective of creating the index to an item is to represent the semantic information in the
item.
A Bayesian network can be used to determine the final set of processing tokens (called topics)
and their weights.
A simple view of the process is a two-level network, where Ti represents the relevance of topic i in a particular
item and Pj represents a statistic associated with the event of processing token j being
present in the item.
The statistics associated with the processing token are typically frequency of occurrence.
But they can also incorporate proximity factors that are useful in items that discuss multiple
topics.
There is one major assumption made in this model:Assumption of Binary Independence : the
topics and the processing token statistics are independent of each other. The existence of one
topic is not related to the existence of the other topics. The existence of one processing token
is not related to the existence of other processing tokens.
In most cases this assumption is not true. Some topics are related to other topics and some
processing tokens are related to other processing tokens.
For example, the topics of Politics and Economics are in some instances related to each
other and in many other instances totally unrelated.
The same type of example would apply to processing tokens.
There are two approaches to handling this problem.
The first is to assume that there are dependencies, but that the errors introduced by assuming
mutual independence do not noticeably affect the determination of relevance of an item
nor its relative rank compared with other retrieved items.
This is the most common approach used in system implementations.
A second approach can extend the network to additional layers to handle interdependencies.
Thus an additional layer of Independent Topics (ITs) can be placed above the Topic layer and
a layer of Independent Processing Tokens (IPs) can be placed above the processing token
layer.
The new set of Independent Processing Tokens can then be used to define the attributes
associated with the set of topics selected to represent the semantics of an item.
To compensate for dependencies between topics the final layer of Independent Topics is
created.
The degree to which each layer is created depends upon the error that could be introduced by
allowing for dependencies between Topics or Processing Tokens.
Although this approach is the most mathematically correct, it suffers from losing a level of
precision by reducing the number of concepts available to define the semantics of an item.
NATURAL LANGUAGE
The goal of natural language processing is to use the semantic information in addition to the
statistical information to enhance the indexing of the item.
This improves the precision of searches, reducing the number of false hits a user reviews.
The semantic information is extracted as a result of processing the language rather than
treating each word as an independent entity.
The simplest output of this process results in generation of phrases that become indexes to an
item.
More complex analysis generates thematic representation of events rather than phrases.
Statistical approaches use proximity as the basis behind determining the strength of word
relationships in generating phrases.
For example, with a proximity constraint of adjacency, the phrases venetian blind and blind
Venetian may appear related and map to the same phrase.
But syntactically and semantically those phrases are very different concepts.
Natural language processing can also combine the concepts into higher level concepts
sometimes referred to as thematic representations.
1. Index Phrase Generation
The goal of indexing is to represent the semantic concepts of an item in the information system
to support finding relevant information.
Single words have conceptual context, but frequently they are too general to help the user find
the desired information.
Term phrases allow additional specification and focusing of the concept to provide better
precision and reduce the user's overhead of retrieving non-relevant items.
Having the modifier "grass" or "magnetic" associated with the term "field" clearly
disambiguates between very different concepts.
One of the earliest statistical approaches to determining term phrases used a COHESION
factor between terms (Salton-83):
COHESIONk,h = SIZE-FACTOR * (PAIR-FREQk,h / (TOTFk * TOTFh))
where SIZE-FACTOR is a normalization factor based upon the size of the vocabulary,
PAIR-FREQk,h is the total frequency of co-occurrence of the pair Termk, Termh in the item collection,
and TOTFk is the total frequency of Termk.
Co-occurrence may be defined in terms of adjacency, word proximity, sentence proximity, etc.
This initial algorithm has been modified in the SMART system to be based on the following
guidelines:
o any pair of adjacent non-stop words is a potential phrase
o any pair must exist in 25 or more items
o phrase weighting uses a modified version of the SMART system single term algorithm
o normalization is achieved by dividing by the length of the single-term sub-vector.
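The cohesion factor can be sketched in one function. SIZE_FACTOR and the counts in the example call are illustrative values only:

```python
def cohesion(pair_freq, totf_k, totf_h, size_factor=1000):
    """COHESION(k, h) = SIZE_FACTOR * PAIR_FREQ / (TOTF_k * TOTF_h).
    pair_freq: co-occurrences of the pair (under some proximity definition)
    totf_k, totf_h: total frequencies of the individual terms
    """
    return size_factor * pair_freq / (totf_k * totf_h)

# A pair that co-occurs often relative to the terms' individual frequencies
# scores higher than a pair with the same term frequencies that rarely co-occurs:
print(cohesion(40, 100, 80) > cohesion(5, 100, 80))
```

Dividing by both total frequencies is what keeps common terms from forming spurious phrases: two frequent terms must co-occur very often to reach the same cohesion as two rare terms that co-occur occasionally.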
Statistical approaches tend to focus on two term phrases.
An advantage of the natural language approaches is their ability to produce multiple-term phrases to denote a
single concept.
If a phrase such as industrious intelligent students was used often, a statistical approach
would create phrases such as industrious intelligent and intelligent student.
A natural language approach would create phrases such as industrious student, intelligent
student and industrious intelligent student.
The first step in a natural language determination of phrases is a lexical analysis of the input.
In its simplest form this is a part of speech tagger that, for example, identifies noun phrases by
recognizing adjectives and nouns.
Precise part of speech taggers exist that are accurate to the 99 per cent range.
Additionally, proper noun identification tools exist that allow for accurate identification of
names, locations and organizations since these values should be indexed as phrases and not
undergo stemming.
The Tagged Text Parser (TTP), based upon the Linguistic String Grammar (Sager-81),
produces a regularized parse tree representation of each sentence, reflecting the predicate-argument structure (Strzalkowski-93).
The tagged text parser contains over 400 grammar production rules.
The TTP parse trees are header-modifier pairs where the header is the main concept and the
modifiers are the additional descriptors that form the concept and eliminate ambiguities.
Ex., the regularized parse tree structure generated for "The former Soviet President has been
a local hero" is:
|assert
    perf[HAVE]
        verb[BE]
        subject
            np
                noun[President]
                t_pos[The]
                adj[former]
                adj[Soviet]
        object
            np
                noun[hero]
                t_pos[a]
                adj[local]
To determine if a header-modifier pair warrants indexing, Strzalkowski calculates a value for
Informational Contribution (IC) for each element in the pair.
The basis behind the IC formula is a conditional probability between the terms. The formula
for IC between two terms (x, y) is:
IC(x, (x,y)) = f(x,y) / (n(x) + D(x) - 1)
where f(x,y) is the frequency of (x,y) in the database, n(x) is the number of pairs in which x
occurs at the same position as in (x,y), and D(x) is the dispersion parameter, which is the
number of distinct words with which x is paired. When IC = 1, x occurs only with y.
The IC values are then used in a formula for weighting phrases.
Natural language processing not only produces more accurate term phrases, but can provide
higher level semantic information identifying relationships between concepts.
The system adds the functional processes Relationship Concept Detectors, Conceptual Graph
Generators and Conceptual Graph Matchers that generate higher level linguistic relationships,
including semantic and discourse level relationships.
During the first phase of this approach, the processing tokens in the document are mapped to
Subject Codes.
These codes equate to index term assignment and have some similarities to the concept-based
systems.
The next phase is called the Text Structurer, which attempts to identify general discourse level
areas within an item.
The next level of semantic processing is the assignment of terms to components, classifying
the intent of the terms in the text and identifying the topical statements.
The next level of natural language processing identifies interrelationships between the
concepts.
CONCEPT INDEXING
Natural language processing starts with a basis of the terms within an item and extends the
information kept on an item to phrases and higher level concepts such as the relationships
between concepts.
Concept indexing takes the abstraction a level further.
Its goal is to use concepts instead of terms as the basis for the index, producing a reduced-dimension
vector space.
Concept indexing can start with a number of unlabeled concept classes and let the information
in the items define the concept classes created.
A term such as automobile could be associated with concepts such as vehicle,
transportation, mechanical device, fuel, and environment.
The term "automobile" is strongly related to "vehicle," less to "transportation" and much
less to the other terms.
Thus a term in an item needs to be represented by many concept codes with different weights
for a particular item.
The basis behind the generation of the concept approach is a neural network model.
An example demonstrates how the process would work for the term "automobile".
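The many-concepts-per-term idea above can be sketched as follows. The concept classes and weights for "automobile" are the illustrative ones mentioned in the text, with assumed numeric values:

```python
# Sketch of concept indexing: one term contributes to several concept classes
# with different weights. The weights below are assumed for illustration.
TERM_CONCEPTS = {
    "automobile": {"vehicle": 0.9, "transportation": 0.6,
                   "mechanical device": 0.3, "fuel": 0.2, "environment": 0.1},
}

def concept_vector(term_counts):
    """Fold each term's concept weights, scaled by its frequency in the item,
    into a single reduced-dimension concept vector for the item."""
    concepts = {}
    for term, count in term_counts.items():
        for concept, w in TERM_CONCEPTS.get(term, {}).items():
            concepts[concept] = concepts.get(concept, 0.0) + count * w
    return concepts

print(concept_vector({"automobile": 2}))
```

The item is then indexed by the concept dimensions rather than by the raw term, so items using different vocabulary for the same concept can still match.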
HYPERTEXT LINKAGES
The user navigates through the hierarchy by expanding the hyperlink.
3. Web Crawlers: A web crawler (also known as a Web spider or Web robot) is a
program or automated script which browses the World Wide Web in a methodical, automated
manner.
Crawlers are tools that allow a user to define items of interest; they automatically go to various
sites on the net and search for the desired information. They are known as search tools rather than
indexing tools.
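A core step of any crawler is extracting the links from a fetched page so they can be followed. This sketch uses only the Python standard library; the HTML string is a stand-in for a page that would normally come from a network fetch:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href targets of all anchor tags in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Stand-in for fetched HTML (a real crawler would download this):
page = '<html><body><a href="http://example.com/a">A</a><a href="/b">B</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['http://example.com/a', '/b']
```

A full crawler would resolve relative links against the page URL, maintain a visited set, and queue the new URLs for fetching.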
What is needed is an indexing algorithm for items that looks at hypertext linkages as an
extension of the concept where the link exists.
Currently, a concept is defined by the proximity of information.
Attempts have been made to automatically generate hyperlinks between items, but they
suffer from dynamically growing databases.
A significant portion of the errors has come from parsing rather than algorithmic problems.
UNIT-4
Cluster analysis is a technique for multivariate analysis that assigns items to automatically
created groups based on a calculation of the degree of association between items and groups.
In the information retrieval (IR) field, cluster analysis has been used to create groups of
documents with the goal of improving the efficiency and effectiveness of retrieval, or to
determine the structure of the literature of a field. The terms in a document collection can
also be clustered to show their relationships. The two main types of cluster analysis methods
are the non-hierarchical, which divide a data set of N items into M clusters, and the
hierarchical, which produce a nested data set in which pairs of items or clusters are
successively linked. The non-hierarchical methods such as the single pass and reallocation
methods are heuristic in nature and require less computation than the hierarchical methods.
However, the hierarchical methods have usually been preferred for cluster-based document
retrieval. The commonly used hierarchical methods, such as single link, complete link, group
average link, and Ward's method, have high space and time requirements. In order to cluster
the large data sets with high dimensionality that are typically found in IR applications, good
algorithms (ideally O(N^2) time, O(N) space) must be found. Examples are the SLINK and
minimal spanning tree algorithms for the single link method, the Voorhees algorithm for
group average link, and the reciprocal nearest neighbour algorithm for Ward's method.
4.1 CLUSTER ANALYSIS
4.1.1 Introduction
Cluster analysis is a statistical technique used to generate a category structure which fits a set
of observations. The groups which are formed should have a high degree of association
between members of the same group and a low degree between members of different groups
(Anderberg 1973). While cluster analysis is sometimes referred to as automatic classification,
this is not strictly accurate since the classes formed are not known prior to processing, as
classification implies, but are defined by the items assigned to them.
Because there is no need for the classes to be identified prior to processing, cluster analysis is
useful to provide structure in large multivariate data sets. It has been described as a tool of
discovery because it has the potential to reveal previously undetected relationships based on
complex data (Anderberg 1973). An early application of cluster analysis was to determine
taxonomic relationships among species. Psychiatric profiles, medical and clinical data,
census and survey data, images, and chemical structures and properties have all been studied
using cluster analytic methods, and there is an extensive and widely scattered journal
literature on the subject.
Basic texts include those by Anderberg (1973), Hartigan (1975), Everitt (1980), Aldenderfer
and Blashfield (1984), Romesburg (1984), Spath (1985), Jain and Dubes (1988) and
Kaufman (1990). Taxonomic applications have been described by Sneath and Sokal (1973),
and social science applications by Lorr (1983) and Hudson and Associates (1982).
Comprehensive reviews by Lee (1981), Dubes and Jain (1980) and Gordon (1987) are also
recommended.
Because cluster analysis is a technique for multivariate analysis that has application in many
fields, it is supported by a number of software packages which are often available in
academic and other computing environments. Most of the methods and some of the
algorithms described in this chapter are found in statistical analysis packages such as SAS,
SPSSX, and BMDP and cluster analysis packages such as CLUSTAN and
CLUSTAR/CLUSTID. Brief descriptions and sources for these and other packages are
provided by Romesburg (1984).
4.1.2 Applications in Information Retrieval
Documents may be clustered on the basis of the terms that they contain. The aim of this
approach has usually been to provide more efficient or more effective retrieval, though it has
also been used after retrieval to provide structure to large sets of retrieved documents. In
distributed systems, clustering can be used to allocate documents for storage. A recent review
(Willett 1988) provides a comprehensive summary of research on term-based document
clustering.
In carrying out a cluster analysis, a number of decisions must be made:
o Selecting the attributes on which items are to be clustered and their representation.
o Selecting an appropriate clustering method and similarity measure from those available.
o If the collection to be clustered is a dynamic one, the requirements for update must be
considered.
o If the aim is to use the clustered collection as the basis for information retrieval, a method
for searching the clusters or cluster hierarchy must be selected.
The emphasis in this chapter will be on the range of clustering methods available and
algorithms for their implementation, with discussion of the applications and evaluation that
have been carried out in an information retrieval environment. The following notation will be
used: N for the number of items Di in a data set, L for its dimensionality, and M for the
number of clusters Ci created. It will be assumed that the items to be clustered are documents,
and each document Di (or cluster representative Ci) is represented by (weighti1, . . ., weightiL),
where weightik is the weight assigned to termk in Di or Ci. The choice of an appropriate
document representation is discussed elsewhere in this text; a summary of research on term-weighting approaches is provided by Salton and Buckley (1988).
4.2 MEASURES OF ASSOCIATION
4.2.1 Introduction
In order to cluster the items in a data set, some means of quantifying the degree of
association between them is required. This may be a distance measure, or a measure of
similarity or dissimilarity. Some clustering methods have a theoretical requirement for use of
a specific measure (Euclidean distance for Ward's method, for example), but more commonly
the choice of measure is at the discretion of the researcher.
While there are a number of similarity measures available, and the choice of similarity
measure can have an effect on the clustering results obtained, there have been only a few
comparative studies (summarized by Willett [1988]). In cluster-based retrieval, the
determination of interdocument similarity depends on both the document representation, in
terms of the weights assigned to the indexing terms characterizing each document, and the
similarity coefficient that is chosen. The results of tests by Willett (1983) of similarity
coefficients in cluster-based retrieval suggest that it is important to use a measure that is
normalized by the length of the document vectors. The results of tests on weighting schemes
were less definitive but suggested that weighting of document terms is not as significant in
improving performance in cluster-based retrieval as it is in other types of retrieval. Sneath
and Sokal (1973) point out that simple similarity coefficients are often monotonic with more
complex ones, and argue against the use of weighting schemes. The measures described
below are commonly used in information retrieval applications. They are appropriate for
binary or real-valued weighting schemes.
4.2.2 Similarity Measures
A variety of distance and similarity measures is given by Anderberg (1973), while those most
suitable for comparing document vectors are discussed by Salton (1989). The Dice, Jaccard
and cosine coefficients have the attractions of simplicity and normalization and have often
been used for document clustering.
Dice coefficient:
SIM(Di, Dj) = 2 * SUMk (weightik * weightjk) / (SUMk weightik^2 + SUMk weightjk^2)
If binary term weights are used, the Dice coefficient reduces to:
SIM(Di, Dj) = 2C / (A + B)
where C is the number of terms that Di and Dj have in common, and A and B are the number
of terms in Di and Dj.
Jaccard coefficient:
SIM(Di, Dj) = SUMk (weightik * weightjk) / (SUMk weightik^2 + SUMk weightjk^2 - SUMk (weightik * weightjk))
which in binary form reduces to C / (A + B - C).
Cosine coefficient:
SIM(Di, Dj) = SUMk (weightik * weightjk) / sqrt(SUMk weightik^2 * SUMk weightjk^2)
which in binary form reduces to C / sqrt(A * B).
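The three coefficients can be sketched over weighted vectors, with binary vectors as a special case. The document vectors in the example are illustrative:

```python
import math

def dice(a, b):
    """Dice: 2 * dot(a, b) / (|a|^2 + |b|^2)."""
    num = 2 * sum(x * y for x, y in zip(a, b))
    return num / (sum(x * x for x in a) + sum(y * y for y in b))

def jaccard(a, b):
    """Jaccard: dot(a, b) / (|a|^2 + |b|^2 - dot(a, b))."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sum(x * x for x in a) + sum(y * y for y in b) - dot)

def cosine(a, b):
    """Cosine: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))

# With binary vectors these reduce to 2C/(A+B), C/(A+B-C) and C/sqrt(A*B):
d_i = [1, 1, 1, 0]  # A = 3 terms
d_j = [1, 1, 0, 1]  # B = 3 terms, C = 2 terms in common
print(dice(d_i, d_j), jaccard(d_i, d_j), cosine(d_i, d_j))
```

All three are normalized by the vector lengths, which is the property Willett's tests found important for cluster-based retrieval.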
The inverted file algorithm is particularly useful in limiting the amount of computation
required to calculate a similarity matrix, if the similarity measure used is one that results in a
0 value whenever a document-document or document-cluster pair have no terms in common
(Willett 1980; Perry and Willett 1983). The document term list is used as an index to the
inverted index lists that are needed for the similarity calculation. Only those
document/cluster pairs that share at least one common term will have their similarity
calculated; the remaining values in the similarity matrix are set to 0. The inverted file
algorithm is as follows:
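The algorithm listing itself was lost in extraction; the following is a minimal sketch of the idea, assuming inner-product similarity for simplicity. Only document pairs sharing at least one term are ever touched; every other similarity is implicitly 0:

```python
from collections import defaultdict

def inverted_file_similarities(docs):
    """Compute all nonzero document-document similarities via an inverted file.
    docs: {doc_id: {term: weight}}. Similarity here is the inner product."""
    inverted = defaultdict(list)           # term -> [(doc_id, weight)]
    for doc_id, terms in docs.items():
        for term, w in terms.items():
            inverted[term].append((doc_id, w))

    sims = defaultdict(float)              # (doc_i, doc_j) -> similarity
    for term, postings in inverted.items():
        # Each posting list contributes only to pairs that share this term.
        for i in range(len(postings)):
            for j in range(i + 1, len(postings)):
                (d1, w1), (d2, w2) = postings[i], postings[j]
                sims[tuple(sorted((d1, d2)))] += w1 * w2
    return dict(sims)

docs = {"D1": {"oil": 2, "mexico": 1},
        "D2": {"oil": 1},
        "D3": {"coal": 3}}
print(inverted_file_similarities(docs))  # D3 shares no terms: no entry at all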
It should be noted that the similarity matrix can be the basis for identifying a nearest
neighbor (NN), that is, finding the closest vector to a given vector from a set
of N multidimensional vectors. The identification of an NN arises in many clustering
algorithms, and for large data sets makes a significant contribution to the computational
requirement. Calculating and storing the similarity matrix, or recalculating it when needed,
provides a brute force approach to nearest neighbor identification. Therefore, if an efficient
NN-finding algorithm can be incorporated into the clustering algorithm, considerable savings
of processing time may be achieved. However, although there are a number of techniques
available for introducing efficiency into the NN-finding process (Bentley et al. 1980;
Murtagh 1985), these techniques are generally inappropriate for data sets with the high
dimensionality typical of information retrieval applications. Use of the inverted file algorithm
to calculate the similarity matrix or a row of it seems to be the best optimization technique
available in these circumstances.
There are a very large number of ways of sorting N objects into M groups, a problem
compounded by the fact that M is usually unknown. Most of the possible arrangements are of
no interest; it is the role of a clustering method to identify a set of groups or cluster that
reflects some underlying structure in the data. Moreover, there are many clustering methods
available, which have differing theoretical or empirical bases and therefore produce different
cluster structures. For a given clustering method, there may be a choice of
clustering algorithm or means to implement the method. The choice of clustering method
will determine the outcome, the choice of algorithm will determine the efficiency with which
it is achieved. In this section, an overview of the clustering methods most used in information
retrieval will be provided. The associated algorithms that are best suited to the processing of
the large data sets found in information retrieval applications are discussed in sections 4.4
and 4.5.
4.3.2 Computational and Storage Requirements
In cases where the data set to be processed is very large, the resources required for cluster
analysis may be considerable. A major component of the computation required is the
calculation of the document-document or document-cluster similarity. The time requirement
will be minimally O(NM), where M is the number of clusters, for the simpler reallocation
methods; where the similarity matrix must be constructed, the proportionality is at least N^2.
Most of the preferred clustering methods have time requirements of at least O(N^2). The
storage requirement will be O(N) if the data set is stored, or O(N^2) if the similarity matrix is
stored. For large N this may be unacceptable, and disk accesses may make processing time
too large if the similarity matrix is stored on disk. An alternative is to recalculate the
similarity matrix from the stored data whenever it is needed to identify the current most
similar pair, but this increases the time requirement by a factor of N^2.
Because of the heavy demands of processing and storage requirements, much of the early
work on cluster analysis for information retrieval was limited to small data sets, often only a
few hundred items. However, improvements in processing and storage capacity and the
introduction of efficient algorithms for implementing some clustering methods and finding
nearest neighbors have made it feasible to cluster increasingly large data sets. Salton and
Bergmark (1981) have pointed out that there is a high degree of parallelism in the calculation
of a set of similarity values, and parallel hardware also offers the potential for increased
processing efficiency (Willett and Rasmussen 1990).
Nonhierarchical methods
Clustering methods are usually categorized according to the type of cluster structure they
produce. The simple nonhierarchical methods divide the data set of N objects into M clusters;
where no overlap is allowed, these are known as partitioning methods. Each item has
membership in the cluster with which it is most similar, and the cluster may be represented
by a centroid or cluster representative that is indicative of the characteristics of the items it
contains. The more complex hierarchical methods produce a nested data set in which pairs of
items or clusters are successively linked until every item in the data set is connected. The
hierarchical methods can be either agglomerative, with N - 1 pairwise joins beginning from
an unclustered data set, or divisive, beginning with all objects in a single cluster and
progressing through N - 1 divisions of some cluster into a smaller cluster. The divisive
methods are less commonly used and few algorithms are available; only agglomerative
methods will be discussed in this chapter.
The nonhierarchical methods are heuristic in nature, since a priori decisions about the
number of clusters, cluster size, criterion for cluster membership, and form of cluster
representation are required. Since the large number of possible divisions of N items
into M clusters makes an exhaustive search for an optimal solution infeasible, the nonhierarchical methods attempt to
find an approximation, usually by partitioning the data set in some way and then reallocating
items until some criterion is optimized. The computational requirement O(NM) is much
lower than for the hierarchical methods if M << N, so that large data sets can be partitioned.
The nonhierarchical methods were used for most of the early work in document clustering
when computational resources were limited; see for example work on the SMART project,
described by Salton (1971).
Hierarchical methods
Most of the early published work on cluster analysis employed hierarchical methods
(Blashfield and Aldenderfer 1978), though this was not so in the IR field. With
improvements in computer resources, the easy availability of software packages for cluster
analysis, and improved algorithms, the last decade of work on clustering in IR has
concentrated on the hierarchical agglomerative clustering methods (HACM; Willett 1988).
The cluster structure resulting from a hierarchical agglomerative clustering method is often
displayed as a dendrogram like that shown in Figure 16.2. The order of pairwise coupling of
the objects in the data set is shown, and the value of the similarity function (level) at which
each fusion occurred. The dendrogram is a useful representation when considering retrieval
from a clustered set of documents, since it indicates the paths that the retrieval process may
follow.
The most commonly used hierarchical agglomerative clustering methods and their
characteristics are:
single link: The single link method joins, at each step, the most similar pair of objects that
are not yet in the same cluster. It has some attractive theoretical properties (Jardine and
Sibson 1971) and can be implemented relatively efficiently, so it has been widely used.
However, it has a tendency toward formation of long straggly clusters, or chaining, which
makes it suitable for delineating ellipsoidal clusters but unsuitable for isolating spherical or
poorly separated clusters.
complete link: The complete link method uses the least similar pair between each of two
clusters to determine the intercluster similarity; it is called complete link because all entities
in a cluster are linked to one another within some minimum similarity. Small, tightly bound
clusters are characteristic of this method.
group average link: As the name implies, the group average link method uses the average
values of the pairwise links within a cluster to determine similarity. All objects contribute to
intercluster similarity, resulting in a structure intermediate between the loosely bound single
link clusters and tightly bound complete link clusters. The group average method has ranked
well in evaluative studies of clustering methods (Lorr 1983).
Ward's method: Ward's method is also known as the minimum variance method because it
joins at each stage the cluster pair whose merger minimizes the increase in the total within-group error sum of squares, based on the Euclidean distance between centroids. It tends to
produce homogeneous clusters and a symmetric hierarchy, and its definition of a cluster
center of gravity provides a useful way of representing a cluster. Tests have shown it to be
good at recovering cluster structure, though it is sensitive to outliers and poor at recovering
elongated clusters (Lorr 1983).
Two other HACM are sometimes used, the centroid and median methods. In the centroid
method, each cluster as it is formed is represented by the coordinates of a group centroid, and
at each stage in the clustering the pair of clusters with the most similar mean centroid is
merged. The median method is similar but the centroids of the two merging clusters are not
weighted proportionally to the size of the clusters. A disadvantage of these two methods is
that a newly formed cluster may be closer to some other point than its constituent clusters
were, resulting in reversals or inversions in the cluster hierarchy.
4.4.1 Single Pass Methods

The single pass method processes the items one at a time, assigning each either to an existing cluster or as the seed of a new one:
1. Assign the first item D1 as the representative for cluster C1.
2. For the next item Di, calculate the similarity S with the representative for each existing cluster.
3. If Smax is greater than a threshold value ST, add the item to the corresponding cluster and
recalculate the cluster representative; otherwise, use Di to initiate a new cluster.
4. If an item Di remains to be clustered, return to step 2.
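The steps above can be sketched in C. This is an illustrative sketch, not code from the original text: the one-dimensional "items", the inverse-distance similarity function, and the use of a running mean as the cluster representative are all assumptions made for the example.

```c
#include <assert.h>
#include <math.h>

#define MAXCLUSTERS 100

/* cluster representatives (here: running means of 1-D items) and sizes */
static double rep[MAXCLUSTERS];
static int csize[MAXCLUSTERS];

/* similarity between a 1-D item and a representative:
   inverse of distance, so closer means more similar */
static double similarity(double a, double b)
{
    return 1.0 / (1.0 + fabs(a - b));
}

/* Single pass clustering: each item joins the most similar existing
   cluster if the similarity Smax exceeds the threshold st; otherwise
   it initiates a new cluster. Returns the number of clusters formed. */
int single_pass(const double *items, int n, double st)
{
    int nclusters = 0;
    for (int i = 0; i < n; i++) {
        int best = -1;
        double smax = -1.0;
        for (int c = 0; c < nclusters; c++) {
            double s = similarity(items[i], rep[c]);
            if (s > smax) { smax = s; best = c; }
        }
        if (best >= 0 && smax > st) {
            /* add item and recalculate the cluster representative */
            rep[best] = (rep[best] * csize[best] + items[i]) / (csize[best] + 1);
            csize[best]++;
        } else {
            /* initiate a new cluster with this item */
            rep[nclusters] = items[i];
            csize[nclusters] = 1;
            nclusters++;
        }
    }
    return nclusters;
}
```

Note how the order dependence criticized in the text shows up here: the representatives drift as items arrive, so a different input order can yield different clusters.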
Though the single pass method has the advantage of simplicity, it is often criticized for its
tendency to produce large clusters early in the clustering pass, and because the clusters
formed are not independent of the order in which the data set is processed. It is sometimes
used to form the groups that are used to initiate reallocation clustering. An example of a
single pass algorithm developed for document clustering is the cover coefficient algorithm
(Can and Ozkarahan 1984). In this algorithm, a set of documents is selected as cluster seeds,
and then each document is assigned to the cluster seed that maximally covers it. For Di, the
cover coefficient is a measure that incorporates the extent to which it is covered by Dj and the
uniqueness of Di, that is, the extent to which it is covered by itself.
4.4.2 Reallocation Methods
The reallocation methods operate by selecting some initial partition of the data set and then
moving items from cluster to cluster to obtain an improved partition. Anderberg (1973)
discusses some of the criteria that have been suggested to establish an initial partition and to
monitor the improvement achieved by reallocation. A general algorithm is:
1. Select M cluster representatives or centroids.
2. For i = 1 to N, assign Di to the most similar centroid.
3. For j = 1 to M, recalculate the cluster centroid Cj.
4. Repeat steps 2 and 3 until there is little or no change in cluster membership during a pass
through the file.
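The general reallocation algorithm can be sketched in C as a k-means-style loop. This is an illustrative sketch rather than code from the text; the one-dimensional items, distance-based similarity, and mean centroids are assumptions for the example.

```c
#include <assert.h>
#include <math.h>

/* Reallocation clustering for 1-D data: given M initial centroids,
   repeatedly (a) assign each item to the most similar (closest)
   centroid and (b) recalculate the centroids, until cluster
   membership stops changing. Returns the number of passes made. */
int reallocate(const double *items, int n, double *centroid, int m,
               int *member)
{
    int passes = 0, changed = 1;
    for (int i = 0; i < n; i++)
        member[i] = -1;           /* no item assigned yet */
    while (changed) {
        changed = 0;
        passes++;
        /* step 2: assign each Di to the most similar centroid */
        for (int i = 0; i < n; i++) {
            int best = 0;
            for (int c = 1; c < m; c++)
                if (fabs(items[i] - centroid[c]) < fabs(items[i] - centroid[best]))
                    best = c;
            if (member[i] != best) { member[i] = best; changed = 1; }
        }
        /* step 3: recalculate each centroid as the mean of its items;
           an empty cluster keeps its previous centroid */
        for (int c = 0; c < m; c++) {
            double sum = 0.0; int count = 0;
            for (int i = 0; i < n; i++)
                if (member[i] == c) { sum += items[i]; count++; }
            if (count > 0) centroid[c] = sum / count;
        }
    }
    return passes;
}
```

The O(NM) cost per pass noted in the text is visible in the nested assignment loop.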
The single pass and reallocation methods were used in early work in cluster analysis in IR,
such as the clustering experiments carried out in the SMART project (Salton 1971). Their
time and storage requirements are much lower than those of the HACM and much larger data
sets could be processed. With improved processing capability and more efficient hierarchical
algorithms, the HACMs are now usually preferred in practice, and the nonhierarchical
methods will not be considered further in this chapter.
All of the hierarchical agglomerative clustering methods can be implemented by a general
algorithm:
1. Identify the two closest points and combine them in a cluster.
2. Identify and combine the next two closest points (treating existing clusters as points).
3. If more than one cluster remains, return to step 2.
Individual HACM differ in the way in which the most similar pair is defined, and in the
means used to represent a cluster. Lance and Williams (1966) proposed a general
combinatorial formula, the Lance-Williams dissimilarity update formula, for calculating
dissimilarities between new clusters and existing points, based on the dissimilarities prior to
formation of the new cluster. If objects Ci and Cj have just been merged to form cluster Cij,
the dissimilarity d between the new cluster and any existing cluster Ck is given by:
d(Cij, Ck) = αi d(Ci, Ck) + αj d(Cj, Ck) + β d(Ci, Cj) + γ |d(Ci, Ck) − d(Cj, Ck)|

This formula can be modified to accommodate a variety of HACM by choice of the values of
αi, αj, β, and γ. The hierarchical clustering methods previously discussed are presented in Table
16.1 in the context of their Lance-Williams parameters and cluster centers.
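As an illustration, the update formula translates directly into C. The coefficient values shown below for single link (αi = αj = 1/2, β = 0, γ = −1/2), complete link (γ = +1/2), and group average link (αi = mi/(mi+mj), β = γ = 0) are the standard Lance-Williams parameters; the function names are ours, not from the text.

```c
#include <assert.h>
#include <math.h>

/* Lance-Williams dissimilarity update: given the dissimilarities
   d(Ci,Ck), d(Cj,Ck) and d(Ci,Cj) before Ci and Cj are merged,
   return the dissimilarity between the new cluster Cij and Ck. */
double lance_williams(double dik, double djk, double dij,
                      double ai, double aj, double beta, double gamma)
{
    return ai * dik + aj * djk + beta * dij + gamma * fabs(dik - djk);
}

/* single link: ai = aj = 1/2, beta = 0, gamma = -1/2 (yields the minimum) */
double single_link(double dik, double djk, double dij)
{
    return lance_williams(dik, djk, dij, 0.5, 0.5, 0.0, -0.5);
}

/* complete link: ai = aj = 1/2, beta = 0, gamma = +1/2 (yields the maximum) */
double complete_link(double dik, double djk, double dij)
{
    return lance_williams(dik, djk, dij, 0.5, 0.5, 0.0, 0.5);
}

/* group average link: ai = mi/(mi+mj), aj = mj/(mi+mj), beta = gamma = 0 */
double group_average(double dik, double djk, double dij, int mi, int mj)
{
    double ai = (double)mi / (mi + mj);
    return lance_williams(dik, djk, dij, ai, 1.0 - ai, 0.0, 0.0);
}
```

Choosing γ = −1/2 or +1/2 makes the formula collapse to min(dik, djk) or max(dik, djk) respectively, which is exactly the single and complete link definitions given earlier.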
Notes: mi is the number of items in Ci; the dissimilarity measure used for Ward's
method must be the increase in variance (section 4.5.5).
There are three approaches to implementation of the general HACM (Anderberg 1973), each
of which has implications for the time and storage requirements for processing. In the stored
matrix approach, an N × N matrix containing all pairwise dissimilarity values is stored, and
the Lance-Williams update formula makes it possible to recalculate the dissimilarity between
cluster centers using only the stored values. The time requirement is O(N²), rising to O(N³)
if a simple serial scan of the similarity matrix is used; the storage requirement is O(N²).
A stored data approach has only an O(N) storage requirement but the need to recalculate the
pairwise dissimilarity values for each fusion leads to an O(N³) time requirement. In
the sorted matrix approach, the dissimilarity matrix is calculated (O(N²)) and then sorted
(O(N² log N²)) prior to the construction of the hierarchy (O(N²)). The data set need not be
stored and the similarity matrix is processed serially, which minimizes disk accesses.
However, this approach is suitable only for the single link and complete link methods, which
do not require the recalculation of similarities during clustering.
In addition to the general algorithm, there are also algorithms specific to individual HACM.
For some methods, algorithms have been developed that are the optimal O(N²) in time
and O(N) in space (Murtagh 1984). These include several algorithms for the single link
method, Defays' algorithm for complete link, and a nearest-neighbor algorithm for Ward's
method, all of which are discussed below.
4.5.2 Single Link Method
The single link method merges at each stage the closest previously unlinked pair of points in
the data set. Since the distance between two clusters is defined as the distance between the
closest pair of points each of which is in one of the two clusters, no cluster centroid or
representative is required, and there is no need to recalculate the similarity matrix during
processing. This makes the method attractive both from a computational and a storage
perspective, and it also has desirable mathematical properties (Jardine and Sibson 1971), so
that it is one of the most widely used of the HACM.
A number of algorithms for the single link method have been reviewed by Rohlf (1982),
including related minimal spanning tree algorithms. The computational requirements range
from O(N log N) to O(N⁵). Many of these algorithms are not suitable for information retrieval
applications where the data sets have large N and high dimensionality. The single link
algorithms discussed below are those that have been found most useful for information
retrieval.
Van Rijsbergen (1971) developed an algorithm to generate the single link hierarchy that
allowed the similarity values to be presented in any order and therefore did not require the
storage of the similarity matrix. It is O(N²) in time and O(N) in storage requirements. It
generates the hierarchy in the form of a data structure that both facilitates searching and is
easily updated, and was the first to be applied to a relatively large collection of 11,613
documents (Croft 1977). However, most later work with large collections has used either the
SLINK or Prim-Dijkstra algorithms, which are quite simple to implement.
SLINK algorithm
The SLINK algorithm (Sibson 1973) is optimally efficient, O(N²) in time and O(N) in
storage, and therefore suitable for large data sets. It is simply a sequence of operations by
which a representation of the single link hierarchy can be recursively updated; the
dendrogram is built by inserting one point at a time into the representation.
The hierarchy is generated in a form known as the pointer representation, which consists of
two functions π and λ for a data set numbered 1..N, with the following conditions:

π(N) = N
π(i) > i for i < N
λ(N) = ∞
λ(π(i)) > λ(i) for i < N

In simple terms, λ(i) is the lowest level (distance or dissimilarity) at which i is no longer the
last (i.e., the highest numbered) object in its cluster, and π(i) is the last object in the cluster
it joins at this level; a mathematical definition for these parameters is provided by Sibson
(1973).
Fortran code for SLINK is provided in the original paper (Sibson 1973). In the pseudocode
below, three arrays of dimension N are used: pi (to hold the pointer representation), lambda
(to hold the distance value associated with each pointer), and distance (to process the current
row of the distance matrix); next indicates the current pointer for a point being examined.
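The SLINK pseudocode itself did not survive in these notes; the C sketch below is a reconstruction of the published algorithm (Sibson 1973), not the original listing. The one-dimensional points and absolute-difference distance are assumptions for illustration; pi holds the pointer representation and lambda the merge level associated with each pointer.

```c
#include <assert.h>
#include <math.h>

#define MAXN 100
#define INF 1.0e30

static int    pi[MAXN];       /* pointer representation */
static double lambda[MAXN];   /* level at which each pointer applies */
static double distance[MAXN]; /* current row of the distance matrix */

/* Build the single link hierarchy for n 1-D points by inserting one
   point at a time and recursively updating the pointer representation. */
void slink(const double *x, int n)
{
    for (int i = 0; i < n; i++) {
        pi[i] = i;
        lambda[i] = INF;
        /* distances from the new point i to all earlier points */
        for (int j = 0; j < i; j++)
            distance[j] = fabs(x[i] - x[j]);
        /* update the pointers of the earlier points */
        for (int j = 0; j < i; j++) {
            if (lambda[j] >= distance[j]) {
                if (lambda[j] < distance[pi[j]])
                    distance[pi[j]] = lambda[j];
                lambda[j] = distance[j];
                pi[j] = i;
            } else if (distance[j] < distance[pi[j]]) {
                distance[pi[j]] = distance[j];
            }
        }
        /* relabel pointers that now bypass an earlier merge */
        for (int j = 0; j < i; j++)
            if (lambda[j] >= lambda[pi[j]])
                pi[j] = i;
    }
}
```

For the points {0, 1, 10}, point 0 points to point 1 at level 1, and point 1 points to point 2 at level 9, matching the single link dendrogram for that data.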
For output in the form of a dendrogram, the pointer representation can be converted into
the packed representation. This can be accomplished in O(N²) time (with a small coefficient
for N²) and O(N) space. A FORTRAN subroutine to effect the transformation is provided in
Sibson's original paper.
Minimal spanning tree algorithms
A minimal spanning tree (MST) is a tree linking N objects with N - 1 connections so that
there are no loops and the sum of the N - 1 dissimilarities is minimized. It can be shown that
all the information required to generate a single link hierarchy for a set of points is contained
in their MST (Gower and Ross 1969). Once an MST has been constructed, the corresponding
single link hierarchy can be generated in O(N²) operations; or the data structures for the
MST can be modified so that the hierarchy can be built simultaneously (Rohlf 1982).
Two fundamental construction principles for MSTs are:
1. Any isolated point can be connected to a nearest neighbor.
2. Any isolated fragment (subset of an MST) can be connected to a nearest neighbor by a
shortest available link.
The Prim-Dijkstra algorithm grows the MST by repeated application of the second principle:
1. Place an arbitrary point in the MST as the initial fragment.
2. Find the point not in the MST closest to any point in the MST and add it to the fragment.
3. If a point remains that is not in the fragment, return to step 2.
The algorithm requires O(N²) time if, for each point not in the fragment, the identity and
distance to its nearest neighbor in the fragment is stored. As each new point is added to the
fragment, the distance from that point to each point not in the fragment is calculated, and the
NN information is updated if necessary. Since the dissimilarity matrix need not be stored, the
storage requirement is O(N).
In C-style pseudocode, with the arrays notintree, npoint, and ndistance holding, for each
unconnected point, its identity, its nearest neighbor in the tree, and the corresponding
distance:
/* initialize lists: points 0..n-2 start outside the tree */
for (i = 0; i < n-1; i++)
{
    ndistance[i] = MAXINT;
    notintree[i] = i;
}
/* arbitrarily place the last point (n-1) in the MST */
lastpoint = n-1;
nt = n-1;
/* grow the tree an object at a time */
for (i = 0; i < n-1; i++)
{
    /* consider the last point added to the tree for the NN lists */
    for (j = 0; j < nt; j++)
    {
        D = calculate_distance(lastpoint, notintree[j]);
        if (D < ndistance[j])
        {
            npoint[j] = lastpoint;
            ndistance[j] = D;
        }
    }
    /* find the unconnected point closest to a point in the tree */
    nj = index_of_min(ndistance, nt);
    /* add this point to the MST; store the pair and their
       clustering level */
    lastpoint = notintree[nj];
    store_in_MST(lastpoint, npoint[nj], ndistance[nj]);
    /* remove lastpoint from the notintree list and close up the
       npoint and ndistance lists */
    nt = nt - 1;
    notintree[nj] = notintree[nt];
    npoint[nj] = npoint[nt];
    ndistance[nj] = ndistance[nt];
}
4.5.3 Complete Link Method
The small, tightly bound clusters typical of the complete link method have performed well in
comparative studies of document retrieval (Voorhees 1986a). Unfortunately, it is difficult to
apply to large data sets since there does not seem to be an algorithm more effective than the
stored data or stored matrix approach to the general HACM algorithm.
Defays' CLINK algorithm
The best-known algorithm for implementing the complete link method is the CLINK
algorithm developed by Defays (1977). It is presented in a form analogous to the SLINK
algorithm, uses the same three arrays (pi, lambda, and distance), and like SLINK, produces
output in the form of the pointer representation. Defays presents a CLINK subroutine which
allows his algorithm to be incorporated into Sibson's original FORTRAN program for
SLINK. CLINK is efficient, requiring O(N²) time and O(N) space, but it does not seem to
generate an exact hierarchy and has given unsatisfactory results in some information retrieval
experiments (El-Hamdouchi and Willett 1989).
Voorhees algorithm
The Voorhees algorithm (1986b) for the complete link method has been used to cluster
relatively large document collections with better retrieval results than the CLINK algorithm
(El-Hamdouchi and Willett 1989). It is a variation on the sorted matrix approach, and is
based on the fact that if the similarities between all pairs of documents are processed in
descending order, two clusters of size mi and mj can be merged as soon as the (mi × mj)th
similarity of documents in the respective clusters is reached. This requires a sorted list of
document-document similarities, and a means of counting the number of similarities seen
between any two active clusters. The large number of zero-valued similarities in a typical
document collection makes it more efficient than its worst case O(N³) time, O(N²) storage
requirement would suggest; however, it is still very demanding of resources, and El-Hamdouchi and Willett found it impractical to apply it to the largest of the collections they
studied.
4.5.4 Group Average Link Method
Voorhees (1986b) has pointed out that the group average link hierarchy can be constructed
in O(N²) time and O(N) space if the similarity measure chosen is the inner product
of two appropriately weighted vectors. In this case, the similarity between a
cluster centroid and any document is equal to the mean similarity between the document and
all the documents in the cluster. Since the centroid of the cluster is the mean of all the
document vectors, the centroids can be used to compute the similarities between the clusters
while requiring only O(N) space. Voorhees was able to cluster a document collection of
12,684 items using this algorithm, for which she provides pseudocode.
Using Voorhees' weighting scheme and intercentroid similarity, El-Hamdouchi (1987) was
able to implement the group average link method using the reciprocal nearest neighbor
algorithm described below for Ward's method.
4.5.5 Ward's Method
Ward's method (Ward 1963; Ward and Hook 1963) follows the general algorithm for the
HACM, where the object/cluster pair joined at each stage is the one whose merger minimizes
the increase in the total within-group squared deviation about the means, or variance. When
two points Di and Dj are clustered, the increase in variance Iij is given by:

Iij = (mi mj / (mi + mj)) d²ij

where mi is the number of objects in Di and d²ij is the squared Euclidean distance, given by:

d²ij = Σ (k = 1..L) (xik − xjk)²

where Di is represented by a vector (xi1, xi2, ..., xiL) in L-dimensional space. The cluster
center for a pair of points Di and Dj is given by:

(mi Di + mj Dj) / (mi + mj)
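The increase-in-variance computation is small enough to state directly in C. This sketch is ours, with hypothetical function names; it assumes the clusters are represented by their center vectors and sizes as in the formulas above.

```c
#include <assert.h>
#include <math.h>

/* Squared Euclidean distance between two L-dimensional points. */
double sq_euclidean(const double *di, const double *dj, int L)
{
    double d2 = 0.0;
    for (int k = 0; k < L; k++)
        d2 += (di[k] - dj[k]) * (di[k] - dj[k]);
    return d2;
}

/* Ward's increase in variance for merging two clusters of sizes mi
   and mj whose centers are di and dj:
   Iij = (mi * mj / (mi + mj)) * d2ij */
double ward_increase(const double *di, int mi, const double *dj, int mj, int L)
{
    return ((double)mi * mj / (mi + mj)) * sq_euclidean(di, dj, L);
}
```

At each stage a Ward's-method implementation would merge the pair of clusters for which ward_increase is smallest.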
An efficient clustering algorithm for Ward's method can be based on the process of following
a chain of nearest neighbors (NN) until a pair of points is found, each of which is the other's
nearest neighbor (a reciprocal nearest neighbor, or RNN, pair):
1. Select an arbitrary point.
2. Follow the NN chain from this point till an RNN pair is found.
3. Merge these two points and replace them with a single point.
4. If there is a point in the NN chain preceding the merged points, return to step 2; otherwise
return to step 1. Stop when only one point remains.
This algorithm requires O(N²) computation but only O(N) storage. It carries out
agglomerations in restricted spatial regions, rather than in strict order of increasing
dissimilarity, but still results (for Ward's method) in a hierarchy that is unique and exact. This
is designated the Single Cluster Algorithm since it carries out one agglomeration per
iteration; a Multiple Cluster Algorithm, suitable for parallel processing, has also been
proposed (Murtagh 1985).
4.6 EVALUATION AND VALIDATION

When a clustering method is applied to a data set, two questions arise: is the method
appropriate for this data, and does the resulting cluster structure reflect real structure in the
data? These are important questions because any clustering method will produce a set of clusters,
and results are sought which reflect some "natural" grouping rather than one that arises as an
artifact of the method. The answer to the first question can be found in evaluative studies of
clustering methods, and to the second question, in validation techniques for clustering
solutions.
4.6.1 Evaluation
Many evaluative studies have attempted to determine the "best" clustering method (Lorr
1983) by applying a range of clustering methods to test data sets and comparing the quality
of the results, for example by using artificially created data structures, or comparing the
cluster results to a classification established by experts in the field. Even under laboratory
conditions it is difficult to evaluate clustering methods, since each method has different
properties and strengths. The results of these studies do not suggest a single best method,
though Ward's method, and in more recent studies, the group average method, have
performed well. It is usually advisable to apply more than one clustering method and use
some validation method to check the reliability of the resulting cluster structure.
For retrieval purposes, the "best" method for clustering a document collection is that which
provides the most effective retrieval in response to a query. Several evaluative studies have
taken this approach, using standard test collections of documents and queries. Most of the
early work used approximate clustering methods, or the least demanding of the HACM, the
single link method, a restriction imposed by limited processing resources. However, two
recent studies are noteworthy for their use of relatively large document collections for the
evaluation of a variety of HACM (El-Hamdouchi and Willett 1989; Voorhees 1986a).
Voorhees compared single link, complete link, and group average methods, using document
collections of up to 12,684 items, while El-Hamdouchi and Willett compared these three
methods plus Ward's method on document collections of up to 27,361 items. Voorhees found
complete link most effective for larger collections, with complete and group average link
comparable for smaller collections; single link hierarchies provided the worst retrieval
performance. El-Hamdouchi and Willett found group average most suitable for document
clustering. Complete link was not as effective as in the Voorhees study, though this may be
attributed to use of Defays' CLINK algorithm. As noted in section 4.8.1, there are several
ways in which retrieval from a clustered document collection can be performed, making
comparisons difficult when using retrieval as an evaluative tool for clustering methods.

4.6.2 Validation
Cluster validity procedures are used to verify whether the data structure produced by the
clustering method can be used to provide statistical evidence of the phenomenon under study.
Dubes and Jain (1980) survey the approaches that have been used, categorizing them on their
ability to answer four questions: is the data matrix random? how well does a hierarchy fit a
proximity matrix? is a partition valid? and which individual clusters appearing in a hierarchy
are valid? Willett (1988) has reviewed the application of validation methods to clustering of
document collections, primarily the application of the random graph hypothesis and the use
of distortion measures.
An approach that is carried out prior to clustering is also potentially useful. Tests
for clustering tendency attempt to determine whether worthwhile retrieval performance
would be achieved by clustering a data set, before investing the computational resources
which clustering the data set would entail. El-Hamdouchi and Willett (1987) describe three
such tests. The overlap test is applied to a set of documents for which query-relevance
judgments are available. All the relevant-relevant (RR) and relevant-nonrelevant (RNR)
interdocument similarities are calculated for a given query, and the overlap (the fraction of
the RR and RNR distributions that is common to both) is calculated. Collections with a low
overlap value are expected to be better suited to clustering than those with high overlap
values. Voorhees' nearest neighbor test considers, for each relevant document for a query,
how many of its n nearest neighbors are also relevant; by averaging over all relevant
documents for all queries in a test collection, a single indicator for a collection can be
obtained. The density test is defined as the total number of postings in the document
collection divided by the product of the number of documents and the number of terms that
have been used for the indexing of those documents. It is particularly useful because it does
not require any predetermined query-relevance data or any calculation of interdocument
similarities. Of the three tests, the density test provided the best indication of actual retrieval
performance from a clustered data set.
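The density test described above reduces to a single ratio. A minimal C sketch follows; the function name and the example counts are hypothetical, used only to illustrate the calculation.

```c
#include <assert.h>
#include <math.h>

/* Clustering-tendency density test: total number of postings in the
   collection divided by (number of documents * number of indexing
   terms). Lower density suggests sparser, more separable data. */
double density_test(long postings, long ndocs, long nterms)
{
    return (double)postings / ((double)ndocs * (double)nterms);
}
```

For a hypothetical collection with 5,000 postings over 100 documents indexed by 1,000 terms, the density is 0.05; comparing such values across collections is what makes the test usable without query-relevance data.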
The goal of a hierarchical clustering process may be to partition the data set into some
unknown number of clusters M (which may be visualized as drawing a horizontal line across
the dendrogram at some clustering level). This requires the application of a stopping rule, a
statistical test to predict the clustering level that will determine M. Milligan and Cooper
(1985) evaluated and ranked 30 such rules, one or more of which is usually present in a
software package for cluster analysis (though not necessarily those ranked highest by
Milligan and Cooper).
4.7 UPDATING THE CLUSTER STRUCTURE
In many information retrieval environments, the collection is dynamic, with new items being
added on a regular basis, and, less frequently, old items withdrawn. Since clustering a large
data set is resource-intensive, some mechanism for updating the cluster structure without the
need to recluster the entire collection is desirable. Relatively little work has been done on
methods for cluster maintenance (Can and Ozkarahan 1989), particularly for the hierarchical
methods. In certain cases, update of the cluster structure is implicit in the clustering
algorithm. This is true of both the van Rijsbergen and SLINK algorithms for the single link
method, and the CLINK algorithm for the complete link method, all of which operate by
iteratively inserting a document into an existing hierarchy.
Where the application uses a partitioned data set (from a nonhierarchical or hierarchical
method), new items may simply be added to the most similar partition until the cluster
structure becomes distorted and it is necessary to regenerate it. For a few methods, cluster
update has been specifically incorporated. Crouch's reallocation algorithm includes a
mechanism for cluster maintenance (Crouch 1975). Can and Ozkarahan (1989) review the
approaches that have been taken for cluster maintenance and propose a strategy for dynamic
cluster maintenance based on their cover coefficient concept.
4.8 DOCUMENT RETRIEVAL FROM A CLUSTERED DATA SET
Document clustering has been studied because of its potential for improving the efficiency of
retrieval, for improving the effectiveness of retrieval, and because it provides an alternative
to Boolean or best match retrieval. Initially the emphasis was on efficiency: document
collections were partitioned, using nonhierarchical methods, and queries were matched
against cluster centroids, which reduced the number of query-document comparisons that
were necessary in a serial search. Studies of retrieval from partitioned document collections
showed that though retrieval efficiency was achieved, there was a decrease in retrieval
effectiveness (Salton 1971). Subsequent study has concentrated on the effectiveness of
retrieval from hierarchically clustered document collections, based on the cluster hypothesis,
which states that associations between documents convey information about the relevance of
documents to requests (van Rijsbergen 1979).
4.8.1 Approaches to Retrieval
There are several ways in which a query can be matched against the documents in a hierarchy
(Willett 1988). A top-down search involves entering the tree at the root and matching the
query against the cluster at each node, moving down the tree following the path of greater
similarity. The search is terminated according to some criterion, for instance when the cluster
size drops below the number of documents desired, or when the query-cluster similarity
begins to decrease. A single cluster is retrieved when the search is terminated. Since it is
difficult to represent the very large top-level clusters adequately, a useful
modification is to eliminate the top-level clusters by applying a threshold clustering level to
the hierarchy to obtain a partition, and using the best of these mid-level clusters as the
starting point for the top-down search. The top-down strategy has been shown to work well
with the complete link method (Voorhees 1986a).
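The top-down strategy can be given as a minimal C sketch, assuming a binary hierarchy with one-dimensional centroids and termination at a leaf (a simplification of the termination criteria described above); the structure and names are ours, not from the text.

```c
#include <assert.h>
#include <stddef.h>
#include <math.h>

/* A node in a cluster hierarchy: an internal cluster with two
   children, or a leaf holding a single document. Centroids are
   1-D for illustration only. */
struct cnode {
    double centroid;
    struct cnode *left, *right;  /* both NULL at a leaf */
    int doc;                     /* document id, valid at a leaf */
};

/* Top-down search: at each node follow the child whose centroid is
   more similar (here: closer) to the query, until a leaf is reached.
   A real system would instead stop when the cluster size drops below
   a target or the query-cluster similarity begins to decrease. */
int top_down_search(const struct cnode *node, double query)
{
    while (node->left != NULL) {
        double dl = fabs(query - node->left->centroid);
        double dr = fabs(query - node->right->centroid);
        node = (dl <= dr) ? node->left : node->right;
    }
    return node->doc;
}
```

The number of query-cluster comparisons is proportional to the depth of the tree, which is the efficiency argument for cluster-based retrieval over a serial search.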
A bottom-up search begins with some document or cluster at the base of the tree and moves
up until the retrieval criterion is satisfied; the beginning document may be an item known to
be relevant prior to the search, or it can be obtained by a best match search of documents or
lowest-level clusters. Comparative studies suggest that the bottom-up search gives the best
results (apart from the complete link method), particularly when the search is limited to the
bottom-level clusters (Willett 1988). Output may be based on retrieval of a single cluster, or
the top-ranking clusters may be retrieved to produce a predetermined number of either
documents or clusters; in the latter case, the documents retrieved may themselves be ranked
against the query.
A simple retrieval mechanism is based on nearest neighbor clusters, that is, retrieving a
document and that document most similar to it. Griffiths et al. (1984) determined that for a
variety of test collections, search performance comparable to or better than that obtainable
from nonclustered collections could be obtained using this method.
UNIT-5
Systemic approach
User-centered approach
Task-oriented evaluation
Search Statements and Binding

Search statements are the statements of an information need generated by users to specify
the concepts they are trying to locate in items. As discussed in Chapter 2, search statements
use traditional Boolean logic and/or natural language. In generating the search statement, the
user may have the ability to weight (assign an importance to) different concepts in the
statement. At this point the binding is to the vocabulary and past experiences of the user.
User part of the system, interacting with other components, trying to resolve an anomalous
state of knowledge
Binding in this sense is when a more abstract form is redefined into a more specific form.
The search statement is the user's attempt to specify the conditions needed to subset logically
the total item space to that cluster of items that contains the information needed by the user.
The next level of binding comes when the search statement is parsed for use by a specific
search system. The search system translates the query to its own meta language.
This process is similar to the item indexing processes described in Chapter 5. For example,
statistical systems determine the processing tokens of interest and the weights assigned to
each processing token based upon frequency of occurrence from the search statement.
Natural language systems determine the syntactical and discourse semantics using algorithms
similar to those used in indexing. Concept systems map the search statement to the set of
concepts used to index items.
The final level of binding comes as the search is applied to a specific database. This binding
is based upon the statistics of the processing tokens in the database and the semantics used in
the database.
This is especially true in statistical and concept indexing systems. Some of the statistics used
in weighting are based upon the current contents of the database. Some examples are
Document Frequency and Total Frequency for a specific term.
Frequently in a concept indexing system, the concepts that are used as the basis for indexing
are determined by applying a statistical algorithm against a representative sample of the
database versus being generic across all databases (see Chapter 5).
Natural Language indexing techniques tend to use the most corpora-independent algorithms. Figure 7.1 illustrates the three potential levels of binding. Parentheses are used in the second binding step to indicate expansion by a thesaurus.
The length of search statements directly affects the ability of Information Retrieval Systems to find relevant items. The longer the search query, the easier it is for the system to find items. Profiles used as search statements for Selective Dissemination of Information systems are usually very long, typically 75 to 100 terms.
In large systems used by research specialists and analysts, the typical ad hoc search statement is approximately 7 terms. In a paper to be published in SIGIR-97, Fox et al. at Virginia Tech have noted that the typical search statement on the Internet is one or two words. Such extremely short search statements limit the system's ability to discriminate relevant from non-relevant items.
Restricting the similarity measure to passages gains significant precision with minimal
impact on recall.
Once items are identified as possibly relevant to the user's query, it is best to present the most
likely relevant items first. This process is called "ranking."
Usually the output of the use of a similarity measure in the search process is a scalar number
that represents how similar an item is to the query.
Similarity measure
A variety of different similarity measures can be used to calculate the similarity between the
item and the search statement. A characteristic of a similarity formula is that the results of the
formula increase as the items become more similar. The value is zero if the items are totally
dissimilar. An example of a simple "sum of the products" similarity measure from the
examples in Chapter 6 to determine the similarity between documents for clustering purposes
is:
SIM(Item_i, Item_j) = Σ_k (Term_ik)(Term_jk)
This formula uses the summation of the products of the corresponding terms of two items when treating the index as a vector. If Item_j is replaced with Query_j, then the same formula generates the similarity between every Item and Query_j. The problem with this simple measure is in the normalization needed to account for variances in the length of items. Additional normalization is also used to have the final results fall between zero and +1 (some formulas use the range -1 to +1).
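The sum-of-products measure and its length-normalized (cosine-style) variant can be sketched as follows, with illustrative term-weight vectors:

```python
import math

def sim_sum_of_products(item_i, item_j):
    """SIM(Item_i, Item_j): sum of the products of corresponding term
    weights, treating each index as a vector (terms absent from an
    item contribute zero)."""
    return sum(item_i.get(t, 0.0) * w for t, w in item_j.items())

def sim_cosine(item_i, item_j):
    """The same measure normalized by vector lengths, so results fall
    between 0 and +1 regardless of item length."""
    dot = sim_sum_of_products(item_i, item_j)
    ni = math.sqrt(sum(w * w for w in item_i.values()))
    nj = math.sqrt(sum(w * w for w in item_j.values()))
    return dot / (ni * nj) if ni and nj else 0.0

doc = {"text": 2.0, "search": 1.0}     # illustrative term weights
query = {"search": 1.0}
```

Replacing one item vector with a query vector, as the text notes, turns the same formula into a ranking function over the whole collection.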
Among the originators of the theory behind statistical indexing and similarity functions were Robertson and Sparck Jones (Robertson-76). Their model suggests that knowledge of terms in relevant items retrieved from a query should adjust the weights of those terms in the weighting process. They used the number of relevant versus non-relevant documents in the database, and the number of relevant versus non-relevant documents having a specific query term, to devise four formulas for weighting. This assumption of the availability of relevance information in the weighting process was later relaxed by Croft and Harper (Croft-79). Croft expanded this original concept, taking into account the frequency of occurrence of terms within an item, producing the following similarity formula (Croft-83):
As the number of common elements increases, the similarity value quickly decreases, staying within the range -1 to +1.
The Dice measure simplifies the denominator from the Jaccard measure and introduces a factor of 2 in the numerator. The normalization in the Dice formula is also invariant to the number of terms in common.
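Both measures can be sketched over binary term sets (an illustrative example with hypothetical term sets):

```python
def jaccard(x, y):
    """Jaccard similarity on binary term sets: |X ∩ Y| / |X ∪ Y|."""
    x, y = set(x), set(y)
    return len(x & y) / len(x | y) if x | y else 0.0

def dice(x, y):
    """Dice similarity: simplifies Jaccard's denominator and doubles
    the numerator, giving 2|X ∩ Y| / (|X| + |Y|)."""
    x, y = set(x), set(y)
    return 2 * len(x & y) / (len(x) + len(y)) if x or y else 0.0
```

For two items sharing one of their two terms each, Jaccard yields 1/3 while Dice yields 1/2, illustrating the effect of the simplified denominator.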
Use of a similarity algorithm alone returns the complete database as search results. Thresholds (the default is similarity > zero) are usually associated with the search process. The threshold defines the items placed in the resultant Hit file from the query.
Thresholds are either a value that the similarity measure must equal or exceed or a number
that limits the number of items in the Hit file.
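Both kinds of threshold can be sketched as a small filter over similarity scores (illustrative function and item names, not from the notes):

```python
def hit_file(scores, min_sim=None, max_items=None):
    """Build the Hit file from {item: similarity} scores.

    min_sim keeps items whose similarity equals or exceeds the value;
    max_items limits the Hit file to the top-N ranked items.
    With neither given, the default keeps items with similarity > 0.
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if min_sim is not None:
        ranked = [(item, s) for item, s in ranked if s >= min_sim]
    else:
        ranked = [(item, s) for item, s in ranked if s > 0]
    if max_items is not None:
        ranked = ranked[:max_items]
    return ranked
```

Raising min_sim shrinks the Hit file toward only the closest matches, matching the Doc1 example in the notes.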
If threshold = …, only Doc1 is selected; if threshold = 4, all items are selected.
The query is compared to the centroids A and B. If the results of the similarity measure are above the threshold, the query is then applied to the node's children. The filled circle represents the query and the filled boxes represent the centroids for the three clusters represented by the ovals.
Relevance Feedback
Thesauri and semantic networks provide utility in generally expanding a user's search statement to include potentially related search terms. But this still does not correlate to the vocabulary used by the authors who contribute to a particular database. There is also a significant risk that the thesaurus does not include the latest jargon, acronyms or proper nouns.
In an interactive system, users can manually modify an inefficient query or have the system
automatically expand the query via a thesaurus.
Relevant items (or portions of relevant items) are used to reweight the existing query terms and possibly expand the user's search statement with new terms.
The relevance feedback concept was that the new query should be based on the old query
modified to increase the weight of terms in relevant items and decrease the weight of terms
that are in non-relevant items.
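This reweighting idea is commonly realized as a Rocchio-style formula; the sketch below assumes vector queries, and the constants alpha, beta and gamma are illustrative, not taken from these notes:

```python
def relevance_feedback(query, relevant, nonrelevant,
                       alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio-style relevance feedback (one common realization of the
    concept above). Increases the weight of terms found in relevant
    items, decreases the weight of terms found in non-relevant items,
    and may introduce new terms into the query."""
    terms = set(query)
    for d in relevant + nonrelevant:
        terms |= set(d)
    new_q = {}
    for t in terms:
        w = alpha * query.get(t, 0.0)
        if relevant:
            w += beta * sum(d.get(t, 0.0) for d in relevant) / len(relevant)
        if nonrelevant:
            w -= gamma * sum(d.get(t, 0.0) for d in nonrelevant) / len(nonrelevant)
        new_q[t] = max(w, 0.0)  # negative weights are usually clamped to zero
    return new_q
```

Terms that appear only in relevant items (here "automobile") enter the new query with positive weight, capturing author vocabulary the original thesaurus may have missed.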
In a dissemination system, the user defines a profile, and as new information is added to the system it is automatically compared to the user's profile.
The differences between the two functions lie in the dynamic nature of the profiling process,
the size and diversity of the search statements and number of simultaneous searches per item.
As such, corpus statistics exist on term frequency within and across items.
These can be used for weighting factors in the indexing process and the similarity
comparison (e.g., inverse document frequency algorithms).
Profiles are relatively static search statements that cover a diversity of topics.
Rather than specifying a particular information need, they usually generalize all of the
potential information needs of a user.
One of the first commercial search techniques for dissemination was the Logicon Message
Dissemination System (LMDS). The system originated from a system created by Chase,
Rosen and Wallace (CRW Inc.). It was designed for speed to support the search of thousands
of profiles with items arriving every 20 seconds.
It demonstrated one approach to the problem where the profiles were treated as the static
database and the new item acted like the query.
It uses the terms in the item to search the profile structure to identify those profiles whose
logic could be satisfied by the item.
The system uses a least frequently occurring trigraph (three character) algorithm that quickly
identifies which profiles are not satisfied by the item.
The potential profiles are analyzed in detail to confirm if the item is a hit.
This makes maximum use of the retrospective search software but loses near real time
delivery of items.
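The item-as-query idea can be sketched as follows; the single-term prefilter here is a deliberate simplification of LMDS's least-frequently-occurring-trigraph test, and all names are hypothetical:

```python
def disseminate(item_terms, profiles):
    """Treat the arriving item as the 'query' and the static profiles
    as the database, as in LMDS-style dissemination.

    A cheap prefilter (here: any shared term) quickly rejects profiles
    that cannot be satisfied; a real system would then analyze the
    surviving profiles in detail to confirm their full logic.
    """
    item = set(item_terms)
    hits = []
    for user, profile_terms in profiles.items():
        if set(profile_terms) & item:   # quick rejection test
            hits.append(user)           # detailed confirmation would go here
    return hits
```

Inverting the roles of query and database is what lets thousands of standing profiles be evaluated against each arriving item in near real time.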
Schutze et al. used two approaches to reduce the dimensionality: selecting a subset of the existing features to use, or creating a new, much smaller set of features onto which the original features are mapped. A statistical test was applied to a table of occurrence counts to select the most important features.
The two major approaches to generating queries are Boolean and natural language.
Natural language queries are easily represented within statistical models and are usable by
the similarity measures discussed.
The advent of WIMP (windows, icons, menus, pointers) interfaces focused the evolution of the interface on how to represent to the user what is taking place in the computer environment. The goal is to extend the HCI to improve information flow, thus reducing the user overhead wasted in locating needed information.
Although the major focus is on enhanced visualization of information, other senses are also
being looked at for future interfaces.
The audio sense has always been part of simple alerts in computers.
The sounds are now being replaced by speech in both input and output interfaces.
The tactile (touch) sense is being addressed in experiments using Virtual Reality (VR).
A significant portion of the brain is devoted to vision and supports the maximum information
transfer function from the environment to a human being.
Visualization is the transformation of information into a visual form which enables the user
to observe and understand the information.
The Gestalt psychologists postulate that the mind follows a set of rules to combine the input stimuli into a mental representation that differs from the sum of the individual inputs (Rock-90):
Continuity - figures are interpreted as smooth continuous patterns rather than discontinuous
concatenations of shapes (e.g., a circle with its diameter drawn is perceived as two
continuous shapes, a circle and a line, versus two half circles concatenated together)
Closure - gaps within a figure are filled in to create a whole (e.g., using dashed lines to
represent a square does not prevent understanding it as a square)
Connectedness - uniform and linked spots, lines or areas are perceived as a single unit.
Shifting the information processing load from slower cognitive processes to faster perceptual
systems significantly improves the information-carrying interfaces between humans and
computers (Card-96).
One of the first-level cognitive processes is preattention, that is, taking the significant visual information from the photoreceptors and forming primitives.
An example of using the conscious processing capabilities of the brain is the detection of the
different shaped objects and the border between them shown between the left side and middle
of the Figure.
The reader can likely detect the differences in the time it takes to visualize the two different
boundaries.
This suggests that if information semantics are placed in orientations, the mind's clustering and aggregation functions make detection of groupings easier than using different objects.
Information Visualization Technologies
The technologies focused on Information Retrieval Systems investigate how best to display the results of searches, structured data from DBMSs, and the results of link analysis correlating data.
The goals for displaying the result from searches fall into two major classes: document
clustering and search statement analysis.
Visualization tools in this area attempt to display the clusters, with an indication of their size
and topic, as a basis for users to navigate to items of interest.
Search statement analysis tools display the total set of terms, including additional terms from relevance feedback or thesaurus expansion, along with the documents retrieved, and indicate the importance of each term to the retrieval and ranking process.
A tree structure is useful in representing information that ranges over time (e.g.,
genealogical lineage), constituents of a larger unit (e.g., organization structures, mechanical
device definitions) and aggregates from the higher to lower level (e.g., hierarchical clustering
of documents).
One of the earliest experiments in information visualization was the Information Visualizer
developed by XEROX PARC.
It incorporates various visualization formats such as DataMap, InfoGrid, ConeTree, and the
Perspective wall.
The Cone-Tree is a 3-dimensional representation of data, where one node of the tree is represented at the apex and all the information subordinate to it is arranged in a circular structure at its base.
The Perspective Wall divides the information into three visual areas, with the area being focused on in front and the other areas out of focus to each side.
This allows the user to keep all of the information in perspective while focusing on a
particular area.
This technique makes maximum use of the display screen space by using rectangular boxes
that are recursively subdivided based upon parent-child relationships between the data.
The CPU, OS, Memory, and Network management articles are all related to a general
category of computer operating systems versus computer applications which are shown in the
rest of the figure.
The Envision system not only displays the relevance rank and estimated relevance of each
item found by a query, but also simultaneously presents other query information.
The design is intentionally graphical and simple using two dimensional visualization.
This allows a larger variety of user computer platforms to have access to their system
(Nowell-96).
Envision uses three interactive windows to display search results: the Query window, the Graphic View window, and the Item Summary window.
The Document Content Analysis and Retrieval System (DCARS) is being developed by the Calspan Advanced Technology Center. It displays the query results as a histogram, with the items as rows and each term's contribution to the selection indicated by the width of a tile bar on the row.
DCARS provides a friendly user interface that indicates why a particular item was found, but
it is much harder to use the information in determining how to modify search statements to
improve them.
Another representation widely used for both hierarchical and network-related information is the cityscape, which uses the metaphor of movement within a city. In lieu of hills, as in the terrain approach, skyscrapers represent the theme (concept) areas.
UNIT-6
TEXT SEARCH ALGORITHM
Searching in Compressed Text (Overview)
What is Text Compression
Definition
The Shannon Bound
Huffman Codes
The Kolmogorov Measure
Searching in Non-adaptive Codes
KMP in Huffman Codes
The Lempel-Ziv Codes
Pattern Matching in Z-Compressed Files
Adapting Compression for Searching
Three classical text retrieval techniques have been defined for organizing items in a textual database and for rapidly identifying the relevant items. The techniques are:
Full text scanning (streaming)
Word inversion
In software streaming techniques, the item to be searched is read into memory and then the
algorithm is applied.
There are four major algorithms associated with software text search:
Knuth-Morris-Pratt
Boyer-Moore
Shift-OR algorithm
Rabin-Karp
Brute Force:
This approach is the simplest string matching algorithm. The idea is to try to match the search string against the input text. As soon as a mismatch is detected in the comparison process, shift the input text one position and start the comparison process over.
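A minimal sketch of the brute-force approach:

```python
def brute_force_search(text, pattern):
    """Simplest string matching: align the pattern at each position of
    the text; on any mismatch, shift one position and start the
    comparison over. Returns the first match position, or -1."""
    n, m = len(text), len(pattern)
    for s in range(n - m + 1):
        i = 0
        while i < m and text[s + i] == pattern[i]:
            i += 1
        if i == m:          # every pattern character matched at shift s
            return s
    return -1
```

Its worst case is O(n*m) comparisons, which is exactly the cost the algorithms below are designed to avoid.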
KNUTH-MORRIS PRATT
Position:       1 2 3 4 5 6 7 8
Input stream:   a b d a d e f g
Search pattern: a b d f
Even in the worst case it does not depend upon the length of the text pattern being searched for. The basic concept behind the algorithm is that whenever a mismatch is detected, the previously matched characters define the number of characters that can be skipped in the input stream prior to starting the comparison process again.
To illustrate the algorithm's details, consider a (relatively artificial) run of the algorithm, where W = "ABCDABD" and S = "ABC ABCDAB ABCDABCDABDE". At any given time, the algorithm is in a state determined by two integers: m, denoting the position within S where the prospective match for W begins, and i, the index in W of the character currently under consideration. In each step the algorithm compares S[m+i] with W[i] and advances i if they are equal. This is depicted, at the start of the run, like this:
             1         2
m: 01234567890123456789012
S: ABC ABCDAB ABCDABCDABDE
W: ABCDABD
i: 0123456
In the fourth step, S[3] = ' ' does not match W[3] = 'D'. Rather than beginning the search again at S[1], note that no 'A' occurs between positions 1 and 2 in S; having checked all those characters previously, there is no chance of finding the beginning of a match. Therefore, the algorithm sets m = 3 and i = 0.
             1         2
m: 01234567890123456789012
S: ABC ABCDAB ABCDABCDABDE
W:    ABCDABD
i:    0123456
             1         2
m: 01234567890123456789012
S: ABC ABCDAB ABCDABCDABDE
W:     ABCDABD
i:     0123456
The match at m = 3 fails at the initial character, so the algorithm sets m = 4 and i = 0.
TU
matching. Thus the algorithm not only omits previously matched characters
of S (the "BCD" ), but also previously matched characters of W (the prefix "AB" ).
             1         2
m: 01234567890123456789012
S: ABC ABCDAB ABCDABCDABDE
W:         ABCDABD
i:         0123456
This search fails immediately, however, as W does not contain another "A" , so as in the
first trial, the algorithm returns to the beginning of W and begins searching at the
mismatched character position of S : m = 10 , reset i = 0 .
             1         2
m: 01234567890123456789012
S: ABC ABCDAB ABCDABCDABDE
W:           ABCDABD
i:           0123456
The match at m=10 fails immediately, so the algorithm next tries m = 11 and i = 0 .
             1         2
m: 01234567890123456789012
S: ABC ABCDAB ABCDABCDABDE
W:            ABCDABD
i:            0123456
             1         2
m: 01234567890123456789012
S: ABC ABCDAB ABCDABCDABDE
W:                ABCDABD
i:                0123456
Once again, the algorithm matches "ABCDAB", but the next character, 'C', does not match the final character 'D' of the word W. Reasoning as before, the algorithm sets m = 15, to start at the two-character string "AB" leading up to the current position, sets i = 2, and continues matching. This time the match is complete, and the first character of the match is S[15].
The above example contains all the elements of the algorithm. For the moment, we assume the existence of a "partial match" table T, described below, which indicates where we need to look for the start of a new match in the event that the current one ends in a mismatch. The entries of T are constructed so that if we have a match starting at S[m] that fails when comparing S[m + i] to W[i], then the next possible match will start at index m + i - T[i] in S (that is, T[i] is the amount of "backtracking" we need to do after a mismatch). This has two implications: first, T[0] = -1, which indicates that if W[0] is a mismatch, we cannot backtrack and must simply check the next character; and second, although the next possible match will begin at index m + i - T[i], as in the example above, we need not actually check any of the T[i] characters after that, so that we continue searching from W[T[i]]. The following is a sample pseudocode implementation of the KMP search algorithm.
algorithm kmp_search:
    input:
        an array of characters, S (the text to be searched)
        an array of characters, W (the word sought)
    output:
        an integer (the zero-based position in S at which W is found)
    define variables:
        an integer, m ← 0 (the beginning of the current match in S)
        an integer, i ← 0 (the position of the current character in W)
    while m + i is less than the length of S, do:
        if W[i] = S[m + i], then:
            if i = length(W) - 1, return m
            let i ← i + 1
        otherwise:
            if T[i] > -1, let m ← m + i - T[i] and i ← T[i]
            otherwise, let m ← m + 1 and i ← 0
    (if we reach this point, we have searched all of S unsuccessfully)
    return the length of S
Assuming the prior existence of the table T, the search portion of the Knuth-Morris-Pratt algorithm has complexity O(n), where n is the length of S and the O is big-O notation. Except for the fixed overhead incurred in entering and exiting the function, all the computations are performed in the while loop. To bound the number of iterations of this loop, observe that T is constructed so that if a match which had begun at S[m] fails while comparing S[m + i] to W[i], then the next possible match must begin at S[m + (i - T[i])]. In particular, the next possible match must occur at a higher index than m, so that T[i] < i.
This fact implies that the loop can execute at most 2n times, since at each iteration it executes one of the two branches in the loop. The first branch invariably increases i and does not change m, so that the index m + i of the currently scrutinized character of S is increased. The second branch adds i - T[i] to m, and as we have seen, this is always a positive number. Thus the location m of the beginning of the current potential match is increased. At the same time, the second branch leaves m + i unchanged, for m gets i - T[i] added to it, and immediately after, T[i] gets assigned as the new value of i; hence new_m + new_i = old_m + old_i - T[old_i] + T[old_i] = old_m + old_i. Now, the loop ends if m + i = n; therefore, each branch of the loop can be reached at most n times, since they respectively increase either m + i or m, and m ≤ m + i: if m = n, then certainly m + i ≥ n, and since m + i increases by unit increments at most, we must have had m + i = n at some point in the past, and therefore either way we would be done.
Thus the loop executes at most 2n times, showing that the time complexity of the search
algorithm is O(n).
Here is another way to think about the runtime: Let us say we begin to match W and S at
position i and p . If W exists as a substring of S at p, then W[0..m] = S[p..p+m] . Upon
success, that is, the word and the text matched at the positions ( W[i] = S[p+i] ), we
increase i by 1. Upon failure, that is, when the word and the text do not match at those positions (W[i] ≠ S[p+i]), the text pointer is kept still, while the word pointer is rolled back a certain amount (i = T[i], where T is the jump table), and we attempt to match W[T[i]] with S[p+i]. The maximum number of roll-backs of i is bounded by i; that is to say, for any failure, we can only roll back as much as we have progressed up to the failure. Then it is clear the runtime is 2n.
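The search loop analyzed above, together with the standard table-construction step (which these notes omit), can be sketched as a runnable implementation:

```python
def kmp_table(W):
    """Build the partial-match table T for pattern W; T[0] = -1 as in
    the discussion above, and T[i] is the backtracking amount for a
    mismatch at position i."""
    T = [0] * len(W)
    T[0] = -1
    pos, cnd = 2, 0          # cnd: length of the current candidate prefix
    while pos < len(W):
        if W[pos - 1] == W[cnd]:
            cnd += 1
            T[pos] = cnd
            pos += 1
        elif cnd > 0:
            cnd = T[cnd]     # fall back to a shorter candidate prefix
        else:
            T[pos] = 0
            pos += 1
    return T

def kmp_search(S, W):
    """Return the zero-based position of the first match of W in S,
    or -1 if W does not occur (the pseudocode returns length(S))."""
    T = kmp_table(W)
    m = i = 0
    while m + i < len(S):
        if W[i] == S[m + i]:
            if i == len(W) - 1:
                return m
            i += 1
        elif T[i] > -1:
            m += i - T[i]    # skip past characters known to match
            i = T[i]
        else:
            m += 1           # T[i] = -1: cannot backtrack, advance m
            i = 0
    return -1
```

Running it on the worked example returns 15, matching the final alignment in the trace above.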
Examples:
1) Input:
txt[] = …
pat[] = "TEST"
Output: …
2) Input:
txt[] = "AABAACAADAABAAABAA"
pat[] = "AABA"
Output: pattern found at indexes 0, 9 and 13
Naive Algorithm
KMP Algorithm
Rabin-Karp Algorithm
Finite Automata based Algorithm
In this post, we will discuss Boyer Moore pattern searching algorithm. Like KMP and Finite
Automata algorithms, Boyer Moore algorithm also preprocesses the pattern.
Boyer Moore is a combination of the following two approaches:
1) Bad Character Heuristic
2) Good Suffix Heuristic
Both of the above heuristics can also be used independently to search a pattern in a text. Let us first understand how the two independent approaches work together in the Boyer Moore algorithm. If we take a look at the Naive algorithm, it slides the pattern over the text one position at a time. The KMP algorithm preprocesses the pattern so that it can be shifted by more than one. The Boyer Moore algorithm does preprocessing for the same reason: it preprocesses the pattern and creates different arrays for both heuristics. At every step, it slides the pattern by the maximum of the slides suggested by the two heuristics, so it uses the best of the two heuristics at every step. Unlike the previous pattern searching algorithms, the Boyer Moore algorithm starts matching from the last character of the pattern.
In this post, we will discuss the bad character heuristic; the Good Suffix heuristic is discussed in the next post.
The idea of the bad character heuristic is simple. The character of the text which doesn't match the current character of the pattern is called the Bad Character. Whenever a character doesn't match, we slide the pattern in such a way that the bad character aligns with its last occurrence in the pattern. We preprocess the pattern and store the last occurrence of every possible character in an array of size equal to the alphabet size. If the character is not present at all, this may result in a shift by m (the length of the pattern). Therefore, the bad character heuristic takes O(n/m) time in the best case.
/* Program for Bad Character Heuristic of Boyer Moore String Matching Algorithm */
# include <limits.h>
# include <string.h>
# include <stdio.h>

# define NO_OF_CHARS 256

/* Fill badchar[] with the index of the last occurrence of each character
   in pat[]; -1 marks characters that do not occur in the pattern. */
void badCharHeuristic(char *pat, int size, int badchar[NO_OF_CHARS])
{
    int i;
    for (i = 0; i < NO_OF_CHARS; i++)
        badchar[i] = -1;
    for (i = 0; i < size; i++)
        badchar[(int) pat[i]] = i;
}

/* Print every shift at which pat[] occurs in txt[]. */
void search(char *txt, char *pat)
{
    int m = strlen(pat);
    int n = strlen(txt);
    int badchar[NO_OF_CHARS];

    badCharHeuristic(pat, m, badchar);

    int s = 0;   /* s is the shift of the pattern with respect to the text */
    while (s <= (n - m))
    {
        int j = m - 1;

        /* Keep reducing j while characters of pattern and text
           match at the current shift */
        while (j >= 0 && pat[j] == txt[s + j])
            j--;

        if (j < 0)
        {
            printf("pattern occurs at shift = %d\n", s);
            s += (s + m < n) ? m - badchar[(int) txt[s + m]] : 1;
        }
        else
            /* Shift the pattern so that the bad character in text
               aligns with the last occurrence of it in pattern. A
               minimum shift of one guards against a negative move. */
            s += (j - badchar[(int) txt[s + j]] > 1) ? j - badchar[(int) txt[s + j]] : 1;
    }
}

int main()
{
    char txt[] = "ABAAABCD";
    char pat[] = "ABC";
    search(txt, pat);
    return 0;
}

Output:
pattern occurs at shift = 4
The Fast Data Finder (FDF) is the most recent specialized hardware text search unit still in use in many organizations. It was developed to search text and has been used to search English and foreign languages. The early Fast Data Finders consisted of an array of programmable text processing cells connected in series, forming a pipeline hardware search processor (Mettler-93). The cells are implemented using a VLSI chip. In the TREC tests each chip contained 24 processor cells, with a typical system containing 3600 cells (the FDF-3 has a rack mount configuration with 10,800 cells). Each cell is a comparator for a single character, limiting the total number of characters in a query to the number of cells. The cells are interconnected with an 8-bit data path and approximately a 20-bit control path. The text to be searched passes through each cell in a pipeline fashion until the complete database has been searched. As data are analyzed at each cell, the states of the 20 control lines are modified depending upon their current state and the results from the comparator. An example of a Fast Data Finder system is shown in Figure 9.11.
A cell is composed of both a register cell (Rs) and a comparator (Cs). The input from the document database is controlled and buffered by the microprocessor/memory and fed
through the comparators. The search characters are stored in the registers. The connection
between the registers reflects the control lines that are also passing state information.
UNIT-7
The most important characteristic of a multimedia information system is the variety of data it must be able to support. Multimedia systems must have the capability to store, retrieve, transport and present data with very heterogeneous characteristics, such as text, images (both still and moving), graphs and sound. Multimedia Information Retrieval Systems are designed specifically for handling multimedia data. Multimedia IR systems must be able to deal with different kinds of media and with semi-structured data.
The main goal of a multimedia IR system is to efficiently perform retrieval, based on user requests, exploiting not only data attributes, as in traditional DBMSs, but also the content of multimedia objects.
Data retrieval relies on the following basic steps:
1. Query specification
2. Query processing and optimization
3. Query answer
4. Query iteration
Multimedia IR systems should therefore combine both DBMS and IR technology, to integrate the data modelling capabilities of DBMSs with the advanced, similarity-based query capabilities of IR systems.
Data Modelling
The complex nature of multimedia data may benefit from the use of DBMS functions for data representation and querying. Multimedia data is inherently different from conventional data. The main difference is that information about the content of multimedia data is usually not encoded into attributes provided by the data schema (structured). Rather, text, image, video and audio data are typically unstructured.
Addressing data modelling issues in the framework of multimedia IR systems entails two
main tasks.
A data model should be defined by which the user can specify the data to be stored into the
system.
The system should provide a model for the internal representation of multimedia data.
MULTOS data model : Multimedia Office Server is a multimedia document server with
advanced document retrieval capabilities, developed in the context of an ESPRIT project in
the area of Office Systems.
MULTOS is based on a client/server architecture. Three different types of document servers are supported:
1. Current servers
2. Dynamic servers
3. Archive servers
These three servers differ in storage capacity and document retrieval speed.
Two types of index are constructed:
1. Object Index
2. Cluster Index
Figure: MULTOS conceptual structure of a business letter, with components Document Name, Sender and Receiver+ (each with Name and Address: Street, City, Country), Date, Place, Letter_body, Signature, Company_Logo (Image), Product_Code, Product_Presentation (Text) and Product_Description (Text).
Query languages
Relational/object-oriented database system
Similarity-based approach
Considers the structure and the content of the objects
Content-based query
Interfaces
Browsing and navigation
Request specification
Specifying the conditions the objects of interest must satisfy, by means of queries
Sketch entry
Results of search
Color selection
Results of search
1. Attribute predicates
Concern the attributes for which an exact value is supplied for each object
Exact-match retrieval
2. Structural predicates
3. Semantic predicates
Concern the semantic content of the required data, depending on the features that have been extracted and stored for each multimedia object
Example: find all the red houses
Exact match cannot be applied
Indexing and Searching
Given two objects O1 and O2, the distance (= dissimilarity) of the two objects is denoted by D(O1, O2).
Similarity queries can be classified into two categories:
Whole match: given a collection of N objects O1, O2, ..., ON, a query object Q and a tolerance ε, we want to find those data objects that are within distance ε from Q.
Sub-pattern match: given N data objects O1, O2, ..., ON, a query (sub-)object Q and a tolerance ε, we want to identify the parts of the data objects that match the query.
An ideal method for all categories of queries should fulfill the following requirements:
o It should be fast.
o It should be correct.
Using a spatial access method, retrieve all points within the desired tolerance from F(Q).
TU
Retrieve the corresponding objects, compute their actual distance from Q and discard the
false alarms.
Lemma (Lower Bounding): to guarantee no false dismissals for whole match queries, the feature extraction function F() should satisfy the following formula:
D_feature(F(O1), F(O2)) ≤ D(O1, O2)
Determine the distance function D() between two objects.
Find one or more numerical feature-extraction functions, to provide a quick-and-dirty test.
Prove that the distance in feature space lower-bounds the actual distance D(), to guarantee correctness.
Use a SAM (spatial access method) to store and retrieve the f-D feature vectors.
If we use only one numerical feature to describe each data object, what should this feature be?
UNIT-8
Introduction
Document Models, Representations, and Access
Prototypes, Projects, and Interfaces
Standards
Trends and Research Issues
Environment in the Library
Online Public Access Catalogues (OPACs)
Document Databases
IR in Organizations
Conclusion
Digital library (DL) can be defined by:
Information retrieval (IR) is essential for the success of DLs, so they can achieve high levels of effectiveness while at the same time affording ease of use to a diverse community.
A significant portion of the research and development efforts related to DLs has been in the IR area. This presentation reviews some of these efforts, organizes them into a simple framework, and highlights needs for the future.
The simplest approach is to locate words or phrases in dictionaries and to use the translated terms to search in collections in other languages. It is likely that research in this area will continue to be of great importance to both the IR and DL communities.
Multimedia document streams usually must be synchronized in some way, so it is promising that a new standard for handling this over the Web has been adopted.
Dealing with collections of information that are distributed in nature is one of the common requirements of DL technology.
Document Models, Representations, and Access - Distributed Collections
There are two approaches to tackle this problem:
The first one is to develop a description language for each DL and to build federated search
systems that can interpret it;
The second one is to make each DL support a powerful protocol aimed at effective retrieval.
The first approach is supported by the BioKleisli system and the second by the Computer Interchange of Museum Information (CIMI).
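A federated search layer of this kind can be sketched as follows. The class names, the scoring rule, and the collections below are invented for illustration; a real system would speak a retrieval protocol such as Z39.50 to remote servers rather than call local objects:

```python
class DigitalLibrary:
    """One member collection of the federation."""

    def __init__(self, name, docs):
        self.name = name
        self.docs = docs  # {doc_id: text}

    def search(self, query):
        # Each DL scores locally; here, simply the count of
        # query-term occurrences in the document text.
        terms = query.lower().split()
        hits = []
        for doc_id, text in self.docs.items():
            score = sum(text.lower().split().count(t) for t in terms)
            if score > 0:
                hits.append((score, self.name, doc_id))
        return hits

def federated_search(libraries, query, k=10):
    # Broadcast the query, then merge the per-collection result
    # lists into one ranked list of (score, library, doc_id).
    merged = []
    for lib in libraries:
        merged.extend(lib.search(query))
    merged.sort(key=lambda h: -h[0])
    return merged[:k]
```

Merging raw scores assumes the collections score comparably; reconciling heterogeneous scoring schemes is one of the hard problems that protocols and description languages for federated DLs try to address.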
Prototypes, Projects and Interfaces - International Range of Efforts
Since each nation wishes to share highlights of its history, culture, and accomplishments with the rest of the world, developing a DL can be very helpful.
Examples :
European ERCIM program and UK initiatives;
New Zealand, Australia and Singapore;
Standards
Since there are many DL projects worldwide, involving diverse research, development, and
commercial approaches, it is imperative that standards are employed so as to make
interoperability and data exchange possible.
At the heart of supporting federated DLs is agreement on protocols for computer-computer
communication.
Standards- Protocols and Federation
The standard Z39.50 was designed to search remote bibliographic collections;
One example of widespread utilization was the WAIS system (based on Z39.50), very
popular before WWW emerged;
The application of Z39.50 to DLs was demonstrated in the CIMI project;
Dienst is another standard which has been considered in regard to NDLTD.
OPACs
An OPAC is the catalogue access point for materials held by a library; an Integrated Library System (ILS) is the system that manages all library catalogues and collections. Early on, libraries used card catalogues, later followed by microfilm and microfiche forms. In the 1970s, online catalogues were implemented; initially, they had very limited functionality. In the 1980s came true online public access catalogues (OPACs). Today, OPACs are the main component of an ILS. (Library Systems, Modern Information Retrieval, Addison Wesley, 2010, p. 7)
OPACs used standardized record formats (MARC) with minimal subject information (title, a few headings). Three generations can be distinguished:
First generation: known-item finding tools through search by author, title, or control number.
Second generation: increased search technology with access by subject headings, keywords, and boolean queries; problems included failed searches and navigational confusion, and enhancements represented large investments for a library.
Third generation: focus on open systems architectures, improved GUIs, support for Z39.50 and Dublin Core, hypertext links, Java programming, and ranked result sets; problems include the slow pace of development and failure to match trends in the Web. (Library Systems, Modern Information Retrieval, Addison Wesley, 2010, p. 8)
OPACs and Bibliographic Records
Libraries use standardized systems for cataloguing and classifying materials, such as the Anglo-American Cataloguing Rules.
MARC is the Machine Readable Cataloguing record format. It underlies cooperation among libraries, provides support to distinct online catalogues, and is a data format that implements national and international standards such as the Information Interchange Format (ANSI Z39.2).
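As an illustration, a MARC-style bibliographic record can be modelled as a mapping from field tags to subfields. Tags 100 (main entry, personal name) and 245 (title statement) follow MARC 21 usage, while the helper function and the record content below are invented for this sketch:

```python
# A simplified MARC-style record: field tag -> list of
# (subfield code, value) pairs.
record = {
    "100": [("a", "Baeza-Yates, Ricardo")],
    "245": [("a", "Modern Information Retrieval")],
    "650": [("a", "Information storage and retrieval systems")],
}

def subfield(rec, tag, code):
    # Return the first matching subfield value, or None if absent
    for c, v in rec.get(tag, []):
        if c == code:
            return v
    return None

title = subfield(record, "245", "a")   # title statement, subfield $a
author = subfield(record, "100", "a")  # personal name, subfield $a
```

Real MARC records also carry a leader and a directory with byte offsets; the tag/subfield view above is only the logical structure that online catalogues index.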