
UNIT -4

Natural Language Processing


Natural Language Processing (NLP) is a field of artificial intelligence (AI) that focuses on the
interaction between computers and humans through natural language. It enables computers to
understand, interpret, and generate human language in a manner that is both meaningful and
useful.

APPLICATIONS

Speech Recognition: NLP enables systems to transcribe spoken language into text, powering
voice-to-text applications, dictation software, and voice-controlled interfaces in smart devices.

Sentiment Analysis: NLP is used to analyze sentiment in social media posts, customer reviews,
and surveys to gauge public opinion, customer satisfaction, or market trends.
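
As a quick illustration, the hedged sketch below uses NLTK's VADER analyzer (one convenient option among several) to score the sentiment of two example sentences; the sentences and the choice of library are assumptions made purely for demonstration.

```python
# A small sketch of sentiment analysis with NLTK's VADER analyzer; the
# lexicon is downloaded on first use and the example sentences are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")
sia = SentimentIntensityAnalyzer()

print(sia.polarity_scores("The product is amazing, I love it!"))
print(sia.polarity_scores("Terrible service, I want a refund."))
# Each result includes a 'compound' score: > 0 suggests positive sentiment,
# < 0 suggests negative sentiment.
```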

Language Translation: Services like Google Translate use NLP to translate text from one language to another, enabling communication across linguistic barriers.

Information Extraction: NLP techniques are used to extract structured information from
unstructured text sources such as emails, news articles, and research papers, facilitating data
analysis and decision-making.

Healthcare: NLP is used in medical applications to extract information from clinical notes,
analyze medical literature, assist in diagnosis, and improve patient care through electronic
health records (EHR) analysis.

Education: NLP assists in automated grading, personalized learning, and content recommendation systems, adapting educational materials to students' needs and learning styles.

Text Summarization: NLP algorithms can automatically summarize lengthy documents or articles, providing concise overviews for readers or extracting key information for analysis.

Chatbots: NLP powers chatbots deployed in customer service, sales, and support roles, allowing businesses to automate interactions with users through messaging platforms or websites.
Working of Natural Language Processing (NLP)

The field is divided into three different parts:

• Speech Recognition — The translation of spoken language into text.
• Natural Language Understanding (NLU) — The computer’s ability to understand what we say.
• Natural Language Generation (NLG) — The generation of natural language by a computer.

Speech Recognition:
• First, the computer must take natural language and convert it into machine-readable language. This is what speech recognition or speech-to-text does. This is the first step of NLU.

• Hidden Markov Models (HMMs) are used in the majority of voice recognition
systems nowadays. These are statistical models that use mathematical
calculations to determine what you said in order to convert your speech to
text.
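
To make the HMM idea concrete, here is a minimal Viterbi-decoding sketch over a toy two-state model. The states, the 'low'/'high' observations, and all probabilities are invented purely for illustration; real speech recognizers use acoustic models with vastly more states and parameters.

```python
# A minimal sketch of Viterbi decoding over a toy Hidden Markov Model.
# Everything below (states, observations, probabilities) is made up for
# illustration only.

states = ["Silence", "Speech"]
observations = ["low", "high", "high"]          # toy acoustic energy readings

start_p = {"Silence": 0.6, "Speech": 0.4}
trans_p = {
    "Silence": {"Silence": 0.7, "Speech": 0.3},
    "Speech":  {"Silence": 0.4, "Speech": 0.6},
}
emit_p = {
    "Silence": {"low": 0.9, "high": 0.1},
    "Speech":  {"low": 0.2, "high": 0.8},
}

def viterbi(obs):
    """Return (probability, most likely state sequence) for the observations."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, path = max(
                (V[t - 1][prev][0] * trans_p[prev][s] * emit_p[s][obs[t]],
                 V[t - 1][prev][1] + [s])
                for prev in states
            )
            V[t][s] = (prob, path)
    return max(V[-1].values())

print(viterbi(observations))   # e.g. (0.07..., ['Silence', 'Speech', 'Speech'])
```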

Natural Language Understanding (NLU):


The next and hardest step of NLP is the understanding part.

• First, the computer must comprehend the meaning of each word. It tries to
figure out whether the word is a noun or a verb, whether it’s in the past or
present tense, and so on. This is called Part-of-Speech tagging (POS).

• A lexicon (a vocabulary) and a set of grammatical rules are also built into NLP systems. The most difficult part of NLP is understanding.

Natural Language Generation (NLG):

• NLG is much simpler to accomplish. NLG converts a computer’s machine-readable language into text and can also convert that text into audible speech using text-to-speech technology.
Semantic Processing

Semantic Analysis is a subfield of Natural Language Processing (NLP) that attempts to understand the meaning of Natural Language.

Parts of Semantic Analysis


Semantic Analysis of Natural Language can be classified into two broad parts:
1. Lexical Semantic Analysis: Lexical Semantic Analysis involves understanding the meaning of each word of the text individually.
2. Compositional Semantics Analysis: Although knowing the meaning of each word of the text is essential, it is not sufficient to completely understand the meaning of the text. Compositional semantics focuses on how the meaning of a complex expression (such as a sentence) is derived from the meanings of its constituent parts (such as words and phrases) and the rules that govern their combination.

For example, consider the following two sentences:

• Sentence 1: Students love Teachers.
• Sentence 2: Teachers love Students.

Although both sentences use the same set of root words {student, love, teacher}, they convey entirely different meanings.

Tasks involved in Semantic Analysis


In order to understand the meaning of a sentence, the following are the major
processes involved in Semantic Analysis:
• Word Sense Disambiguation
• Relationship Extraction

Word Sense Disambiguation:

In Natural Language, the meaning of a word may vary as per its usage in sentences
and the context of the text. Word Sense Disambiguation involves interpreting the
meaning of a word based upon the context of its occurrence in a text.
For example, the word ‘Bark’ may mean ‘the sound made by a dog’ or ‘the outermost
layer of a tree.’
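
As a hedged illustration of word sense disambiguation, the sketch below applies the Lesk algorithm available in NLTK to the word "bark" in two invented sentences; the sense chosen depends on WordNet's glosses, so results should be treated as indicative only.

```python
# A small sketch of word sense disambiguation using NLTK's Lesk algorithm.
# The sentences are invented; WordNet data is downloaded on first use.
import nltk
from nltk.wsd import lesk
from nltk.tokenize import word_tokenize

nltk.download("wordnet")
nltk.download("punkt")

sent1 = word_tokenize("The dog let out a loud bark at the stranger")
sent2 = word_tokenize("The bark of the old oak tree was rough and cracked")

sense1 = lesk(sent1, "bark")   # picks a WordNet sense from the dog context
sense2 = lesk(sent2, "bark")   # picks a WordNet sense from the tree context
print(sense1, "-", sense1.definition())
print(sense2, "-", sense2.definition())
```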
Relationship Extraction:

Another important task involved in Semantic Analysis is Relationship Extraction. It involves first identifying the various entities present in the sentence and then extracting the relationships between those entities.
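
A rough sketch of entity and relationship extraction using spaCy's dependency parse is shown below; it assumes the small English model en_core_web_sm has been downloaded, and the simple subject-verb-object heuristic is illustrative rather than a complete extraction method.

```python
# A rough sketch of relationship extraction with spaCy's dependency parse.
# Assumes `python -m spacy download en_core_web_sm` has been run beforehand.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Delhi is the capital of India.")

# Step 1: identify entities present in the sentence
print([(ent.text, ent.label_) for ent in doc.ents])

# Step 2: extract simple (subject, verb, object/attribute) relations
for token in doc:
    if token.dep_ in ("nsubj", "nsubjpass"):
        subject = token.text
        verb = token.head.text
        objects = [child.text for child in token.head.children
                   if child.dep_ in ("dobj", "attr", "pobj")]
        print((subject, verb, objects))
```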

Elements of Semantic Analysis

Some of the critical elements of Semantic Analysis that must be scrutinized and taken into account while processing Natural Language are:
• Hyponymy: Hyponymy refers to a term that is an instance of a generic term. The two can be understood by taking class and object as an analogy. For example: ‘Color‘ is a hypernym, while ‘grey‘, ‘blue‘, ‘red‘, etc., are its hyponyms.
• Homonymy: Homonymy refers to two or more lexical terms with the same spelling but completely distinct meanings. For example: ‘rose‘ might mean ‘the past form of rise‘ or ‘a flower‘ – same spelling but different meanings; hence, ‘rose‘ is a homonym.
• Synonymy: When two or more lexical terms that might be spelt distinctly have the same or similar meaning, they are called synonyms. For example: (Job, Occupation), (Large, Big), (Stop, Halt).
• Antonymy: Antonymy refers to a pair of lexical terms that have contrasting meanings – they are symmetric about a semantic axis. For example: (Day, Night), (Hot, Cold), (Large, Small).
• Polysemy: Polysemy refers to a lexical term that has the same spelling but multiple closely related meanings. It differs from homonymy in that the meanings of a homonym need not be related at all. For example: ‘man‘ may mean ‘the human species‘, ‘a male human‘, or ‘an adult male human‘ – since all these meanings bear a close association, the lexical term ‘man‘ is polysemous.

Basic Units of Semantic System:

In order to accomplish Meaning Representation in Semantic Analysis, it is vital to understand the building units of such representations. The basic units of semantic systems are explained below:
• Entity: An entity refers to a particular individual or unit, such as a specific person or location. For example: Delhi.
• Concept: A concept may be understood as a generalization of entities. It refers to a broad class of individual units. For example: City.
• Relations: Relations establish relationships between various entities and concepts. For example: ‘Delhi is a City.’
• Predicate: Predicates represent the verb structures of sentences.
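
The toy sketch below shows one possible way (not a standard representation) to encode the entity Delhi, the concept City, the relation between them, and a predicate in plain Python, just to make the building units tangible.

```python
# A toy illustration of a meaning representation for "Delhi is a City".
# The format is invented for illustration, not a standard knowledge scheme.

entities = {"Delhi"}                        # particular individuals
concepts = {"City"}                         # generalizations / classes
relations = [("Delhi", "is_a", "City")]     # links between entities and concepts

# A predicate captures the verb structure: predicate(arguments...)
predicate = {"predicate": "is_a", "args": ["Delhi", "City"]}

print(relations[0])
print(predicate)
```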

Syntactic Analysis (Parsing)

Syntactic Analysis is used to check grammar and word arrangement, and to show the relationships among the words.
Example: ‘Agra goes to the Poonam.’
In the real world, ‘Agra goes to the Poonam’ does not make any sense, so this sentence is rejected by the syntactic analyzer.
Syntactic analysis, also known as parsing, is a fundamental component of natural
language processing (NLP) that focuses on analyzing the grammatical structure of
sentences. It involves breaking down sentences into their constituent parts and
determining how these parts are related to each other based on the rules of a formal
grammar.

Here's how syntactic analysis works in NLP:

Tokenization:

The first step in syntactic analysis is typically tokenization, where the input text is
divided into individual words or tokens. This process involves separating words,
punctuation marks, and other elements such as numbers or symbols.

Part-of-Speech (POS) Tagging:

POS tagging assigns a grammatical category (such as noun, verb, adjective, etc.) to
each word in the sentence. This step helps in understanding the syntactic roles of words
within the sentence. For example, in the sentence "The cat sleeps," POS tagging would
label "The" as a determiner, "cat" as a noun, and "sleeps" as a verb.

Parsing:

Parsing involves analyzing the syntactic structure of sentences according to the rules of
a formal grammar.

Parse Trees:
The output of syntactic analysis is typically represented as a parse tree, which is a
hierarchical structure that shows how the words in the sentence are grouped into
phrases and how these phrases are related to each other according to the rules of the
grammar. Parse trees can be visualized graphically or represented in a textual format.
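
As an illustration, the sketch below parses "the cat sleeps" against a tiny hand-written context-free grammar using NLTK's chart parser; the grammar rules are invented for this example and cover only this one sentence.

```python
# A minimal sketch of building a parse tree with NLTK and a toy grammar.
import nltk

grammar = nltk.CFG.fromstring("""
    S   -> NP VP
    NP  -> Det N
    VP  -> V
    Det -> 'the'
    N   -> 'cat'
    V   -> 'sleeps'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse(["the", "cat", "sleeps"]):
    print(tree)              # (S (NP (Det the) (N cat)) (VP (V sleeps)))
    tree.pretty_print()      # textual drawing of the same parse tree
```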

Applications:

Syntactic analysis is used in various NLP applications, including machine translation, information extraction, question answering, sentiment analysis, and more. Understanding the syntactic structure of sentences helps in extracting meaning, generating text, and performing higher-level language understanding tasks.

DISCOURSE AND PRAGMATIC PROCESSING


Discourse and pragmatics are essential aspects of natural language understanding, and
integrating them into AI systems can significantly enhance their ability to interact with
humans effectively. Here's an overview of each concept and its relevance to AI:

Discourse:
Discourse refers to the structure and flow of conversation or written text. It involves
understanding how individual sentences or utterances relate to each other within a
larger context to convey meaning. Discourse understanding in AI involves parsing and
interpreting sequences of statements or turns in a conversation to comprehend the
overall message and context.

Coreference Resolution: AI systems need to identify and resolve references to entities across different parts of a discourse. For example, understanding that "he" in one sentence refers to the same person as "John" in another.

Anaphora Resolution: This is closely related to coreference resolution but specifically focuses on resolving pronouns and other referring expressions to their antecedents.

Coherence Modeling: AI models must capture the coherence of discourse by understanding relationships between sentences, such as cause-and-effect or temporal ordering.

Dialog Act Recognition: Identifying the purpose or function of each utterance in a conversation, such as questioning, affirming, or providing information.

Pragmatics:
Pragmatics deals with the study of language use in context and how meaning is
conveyed beyond the literal interpretation of words. It encompasses aspects such as
implicature, presupposition, and speech acts.

Implicature: Understanding implied meaning based on context, such as inferring a speaker's intention or attitude.

Presupposition: Recognizing assumptions that are taken for granted in a conversation, which can influence interpretation.

Speech Acts: Understanding that utterances not only convey information but also
perform actions, such as making requests, promises, or apologies.

Integrating discourse and pragmatics into AI systems involves several challenges:

Context Sensitivity: AI models need to understand and generate responses that are
appropriate within the context of a conversation or text.

Ambiguity Resolution: Natural language is often ambiguous, and AI systems must disambiguate based on contextual cues and pragmatic principles.

Commonsense Reasoning: Understanding and applying background knowledge and commonsense reasoning to interpret implicit meaning and resolve ambiguities.

Dynamic Interaction: AI systems must engage in dynamic interactions with users, adapting responses based on ongoing discourse and user feedback.

Overall, incorporating discourse and pragmatic understanding into AI systems is crucial for enabling more natural and effective human-computer interactions, particularly in conversational AI applications such as chatbots, virtual assistants, and dialogue systems.

INTRODUCTION TO LEARNING IN ARTIFICIAL INTELLIGENCE

• Artificial Intelligence (AI) is a field of computer science that aims to create systems or machines capable of performing tasks that typically require human intelligence. One of the fundamental aspects of AI is learning, which enables machines to improve their performance on tasks over time without explicit programming.
• Learning is a core component of artificial intelligence, enabling machines to acquire knowledge and improve their performance on tasks through experience and data. From supervised and unsupervised learning to reinforcement learning, these paradigms play a crucial role in developing intelligent systems capable of solving diverse real-world problems.

Learning in AI can be broadly categorized into three main types:

Supervised Learning:

• In supervised learning, the model is trained on a labeled dataset, where each input is associated with the correct output.
• The goal is to learn a mapping from inputs to outputs, such that given new inputs, the model can accurately predict the corresponding outputs.
• Examples of supervised learning tasks include classification (assigning labels to inputs) and regression (predicting continuous values).

Common algorithms used in supervised learning include:

Linear Regression: Used for predicting continuous values.

Logistic Regression: Used for binary classification problems.

Support Vector Machines (SVM): Effective for both classification and regression tasks.

Decision Trees and Random Forests: Versatile algorithms for classification and
regression.

Neural Networks: Deep learning models capable of learning complex patterns.
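
As a hedged sketch of this supervised workflow, the example below trains a logistic regression classifier with scikit-learn on the built-in Iris dataset and evaluates it on held-out data; the dataset and algorithm are illustrative choices, not the only options.

```python
# A sketch of supervised learning: logistic regression on labeled data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                       # labeled examples
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                             # learn input -> label mapping
print(accuracy_score(y_test, model.predict(X_test)))    # accuracy on unseen data
```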

Unsupervised Learning:

• Unsupervised learning involves training the model on unlabeled data, where the
goal is to discover hidden patterns or structures within the data.

• Unlike supervised learning, there are no predefined output labels, and the model
must find its own representations of the data.
• Clustering, dimensionality reduction, and anomaly detection are common tasks in
unsupervised learning.

Key techniques and algorithms in unsupervised learning include:

K-Means Clustering: Divides the dataset into clusters based on similarity.

Hierarchical Clustering: Builds a tree of clusters to represent the data's structure.

Principal Component Analysis (PCA): Reduces the dimensionality of the data while
preserving its variance.

Autoencoders: Neural network models used for dimensionality reduction and feature
learning without supervision.

Generative Adversarial Networks (GANs): A model framework for generating synthetic data similar to real data distributions.
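
The sketch below illustrates two of these techniques, K-Means clustering and PCA, on the same unlabeled data using scikit-learn; the dataset and parameter values are illustrative only.

```python
# A sketch of unsupervised learning: clustering and dimensionality reduction
# on data whose labels are deliberately ignored.
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)           # ignore labels: unsupervised setting

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])                   # cluster assignment per sample

pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)         # variance preserved by each component
```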

Reinforcement Learning:

• Reinforcement learning (RL) is a learning paradigm inspired by behavioral psychology, where an agent learns to make decisions by interacting with an environment.
• The agent receives feedback in the form of rewards or penalties based on its actions, and the goal is to learn a policy that maximizes cumulative rewards over time. RL is particularly useful for sequential decision-making tasks, such as game playing, robotics, and autonomous driving.
• These learning paradigms serve as the foundation for various AI techniques and algorithms, including neural networks, decision trees, support vector machines, and more.

Additionally, advancements in deep learning have revolutionized AI by enabling models to learn complex patterns from large amounts of data, leading to breakthroughs in areas such as computer vision, natural language processing, and speech recognition.

ROTE LEARNING

In artificial intelligence (AI), rote learning means teaching a computer to remember things by repeating them many times. But this isn't the best way to teach a computer; instead, we want the computer to learn from examples and understand the bigger picture.
Rote learning is like trying to memorize something by repeating it over and over again without really understanding what it means. It's like memorizing a phone number just by saying it repeatedly, without knowing whose number it is or where it belongs.

Here's why rote learning isn't great for AI:

Limited Learning: If a computer just memorizes things, it can't adapt well to new
situations. It's like if you only knew one way to solve a math problem but couldn't figure
out a different way if needed.

Makes Mistakes Easily: Computers that rely on rote learning can easily mess up when
faced with something new. It's like if you tried to use a memorized answer for a question
that's a little different, and it just doesn't work.

Wastes Time and Resources: Rote learning can take a lot of time and computer power
because the computer has to remember every single detail, even if it's not important. It's
like if you tried to remember every single word in a book instead of just understanding
the main ideas.

Poor Performance on Unseen Data: Rote learning often leads to poor performance
when applied to new data that differs from what was memorized. AI systems should be
able to make accurate predictions or decisions on unseen data, which requires them to
generalize from the patterns learned during training.

Susceptibility to Errors: Rote learning can lead to errors, especially when faced with
noisy or ambiguous data. AI systems that memorize data without understanding may
incorrectly classify or interpret information, leading to unreliable results.

LEARNING BY TAKING ADVICE

Learning by taking advice in artificial intelligence refers to a learning paradigm where an AI system learns from an external source of knowledge or guidance, typically provided by human experts or pre-existing knowledge bases.

This approach involves using external information to improve the AI system's performance and decision-making capabilities.
Here's how it works:

Knowledge Acquisition: In this approach, the AI system starts with a basic understanding or model of the problem domain. However, this initial knowledge may be incomplete or insufficient for handling all possible scenarios.

Expert Advice or Guidance: When an AI system faces a problem it can't solve on its own, it seeks advice from human experts or existing databases of knowledge.

Incorporating Advice into Learning: By incorporating advice from experts or knowledge bases, the AI system becomes smarter and better at solving problems. It learns to make more informed decisions, just like we do when we learn from others.

Continuous Learning: Learning by taking advice is an ongoing process. The AI system keeps asking for advice and learning from it, gradually getting better over time.

Real-World Examples: For example, if an AI system is trying to diagnose a medical condition, it might consult with doctors to improve its accuracy. Or if it's playing a game, it might learn strategies from expert players to become more competitive.

LEARNING IN PROBLEM SOLVING IN AI


Learning in problem-solving in AI involves the ability of AI systems to improve their
performance at solving tasks through experience, observation, and interaction with the
environment.

Here's a breakdown in easy language:

Learning from Experience: AI systems learn from the outcomes of their actions. If an
action leads to a positive result, the system learns to favor that action in similar
situations in the future. If an action leads to a negative outcome, the system learns to
avoid or adjust that action.

Observation and Feedback: AI systems can learn from feedback provided by humans
or by observing the outcomes of their actions. This feedback helps them understand
which actions are good or bad in different situations. For example, in a game, the
system learns from winning or losing matches.

Adapting Strategies: AI systems can adjust their strategies based on what they've
learned. If a particular approach isn't working well, the system can try a different
strategy. Over time, it refines its strategies to become more effective at solving
problems.

Reinforcement Learning: This is a specific approach to learning in AI where the system learns by trial and error. It tries different actions and receives feedback in the form of rewards or penalties. Based on this feedback, it adjusts its actions to maximize rewards over time.
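
A toy Q-learning sketch of this trial-and-error loop is shown below: an agent learns to walk right along a five-cell corridor to reach a reward in the last cell. The environment, rewards, and hyperparameters are all invented for illustration.

```python
# A toy Q-learning sketch: learn by trial and error to reach the goal cell.
import random

n_states, actions = 5, [0, 1]            # action 0 = move left, 1 = move right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # explore occasionally, otherwise exploit the best known action
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[state][a])
        next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update: nudge Q toward reward + discounted future value
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print([round(max(q), 3) for q in Q])   # learned values grow toward the goal cell
```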

Generalizing from Examples: AI systems can generalize from examples to solve new,
unseen problems. For example, if a system has seen many examples of cats in images,
it can learn to recognize cats in new images it hasn't seen before.

LEARNING FROM EXAMPLE INDUCTION

• Learning from example induction in AI refers to the process by which AI systems generalize from specific examples to form more abstract concepts or rules. Here's a simplified explanation:
• This process enables AI systems to learn from experience and make accurate predictions or decisions on new, unseen instances.

Example-Based Learning: In example-based learning, AI systems learn from individual instances or examples rather than explicit rules or instructions. For instance, if an AI system is tasked with identifying spam emails, it might learn from a collection of labeled emails where each email is marked as spam or not spam.

Generalization: Example induction involves generalizing from these specific examples to infer broader patterns or rules that can be applied to new, unseen instances. For instance, after analyzing many examples of spam and non-spam emails, the AI system might learn that emails containing certain keywords or patterns are more likely to be spam.
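
The hedged sketch below shows this kind of induction with a tiny, made-up email dataset: a Naive Bayes classifier in scikit-learn generalizes from a few labeled examples to classify a new message. The emails and the choice of algorithm are assumptions for demonstration.

```python
# A sketch of example induction: generalize from labeled emails to new ones.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now", "limited offer click here",      # spam examples
    "meeting agenda for monday", "lunch with the team",       # non-spam examples
]
labels = [1, 1, 0, 0]                                          # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)           # induce keyword features from examples

model = MultinomialNB().fit(X, labels)
new_email = vectorizer.transform(["free prize offer"])
print(model.predict(new_email))                # likely [1] -> classified as spam
```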

Forming Concepts or Rules: Based on the observed patterns in the examples, the AI
system constructs concepts or rules that capture the common characteristics of the
instances in the dataset.

These concepts or rules serve as generalized knowledge that can be applied to classify
or make predictions about new instances.

Iterative Learning: Example induction is often an iterative process where the AI system
continually updates and refines its concepts or rules as it encounters new examples.
With each new example, the system adjusts its understanding to better capture the
underlying patterns in the data.

Application: Example induction is commonly used in various AI tasks such as classification, pattern recognition, and decision-making. For example, in image recognition, an AI system might learn to identify different objects (e.g., cars, trees, animals) by analyzing a large dataset of labeled images and inducing concepts or rules based on common visual features.

EXPLANATION BASED LEARNING

Explanation-based learning (EBL) is a type of machine learning approach in artificial intelligence that leverages existing knowledge to acquire new knowledge more efficiently.

Explanation-based learning (EBL) in AI is like learning from someone who tells you not
just what to do but also why you should do it.

Here's an explanation in simple terms:

Understanding Explanations: Instead of just learning from examples, EBL focuses on understanding the reasons or explanations behind those examples. It's like when someone teaches you to solve a math problem by explaining the steps rather than just giving you the answer.

Generalizing Knowledge: By understanding the explanations, the AI system can generalize that knowledge to solve similar problems in the future. It's like learning a strategy for solving one type of math problem and then using that strategy to solve other similar problems.

Efficient Learning: EBL is efficient because it doesn't require as many examples to learn. Instead of needing lots of examples to figure out what to do, the AI system can learn from a few explanations and apply that knowledge to new situations.

Improving Over Time: EBL is an ongoing process where the AI system learns from
explanations, tries out what it's learned, and then refines its understanding based on the
results. It's like practicing a skill and getting better at it each time you try.

Real-World Examples: In real life, EBL could be used in many ways. For example, a
robot could learn how to do a task by understanding explanations given by humans. Or
a computer program could learn to make better decisions by understanding the
reasoning behind previous decisions.

In simple terms, explanation-based learning in AI is about understanding why things work the way they do and using that understanding to learn and improve over time.
