Unit 4 AI
APPLICATIONS OF NLP
Speech Recognition: NLP enables systems to transcribe spoken language into text, powering
voice-to-text applications, dictation software, and voice-controlled interfaces in smart devices.
Sentiment Analysis: NLP is used to analyze sentiment in social media posts, customer reviews,
and surveys to gauge public opinion, customer satisfaction, or market trends.
Language Translation: Services like Google Translate use NLP to translate text from one language
to another, enabling communication across linguistic barriers.
Information Extraction: NLP techniques are used to extract structured information from
unstructured text sources such as emails, news articles, and research papers, facilitating data
analysis and decision-making.
Healthcare: NLP is used in medical applications to extract information from clinical notes,
analyze medical literature, assist in diagnosis, and improve patient care through electronic
health records (EHR) analysis.
Chatbots: NLP powers chatbots deployed in customer service, sales, and support roles,
allowing businesses to automate interactions with users through messaging platforms or
websites.
Working of Natural Language Processing (NLP)
Speech Recognition:
• First, the computer must take natural language and convert it into machine-
readable language. This is what speech recognition or speech-to-text does.
This is the first step of NLU.
• Hidden Markov Models (HMMs) are used in the majority of voice recognition
systems nowadays. These are statistical models that use mathematical
calculations to determine what you said in order to convert your speech to
text.
• Next, the computer must comprehend the meaning of each word. It tries to
figure out whether the word is a noun or a verb, whether it’s in the past or
present tense, and so on. This is called Part-of-Speech tagging (POS).
• A lexicon (a vocabulary) and a set of grammatical rules are also built into
NLP systems. The most difficult part of NLP is understanding.
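To make the speech-to-text step concrete, here is a minimal sketch assuming the third-party SpeechRecognition package and a hypothetical recording named sample.wav; the notes do not prescribe a library, and this example delegates recognition to a web service rather than implementing an HMM directly.

# Minimal speech-to-text sketch (SpeechRecognition package is an assumed choice;
# "sample.wav" is a hypothetical file name).
import speech_recognition as sr

recognizer = sr.Recognizer()

# Load a short WAV recording and convert the audio into machine-readable text.
with sr.AudioFile("sample.wav") as source:
    audio = recognizer.record(source)          # read the whole file into memory

try:
    text = recognizer.recognize_google(audio)  # send the audio to a web recognizer
    print("Transcript:", text)
except sr.UnknownValueError:
    print("Speech was unintelligible")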
Word Sense Disambiguation:
In natural language, the meaning of a word may vary with its usage in a sentence
and the context of the text. Word Sense Disambiguation involves interpreting the
meaning of a word based upon the context of its occurrence in a text.
For example, the word ‘Bark’ may mean ‘the sound made by a dog’ or ‘the outermost
layer of a tree.’
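A small word-sense-disambiguation sketch, assuming NLTK and its Lesk algorithm (one classical WSD method); the two sentences reuse the ‘bark’ example from the notes, and the punkt and WordNet resources must be downloaded first.

# Word Sense Disambiguation sketch with NLTK's Lesk algorithm (assumed library choice).
import nltk
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

nltk.download("punkt", quiet=True)
nltk.download("wordnet", quiet=True)

for sentence in ["The dog let out a loud bark",
                 "The bark of the old oak tree was rough"]:
    # Lesk picks a WordNet sense of "bark" based on overlap with the context words.
    sense = lesk(word_tokenize(sentence), "bark")
    if sense is not None:
        print(sentence, "->", sense.name(), "-", sense.definition())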
Relationship Extraction:
Relationship extraction identifies the semantic relationships between entities mentioned in a
text, for example the relationship between a person and the organization they work for.
Tokenization:
The first step in syntactic analysis is typically tokenization, where the input text is
divided into individual words or tokens. This process involves separating words,
punctuation marks, and other elements such as numbers or symbols.
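A minimal tokenization sketch, assuming NLTK's word_tokenize; the sample sentence is invented to show how words, punctuation, numbers, and symbols are separated.

# Tokenization sketch with NLTK (assumed library choice).
import nltk
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)

text = "NLP systems cost $3.5M to build, right?"
tokens = word_tokenize(text)   # split into words, punctuation, numbers, and symbols
print(tokens)
# e.g. ['NLP', 'systems', 'cost', '$', '3.5M', 'to', 'build', ',', 'right', '?']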
Part-of-Speech (POS) Tagging:
POS tagging assigns a grammatical category (such as noun, verb, adjective, etc.) to
each word in the sentence. This step helps in understanding the syntactic roles of words
within the sentence. For example, in the sentence "The cat sleeps," POS tagging would
label "The" as a determiner, "cat" as a noun, and "sleeps" as a verb.
Parsing:
Parsing involves analyzing the syntactic structure of sentences according to the rules of
a formal grammar.
Parse Trees:
The output of syntactic analysis is typically represented as a parse tree, which is a
hierarchical structure that shows how the words in the sentence are grouped into
phrases and how these phrases are related to each other according to the rules of the
grammar. Parse trees can be visualized graphically or represented in a textual format.
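A parsing sketch, assuming NLTK's chart parser and a toy context-free grammar invented for illustration; the resulting parse tree is printed in textual form.

# Parsing sketch: a toy CFG plus NLTK's chart parser (grammar is invented for illustration).
import nltk

grammar = nltk.CFG.fromstring("""
  S   -> NP VP
  NP  -> Det N
  VP  -> V
  Det -> 'the'
  N   -> 'cat'
  V   -> 'sleeps'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("the cat sleeps".split()):
    tree.pretty_print()   # textual rendering of the parse tree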
Discourse:
Discourse refers to the structure and flow of conversation or written text. It involves
understanding how individual sentences or utterances relate to each other within a
larger context to convey meaning. Discourse understanding in AI involves parsing and
interpreting sequences of statements or turns in a conversation to comprehend the
overall message and context.
Pragmatics:
Pragmatics deals with the study of language use in context and how meaning is
conveyed beyond the literal interpretation of words. It encompasses aspects such as
implicature, presupposition, and speech acts.
Speech Acts: Understanding that utterances not only convey information but also
perform actions, such as making requests, promises, or apologies.
Context Sensitivity: AI models need to understand and generate responses that are
appropriate within the context of a conversation or text.
Supervised Learning:
• Supervised learning involves training the model on labeled data, where each input
example is paired with its correct output.
• The goal is to learn a mapping from inputs to outputs, such that given new inputs,
the model can accurately predict the corresponding outputs.
Support Vector Machines (SVM): Effective for both classification and regression tasks.
Decision Trees and Random Forests: Versatile algorithms for classification and
regression.
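A supervised-learning sketch, assuming scikit-learn and its bundled Iris dataset: an SVM and a random forest are trained on labeled examples and evaluated on held-out data.

# Supervised learning sketch with scikit-learn (assumed library choice).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)                        # inputs and known outputs
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (SVC(), RandomForestClassifier(random_state=0)):
    model.fit(X_train, y_train)                          # learn the input-to-output mapping
    print(type(model).__name__, "accuracy:", model.score(X_test, y_test))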
Unsupervised Learning:
• Unsupervised learning involves training the model on unlabeled data, where the
goal is to discover hidden patterns or structures within the data.
• Unlike supervised learning, there are no predefined output labels, and the model
must find its own representations of the data.
• Clustering, dimensionality reduction, and anomaly detection are common tasks in
unsupervised learning.
Principal Component Analysis (PCA): Reduces the dimensionality of the data while
preserving its variance.
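A PCA sketch, assuming scikit-learn and the Iris measurements: the 4-dimensional data is projected onto 2 principal components and the retained variance is reported.

# PCA sketch with scikit-learn (assumed library choice).
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)          # labels are ignored: this is unsupervised
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print("Reduced shape:", X_reduced.shape)                   # (150, 2)
print("Variance retained:", pca.explained_variance_ratio_.sum())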
Autoencoders: Neural network models used for dimensionality reduction and feature
learning without supervision.
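A minimal autoencoder sketch, assuming PyTorch and purely random toy data: a small network compresses 20-dimensional vectors to 3 features and learns to reconstruct them without any labels.

# Autoencoder sketch in PyTorch (assumed framework; the data is random and illustrative).
import torch
from torch import nn

torch.manual_seed(0)
data = torch.rand(256, 20)                     # toy unlabeled dataset in [0, 1)

model = nn.Sequential(
    nn.Linear(20, 3), nn.ReLU(),               # encoder: 20 inputs -> 3 learned features
    nn.Linear(3, 20), nn.Sigmoid(),            # decoder: 3 features -> 20 reconstructed values
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):
    reconstruction = model(data)
    loss = loss_fn(reconstruction, data)       # reconstruction error, no labels needed
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("Final reconstruction loss:", loss.item())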
Reinforcement Learning:
• Reinforcement learning (RL) trains an agent through trial and error: the agent
receives feedback in the form of rewards or penalties based on its actions, and the
goal is to learn a policy that maximizes cumulative rewards over time. RL is
particularly useful for sequential decision-making tasks, such as game playing,
robotics, and autonomous driving (see the Q-learning sketch after this list).
• These learning paradigms serve as the foundation for various AI techniques and
algorithms, including neural networks, decision trees, support vector machines,
and more.
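As referenced above, here is a tabular Q-learning sketch on a made-up one-dimensional corridor: the agent starts at cell 0, earns a reward only when it reaches the goal cell, and gradually learns action values that maximize cumulative reward.

# Tabular Q-learning sketch on a toy 5-cell corridor (environment invented for illustration).
import random

N_STATES, GOAL = 5, 4
alpha, gamma, epsilon = 0.5, 0.9, 0.1          # learning rate, discount, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]      # Q[state][action]: action 0 = left, 1 = right

for episode in range(300):
    state = 0
    for step in range(100):                    # cap episode length for safety
        if state == GOAL:
            break
        # epsilon-greedy action selection, breaking ties at random
        if random.random() < epsilon or Q[state][0] == Q[state][1]:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted best future value
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print("Learned Q-values (left, right) per cell:")
for cell, values in enumerate(Q):
    print(cell, [round(v, 2) for v in values])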
ROTE LEARNING.
In artificial intelligence (AI), rote learning means teaching a computer to remember
things by repeating them many times. But in AI, this isn't the best way to teach a
computer. Instead, we want the computer to learn from examples and understand the
bigger picture.
Rote learning is like trying to memorize something by repeating it over and over again,
without really understanding what it means. It's like when you memorize a phone
number just by saying it repeatedly, without knowing whose number it is or where it
goes.
Limited Learning: If a computer just memorizes things, it can't adapt well to new
situations. It's like if you only knew one way to solve a math problem but couldn't figure
out a different way if needed.
Makes Mistakes Easily: Computers that rely on rote learning can easily mess up when
faced with something new. It's like if you tried to use a memorized answer for a question
that's a little different, and it just doesn't work.
Wastes Time and Resources: Rote learning can take a lot of time and computer power
because the computer has to remember every single detail, even if it's not important. It's
like if you tried to remember every single word in a book instead of just understanding
the main ideas.
Poor Performance on Unseen Data: Rote learning often leads to poor performance
when applied to new data that differs from what was memorized. AI systems should be
able to make accurate predictions or decisions on unseen data, which requires them to
generalize from the patterns learned during training.
Susceptibility to Errors: Rote learning can lead to errors, especially when faced with
noisy or ambiguous data. AI systems that memorize data without understanding may
incorrectly classify or interpret information, leading to unreliable results.
Expert Advice or Guidance: When an AI system faces a problem it can't solve on its
own, it seeks advice from human experts or existing databases of knowledge.
Learning from Experience: AI systems learn from the outcomes of their actions. If an
action leads to a positive result, the system learns to favor that action in similar
situations in the future. If an action leads to a negative outcome, the system learns to
avoid or adjust that action.
Observation and Feedback: AI systems can learn from feedback provided by humans
or by observing the outcomes of their actions. This feedback helps them understand
which actions are good or bad in different situations. For example, in a game, the
system learns from winning or losing matches.
Adapting Strategies: AI systems can adjust their strategies based on what they've
learned. If a particular approach isn't working well, the system can try a different
strategy. Over time, it refines its strategies to become more effective at solving
problems.
Generalizing from Examples: AI systems can generalize from examples to solve new,
unseen problems. For example, if a system has seen many examples of cats in images,
it can learn to recognize cats in new images it hasn't seen before.
• This process enables AI systems to learn from experience and make accurate
predictions or decisions on new, unseen instances.
For instance, if an AI system is tasked with identifying spam emails, it might learn from a
collection of labeled emails where each email is marked as spam or not spam. After
analyzing many examples of spam and non-spam emails, the system might learn that
emails containing certain keywords or patterns are more likely to be spam.
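A sketch of this spam example, assuming scikit-learn and a tiny invented set of labeled emails: a Naive Bayes classifier induces word patterns from the examples and applies them to new messages.

# Learning from labeled examples: toy spam classifier (emails are invented for illustration).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = ["win a free prize now", "claim your free money",
          "meeting agenda for tomorrow", "project report attached"]
labels = ["spam", "spam", "not spam", "not spam"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)                      # induce word patterns from the labeled examples

print(model.predict(["free prize money"]))             # likely 'spam'
print(model.predict(["tomorrow's project meeting"]))   # likely 'not spam'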
Forming Concepts or Rules: Based on the observed patterns in the examples, the AI
system constructs concepts or rules that capture the common characteristics of the
instances in the dataset.
These concepts or rules serve as generalized knowledge that can be applied to classify
or make predictions about new instances.
Iterative Learning: Example induction is often an iterative process where the AI system
continually updates and refines its concepts or rules as it encounters new examples.
With each new example, the system adjusts its understanding to better capture the
underlying patterns in the data.
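A sketch of such iterative refinement, assuming scikit-learn's SGDClassifier and its bundled digits dataset: the model is updated with partial_fit as new labeled examples arrive in small batches.

# Iterative learning sketch: refine a classifier incrementally as new examples arrive.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier

X, y = load_digits(return_X_y=True)
classes = np.unique(y)                          # all class labels, required on the first update
model = SGDClassifier(random_state=0)

for start in range(0, len(X), 200):             # feed the data in batches of 200 examples
    X_batch, y_batch = X[start:start + 200], y[start:start + 200]
    model.partial_fit(X_batch, y_batch, classes=classes)
    print(f"After {start + len(X_batch)} examples: accuracy {model.score(X, y):.2f}")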
Explanation-Based Learning (EBL):
Explanation-based learning (EBL) in AI is like learning from someone who tells you not
just what to do but also why you should do it.
Improving Over Time: EBL is an ongoing process where the AI system learns from
explanations, tries out what it's learned, and then refines its understanding based on the
results. It's like practicing a skill and getting better at it each time you try.
Real-World Examples: In real life, EBL could be used in many ways. For example, a
robot could learn how to do a task by understanding explanations given by humans. Or
a computer program could learn to make better decisions by understanding the
reasoning behind previous decisions.