
ARTIFICIAL INTELLIGENCE

Unit - 5
APPLICATIONS

TOPICS TO BE COVERED
1. AI Applications
2. Language Models
3. Information Retrieval
4. Information Extraction
5. Natural Language Processing
6. Machine Translation
7. Speech Recognition
8. Robot
9. Hardware
10. Perception
11. Planning
12. Moving
1. AI Applications

AI applications are continually evolving, promising to reshape industries, solve pressing challenges, and improve quality of life. Their potential lies in their ability to learn, adapt, and deliver insights, making them indispensable in the modern era.

2. Language Models in AI


A language model in artificial intelligence (AI) is a computational system designed
to process, understand, and generate human language in a way that is both
meaningful and contextually relevant. These models leverage advanced machine
learning techniques, particularly natural language processing (NLP) and deep
learning, to achieve tasks like text generation, translation, summarization, and
more. They have become the cornerstone of many AI applications, especially with
the advent of transformer-based architectures.

Key Concepts of Language Models

1. Purpose
   Language models predict the likelihood of a sequence of words, phrases, or sentences in a given context. They help in understanding and generating language that appears human-like.
2. Types
   ○ Statistical Language Models (SLMs): Older models, such as n-grams, which calculate the probability of a word sequence based on a limited window of preceding words.
   ○ Neural Language Models (NLMs): Deep learning-based models, such as recurrent neural networks (RNNs), long short-term memory networks (LSTMs), and transformers, that capture context better.
   ○ Pre-trained Language Models: Modern models trained on vast amounts of data and fine-tuned for specific tasks (e.g., GPT, BERT).
3. Training
   Language models are trained on large corpora of text, learning patterns, grammar, semantics, and context to predict or generate text (a minimal sketch follows this list).
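
To make the n-gram idea concrete, here is a minimal sketch of a bigram statistical language model in Python. The corpus is a toy example and there is no smoothing; it is purely illustrative, not a production model.

    # A minimal bigram (n = 2) language model on a toy corpus.
    # Real statistical models use far larger corpora, longer
    # n-grams, and smoothing for unseen word pairs.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    bigram_counts = defaultdict(Counter)
    for prev, word in zip(corpus, corpus[1:]):
        bigram_counts[prev][word] += 1

    def next_word_prob(prev, word):
        # P(word | prev) = count(prev word) / count(prev *)
        total = sum(bigram_counts[prev].values())
        return bigram_counts[prev][word] / total if total else 0.0

    print(next_word_prob("the", "cat"))  # 0.25: "the" precedes cat/mat/dog/rug equally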

Applications of Language Models

1. Text Generation
   ○ Examples: ChatGPT, GPT-3, GPT-4
   Language models can generate essays, stories, and even code by predicting and constructing coherent sentences.
2. Machine Translation
   ○ Example: Google Translate
   Translates text from one language to another using context-aware predictions.
3. Sentiment Analysis
   ○ Example: Analyzing product reviews
   Language models classify text into sentiment categories such as positive, negative, or neutral (see the sketch after this list).
4. Speech Recognition and Synthesis
   ○ Examples: Siri, Google Assistant
   Language models interpret spoken language and convert it into text, or vice versa.
5. Question Answering
   ○ Example: AI-powered search engines
   Models like BERT answer questions by extracting relevant information from text.
6. Chatbots and Virtual Assistants
   ○ Examples: Alexa, ChatGPT
   Language models enable conversational AI by understanding user input and generating human-like responses.
7. Text Summarization
   ○ Example: Summarizing articles
   Language models condense large documents into concise summaries.
8. Grammar and Style Correction
   ○ Example: Grammarly
   AI models identify and correct grammar, spelling, and stylistic issues.
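
As an illustration of the sentiment-analysis application, classification can be run in a few lines with the Hugging Face Transformers pipeline API. This is a sketch assuming the transformers library is installed; the exact label and score depend on the default model version.

    # Sentiment analysis sketch using the Hugging Face pipeline API.
    # The default pretrained sentiment model is downloaded on first use.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")
    result = classifier("This product exceeded my expectations!")
    print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]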

Popular Language Models

1. BERT (Bidirectional Encoder Representations from Transformers)
   ○ Focus: Context understanding by reading text bidirectionally.
   ○ Applications: Question answering, text classification, and entity recognition.
2. GPT (Generative Pre-trained Transformer)
   ○ Focus: Text generation and conversation.
   ○ Notable versions: GPT-3, GPT-4.
3. T5 (Text-to-Text Transfer Transformer)
   ○ Focus: Converts all tasks into a text-to-text format, supporting versatile NLP applications.
4. XLNet
   ○ Focus: Overcomes BERT's limitations by capturing bidirectional context better.
5. RoBERTa (A Robustly Optimized BERT)
   ○ Focus: Improved pre-training for better downstream task performance.

Advantages of Language Models

● Can process vast amounts of data and derive insights.
● Understand nuanced context, idioms, and semantic meaning.
● Enable automation of complex text-based tasks.

Challenges in Language Models

● Bias: May inherit biases from training data.
● Resource-intensive: Require significant computational power for training.
● Context limitations: Struggle with very long-term dependencies.

Future Directions

Language models are moving toward better interpretability, efficiency, and ethical
AI practices. Models like GPT-4 and LLaMA have shown the promise of scaling
up capabilities while addressing shortcomings like hallucinations and contextual
misunderstandings. Hybrid approaches that combine symbolic reasoning with
neural networks are also being explored for greater accuracy and robustness.
3 & 4. Information Retrieval vs. Information Extraction (IR vs. IE)

Information retrieval (IR) finds and ranks existing documents relevant to a user's query, as a search engine does. Information extraction (IE), by contrast, pulls structured facts, such as entities, relationships, and events, out of unstructured text. In short, IR returns whole documents, while IE returns specific pieces of information.

5. Natural Language Processing


Key Points about Natural Language Processing (NLP)

1. Definition:
   NLP enables machines to understand, interpret, and generate human language.
2. Core Tasks:
   ○ Text Analysis: Tokenization, stemming, and lemmatization (see the sketch after this list).
   ○ Language Understanding: Sentiment analysis, named entity recognition (NER), and syntax parsing.
   ○ Language Generation: Text summarization, translation, and chatbots.
3. Techniques Used:
   ○ Traditional NLP: Rule-based models, statistical methods, and feature engineering.
   ○ Modern NLP: Machine learning (ML), deep learning (DL), and transformer architectures (e.g., BERT, GPT).
4. Applications:
   Chatbots (e.g., ChatGPT), machine translation (e.g., Google Translate), sentiment analysis, and search engines.
5. Popular Libraries/Tools:
   ○ spaCy, NLTK, Hugging Face Transformers, and Stanford CoreNLP.
6. Challenges:
   ○ Understanding context, ambiguity, and sarcasm, and managing biased datasets.
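
Here is a short sketch of the core text-analysis tasks (tokenization, stemming, lemmatization) using NLTK. It assumes the nltk package is installed and downloads the required data on first run; the exact data-package names can vary with NLTK versions.

    # Core text-analysis steps with NLTK.
    import nltk
    nltk.download("punkt", quiet=True)    # tokenizer data (newer NLTK may also need "punkt_tab")
    nltk.download("wordnet", quiet=True)  # lemmatizer data

    from nltk.stem import PorterStemmer, WordNetLemmatizer
    from nltk.tokenize import word_tokenize

    text = "The children are running quickly"
    tokens = word_tokenize(text)                       # ['The', 'children', 'are', ...]
    stems = [PorterStemmer().stem(t) for t in tokens]  # stemming: 'running' -> 'run'
    lemmas = [WordNetLemmatizer().lemmatize(t) for t in tokens]  # lemmatization: 'children' -> 'child'
    print(tokens, stems, lemmas, sep="\n")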

7. Speech Recognition

Steps in the Speech Recognition Process in AI

1. Audio Input
   ○ The system receives audio signals through a microphone or recording device.
2. Preprocessing
   ○ Converts raw audio signals into a digital format and processes them to reduce noise.
   ○ Key techniques: Noise cancellation and normalization.
3. Feature Extraction
   ○ Extracts relevant features from the audio signal, such as frequency, amplitude, and time-domain patterns (see the sketch after this list).
   ○ Common techniques: Mel-Frequency Cepstral Coefficients (MFCCs), Spectrograms, and Log-Mel Spectrograms.
4. Acoustic Model
   ○ Maps the audio features to phonemes (the basic sound units of speech).
   ○ Uses models like Hidden Markov Models (HMMs) or deep learning-based techniques like RNNs, LSTMs, or CNNs.
5. Language Model
   ○ Predicts the most likely sequence of words from the detected phonemes based on linguistic rules and probability.
   ○ Common models: n-gram models, transformers (e.g., BERT, GPT).
6. Decoding
   ○ Combines acoustic and language model outputs to form coherent words and sentences.
   ○ Outputs the final transcription of the speech input.
7. Post-Processing
   ○ Improves text accuracy by correcting grammar, punctuation, and context errors.
   ○ May involve spell-checking or context-aware models.
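
As an example of the feature-extraction step, MFCCs can be computed with the librosa library. This is a sketch: the file name speech.wav is a placeholder, and any audio library with an MFCC routine would serve.

    # MFCC feature extraction with librosa.
    import librosa

    # "speech.wav" is a placeholder path to a recorded utterance.
    audio, sr = librosa.load("speech.wav", sr=16000)         # load and resample to 16 kHz
    mfccs = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)  # 13 coefficients per frame
    print(mfccs.shape)  # (13, number_of_frames)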

Example in Action

● User says: "What's the weather like today?"
● Steps:
  1. Audio is captured via the microphone.
  2. Preprocessing removes noise and converts the signal to digital form.
  3. MFCCs are extracted to identify speech features.
  4. The acoustic model detects phonemes like /wh/, /ah/, /ts/.
  5. The language model interprets these as "What's the weather like today?"
  6. Decoding outputs the text query.
  7. Post-processing ensures grammatical correctness.

This process powers applications like Siri, Google Assistant, and voice-to-text
converters.

6. What is Machine Translation?
Machine translation is a sub-field of computational linguistics that focuses
on developing systems capable of automatically translating text or speech
from one language to another. In Natural Language Processing (NLP), the
goal of machine translation is to produce translations that are not only
grammatically correct but also convey the meaning of the original content
accurately.

[Figure: Machine Translation Model]

History of Machine Translation

The automatic translation of text from one natural language (the source)
to another is known as machine translation (the target). It was one of the
first applications for computers that were imagined (Weaver, 1949).

There have been three primary uses of machine translation in the past:

1. Rough translation, such as that given by free online services, conveys the "gist" of a foreign sentence or document but is riddled with inaccuracies.

2. Pre-edited translation is used by companies to publish documentation and sales materials in several languages. The original source content is written in a constrained language that makes machine translation easier, and the outputs are usually edited by a person to rectify any flaws.

3. Restricted-source translation is fully automated, but only for highly stereotyped language, such as a weather report.

What are the key approaches in Machine Translation?

In machine translation, the original text is decoded and then re-encoded into the target language in a two-step process. Language translation technology employs several approaches to carry out this mechanism.

1. Rule-Based Machine Translation

Rule-based machine translation relies on built-in linguistic resources, namely grammar rules and bilingual dictionaries, to ensure precise translation of specific content. The software parses the input text, generates a transitional representation, and then converts it into the target language with reference to those grammar rules and dictionaries.
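
A toy sketch of the rule-based idea follows: a three-word lexicon and a single reordering rule, both invented for illustration. Real RBMT systems use full parsers and large dictionaries.

    # Toy rule-based translation: dictionary lookup plus one reordering rule.
    lexicon = {"the": "el", "red": "rojo", "car": "coche"}  # bilingual dictionary
    pos = {"the": "DET", "red": "ADJ", "car": "NOUN"}       # part-of-speech tags

    def translate(sentence):
        words = sentence.lower().split()
        # Grammar rule: English ADJ NOUN becomes Spanish NOUN ADJ.
        for i in range(len(words) - 1):
            if pos.get(words[i]) == "ADJ" and pos.get(words[i + 1]) == "NOUN":
                words[i], words[i + 1] = words[i + 1], words[i]
        return " ".join(lexicon.get(w, w) for w in words)

    print(translate("The red car"))  # el coche rojo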

2. Statistical Machine Translation

Rather than depending on linguistic rules, statistical machine translation uses machine learning for text translation. Machine learning algorithms analyze large bodies of existing human translations, identifying statistical patterns. When tasked with translating a new source text, the software makes an educated guess based on the statistical likelihood of specific words or phrases in the source being associated with particular words or phrases in the target language.
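
A toy illustration of this statistical intuition: co-occurrence counting over an invented three-pair parallel corpus. Real systems use alignment models (such as the IBM models) and phrase tables rather than raw counts.

    # Estimate likely word translations from co-occurrence counts
    # in a tiny English-Spanish parallel corpus.
    from collections import Counter, defaultdict

    parallel = [("the house", "la casa"),
                ("the cat", "el gato"),
                ("a house", "una casa")]

    cooc = defaultdict(Counter)
    for src, tgt in parallel:
        for s in src.split():
            for t in tgt.split():
                cooc[s][t] += 1

    def best_translation(word):
        # Pick the target word most often paired with the source word.
        return cooc[word].most_common(1)[0][0] if cooc[word] else word

    print(best_translation("house"))  # 'casa' (co-occurs in two sentence pairs)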

3. Neural Machine Translation (NMT)

A neural network, inspired by the human brain, is a network of interconnected nodes functioning as an information system. Input data passes through these nodes to produce an output. Neural machine translation software utilizes neural networks trained on vast datasets, with each node contributing a specific transformation from source text to target text until the final result is obtained at the output node.
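
In practice, a pretrained NMT model can be called through the Hugging Face pipeline API, as in this sketch. The model identifier is one published by the Helsinki-NLP group; it and the sentencepiece dependency are assumptions about the environment.

    # Neural machine translation with a pretrained encoder-decoder model.
    from transformers import pipeline

    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
    print(translator("Machine translation is a sub-field of computational linguistics."))
    # e.g. [{'translation_text': 'La traduction automatique est ...'}]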

4. Hybrid Machine Translation

Hybrid machine translation tools integrate multiple machine translation models within a single software application, leveraging a combination of approaches to enhance the overall effectiveness of a single translation model. This typically involves combining rule-based and statistical machine translation subsystems, with the final translation output being a synthesis of the results generated by each subsystem.

Why do we need Machine Translation in NLP?


Machine translation in Natural Language Processing (NLP) has several benefits, including:

1. Improved communication: Machine translation makes it easier for people who speak different languages to communicate with each other, breaking down language barriers and facilitating international cooperation.

2. Cost savings: Machine translation is typically faster and less expensive than human translation, making it a cost-effective solution for businesses and organizations that need to translate large amounts of text.

3. Increased accessibility: Machine translation can make digital content more accessible to users who speak different languages, improving the user experience and expanding the reach of digital products and services.

4. Improved efficiency: Machine translation can streamline the translation process, allowing businesses and organizations to quickly translate large amounts of text and improving overall efficiency.

5. Language learning: Machine translation can be a valuable tool for language learners, helping them understand the meaning of unfamiliar words and phrases and improving their language skills.
8-12. Robot, Hardware, Perception, Planning, and Moving

The concepts Robot, Hardware, Perception, Planning, and Moving are interconnected components in the field of robotics, forming the foundation of how robots operate effectively in the real world. Here's how they relate to each other:
How They Work Together

1. Robot with Hardware: A robot relies on its hardware for sensing, decision-making, and action. Sensors (part of the hardware) feed environmental data into the robot's processing unit.
2. Perception: Sensors gather raw data (e.g., images, distances), which the robot processes to perceive the environment, identifying objects, obstacles, or paths.
3. Planning: Based on perception, the robot creates a plan to achieve its goals (e.g., navigating a room or picking up an object).
4. Moving: The plan is executed by controlling hardware components like motors or grippers, enabling the robot to interact with the physical world (a minimal control-loop sketch follows this list).
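
Here is a minimal sense-plan-act control-loop sketch in Python. The functions read_lidar and set_velocity are hypothetical placeholders for real sensor and motor drivers, and the thresholds are invented for illustration.

    # Sense-plan-act loop: perception feeds planning, planning drives movement.
    import time

    def read_lidar():
        # Placeholder: a real driver would return the distance (m) to the nearest obstacle.
        return 2.5

    def set_velocity(speed):
        # Placeholder: a real driver would command the drive motors.
        print(f"driving at {speed} m/s")

    def control_loop():
        while True:
            distance = read_lidar()   # Perception: gather raw sensor data
            if distance < 1.0:        # Planning: choose an action from the percept
                speed = 0.0           #   obstacle ahead -> stop
            else:
                speed = 0.5           #   path clear -> move forward
            set_velocity(speed)       # Moving: actuate the hardware
            time.sleep(0.1)           # repeat at roughly 10 Hz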

Example

● Robot: An autonomous delivery robot.
● Hardware: Equipped with wheels, cameras, GPS, and a LiDAR sensor.
● Perception: Uses LiDAR and cameras to detect pedestrians, obstacles, and roads.
● Planning: Creates a path to deliver a package while avoiding obstacles.
● Moving: Executes the plan by driving along the path and stopping when obstacles appear.

Together, these components enable robots to function autonomously in dynamic and complex environments.
