Chapter 5 - 7

AI introduction

CHAPTER 5

LEARNING

INTRODUCTION TO LEARNING
• Machine Learning is an application of Artificial Intelligence (AI) which
enables a program (software) to learn from experience and improve
itself at a task without being explicitly programmed. The term machine
learning was introduced by Arthur Samuel in 1959.

• Machine Learning is the study of making machines more human-like in their
behavior and decision making by giving them the ability to learn with
minimum human intervention, i.e., no explicit programming.

• Data is often called the fuel of Machine Learning, and we can safely say that
there is no machine learning without data.

INTRODUCTION TO LEARNING
• In ML, when agents are exposed to new data, they learn, grow, change,
and develop by themselves.

• The main focus of machine learning is to allow computers to learn
automatically without human intervention.

How can such learning be started and done?

• It can start with observations of data. The data can be examples,
instructions, or some direct experience.

• Then, on the basis of this input, the machine makes better decisions by looking for
patterns in the data.

HOW CAN MACHINE LEARN?
• The Machine Learning process starts with inputting training data into
the selected algorithm.

• New input data is fed into the machine learning algorithm to test
whether the algorithm works correctly.

• The prediction and results are then checked against each other.
• If the prediction and results don’t match, the algorithm is re-trained
multiple times until it gets the desired outcome.

• This enables the machine learning algorithm to continually learn on its
own and produce the optimal answer, gradually increasing in accuracy over
time.
LEARNING COMPONENTS
• The learning system consists of the following components:

Goal: defined with respect to the task to be performed by the system.

Model: a mathematical function which maps perceptions to actions.

Learning rules: update the model parameters with new experience such
that the performance measure with respect to the goal is optimized.

Experience: a set of perceptions (and possibly the corresponding actions).
MACHINE LEARNING TYPES
• AI machine learning models can be classified as:

Supervised Learning

Unsupervised Learning

Reinforcement Learning

SUPERVISED LEARNING
Supervised Learning: we use known or labeled data (the target) for the training.

Since the data is known, the learning is, therefore, supervised, i.e., directed into
successful execution.

 Once the model is trained on the known data, you can feed unknown data into the
model and get a new response.

It uses external feedback to learn functions that map inputs to output
observations. In these models the external environment acts as a "teacher" of the AI
algorithm.
SUPERVISED LEARNING
• In Supervised Learning, we have two sets of variables: the target variable, or
label (the variable we want to predict), and features (variables that help us
predict the target variable).

• We show the program (model) the features and the label associated with those
features, and the program is then able to find the underlying pattern in the data. Take
this example of a dataset where we want to predict the price of a house given its
size. The price, which is the target variable, depends upon the size, which is a feature.

• Input variables (x), and an output variable (y).

• An algorithm identifies the mapping function between the input and output variables.

• The relationship is y = f(x).

Number of rooms   Price
1                 $100
3                 $300
5                 $500
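The table above can be turned into a tiny worked example. This is a minimal sketch, not a production method: it fits y = f(x) by ordinary least squares over the three rows (the helper name `fit_line` is ours, purely illustrative).

```python
# Fit a straight line y = a*x + b to the rooms-vs-price table
# using the closed-form least-squares solution.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b

rooms = [1, 3, 5]
price = [100, 300, 500]
a, b = fit_line(rooms, price)
print(a, b)        # this toy data is exactly linear: a = 100, b = 0
print(a * 4 + b)   # predicted price for a 4-room house: 400
```

Because the example data is perfectly linear, the learned mapping is y = 100x, so a 4-room house is predicted at $400.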
SUPERVISED LEARNING
• We can group the supervised learning problems as:
Regression problems: used to predict future values; the model is
trained with historical data. Regression algorithms are used to
predict continuous values such as salary, age, or the future price
of a house.

Classification problems: various labels train the algorithm to identify
items within a specific category. Classification is used to predict
discrete values such as male or female, dog or cat, apple or
orange, beer or wine or water.

• The important difference between classification and regression problems
is that classification is about predicting a label while regression is about
predicting a quantity.
SUPERVISED LEARNING:
CLASSIFICATION
• Classification algorithms can be further divided into two categories:
• Linear Models
Logistic Regression
Support Vector Machines
• Non-linear Models
K-Nearest Neighbors
Kernel SVM
Naïve Bayes
Decision Tree Classification
Random Forest Classification
EXERCISE
• Example: Suppose there is a marketing company A, which runs various
advertisements every year and gets sales from them. The lists below show
the advertising spend made by the company in the last 5 years and the
corresponding sales:

Advertisement   Sales          Advertisement   Sales
90              1000           90              Low
120             1500           120             Medium
150             2000           150             High
100             1300           100             Medium
130             1600           130             High
200             ??             200             ??

The type of problem: Regression        The type of problem: Classification
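A hedged sketch of the regression version of this exercise: fit a least-squares line through the (advertisement, sales) pairs and predict the sales for an advertisement spend of 200. The answer depends on the model chosen; a plain linear fit on this data gives roughly 2730.

```python
# Least-squares fit of sales vs. advertisement spend, then predict at 200.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

ads = [90, 120, 150, 100, 130]
sales = [1000, 1500, 2000, 1300, 1600]
a, b = fit_line(ads, sales)
pred = a * 200 + b
print(round(pred))  # approximately 2732 for this data
```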
UNSUPERVISED LEARNING
• Unsupervised Learning: a machine learning technique in which models are
not supervised using a training dataset. Instead, the model itself finds the hidden
patterns and insights in the given data. It can be compared to the learning
which takes place in the human brain while learning new things.
UNSUPERVISED LEARNING
• The unsupervised approach is the one where we have no target variable, and we have
only the input variables (features) at hand.

• The algorithm learns by itself and discovers the underlying structure in the data. The
goal is to decipher the underlying distribution of the data to gain more knowledge
about the data.

Clustering: bundling the input variables with the same characteristics
together. E.g., grouping users based on search history.

Association: discovering the rules that govern meaningful associations among the
data set. E.g., people who watch 'X' will also watch 'Y'.
UNSUPERVISED LEARNING
• Some examples of unsupervised learning algorithms:

K-means clustering

KNN (k-nearest neighbors)

Principal Component Analysis

Independent Component Analysis

Singular Value Decomposition
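The clustering idea above can be sketched with a minimal k-means (Lloyd's algorithm) on made-up 1-D data. The function name and the toy points are illustrative, not from any library.

```python
# Minimal 1-D k-means: repeat {assign each point to its nearest center,
# recompute each center as the mean of its assigned points}.

def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        clusters = {c: [] for c in range(len(centers))}
        for p in points:
            nearest = min(range(len(centers)), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        centers = [sum(ps) / len(ps) if ps else centers[c]
                   for c, ps in clusters.items()]
    return sorted(centers)

data = [1, 2, 3, 10, 11, 12]            # two obvious groups, no labels given
print(kmeans_1d(data, centers=[1, 2]))  # -> [2.0, 11.0]
```

Even though no labels were supplied, the algorithm discovers the two groups {1, 2, 3} and {10, 11, 12} from the structure of the data alone.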
REINFORCEMENT LEARNING
• Reinforcement Learning: works on a feedback-based process, in which
an agent automatically explores its surroundings by trial, taking actions,
learning from experience, and improving its performance.

• The agent gets rewarded for each good action and punished for each
bad action; hence the goal of a reinforcement learning agent is to
maximize the rewards.

• Positive Reinforcement Learning: increasing the tendency
that the required behavior will occur again by adding something.

• Negative Reinforcement Learning: works exactly opposite to
positive RL. It increases the tendency that the specific behavior will
occur again by avoiding a negative condition.
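The reward-driven loop above can be sketched with tabular Q-learning on a tiny made-up corridor: states 0 to 4, the agent earns +1 only for reaching state 4. All names and hyperparameters here are illustrative, not a standard benchmark.

```python
# Tabular Q-learning on a 5-state corridor. The agent explores by trial
# (epsilon-greedy), gets rewarded at the goal, and improves its Q-values.
import random

random.seed(0)
N_STATES, ACTIONS = 5, [-1, +1]          # -1 = step left, +1 = step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3    # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        if random.random() < epsilon:            # explore
            a = random.choice(ACTIONS)
        else:                                    # exploit current knowledge
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0   # reward only at the goal
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # with enough training this should be [1, 1, 1, 1]: always go right
```

The agent is never told the answer; maximizing reward through trial and feedback is what drives it toward the "always move right" policy.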


NEURAL NETWORKS
• The term Artificial Neural Network refers to a biologically inspired sub-field of
artificial intelligence modeled after the brain.

• An Artificial Neural Network attempts to mimic the network of neurons that makes up
a human brain, so that computers have an option to understand things and
make decisions in a human-like manner.

• Just as a human brain has neurons interconnected with each other, artificial
neural networks also have neurons that are linked to each other in various
layers of the network. These neurons are known as nodes.
ARTIFICIAL NEURAL NETWORKS :
PERCEPTRON
• Perceptron: consists of a single neuron with multiple inputs and a
single output. It has restricted information-processing capability.

• A perceptron draws a hyperplane as the decision boundary over the
input space.
ANN: MULTILAYER PERCEPTRON
• Multilayer Perceptron: consists of more than one perceptron
grouped together to form a multilayer neural network. In contrast
to the perceptron, multilayer networks can learn not only multiple decision
boundaries, but the boundaries may also be nonlinear.
LEARNING IN ANN
• Learning a perceptron means finding the right values for W.
• The perceptron learning algorithm can be stated as below.
1. Assign random values to the weight vector

2. Apply the weight update rule to every training example

3. Are all training examples correctly classified?

a. Yes. Quit

b. No. Go back to Step 2.

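The three steps above can be sketched directly in code. This is a minimal illustration trained on the AND function (a linearly separable toy problem); the helper names are ours.

```python
# Perceptron learning: start from initial weights, apply the update rule to
# every training example, and stop once all examples are classified correctly.

def step(z):
    return 1 if z >= 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, target in samples:
            y = step(w[0] * x[0] + w[1] * x[1] + b)
            if y != target:                    # update weights only on mistakes
                w[0] += lr * (target - y) * x[0]
                w[1] += lr * (target - y) * x[1]
                b += lr * (target - y)
                errors += 1
        if errors == 0:                        # Step 3a: all correct, quit
            break
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND function
w, b = train_perceptron(data)
preds = [step(w[0] * x[0] + w[1] * x[1] + b) for x, _ in data]
print(preds)  # -> [0, 0, 0, 1]
```

Note that this only works because AND is linearly separable; a single perceptron cannot learn a function like XOR, which is exactly why multilayer networks are needed.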
LEARNING IN ANN
Learning means minimizing the error term.

In a Neural Network, we take the computed loss value
back to each layer so that it updates the weights in such a
way that the loss is minimized.

This way of adjusting the weights starting from the output is
known as Backward Propagation.

On the other hand, the process of going from left to right,
i.e. from the Input layer to the Output layer, is known as
Forward Propagation.
LEARNING IN ANN
• Forward Propagation is the way to move from the Input layer (left)
to the Output layer (right) in the neural network. The process of
moving from right to left, i.e. backward from the Output to the
Input layer, is called Backward Propagation.
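The forward/backward cycle above can be shown on the smallest possible network: one linear neuron trained by gradient descent on a squared loss. The numbers are illustrative.

```python
# One neuron, one training example: forward pass computes the output and the
# loss; backward pass pushes the loss gradient back into the weights.

w, b, lr = 0.0, 0.0, 0.05
x, target = 2.0, 10.0          # we want the neuron to learn f(2) = 10

losses = []
for _ in range(50):
    # Forward propagation: input layer -> output layer
    y = w * x + b
    loss = (y - target) ** 2
    losses.append(loss)
    # Backward propagation: chain rule from the loss back to w and b
    dL_dy = 2 * (y - target)
    w -= lr * dL_dy * x        # dL/dw = dL/dy * dy/dw
    b -= lr * dL_dy            # dL/db = dL/dy * 1
print(losses[0], losses[-1])   # the loss shrinks toward 0 as training proceeds
```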
DEEP LEARNING
• Deep Learning is a subfield of Machine Learning, inspired by the biological
neurons of the brain. Most deep learning methods use neural network
architectures, which is why deep learning models are often referred
to as deep neural networks.

• When the volume of data increases, machine learning techniques, no
matter how optimized, start to become inefficient in terms of
performance and accuracy, whereas deep learning performs much
better in such cases.

• Deep learning algorithms work with almost any kind of data and
require large amounts of computing power and data to solve
complicated problems.
DEEP LEARNING
• The term deep usually refers to the number of hidden layers in the
neural network. Traditional neural networks only contain 2-3 hidden
layers, while deep networks can have as many as 150.

DEEP LEARNING
• The most popular deep learning algorithms are:
Convolutional Neural Networks (CNNs)
Long Short Term Memory Networks (LSTMs)
Recurrent Neural Networks (RNNs)
Generative Adversarial Networks (GANs)
Self Organizing Maps (SOMs)
Deep Belief Networks (DBNs)
Restricted Boltzmann Machines (RBMs)
Autoencoders
DEEP LEARNING
• Convolutional Neural Networks (CNNs): also known as ConvNets, consist of
multiple layers and are mainly used for image processing and object detection.
Yann LeCun developed the first CNNs in the late 1980s; the architecture was called LeNet.

• CNNs are widely used to identify satellite images, process medical images,
forecast time series, and detect anomalies.
DEEP LEARNING
• Recurrent Neural Networks (RNNs): contain directed connections that
form a cycle, which allows the output of a previous step to be used as
input in the current step. RNNs are mostly used for image captioning,
time series analysis, handwriting recognition, and machine translation.
DEEP LEARNING
• LSTMs are a type of Recurrent Neural Network (RNN) that can learn
and memorize long-term dependencies. Recalling past information for
long periods is the default behavior.

GENERALIZATION AND OVERFITTING

• The cause of poor performance in machine learning is either overfitting or
underfitting the data.

• The situation where a model performs very well on the training data but
its performance drops significantly on the test set is called an
overfitting model: good performance on the training data, poor
generalization to other data.

• If the model performs poorly on both the test and the training set, we call
that an underfitting model: poor performance on the training data and poor
generalization to other data.


Chapter 6
NATURAL LANGUAGE PROCESSING
BASICS
INTRODUCTION
For a machine to be intelligent, a key tenet is to be able
to communicate via one of the natural languages known
to humans.

Natural Language Processing lies at the intersection of Computer
Science, Human Language, and Artificial Intelligence.
INTRODUCTION
 NLP is the technology used by machines to understand, analyze,
manipulate, and interpret human languages.

 It is the process of computer analysis of input provided in a human
language, and conversion of this input into a useful form of
representation.

Human Language → Computer Representation

 It helps developers to organize knowledge for performing tasks such as
translation, automatic summarization, Named Entity Recognition (the process
of identifying word or phrase spans in unstructured text (the entity) and
classifying them as belonging to a particular class (the entity type)),
speech recognition, relationship extraction, and topic segmentation.

 Relationship Extraction (RE) is an important process in
Natural Language Processing that automatically identifies and
categorizes the connections between entities within natural
language text.

 These entities can encompass individuals, organizations,
locations, dates, or any other nouns or concepts mentioned in
the text.
INTRODUCTION

The input/output of an NLP system can be:

Written text

Speech
Steps in NLP
Lexical Analysis (Morphological Analysis): the first phase of NLP.
This phase scans the source text as a stream of characters and
converts it into meaningful lexemes. It divides the whole text into
paragraphs, sentences, and words.

Syntactic Analysis (Parsing): is used to check grammar and word
arrangement, and shows the relationships among the words.

Example: "the ate apple John" vs. "John ate the apple."

Semantic Analysis: is concerned with meaning representation. It
mainly focuses on the literal meaning of words, phrases, and
sentences.
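The lexical-analysis step above can be sketched with a toy tokenizer: split the text into sentences, then into lowercase word tokens. Real NLP systems use far more robust tools; the function name and regexes here are illustrative.

```python
# Toy lexical analysis: divide text into sentences, then into word lexemes.
import re

def lexical_analysis(text):
    # split after sentence-ending punctuation followed by whitespace
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    # extract lowercase word tokens from each sentence
    return [[w.lower() for w in re.findall(r"[A-Za-z']+", s)] for s in sentences]

text = "John ate the apple. It was sweet!"
print(lexical_analysis(text))
# -> [['john', 'ate', 'the', 'apple'], ['it', 'was', 'sweet']]
```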
Discourse Integration: the meaning of any sentence
depends upon the meaning of the sentence just before it.

In addition, it also brings about the meaning of the
immediately succeeding sentence (i.e., the step of
resolving references is done at this stage).

Example: Monkeys eat bananas when they wake up.

Who is "they" here? – Monkeys.

Monkeys eat bananas when they are ripe.

Who is "they" here? – Bananas.
• Pragmatic Analysis: the fifth and last phase of
NLP. It deals with the overall communicative and
social content and its effect on interpretation.

• It means abstracting or deriving the meaningful use of
language in situations. In this analysis, the main focus
is always on what was said being reinterpreted as what is
meant.

• Example: John saw Mary in the garden with a cat.

This can be interpreted as: John used a cat to see Mary, or
John saw Mary while she (or the garden) had a cat.
NATURAL LANGUAGE PROCESSING
The following language-related information is useful in NLP:

• Phonology: concerns how words are related to the
sounds (pronunciations) that realize them.

• Morphology: concerns how words are constructed from more basic
meaning units called morphemes.

• Syntax: concerns how words can be put together to form correct
sentences and determines what structural role each word plays in the
sentence.

• Semantics: concerns what words mean and how these meanings combine
in sentences to form sentence meanings.
NATURAL LANGUAGE PROCESSING
• Pragmatics: concerns how sentences are used in different
situations and how use affects the interpretation of the sentence.

• Discourse: concerns how the immediately preceding sentences
affect the interpretation of the next sentence. For example,
interpreting pronouns and interpreting the temporal aspects of the
information.

• World Knowledge: includes general knowledge about the world,
i.e., what each language user must know about the other's beliefs and
goals.
COMPONENTS OF NLP

• There are two components of NLP:

+ Natural Language Understanding

+ Natural Language Generation
NATURAL LANGUAGE PROCESSING
• Natural Language Understanding

• Mapping the given input in natural language into a
useful representation.

• Different levels of analysis are required:

 Morphological analysis,

 Syntactic analysis,

 Semantic analysis,

 Discourse analysis, …
NATURAL LANGUAGE PROCESSING

• Natural Language Generation

• Producing output in natural language from some
internal representation.

• Different levels of synthesis are required:

 Deep planning (what to say),

 Syntactic generation

• NL Understanding is much harder than NL Generation, but both
of them are hard.
WHY NLP IS DIFFICULT?
• For example, consider these three sentences:

+ How is the weather today?

+ Is it going to rain today?

+ Do I need to take my umbrella today?

• All these sentences have the same underlying question, which is to
enquire about today's weather forecast.

• As humans, we can identify such underlying similarities almost
effortlessly and respond accordingly.

• But this is a problem for machines: if we have to code rules for each and
every combination of words in a natural language to help a machine
understand, then things get very complicated very quickly.
• Natural Language Processing is considered a difficult problem in
computer science. The difficulty in NL understanding arises from
several facts:

Contextual words, phrases and homonyms: the same words and phrases
can have different meanings based on the context of a sentence.

- Homograph: same spelling, different meaning/pronunciation.

- Homophone: same pronunciation, different spelling/meaning.

- Homonym: can refer to words that are homographs or homophones.

Synonyms can lead to issues similar to contextual understanding
because we use many different words to express the same idea, e.g.,
small, little, tiny, minute. Different people use synonyms to denote
slightly different meanings within their personal vocabulary.
• Irony and sarcasm: words and phrases that, strictly by
definition, may be positive or negative, but actually
connote the opposite.

• Informal phrases, expressions, idioms, and culture-specific
lingo: these have no dictionary definition at all, and
such expressions may even have different meanings in
different geographic areas. Example: "Break a leg,"
meaning good luck.

• Domain-specific language: different businesses and
industries often use very different language.
WHY NLP IS DIFFICULT?
• Natural language is extremely rich in form and
structure, and very ambiguous.

• Ambiguity is the capability of being understood in
more than one way.

• Lexical Ambiguity: exists in the presence of two or
more possible meanings of the sentence within a single
word.

Example: Hewan is looking for a match.
WHY NLP IS DIFFICULT?
• Syntactic Ambiguity: exists in the presence of two or more possible
meanings within the sentence.

 This form of ambiguity is also called structural or grammatical ambiguity.

 It occurs because the sentence structure leads to two or
more possible meanings.

Example: I saw the girl with the binoculars.

o Referential Ambiguity: exists when you refer to something using a
pronoun. Example: Kiran went to Sunita. She said, "I am hungry." In
this sentence, you do not know who is hungry, Kiran or Sunita.
CORPUS
• A corpus is a large and structured set of machine-readable
texts that have been produced in a natural
communicative setting.

• Corpora can be derived in different ways: text that
was originally electronic, transcripts of spoken
language, optical character recognition, etc.

• Having a well-organized corpus helps the machine
learn natural language easily.
Applications of Natural Language Processing

• Machine Translation refers to converting text (or speech) in language A into the
corresponding text in language B.

• Different Machine Translation architectures are:

• Interlingua-based systems: train only one model for translating many languages. To
achieve this, they use a shared representation of sentence meaning.

• Transfer-based systems: train a specific model for each language pair to translate
from one language to another.

• The challenge is to acquire the required knowledge resources, such as mapping rules and
bilingual dictionaries, either by hand or automatically from corpora.


Applications of Natural Language Processing

• Spam Detection: is used to detect unwanted e-mails reaching a user's
inbox.
Applications of Natural Language Processing

• Sentiment Analysis: also known as opinion mining, is used on the
web to analyze the attitude, behavior, and emotional state of the sender.

• This application is implemented through a combination of NLP and
statistics, by assigning values to the text (positive, negative, or
neutral) and identifying the mood of the context (happy, sad, angry, etc.).
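The "assigning values to the text" step can be sketched with a tiny lexicon-based scorer. The word lists here are illustrative; real sentiment systems use much larger lexicons or learned models.

```python
# Lexicon-based sentiment: +1 per positive word, -1 per negative word,
# label the text by the sign of the total score.

POSITIVE = {"good", "great", "happy", "love", "excellent"}
NEGATIVE = {"bad", "terrible", "sad", "hate", "awful"}

def sentiment(text):
    words = text.lower().split()
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great phone"))    # -> positive
print(sentiment("terrible and awful service")) # -> negative
print(sentiment("the phone arrived today"))    # -> neutral
```

A scorer this naive fails on exactly the hard cases discussed earlier (irony, sarcasm, context), which is why the statistics side of real sentiment analysis matters.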
Applications of Natural Language Processing

• Information Retrieval: web search (uni-lingual or multi-lingual).

• Query Answering/Dialogue: a natural language interface with a database
system, or a dialogue system.

• Report Generation: generation of reports such as weather reports.

• Some small applications: grammar checking, spell checking, spell
correction.
Quiz (5%)
Which types of Ambiguity exists in the following
sentences?
1.The boy told his father the theft. He was very upset.
2.The professor said on Monday he would give an exam.
3.The chicken is ready to eat.
4.I saw Chala near the bank.
5. I gave her the book with the cover.

CHAPTER 7: ROBOTICS

ROBOTICS
Introduction
• The word robot was coined by the Czech novelist Karel Capek in a 1920 play titled Rossum's

Universal Robots (RUR). Robot in Czech is a word for worker or servant.

• Robotics is the branch of artificial intelligence that deals with the study of creating intelligent

and efficient robots.

• The aim of a robot is to manipulate objects by perceiving, moving, picking, and modifying the

physical properties of objects.
Introduction
• Robotics is a branch of Artificial Intelligence (AI); it mainly combines
electrical engineering, mechanical engineering, and computer science for the
construction, design, and application of robots.

• Robots have electrical components for providing power and controlling the
machinery.

• They have a mechanical construction, shape, or form designed to accomplish a
particular task.

• They contain some type of computer program that determines what, when, and how
a robot does something.
Introduction
• A robot is a reprogrammable, multifunctional manipulator designed to move

material, parts, tools, or specialized devices through variable programmed motions

for the performance of a variety of tasks.

• Robots can work as:

+ An automatic machine sweeper in a space (vacuum cleaner)

+ A machine removing mines in a war field

+ An automatic car for a child to play with
Main Components Of Robot
• Robots are built to bring solutions to a variety of needs and fulfill several different
purposes, and therefore, require a variety of specialized components to complete
these tasks. However, there are several components that are central to every robot’s
construction.

Main Components Of Robot
• Control system: programmed to tell a robot how to utilize its specific

components, similar in some ways to how the human brain sends signals

throughout the body, in order to complete a specific task.

• Sensors: provide a robot with stimuli in the form of electrical signals that are

processed by the controller and allow the robot to interact with the outside world.

• There are different types of sensors available to choose from, and the characteristics

of the sensors determine the type of sensor to be used for a particular

application.
Main Components Of Robot
• Common sensors found within robots include video cameras that function as eyes,
photoresistors that react to light, microphones that operate like ears, etc.
Main Components Of Robot
• Actuators: a device can only be considered a robot if it has a movable frame
or body. Actuators are the components responsible for this movement. They
are made up of motors that receive signals from the control system. Common
robotic actuators utilize combinations of different electromechanical devices.
Main Components Of Robot
• Locomotion is the method of moving from one place to another. The mechanism
that makes a robot capable of moving in its environment is called robot
locomotion.

• There are many types of locomotion:

 Legged

 Wheeled

 Tracked slip/skid

 Combination of legged and wheeled locomotion
Main Components Of Robot
• Legged locomotion: requires more motors to accomplish a movement.
It is suited for rough as well as smooth terrain, where an irregular or too smooth
surface makes it consume more operational power. It is a little difficult to implement
because of stability issues.

• It comes in varieties of one, two, four, and six legs. If a robot has multiple
legs, then leg coordination is required for locomotion.
Main Components Of Robot
• Wheeled Locomotion: requires fewer motors to accomplish a
movement. It is a little easier to implement, as there are fewer stability issues
with a larger number of wheels. It is more power efficient compared to legged
locomotion.

 Standard wheel: characterized by two degrees of freedom and the ability to move in

the forward and reverse directions. The angle between the robot frame/chassis and
the wheel is constant, while the center of the wheel is fixed to the robot frame.
Main Components Of Robot
 Castor/ball wheels: have total freedom of 360° and are good at balancing the robot. These

wheels are provided with a spherical ball made of nylon, plastic, or any hard material fixed

in a holder.

 Castor wheels are easy to implement and have high load capacity. However, their major

limitation is a high degree of traction and therefore more power consumption.

 Omni-directional wheels: smaller wheels attached to a large wheel at an angle of 45° to

form a multi-directional structure capable of moving in any direction.

 Omni wheels are designed such that the axis of the smaller wheels is at a right angle to the

axis of the large wheel, which enables the wheel to rotate parallel to its own axis, generating
low surface resistance. These wheels are used for driving as well as steering the robot.
Main Components Of Robot
 Orientable wheels: mounted on an omni-directional fork support. There are two
types of orientable wheels: centered and off-centered. Centered wheels have a
vertical axle passing through the wheel center, while off-centered wheels have a
vertical axis slightly off-center from the wheel's center.

• Slip/Skid Locomotion: the vehicle uses tracks, as in a tank. The robot is
steered by moving the tracks at different speeds in the same or opposite directions. It
offers stability because of the large contact area between ground and track.
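The "different track speeds" idea above can be sketched with the standard differential-drive kinematics: the average track speed drives the robot forward, the speed difference turns it. Symbols and values here are illustrative.

```python
# Slip/skid (differential) steering: advance the pose (x, y, theta) one step.
import math

def skid_steer(v_left, v_right, width, x, y, theta, dt):
    v = (v_left + v_right) / 2          # forward speed of the chassis
    omega = (v_right - v_left) / width  # turn rate from the speed difference
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Equal track speeds: the robot drives straight ahead
print(skid_steer(1.0, 1.0, width=0.5, x=0, y=0, theta=0, dt=1.0))  # (1.0, 0.0, 0.0)

# Opposite track speeds: the robot spins in place
print(skid_steer(-1.0, 1.0, width=0.5, x=0, y=0, theta=0, dt=1.0)) # (0.0, 0.0, 4.0)
```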
Main Components Of Robot
• Power Supply: just as the human body requires food in order to function, robots
require power. Stationary robots, such as those found in a factory, may run on AC
power through a wall outlet, but more commonly, robots operate via an internal
battery.

• Most robots utilize lead-acid batteries for their safety and long shelf life,
while others may utilize the more compact but also more expensive silver-cadmium
variety.

• Safety, weight, replaceability, and lifecycle are all important factors to consider
when designing a robot's power supply.
Types Of Robots
• All robots vary in design, functionality, and degree of autonomy.

• Pre-programmed robots operate in a controlled environment where they do
simple, monotonous tasks. They have to be told ahead of time what to
do, and then they simply execute that program.

• They cannot change their behavior while they are working, and no human is
guiding their actions.
Types Of Robots
• Humanoid robots are robots that look like and/or mimic human behavior. These
robots usually perform human-like activities (like running, jumping and carrying
objects), and are sometimes designed to look like us, even having human faces and
expressions. Two of the most prominent examples of humanoid robots are Hanson
Robotics’ Sophia and Boston Dynamics’ Atlas.

Types Of Robots
• Autonomous robots: operate independently of human operators. These robots are
usually designed to carry out tasks in open environments that do not require human
supervision. They are quite unique because they use sensors to perceive the world
around them, and then employ decision-making structures to take the optimal next
step based on their data and mission.

Types Of Robots
• Teleoperated robots are semi-autonomous bots that use a wireless network to
enable human control from a safe distance. These robots usually work in extreme
geographical conditions, weather, circumstances, etc.

Types Of Robots

• Augmented robots either enhance current human capabilities or replace
capabilities a human may have lost. The field of robotics for human augmentation
is a field where science fiction could become reality very soon, with bots that have
the ability to redefine humanity by making humans faster and
stronger. Some examples of current augmenting robots are robotic prosthetic limbs
and exoskeletons used to lift heavy weights.
Thank You
አመሰግናለሁ! (Thank you!)
Galatoomaa! (Thank you!)
Wish you all the best !
