
Artificial Intelligence

CS-451

Instructor: Dr. Syed Musharaf Ali

Room G-104, DSE, IIUI

Phone: 051-9019724, Ext. 2724
Course Outline
1. Introduction to Artificial Intelligence
2. Intelligent Agents
3. Neural Networks
4. Machine Learning
5. Solving Problems
6. Game Theory
7. Computer Vision
8. Robotics
9. Genetic Algorithms
10. Deep Learning
Coursework and Evaluation

Module                Weight (%)
Assignments/Quizzes   20
Midterm               20
Final                 60
Google Classroom

AI-F14 A: fcrylqu
AI-F14 B: 9xbze2
What is Artificial Intelligence?
Artificial:

 Something produced by human beings rather than occurring naturally.

Intelligence:

 The ability to acquire knowledge and apply skills.

 The ability to figure out what to do when you do not know what to do, in a given environment or scenario.
What is Artificial Intelligence?

 The science and engineering of making intelligent machines, especially intelligent computer programs.

 Artificial Intelligence is a way of making a computer, a computer-controlled robot, or software think intelligently, in a manner similar to how intelligent humans think.
Goal of Artificial Intelligence

To integrate all acquired knowledge disciplines into one system that can understand, think, learn, and behave like humans.
Applications of Artificial Intelligence

 Finance
 Intelligent Robotics
 Gaming
 Medicine
 Web
 Natural language processing
 Expert systems
 Vision systems
 Speech recognition
 Handwriting recognition
 etc.
Artificial Intelligence Approach
Agents and Environments:

 The AI approach is built around the intelligent agent.

 An agent is anything that can perceive its environment through sensors and act upon that environment through effectors.

 The function of an agent that maps sensor readings to effector actions is called the control policy of the agent.
Artificial Intelligence Approach
Agents and Environments:

[Diagram: the perception-action cycle. The agent perceives the environment through sensors and acts on it through effectors.]
Intelligent Agent
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors.

A rational agent is one that does the right action: the action that will cause the agent to be the most successful.

A performance measure is the criterion that determines how successful an agent is, e.g. % accuracy achieved, amount of work done, energy consumed, time in seconds, etc.
Intelligent Agent
An ideal rational agent:

For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence (information) provided by the percept sequence and whatever built-in knowledge the agent has.
Intelligent Agent
Mapping:

An agent's behaviour depends only on its percept sequence.

We can describe any particular agent by making a table of the action it takes in response to each possible percept sequence:

Percept Sequence   Action
P1                 A1
P2                 A2
P3                 A3

Mapping from percept sequences to actions
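
Such a mapping drops straight into code as a lookup table. Below is a minimal sketch (not from the slides); the percept sequences P1, P2, P3 and actions A1, A2, A3 are the placeholders from the table above:

# Minimal sketch of a table-driven agent: behaviour is a lookup from the
# percept sequence observed so far to an action.
mapping = {
    ("P1",): "A1",
    ("P1", "P2"): "A2",
    ("P1", "P2", "P3"): "A3",
}

history = []

def table_driven_agent(percept):
    """Append the new percept, then look up the action for the whole sequence."""
    history.append(percept)
    return mapping.get(tuple(history), "no-op")  # fallback for unmapped sequences

print(table_driven_agent("P1"))  # A1
print(table_driven_agent("P2"))  # A2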


Intelligent Agent
Ideal Mapping:

If mappings describe agents, then ideal mappings describe ideal agents.

Specifying which action an agent ought to take in response to any given percept sequence provides a design for an ideal agent.
Intelligent Agent
Autonomy:

A system is autonomous if its behaviour is determined by its own experience or learning.

If the agent's actions are based completely on built-in knowledge, then we say that the agent lacks autonomy.

It would be reasonable to provide an artificial agent with some built-in knowledge as well as the ability to learn.

A truly autonomous intelligent agent should be able to operate successfully in a wide variety of environments, given sufficient time to adapt.
Agent Program

Agent program: a function that implements the agent mapping from percepts to actions.

The computing device on which the program runs is called the architecture.

The architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program's actions to the effectors.

Agent = architecture + program
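
This relationship can be sketched as a simple loop. In this illustrative sketch, read_sensors() and actuate() are hypothetical stand-ins for real sensor and effector I/O:

# Sketch of "Agent = architecture + program": the architecture loop feeds
# percepts to the agent program and passes its actions to the effectors.
def agent_program(percept):
    """The agent mapping from percepts to actions (trivial placeholder)."""
    return "brake" if percept == "obstacle-ahead" else "cruise"

def architecture(read_sensors, actuate, steps=3):
    for _ in range(steps):
        percept = read_sensors()         # make percepts available to the program
        action = agent_program(percept)  # run the program
        actuate(action)                  # feed the action to the effectors

architecture(lambda: "obstacle-ahead", print)  # prints "brake" three times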


Agent Program

When designing an agent program, we need to keep in mind:

 Possible percepts and actions
 Performance measures (goals)
 What sort of environment it will operate in

Agent type:   taxi driver (self-driving car)
Percepts:     cameras, speedometer, GPS, sonar, microphone, etc.
Actions:      steer, accelerate, brake, talk to passenger
Goals:        safe, fast, legal, comfortable trip; maximize profits
Environments: roads, traffic, pedestrians, customers
Environments
The environment provides percepts/information to the agent, and the agent performs actions on the environment.
Properties of Environments

Fully observable vs. partially observable:

If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the environment is fully observable/accessible to that agent, e.g.

Game of chess (fully observable)
Poker, self-driving car (partially observable)
Properties of Environments

Deterministic vs. nondeterministic:

If the next state of the environment is completely determined by the current state and the actions selected by the agent, then we say the environment is deterministic, e.g.

Chess (deterministic)
Dice, poker, taxi driving (nondeterministic)

If the environment is fully observable and deterministic, then the agent need not worry about uncertainty.
Properties of Environments

Episodic vs. non-episodic:

The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the quality of the action in each episode depends only on the episode itself.

Subsequent episodes do not depend on what actions occur in previous episodes, e.g.

Part-picking robot (episodic)
Chess, poker, taxi driving (non-episodic)
Properties of Environments

Static vs. dynamic:

If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise it is static, e.g.

Chess (static)
Taxi driving (dynamic)
Properties of Environments

Discrete vs. continuous:

If there is a limited number of distinct, clearly defined percepts and actions, we say that the environment is discrete, e.g.

Chess (discrete): there is a fixed number of possible moves on each turn
Driving (continuous)
Properties of Environments

 Different environment types require different agent programs to deal with them effectively.

 An environment that is partially observable, non-episodic, dynamic and continuous is the hardest for an artificial intelligent agent to deal with.
Agent Program

There are four types of agents:

1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
Simple Reflex Agents
They choose actions based only on the current percept.

They are rational only if a correct decision can be made on the basis of the current percept alone.

Their environment must be completely observable.

Condition-action rule: a rule that maps a state (condition) to an action.
Simple Reflex Agents
This agent function only succeeds when the environment is fully observable, e.g.

 If the car in front is braking, then initiate braking.
 If your hand is in a fire, then pull it away.
 If there is a rock, then pick it up (Mars lander).
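
These condition-action rules translate directly into code. A minimal sketch follows; the percept fields and rules are hypothetical placeholders, not course material:

# Sketch of a simple reflex agent: the action depends only on the current
# percept, selected by the first matching condition-action rule.
RULES = [
    (lambda p: p.get("car_in_front_braking"), "initiate-braking"),
    (lambda p: p.get("hand_in_fire"), "pull-away-hand"),
    (lambda p: p.get("rock_visible"), "pick-up-rock"),
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):
            return action
    return "no-op"  # no rule matched

print(simple_reflex_agent({"rock_visible": True}))  # pick-up-rock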
Model-based Reflex Agents
Model-based reflex agents are made to deal with partial observability.

They use a model of the world to choose their actions, and they maintain an internal state.

Model: the knowledge about how things happen in the world.

Internal state: a representation of unobserved aspects of the current state, depending on percept history.

Updating the state requires information about:
1. How the world evolves.
2. How the agent's actions affect the world.
Model-based Reflex Agents

e.g. This time our Mars lander, after picking up its first sample, stores this in its internal state of the world around it, so when it comes across a second identical sample it passes it by and saves space for other samples.
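
The Mars lander behaviour can be sketched as a model-based agent whose internal state records which samples are already on board. This is an illustrative sketch only:

# Sketch of a model-based reflex agent: internal state (samples already
# collected) changes how the same percept is handled the second time.
class MarsLander:
    def __init__(self):
        self.collected = set()  # internal state: sample types already on board

    def act(self, percept):
        if percept.startswith("sample:"):
            kind = percept.split(":", 1)[1]
            if kind in self.collected:
                return "pass-by"      # already have one; save space
            self.collected.add(kind)  # update the internal state
            return "pick-up"
        return "keep-roving"

lander = MarsLander()
print(lander.act("sample:basalt"))  # pick-up
print(lander.act("sample:basalt"))  # pass-by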
Goal-based Reflex Agents
Goal-based agents further expand on the capabilities of model-based agents by using "goal" information.

Goal information describes situations that are desirable.

This gives the agent a way to choose among multiple possible actions, selecting the one which reaches a goal state.

Search and planning are the subfields of artificial intelligence devoted to finding action sequences that achieve the agent's goals.
Utility-based Reflex Agents

Just having goals isn't good enough, because often we may have several actions which all satisfy a goal, so we need some way of working out the most efficient one.

A utility function maps each state reached after each action to a real number representing how efficiently the action achieves the goal.

This is useful when we have many actions that all solve the same goal but differ in efficiency.
Utility-based Reflex Agents

e.g. a shortest-path-finding algorithm.
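
As a minimal illustrative sketch (the actions and utility values are hypothetical), utility-based selection simply maximizes the utility of the resulting state:

# Sketch of utility-based action selection: among candidate actions that all
# satisfy the goal, pick the one whose outcome has the highest utility.
utilities = {"route-A": 10.0, "route-B": 7.5, "route-C": 9.2}

def choose_action(utilities_by_action):
    return max(utilities_by_action, key=utilities_by_action.get)

print(choose_action(utilities))  # route-A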


Chapter 3 - Problem Solving
Problem Solving

 The process of working through the details of a problem to reach a solution.

 Problem solving may include mathematical or systematic operations.

 It can be a gauge of an individual's critical thinking skills.


Problem Solving Agent

 Simple reflex agents are unable to plan ahead.

 Their actions are determined only by the current percept.

 They have no knowledge of what their actions do, nor of what they are trying to achieve.

 A problem-solving agent decides what to do by finding sequences of actions that lead to desirable states (goals) and maximize the performance measure.
Problem Solving Agent

 Goal formulation is the first step in problem solving. It is based on the current situation and the agent's performance measure, and it helps to simplify the decision problem. A goal is normally considered to be the set of states in which the goal is satisfied.

 Problem formulation is the process of deciding what actions and states to consider, given a goal.
Problem Solving Agent

 Search is the process of looking for a sequence of actions that reaches the goal.

 A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out. This is called the execution phase.
Problem Solving Agent
 Example

On holiday in Romania, currently in Arad, with a non-refundable ticket for a flight that leaves tomorrow from Bucharest.

 Formulate goal: be in Bucharest.

 Formulate problem:
   states: various cities
   actions: drive between cities

 Find solution: a sequence of cities, e.g. Arad, Sibiu, Fagaras, Bucharest.
Problem Solving Agent
 Example

[Figure: road map of Romania with driving connections between cities]
Well Defined Problems and Solutions
Problem: A problem is really a collection of information that the agent will use to decide what to do. The basic elements of a problem definition are the states and actions:

Initial state: the state that the agent knows itself to start in.

Operator: the set of possible actions available to the agent. The term operator denotes the description of an action in terms of which state will be reached by carrying out the action in a particular state.

State space: the set of all states reachable from the initial state by any sequence of actions.
Well Defined Problems and Solutions
Path: a path in the state space is simply any sequence of actions leading from one state to another.

Goal test: a test which the agent can apply to a single state description to determine if it is a goal state.

Path cost function: a function that assigns a cost to a path.


Well Defined Problems and Solutions
A problem instance is the input to a search algorithm.

Solution: the output of a search algorithm is a solution, that is, a path from the initial state to a state that satisfies the goal test.

Performance measure:

Total cost = search cost + path cost

where search cost is the time and memory required to find a solution.
Well Defined Problems and Solutions
Examples: The 8-Puzzle Formulation (http://mypuzzle.org/sliding)

The puzzle consists of a 3x3 board with eight numbered tiles and a blank space. A tile adjacent to the blank space can slide into the space.
Well Defined Problems and Solutions
Examples: The 8-Puzzle Formulation
States: a state description specifies the location of each of the eight tiles in one of the nine squares. For efficiency, it is useful to also include the location of the blank.

Operators: blank moves left, right, up, or down.

Goal test: state matches the goal configuration.

Path cost: each step costs 1, so the path cost is just the length of the path.
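
Under this formulation, a state and its successor (expansion) function can be sketched as follows. This is an illustrative sketch; the 9-tuple encoding is an assumption, not necessarily the course's representation:

# Sketch of the 8-puzzle formulation: a state is a 9-tuple (0 = blank),
# and the operators slide the blank left/right/up/down on the 3x3 board.
MOVES = {"up": -3, "down": 3, "left": -1, "right": 1}

def successors(state):
    """Yield (operator, next_state) pairs for every legal blank move."""
    blank = state.index(0)
    for op, delta in MOVES.items():
        target = blank + delta
        if target < 0 or target > 8:
            continue  # would leave the board vertically
        if op in ("left", "right") and target // 3 != blank // 3:
            continue  # would wrap around a row edge
        s = list(state)
        s[blank], s[target] = s[target], s[blank]
        yield op, tuple(s)

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)
start = (1, 2, 3, 4, 5, 6, 7, 0, 8)
print([op for op, s in successors(start) if s == GOAL])  # ['right']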
Well Defined Problems and Solutions
Examples: The 8-Queens Formulation

The goal of the 8-queens problem is to place eight queens on a chessboard such that no queen attacks any other. (A queen attacks any piece in the same row, column, or diagonal.)

States: any arrangement of 0 to 8 queens on the board.
Operators: add a queen to any square.
Goal test: 8 queens on the board, none attacked.
Path cost: zero.
Searching for Solutions
Generating/expanding a state:

For the current state, test if it is a goal state.

If not, then apply an operator/action to generate a new set of states. This process is called expanding the state.

Search strategy:

We continue to choose, test and expand until a solution is found, or until there are no more states to be expanded. The choice of which state to expand first is determined by the search strategy.
Searching for Solutions
Search Tree:

It is helpful to think of the search process as building up a search tree that is superimposed over the state space.

The root of the search tree is a search node corresponding to the initial state.

The leaf nodes of the tree correspond to states that have no successors in the tree, either because
1. they have not been expanded yet, or
2. they were expanded but generated the empty set.

At each step, the search algorithm chooses one leaf node to expand.

Searching for Solutions
Search Tree:

[Figure: a state space and its corresponding search tree]

Searching for Solutions
Search tree vs. state space:

 If there are an infinite number of paths in the state space, the search tree has an infinite number of nodes.

 A node is a data structure used to represent the search tree for a particular problem instance, as generated by a particular algorithm.

 A state represents a configuration (or set of configurations) of the world.

 Nodes have depths and parents, whereas states do not.

Searching for Solutions
Search Tree:

 It is possible for two different nodes to contain the same state, if that state is generated via two different sequences of actions.

 The collection of nodes (states) that are waiting to be expanded is called the frontier.
Search Strategies
Criteria for evaluating a search strategy:

1. Completeness: the strategy must guarantee to find a solution, if one exists.

2. Time complexity: how much time it takes to find a solution.

3. Space complexity: how much memory it needs.

4. Optimality: does the strategy find the highest-quality solution when there are several different solutions?
Search Strategies
Uninformed search / blind search:

1. These strategies have no information about the number of steps or the path cost from the current state to the goal state.

2. All they can do is distinguish a goal state from a non-goal state.

Informed search / heuristic search:

A search strategy that ranks the alternatives at each branching step, based on available information, to decide which branch to follow.
Breadth-First Search
1. The root node is expanded first.

2. Then all the nodes generated by the root node are expanded next, then their successors, and so on.

3. If there is a solution, then BFS is guaranteed to find it.

4. If there are several solutions, BFS always finds the shallowest goal state first.

[Figure: breadth-first search trees after 0, 1, 2, and 3 node expansions]
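
A minimal BFS sketch on an explicit graph follows; the graph and node names are hypothetical, chosen only to show the first-in-first-out frontier:

# Minimal sketch of breadth-first search: always expand the shallowest
# unexpanded node (FIFO frontier of paths).
from collections import deque

def bfs(graph, start, goal):
    frontier = deque([[start]])  # queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None  # no solution

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "G"], "D": ["G"]}
print(bfs(graph, "A", "G"))  # ['A', 'C', 'G']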


Breadth-First Search

Complete            Yes
Time complexity     ?
Space complexity    ?
Optimality          Yes


Breadth-First Search
Suppose the branching factor is b.

The root of the search tree generates b nodes at the first level, each of which generates b more nodes, for a total of b² at the second level. Each of these generates b more nodes, yielding b³ nodes at the third level, and so on.

Now suppose that the solution for this problem has a path length of d.

Then the maximum number of nodes expanded before finding a solution is

Total number of nodes up to level d = 1 + b + b² + b³ + ⋯ + b^d

Breadth-First Search

Complexity factor = O(b^d)

If b = 10, expanding 1000 nodes/second at 100 bytes/node, breadth-first search quickly becomes computationally expensive.
Breadth-First Search

Complexity factor = O(b^d)

Exponential-complexity search problems are only feasible for small problem instances.

Complete            Yes
Time complexity     Exponential
Space complexity    Exponential
Optimality          Yes
Uniform Cost Search(UCS)

1. BFS finds the shallowest goal state, but this may not be the least-cost solution.

2. UCS modifies BFS by always expanding the lowest-cost node on the frontier (as measured by path cost).

3. The first solution found is guaranteed to be the cheapest solution.
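
A minimal UCS sketch follows; it differs from the BFS sketch above only in using a priority queue ordered by path cost (the graph and step costs are hypothetical):

# Minimal sketch of uniform cost search: expand the lowest path-cost node
# on the frontier first (priority queue instead of BFS's FIFO queue).
import heapq

def ucs(graph, start, goal):
    frontier = [(0, start, [start])]  # (path cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for succ, step_cost in graph.get(node, []):
            heapq.heappush(frontier, (cost + step_cost, succ, path + [succ]))
    return None

graph = {"A": [("B", 1), ("C", 5)], "B": [("G", 9)], "C": [("G", 2)]}
print(ucs(graph, "A", "G"))  # (7, ['A', 'C', 'G']): cheaper than A-B-G at cost 10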


Depth-First Search(DFS)
Depth-first search (DFS) always expands one of the nodes at the deepest level of the tree. Only when the search hits a dead end (a non-goal node with no expansion) does the search go back and expand nodes at shallower levels.

If the branching factor is b and the depth level is d, then DFS opens (1 + b·d) nodes.
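
A minimal DFS sketch with a stack frontier (same hypothetical graph shape as the BFS sketch above):

# Minimal sketch of depth-first search: expand the deepest node first
# (LIFO stack frontier), backtracking only at dead ends.
def dfs(graph, start, goal):
    frontier = [[start]]  # stack of paths
    while frontier:
        path = frontier.pop()  # most recently added path, i.e. deepest
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in path:  # avoid cycles along the current path
                frontier.append(path + [succ])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "G"], "D": ["G"]}
print(dfs(graph, "A", "G"))  # ['A', 'C', 'G']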
Depth-First Search(DFS)
DFS vs. BFS

Branching factor 2, depth level 2:
DFS opens 1 + b·d = 5 nodes; BFS opens 1 + b + b² = 7 nodes.

Branching factor 2, depth level 4:
DFS opens 9 nodes; BFS opens 31 nodes.


Depth-First Search(DFS)

For very large inputs, if DFS makes a wrong choice, it is neither complete nor optimal.

[Figure: DFS committing to a deep wrong branch from start node S while goal G lies on another branch]
Depth-First Search(DFS)

Complete            No
Time complexity     Exponential
Space complexity    b·d
Optimality          No
Machine Learning

 Machine learning is a type of artificial intelligence (AI) that gives computers the ability to learn without being explicitly programmed.

 Machine learning focuses on the development of computer programs that can change/adapt when exposed to new data.
Machine Learning
Applications:

Adaptive websites,
Bioinformatics,
Computer vision,
Game playing,
Internet fraud detection,
Information retrieval,
Medical diagnosis,
Economics,
Natural language processing,
Online advertising,
Robotics,
Search engines,
etc.
Machine Learning
There are three types of learning:

1. Supervised learning: The computer is presented with example inputs and their desired outputs, and the goal is to learn a general rule or function that maps inputs to outputs.

2. Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input, i.e. discovering hidden patterns in data.

3. Reinforcement learning: A computer program interacts with a dynamic environment in which it must achieve a certain goal (such as driving a vehicle or playing a game against an opponent). The program is provided feedback in terms of rewards and punishments as it navigates its problem space.
Machine Learning
Categorization of machine learning tasks:

1. Classification: Inputs are divided into two or more classes, and the learner must produce a model that assigns unseen inputs to these classes. This is typically tackled in a supervised way, e.g. spam filtering is an example of classification, where the inputs are email (or other) messages and the classes are "spam" and "not spam".

2. Regression: The outputs are continuous rather than discrete, e.g. temperature, currency rates, stock rates, etc. Supervised learning.

3. Clustering: A set of inputs is to be divided into groups. Unlike in classification, the groups are not known beforehand, making this typically an unsupervised task.
Machine Learning
Example: Supervised Learning based Classification using Naïve Bayes
Classifier:

Problem: classify whether a given person is male or female, based on measured features. The features include height, weight, and foot size.
Machine Learning
Example: Supervised Learning based Classification using Naïve Bayes
Classifier:

Training set:

Gender   height (feet)   weight (lbs)   foot size (inches)
male     6               180            12
male     5.92            190            11
male     5.58            170            12
male     5.92            165            10
female   5               100            6
female   5.5             150            8
female   5.42            130            7
female   5.75            150            9
Machine Learning
Example: Supervised Learning based Classification using Naïve Bayes
Classifier:

Naïve Bayes classifier:

posterior(class | features) = prior(class) × likelihood(features | class) / evidence

In simple words: posterior = (prior × likelihood) / evidence, and the class with the greater posterior is chosen.
Machine Learning
Example: Supervised Learning based Classification using Naïve Bayes
Classifier:

The classifier is created from the training set using a Gaussian distribution assumption:

p(x = v | c) = (1 / √(2πσc²)) · exp(−(v − µc)² / (2σc²))

where x is a feature, v is the value of feature x, and c is the class; µc and σc² are the mean and the variance of feature x for the given class c.
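
The per-feature likelihood above is straightforward to compute. A minimal sketch (the example values come from the training statistics tabulated below):

import math

# Sketch of the Gaussian likelihood p(x = v | c): plug the class-conditional
# mean and variance of a feature into the normal density.
def gaussian_likelihood(v, mean, var):
    return math.exp(-(v - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# e.g. likelihood of height = 6 ft under the male height distribution
print(gaussian_likelihood(6, 5.855, 0.0350))  # ≈ 1.58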
Machine Learning
Example: Supervised Learning based Classification using Naïve Bayes
Classifier:

Training:

Gender   mean (height)   variance (height)   mean (weight)   variance (weight)   mean (foot size)   variance (foot size)
Male     5.855           0.0350              176.25          1.2292e+02          11.25              9.1667e-01
Female   5.4175          0.0972              132.5           5.5833e+02          7.5                1.6667e+00
Machine Learning
Example: Supervised Learning based Classification using Naïve Bayes
Classifier:

Testing: the sample to be classified as male or female:

Gender   height (feet)   weight (lbs)   foot size (inches)
Sample   6               130            8

We assume that we have equiprobable classes, so

P(male) = P(female) = 0.5

This prior probability distribution might be based on our knowledge of frequencies in the larger population, or on frequencies in the training set.
Machine Learning
Example: Supervised Learning based Classification using Naïve Bayes
Classifier:

Testing:

We have to determine which posterior is greater, male or


female for the given sample
Machine Learning
Example: Supervised Learning based Classification using Naïve Bayes
Classifier:
Testing:
We have to determine which posterior is greater, male or female for
the given sample
Machine Learning
Example: Supervised Learning based Classification using Naïve Bayes
Classifier:

Testing:

Male posterior ∝ 0.5 · p(height = 6 | male) · p(weight = 130 | male) · p(foot size = 8 | male)

Female posterior ∝ 0.5 · p(height = 6 | female) · p(weight = 130 | female) · p(foot size = 8 | female)
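
The whole comparison can be reproduced with a short script. This is a sketch assuming the training statistics tabulated earlier; the printed numbers are whatever the formulas yield, not values quoted from the slides:

import math

# Sketch: compute both (unnormalized) posteriors for the sample
# (height 6 ft, weight 130 lbs, foot size 8 in). The shared evidence term
# is omitted, since it does not affect which posterior is greater.
def g(v, mean, var):
    return math.exp(-(v - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

stats = {  # (mean, variance) per feature and class, from the training table
    "male":   {"height": (5.855, 0.0350), "weight": (176.25, 122.92), "foot": (11.25, 0.91667)},
    "female": {"height": (5.4175, 0.0972), "weight": (132.5, 558.33), "foot": (7.5, 1.6667)},
}
sample = {"height": 6, "weight": 130, "foot": 8}
prior = {"male": 0.5, "female": 0.5}

for c in ("male", "female"):
    posterior = prior[c]
    for feature, v in sample.items():
        posterior *= g(v, *stats[c][feature])
    print(c, posterior)  # the female value comes out larger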
Machine Learning
Example: Supervised Learning based Classification using Naïve Bayes
Classifier:

Result:

As the female posterior is greater than the male posterior, the given sample is classified as female.

Gender   height (feet)   weight (lbs)   foot size (inches)
Sample   6               130            8

Source : https://en.wikipedia.org/wiki/Naive_Bayes_classifier
