AI 1
2 MARKS QUESTIONS
1. Define AI.
Artificial Intelligence (AI) is the branch of computer science dedicated to creating
systems and machines capable of performing tasks that typically require human intelligence.
This includes activities such as learning, reasoning, problem-solving, perception, language
understanding, and decision-making.
Achieving this requires capabilities such as:
· automated reasoning, to use stored information to answer questions and to draw new conclusions;
· machine learning, to adapt to new circumstances and to detect and extrapolate patterns.
Types of agents:
- Simple reflex agent
- Model-based reflex agent
- Goal-based agent
- Utility-based agent
- Learning agent
Types of environments:
- Fully observable environment
- Partially observable environment
- Deterministic environment
- Probabilistic environment
- Episodic environment
- Sequential environment
- Dynamic environment
- Static environment
Agent-Environment Interaction:
- Perception
- Action
- Sensor
- Actuator
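The perception-action cycle listed above can be sketched as a simple loop. The following Python sketch is illustrative only; the two-square vacuum world, the class names, and the agent program are assumptions made for the example, not part of the syllabus text.
```python
# Illustrative sketch of the agent-environment interaction loop
# (hypothetical two-square vacuum world; names are assumptions, not a standard API).

class VacuumEnvironment:
    """Toy environment: two squares, A and B, each either Dirty or Clean."""
    def __init__(self):
        self.status = {"A": "Dirty", "B": "Dirty"}
        self.location = "A"

    def percept(self):
        # Sensor: the agent perceives only its location and that square's status.
        return (self.location, self.status[self.location])

    def execute(self, action):
        # Actuator: the chosen action changes the environment.
        if action == "Suck":
            self.status[self.location] = "Clean"
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"

def agent_program(percept):
    # Maps the current percept to an action.
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

env = VacuumEnvironment()
for _ in range(4):                 # perceive -> decide -> act, repeated
    p = env.percept()
    a = agent_program(p)
    env.execute(a)
    print(p, "->", a)
```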
Agent Function:
● Definition: The agent function defines the mapping from percept histories to actions.
In simpler terms, it specifies what action the agent should take in response to a
sequence of percepts (observations or inputs).
● Nature: It is an abstract concept that describes the agent's behavior in terms of its
input-output mapping without detailing how this mapping is implemented.
● Example: For a vacuum-cleaning agent, the agent function could be to "clean any
dirty square it perceives".
Agent Program:
● Definition: The agent program is a concrete implementation of the agent function; it is the code that runs on the agent's architecture (its sensors and actuators) and returns an action for the current percept.
● Nature: Unlike the abstract agent function, which may depend on the entire percept history, the agent program takes only the current percept as input, so any memory of earlier percepts must be kept internally.
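To make the distinction concrete, here is a small sketch (the lookup table is partial and all names are illustrative assumptions): the agent function is the abstract mapping from percept sequences to actions, while the agent program is the code that computes an action from the current percept.
```python
# Agent FUNCTION: abstract mapping from percept sequences to actions,
# written out here as a partial lookup table for the two-square vacuum world.
AGENT_FUNCTION_TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",   # entry for a two-percept history
}

# Agent PROGRAM: concrete code that realises (part of) the same mapping,
# computing the action from the current percept instead of storing a table.
def reflex_vacuum_agent_program(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# The program agrees with the single-percept entries of the table.
assert reflex_vacuum_agent_program(("A", "Dirty")) == "Suck"
```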
Omniscience vs. Rationality:
Omniscience:
- Definition: Complete and infinite knowledge about all aspects of the environment and the outcomes of all possible actions.
- Knowledge Scope: Knows the actual outcome of every action.
Rationality:
- Definition: The ability to make decisions that maximise performance based on available information and prior knowledge.
- Knowledge Scope: Limited to the agent's percepts and prior knowledge.
21. Write PEAS for a medical diagnosis system. (refer to the table below)
Agent Type: Medical diagnosis system
P (Performance measure): Healthy patient, minimum costs, lawsuits.
E (Environment): Patient, hospital, staff.
A (Actuators): Display of questions, tests, diagnoses, treatments, referrals.
S (Sensors): Keyboard entry of symptoms, findings, patient's answers.
22. Write PEAS for a satellite image analysis system. (refer to the table below)
23. Write PEAS for a part-picking robot. (refer to the table below)
24. Write PEAS for a refinery controller. (refer to the table below)
25. Write PEAS for an interactive English tutor. (refer to the table below)
26. Write PEAS for a Vacuum cleaner agent.
PEAS (Performance measure, Environment, Actuators, Sensors) description for a vacuum
cleaner agent:
● Performance measure: Cleanliness of the floor, efficient use of power, time taken to
clean, minimal noise.
● Environment: Varied floor types (carpet, tile, hardwood), obstacles (furniture, walls),
different room layouts.
● Actuators: Wheels for movement, vacuum suction, brushes, beater bar, dirt container.
● Sensors: Dirt sensors, cliff sensors, bump sensors, wheel encoders, cameras or
infrared sensors for navigation.
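Purely as an organisational aid (not part of the standard answer), a PEAS description can be recorded as structured data; the `PEAS` dataclass below is a hypothetical helper, filled in with the vacuum cleaner values listed above.
```python
from dataclasses import dataclass

# Hypothetical helper for recording a PEAS description as structured data.
@dataclass
class PEAS:
    agent_type: str
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

vacuum_cleaner = PEAS(
    agent_type="Vacuum cleaner agent",
    performance_measure=["floor cleanliness", "power use", "cleaning time", "noise"],
    environment=["carpet/tile/hardwood floors", "furniture and walls", "room layouts"],
    actuators=["wheels", "vacuum suction", "brushes", "beater bar", "dirt container"],
    sensors=["dirt sensor", "cliff sensor", "bump sensor", "wheel encoders", "camera/IR"],
)
print(vacuum_cleaner.performance_measure)
```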
27. What are fully observable and partially observable environments? Give an example.
Fully observable environment:
If an agent's sensors give it access to the complete state of the environment at each point in
time, then we say that the task environment is fully observable.
A task environment is effectively fully observable if the sensors detect all aspects that are
relevant to the choice of action; relevance, in turn, depends on the performance measure.
Fully observable environments are convenient because the agent need not maintain any
internal state to keep track of the world.
Example: In a chess game, the entire board is visible to both players.
Partially observable environment:
An environment might be partially observable because of noisy and inaccurate sensors or
because parts of the state are simply missing from the sensor data. For example, a vacuum
agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an
automated taxi cannot see what other drivers are thinking.
Comparison of fully observable and partially observable environments:
Fully observable:
- Definition: The agent's sensors give it access to the complete state of the environment at each point in time.
- Effectiveness: Effectively fully observable if the sensors detect all aspects relevant to the choice of action; relevance depends on the performance measure.
- Convenience: Convenient because the agent need not maintain any internal state to keep track of the world.
- Example Scenario: All relevant information is available to the agent, e.g., a chess game where all pieces are visible.
Partially observable:
- Definition: The agent's sensors do not give it access to the complete state of the environment at each point in time.
- Effectiveness: Sensors are noisy or inaccurate, or parts of the state are missing from the sensor data.
- Convenience: Requires the agent to maintain some internal state to infer missing information or deal with uncertainty.
- Example Scenario: A vacuum agent with a local dirt sensor cannot detect dirt in other squares; an automated taxi cannot perceive other drivers' intentions.
Goal-based agents vs. utility-based agents:
- Decision Process: A goal-based agent plans and executes actions to reach the goal; a utility-based agent evaluates and chooses actions based on their utility.
● These agents have a model, which is knowledge of "how the world works", and they perform actions based on that model.
Characteristics (of the simple reflex agent):
● Works only if the environment is fully observable.
● Lacking history, it can easily get stuck in infinite loops.
● One solution is to randomize its actions.
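As a brief illustrative sketch of the randomization idea (the function below is an assumption, not from the text): with no memory of past percepts, a deterministic movement rule can oscillate forever between clean squares, whereas choosing the move at random eventually breaks the loop.
```python
import random

# Hypothetical simple reflex agent that randomizes its movement when the
# current square is clean, to escape deterministic infinite loops.
def randomized_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    # No percept history is kept, so pick a direction at random.
    return random.choice(["Left", "Right"])
```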
16. Briefly explain a model-based reflex agent with a diagram.
The most effective way to handle partial observability is for the agent to keep track of the
part of the world it can't see now. That is, the agent should maintain some sort of internal
state that depends on the percept history and thereby reflects at least some of the unobserved
aspects of the current state. Updating this internal state information as time goes by requires
two kinds of knowledge to be encoded in the agent program. First, we need some information
about how the world evolves independently of the agent-for example, that an overtaking car
generally will be closer behind than it was a moment ago. Second, we need some information
about how the agent's own actions affect the world-for example, that when the agent turns the
steering wheel clockwise, the car turns to the right or that after driving for five minutes
northbound on the freeway one is usually about five miles north of where one was five
minutes ago. This knowledge about "how the world works", whether implemented in simple
Boolean circuits or in complete scientific theories, is called a model of the world. An agent
that uses such a model is called a model-based agent.
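A minimal sketch of the idea under stated assumptions (the class, its state representation, and the update rules are illustrative, not the textbook's code): the agent maintains an internal state, updates it from the latest percept using its model of how percepts reflect the world and how its own actions change it, and then selects an action from the updated state.
```python
# Illustrative model-based reflex agent for the two-square vacuum world.
# Its internal state is the last known status of each square.

class ModelBasedReflexVacuumAgent:
    def __init__(self):
        # Internal state: the agent's best guess about parts of the world
        # it cannot currently observe.
        self.world = {"A": "Unknown", "B": "Unknown"}

    def update_state(self, percept):
        # Model of how percepts reflect the world: the current square's
        # status is exactly as perceived.
        location, status = percept
        self.world[location] = status

    def program(self, percept):
        self.update_state(percept)
        location, _ = percept
        if self.world[location] == "Dirty":
            # Model of the action's effect: sucking makes this square clean.
            self.world[location] = "Clean"
            return "Suck"
        other = "B" if location == "A" else "A"
        if self.world[other] != "Clean":
            # Move toward the square not yet known to be clean.
            return "Right" if location == "A" else "Left"
        return "NoOp"

agent = ModelBasedReflexVacuumAgent()
print(agent.program(("A", "Dirty")))   # -> Suck
print(agent.program(("A", "Clean")))   # -> Right (B's status is still unknown)
```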