IAI Unit1 Alt
Goals of AI
Components of AI
Mathematics
Biology
Psychology
Sociology
Computer Science
Neuroscience
Statistics
Advantages of AI
Disadvantages of AI
Applications of AI
History of AI
The future of AI holds immense potential to transform our world. Continued research and
development will likely lead to even more intelligent and sophisticated machines that can
significantly impact various aspects of society.
AI can be categorized in two main ways: by capabilities and by functionality. The four functionality-based types are:
1. Reactive Machines:
Simplest form of AI with no memory or past experience.
React solely to the current situation (e.g., IBM Deep Blue chess system).
2. Limited Memory:
Can store and utilize past experiences for a limited time.
Examples include self-driving cars that use recent information to navigate.
3. Theory of Mind AI:
Under development, aims to understand human emotions, beliefs, and enable
social interaction.
4. Self-Aware AI:
Entirely hypothetical concept of machines with consciousness and self-awareness.
Types of AI Agents
AI agents can be classified based on their perceived intelligence and capabilities. The standard classification has five categories: simple reflex, model-based reflex, goal-based, utility-based, and learning agents; of these, only learning agents improve their performance over time. Three of the categories are described below:
2. Goal-Based Agent:
Introduces the concept of goals to guide decision-making.
Expands on model-based agents by actively seeking to achieve goals.
Requires planning and searching capabilities.
3. Utility-Based Agent:
Similar to goal-based agents but incorporates a utility measure to evaluate
success in different states.
Chooses actions that maximize utility based on a pre-defined function.
4. Learning Agent:
Can learn from past experiences and adapt its behavior.
Consists of four key components: learning element, critic, performance element,
and problem generator.
Continuously learns, analyzes performance, and seeks improvement
opportunities.
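As a rough illustration of how a utility-based agent chooses actions, here is a tiny Python sketch. All names and the toy utility function are invented for this example; it is a sketch of the idea, not a standard implementation:

```python
def utility_based_agent(state, actions, result, utility):
    """Pick the action whose resulting state has the highest utility."""
    best_action = None
    best_utility = float("-inf")
    for action in actions(state):
        next_state = result(state, action)
        u = utility(next_state)
        if u > best_utility:
            best_utility = u
            best_action = action
    return best_action

# Toy example: states are integers, and the agent wants to be near 10.
actions = lambda s: ["inc", "dec"]
result = lambda s, a: s + 1 if a == "inc" else s - 1
utility = lambda s: -abs(s - 10)  # higher utility the closer to 10

print(utility_based_agent(3, actions, result, utility))  # inc
```

The agent simulates each action's outcome and maximizes the pre-defined utility function, exactly as described above.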
Agents in Artificial Intelligence
The study of AI can be framed as the study of rational agents and their environments. Agents sense the environment through sensors and act on it through actuators. An AI agent can have mental properties such as knowledge, beliefs, and intentions.
What is an Agent?
An agent can be anything that perceives its environment through sensors and acts upon that
environment through actuators. An Agent runs in the cycle of perceiving, thinking, and
acting. An agent can be:
Human Agent: A human agent has eyes, ears, and other organs as sensors, and hands, legs, and the vocal tract as actuators.
Robotic Agent: A robotic agent can have cameras and infrared range finders as sensors, and various motors as actuators.
Software Agent: A software agent receives keystrokes and file contents as sensory input, acts on those inputs, and displays output on the screen.
Hence the world around us is full of agents: thermostats, cellphones, cameras, and even we ourselves are agents.
Before moving forward, we should first know about sensors, effectors, and actuators.
Sensor:
A sensor is a device that detects changes in the environment and sends that information to other electronic devices. An agent observes its environment through sensors.
Actuators:
Actuators are the components of a machine that convert energy into motion; they are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
Effectors:
Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms,
fingers, wings, fins, and display screens.
Intelligent Agents:
An intelligent agent is an autonomous entity which acts upon an environment using sensors
and actuators for achieving goals. An intelligent agent may learn from the environment to
achieve its goals. A thermostat is an example of an intelligent agent.
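The thermostat example can be sketched as a minimal perceive-think-act cycle in Python. The class and method names here are illustrative, not from any library:

```python
class ThermostatAgent:
    """Minimal intelligent agent: sensor reading in, actuator command out."""

    def __init__(self, target_temp):
        self.target_temp = target_temp
        self.room_temp = None

    def perceive(self, room_temp):
        # Sensor input: the current room temperature.
        self.room_temp = room_temp

    def think(self):
        # Simple decision rule toward the goal temperature.
        return "heat_on" if self.room_temp < self.target_temp else "heat_off"

    def act(self):
        # Actuator command sent to the heater.
        return self.think()

agent = ThermostatAgent(target_temp=21.0)
agent.perceive(18.5)
print(agent.act())  # heat_on
```

Each cycle, the agent senses (perceive), decides (think), and acts, which is exactly the loop described above.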
Rational Agent:
A rational agent is an agent that has clear preferences, models uncertainty, and acts so as to maximize its performance measure over all possible actions; it is said to "do the right thing." AI is concerned with creating rational agents, drawing on game theory and decision theory for various real-world scenarios.
Rational action is central to AI: in reinforcement learning, for example, the agent receives a positive reward for each good action and a negative reward for each wrong action.
Rationality:
The rationality of an agent is judged by four things: the performance measure that defines success, the agent's percept sequence to date, its prior knowledge of the environment, and the actions it can perform.
Structure of an AI Agent:
The task of AI is to design an agent program that implements the agent function (the mapping from percept sequences to actions). The structure of an intelligent agent combines architecture and agent program, and can be viewed as:
Agent = Architecture + Agent Program
The three main terms involved in the structure of an AI agent are:
Architecture: the machinery (hardware with sensors and actuators) on which the agent executes.
Agent Function: a map from a percept sequence to an action.
Agent Program: a concrete implementation of the agent function that runs on the architecture.
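To make "agent program" concrete, here is a minimal table-driven agent program in Python. The lookup table and percept names are invented for illustration; real agent programs replace the table with computation:

```python
def table_driven_agent_program(table):
    """Return an agent program: a function from the percept sequence so far to an action."""
    percepts = []

    def program(percept):
        percepts.append(percept)
        # Look up the full percept sequence; fall back to a no-op.
        return table.get(tuple(percepts), "noop")

    return program

# Hypothetical lookup table for a two-location vacuum world.
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "move_right",
    (("A", "clean"), ("B", "dirty")): "suck",
}
program = table_driven_agent_program(table)
print(program(("A", "clean")))  # move_right
print(program(("B", "dirty")))  # suck
```

The program (software) is distinct from the architecture (the machine and its sensors/actuators) that runs it and supplies the percepts.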
PEAS Representation
PEAS is a model used to describe an AI agent. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four terms:
P: Performance measure
E: Environment
A: Actuators
S: Sensors
Here, the performance measure is the objective for the success of an agent's behavior.
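As an illustration, the PEAS description of the classic self-driving-taxi example can be written down as a simple Python dictionary (the specific entries follow the usual textbook example and are indicative, not exhaustive):

```python
# PEAS description for a self-driving taxi (textbook-style example).
peas_taxi = {
    "Performance measure": ["safety", "speed", "legality", "passenger comfort"],
    "Environment": ["roads", "traffic", "pedestrians", "customers"],
    "Actuators": ["steering", "accelerator", "brake", "horn", "display"],
    "Sensors": ["cameras", "GPS", "speedometer", "sonar", "odometer"],
}

for letter, key in zip("PEAS", peas_taxi):
    print(f"{letter}: {key} -> {peas_taxi[key]}")
```

Writing the four parts out like this is a quick sanity check that an agent design has a measurable objective, a defined environment, and concrete actuators and sensors.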
Intelligent Agents in AI
What are Intelligent Agents?
Intelligent agents are a core concept in Artificial Intelligence (AI). They are essentially
autonomous entities that can:
Perceive their environment: This is done through sensors, which can be physical (like
cameras in robots) or digital (like software that reads files).
Take actions: These actions are carried out by actuators, which can be physical (like
robot arms) or digital (like software that displays information on a screen).
Learn and adapt: Intelligent agents can improve their performance over time by
learning from their experiences in the environment.
Achieve goals: They have specific objectives and make decisions based on what will
help them reach those goals.
Common types of intelligent agents include:
Rational agents: Make decisions that maximize their expected utility based on their
knowledge and beliefs.
Learning agents: Continuously learn and improve their performance through
experience.
Mobile agents: Can move around their environment to gather information or perform
actions.
Problem-Solving Agents in AI
Problem-solving agents are a fundamental concept in Artificial Intelligence. They utilize search techniques as general-purpose methods to tackle specific problems and, with some algorithms, find optimal solutions. These agents are goal-based: they strive to reach a desired outcome through the search process.
Search algorithms are a cornerstone of Artificial Intelligence (AI), providing a framework for
solving problems by exploring a set of possible solutions. This topic delves into the essential
concepts and various search algorithms employed in AI.
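A problem for such an agent is usually formulated as an initial state, a set of actions, a result function, and a goal test. A minimal sketch in Python (the class and method names are illustrative, not from any library):

```python
class GraphProblem:
    """Minimal problem formulation: initial state, actions, result, goal test."""

    def __init__(self, graph, initial, goal):
        self.graph = graph              # adjacency list of the state space
        self.initial_state = initial
        self.goal = goal

    def actions(self, state):
        # Available actions: move to any neighbouring state.
        return list(self.graph.get(state, []))

    def result(self, state, action):
        # Here an action *is* the neighbour we move to.
        return action

    def goal_test(self, state):
        return state == self.goal

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
problem = GraphProblem(graph, "A", "D")
print(problem.actions("A"))   # ['B', 'C']
print(problem.goal_test("D")) # True
```

The search algorithms below only need this interface: they never look inside the problem, they just call `actions`, `result`, and `goal_test`.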
Properties of Search Algorithms
Four key properties are used to compare the efficiency of search algorithms:
Completeness: Guarantees finding a solution if one exists for any given input.
Optimality: Ensures the solution found has the lowest cost compared to all other
solutions (applicable only to certain algorithms).
Time Complexity: Measures the time required for the algorithm to complete the
search.
Space Complexity: Represents the maximum amount of storage space needed during
the search process.
Search algorithms can be categorized by how they explore the search space: uninformed (blind) search, which uses no problem-specific knowledge (e.g., BFS, DFS), and informed (heuristic) search, which uses domain knowledge to guide the search (e.g., A*).
Breadth-First Search (BFS)
Basics:
BFS explores the search space level by level, expanding the shallowest unexpanded node first, using a FIFO queue as the frontier.
Syntax (Pseudocode):
function BFS(problem):
    frontier = Queue()                    # FIFO frontier
    frontier.push(problem.initial_state)
    explored = set()                      # states already expanded
    while not frontier.is_empty():
        state = frontier.pop()
        if problem.goal_test(state):
            return solution(state)
        explored.add(state)
        for action in problem.actions(state):
            next_state = problem.result(state, action)
            if next_state not in explored and next_state not in frontier:
                frontier.push(next_state)
    return failure
Example: Finding the shortest path in a maze.
Analysis:
Strengths:
Complete: guaranteed to find a solution if one exists, and it finds the shallowest one first.
Easy to implement.
Weaknesses:
Can be inefficient for deep search spaces due to high space complexity.
Applications: shortest-path finding in unweighted graphs, web crawling, broadcasting in networks, and finding connected components.
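A concrete, runnable version of BFS in Python, here finding the shortest path in a small maze represented as a graph (the maze itself is invented for illustration):

```python
from collections import deque

def bfs_shortest_path(maze_graph, start, goal):
    """Breadth-first search: returns the shortest path as a list of states, or None."""
    frontier = deque([[start]])  # FIFO queue of partial paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for neighbour in maze_graph.get(state, []):
            if neighbour not in explored:
                explored.add(neighbour)
                frontier.append(path + [neighbour])
    return None  # failure: no path exists

maze = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"], "F": []}
print(bfs_shortest_path(maze, "A", "F"))  # ['A', 'B', 'D', 'F']
```

Because the queue is FIFO, shallower paths are always expanded before deeper ones, which is why the first path that reaches the goal is a shortest one.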
Depth-First Search (DFS)
Basics:
DFS explores as deeply as possible along each branch before backtracking, using a LIFO stack as the frontier.
Syntax (Pseudocode):
function DFS(problem):
    frontier = Stack()                    # LIFO frontier
    frontier.push(problem.initial_state)
    explored = set()
    while not frontier.is_empty():
        state = frontier.pop()
        if problem.goal_test(state):
            return solution(state)
        explored.add(state)
        for action in problem.actions(state):
            next_state = problem.result(state, action)
            if next_state not in explored and next_state not in frontier:
                frontier.push(next_state)
    return failure
Example: Finding a path in a maze that reaches the goal (doesn't necessarily guarantee the
shortest path).
Analysis:
Strengths:
Can be more space-efficient than BFS for deep search spaces.
May find a solution faster if the goal is located along a deep path.
Weaknesses:
May get stuck in deep, dead-end paths and take a long time to backtrack.
Not optimal for finding the shortest path.
Applications: topological sorting, cycle detection, maze generation and solving, and connectivity checks.
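A runnable DFS sketch in Python on a small invented graph. Note that the path it returns depends on expansion order and need not be the shortest:

```python
def dfs_path(graph, start, goal):
    """Depth-first search: returns some path to the goal (not necessarily shortest)."""
    frontier = [[start]]  # stack of partial paths (LIFO)
    explored = set()
    while frontier:
        path = frontier.pop()
        state = path[-1]
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for neighbour in graph.get(state, []):
            if neighbour not in explored:
                frontier.append(path + [neighbour])
    return None  # failure: goal unreachable

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["F"], "F": []}
print(dfs_path(graph, "A", "F"))  # ['A', 'C', 'E', 'F']
```

Swapping the queue in BFS for a stack here is the only structural difference; that single change turns level-by-level exploration into deep, branch-first exploration.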
Hill-climbing search
Analysis:
Strengths:
Simple to implement.
Efficient for finding good solutions quickly, especially when the global optimum is
likely near the starting state.
Weaknesses:
Can get stuck in local maxima (or minima), never reaching the global optimum.
Performance depends on the starting state and the shape of the search space.
Applications: scheduling, circuit design, and local optimization problems where a good (not necessarily optimal) solution suffices.
Syntax (Pseudocode):
function Hill-Climbing(problem):
    current_state = problem.initial_state
    while True:
        neighbors = problem.actions(current_state)
        better_neighbor = None
        for neighbor in neighbors:
            next_state = problem.result(current_state, neighbor)
            if problem.value(next_state) > problem.value(current_state):
                better_neighbor = next_state
                break
        if better_neighbor is None:
            return current_state  # Reached a local maximum
        current_state = better_neighbor
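The pseudocode above can be turned into a short runnable Python sketch. The toy objective below (a single peak at x = 3) is invented for illustration:

```python
def hill_climb(value, neighbours, start):
    """Hill climbing over a discrete state space: move to a better neighbour until stuck."""
    current = start
    while True:
        best = max(neighbours(current), key=value, default=current)
        if value(best) <= value(current):
            return current  # local maximum reached
        current = best

# Toy landscape: maximise -(x - 3)^2 over the integers; global maximum at x = 3.
value = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(value, neighbours, 0))  # 3
```

On this single-peaked landscape hill climbing always reaches the optimum; on a landscape with several peaks it would stop at whichever local maximum is nearest the start.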
Simulated Annealing Search
Basics:
Simulated annealing is a variant of hill climbing that sometimes accepts a worse (downhill) move, with a probability that shrinks as a "temperature" parameter is lowered by a cooling schedule. This lets it escape local optima.
Syntax (Pseudocode):
function Simulated-Annealing(problem, schedule):
    current_state = problem.initial_state
    temperature = schedule.initial_temperature()
    while temperature > 0:
        next_state = random_neighbor(current_state)
        delta = problem.value(next_state) - problem.value(current_state)
        if delta > 0 or random() < exp(delta / temperature):
            current_state = next_state
        temperature = schedule.next_temperature(temperature)
    return current_state
Analysis:
Strengths:
Higher probability of finding the global optimum compared to hill-climbing.
Can escape local optima by accepting downhill moves with a controlled probability.
Weaknesses:
More complex to implement than hill-climbing.
Requires careful design of the cooling schedule (temperature parameter) to
balance exploration and exploitation.
Applications: the travelling salesman problem, VLSI circuit layout, and job-shop scheduling.
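A compact runnable Python sketch of simulated annealing on an invented one-dimensional objective. The starting temperature, cooling factor, and step count are arbitrary illustrative choices, and the fixed random seed is only there to make the run repeatable:

```python
import math
import random

def simulated_annealing(value, neighbour, start, t0=10.0, cooling=0.95, steps=500):
    """Simulated annealing: accept uphill moves always, downhill moves with prob e^(delta/T)."""
    random.seed(0)  # deterministic run for illustration only
    current = start
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
        t *= cooling  # geometric cooling schedule
    return current

# Toy objective: maximise -(x - 7)^2 over the integers; maximum at x = 7.
value = lambda x: -(x - 7) ** 2
neighbour = lambda x: x + random.choice([-1, 1])
result = simulated_annealing(value, neighbour, start=0)
print(result)
```

Early on (high temperature) the walk wanders freely; as the temperature decays the acceptance rule tightens into plain hill climbing, which is the exploration-then-exploitation balance the cooling schedule controls.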
Local Search in Continuous Spaces
Challenges:
Infinite Options: Unlike picking from discrete choices, continuous spaces have endless
possibilities (think of any point on a line).
Neighborhood Matters: Defining "nearby" states is crucial (e.g., how close are points
on the line?).
Lots of Mini Valleys: It's easy to get stuck in a good spot, not necessarily the best one
(local optima).
Approaches:
Gradient Descent: Imagine a ball rolling downhill (steeper = faster); the "slope" (gradient) guides the search for the lowest point.
Example: Finding the minimum of a simple curve.
Strengths: Efficient for smooth landscapes, easy to understand.
Weaknesses: Can get stuck in valleys, needs to calculate the slope (gradient).
Random Search: Like blindfolded exploration, trying random spots to see if they're
lower.
Simulated Annealing: Similar to metal cooling, allows uphill moves sometimes to
escape valleys, then gets stricter as it searches.
Strengths: More likely to escape local traps than random search.
Weaknesses: Might take longer to find the best spot.
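The gradient-descent idea described above can be sketched in a few lines of Python. The function being minimised, (x - 2)^2, is an invented toy example:

```python
def gradient_descent(grad, start, learning_rate=0.1, steps=100):
    """Follow the negative gradient downhill toward a (local) minimum."""
    x = start
    for _ in range(steps):
        x -= learning_rate * grad(x)  # step opposite the slope
    return x

# Minimise f(x) = (x - 2)^2, whose gradient is 2(x - 2); the minimum is at x = 2.
grad = lambda x: 2 * (x - 2)
result = gradient_descent(grad, start=10.0)
print(round(result, 4))  # 2.0
```

The learning rate plays the role of step size: too small and convergence is slow, too large and the search can overshoot the valley, which is the "adjusting your tools" tip below in miniature.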
General Tips:
Starting Point: Where you begin your search can make a big difference.
Knowing When to Stop: Don't wander forever, set a limit on how long to search.
Adjusting Your Tools: Fine-tuning parameters can improve your search results.
Remember: Finding the absolute best spot isn't always guaranteed, but local search can get
you pretty close in these vast, continuous landscapes.