Artificial Intelligence All 5 Units
Syllabus
1. Introduction:
a. Definition
b. Future of Artificial Intelligence
c. Characteristics of Intelligent Agents
d. Typical Intelligent Agents
e. Problem Solving Approach to Typical AI Problems
1 Introduction
Artificial Intelligence (AI) is a branch of computer science that aims to create machines
capable of intelligent behavior. This includes learning, reasoning, problem-solving, per-
ception, and language understanding.
1.1 Definition
AI can be defined as the simulation of human intelligence processes by machines, espe-
cially computer systems. These processes include learning (the acquisition of information
and rules for using the information), reasoning (using rules to reach approximate or def-
inite conclusions), and self-correction.
1.2 Future of Artificial Intelligence
5. Smart Cities and Urban Planning: AI will play a key role in designing smart
cities with optimized traffic management, energy-efficient buildings, and improved
public safety measures. AI-driven analytics can provide insights into urban devel-
opment and help reduce environmental impact.
7. Ethics and Governance: The future of AI also involves addressing ethical con-
cerns related to privacy, bias, and decision-making transparency. Developing robust
AI governance frameworks and ethical guidelines is crucial to ensure AI technologies
are aligned with societal values and human rights.
1.3 Typical Intelligent Agents
• Simple Reflex Agents: These agents operate based on a predefined set of rules
that map a specific situation to an action. They do not consider the history of
previous states, making them effective in fully observable environments. However,
they struggle with complex or partially observable environments. Example: A
thermostat adjusting temperature based on current readings.
• Goal-based Agents: These agents go beyond immediate actions and are designed
to achieve specific goals. They choose actions based on a set of goals they aim to
accomplish, using goal information to guide decision-making. Goal-based agents
often use search and planning algorithms to find the best path to achieve their
objectives. Example: A delivery robot navigating a warehouse to pick up and
deliver packages to specific locations.
• Learning Agents: These agents have the ability to learn from their experiences
and improve their performance over time. A learning agent has four components: a
learning element (improves the agent’s performance), a performance element (selects
external actions), a critic (provides feedback), and a problem generator (suggests
actions that will lead to new knowledge). Example: A recommendation system
learning from user behavior to improve future suggestions.
2 Breadth-First Search (BFS)
Definition:
Breadth-First Search (BFS) explores all nodes at the current depth level before
moving to the next, ensuring the shortest path is found in unweighted graphs.
Usage:
BFS is used when the goal is to find the shortest path or the closest solution. It is
applicable in scenarios like social network analysis, GPS navigation, and network
broadcasting.
Mechanism:
Uses a queue data structure to maintain the nodes to be explored. Nodes are visited
level by level, ensuring that nodes closer to the start node are visited first.
Properties:
– Completeness: Guaranteed, provided the branching factor is finite.
– Optimality: Finds the shallowest (shortest) path when all step costs are equal.
– Time Complexity: O(V + E), where V is the number of vertices and E is the number of edges.
– Space Complexity: O(V), since a whole level of nodes may sit in the queue at once.
Example walkthrough:
– Breadth-First Search (BFS) starts at the root node (say, node A of an example tree) and explores all its direct neighbors first before moving to nodes at the
next level.
∗ Start at the Root: Begin at the root node (A).
∗ Expand Neighbors: Visit all direct neighbors of A, such as nodes B and C.
∗ Move to Next Level: Once all neighbors of the current level are visited,
move to the next level. For example, after visiting B and C, move to D,
E, F, and G in order.
– The process continues until all nodes have been visited or until the desired
node is found. For example, after reaching node G, the search stops because
all nodes have been explored.
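To make the mechanism concrete, here is a minimal Python sketch of BFS on a small example graph; the graph, node names, and the bfs function are illustrative assumptions rather than a fixed implementation.

from collections import deque

def bfs(graph, start, goal):
    """Return the shortest path from start to goal in an unweighted graph."""
    frontier = deque([[start]])          # queue of paths, explored level by level
    visited = {start}
    while frontier:
        path = frontier.popleft()        # FIFO order gives level-by-level expansion
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None                          # goal not reachable

# Example tree from the walkthrough: A is the root, G is the deepest node visited.
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
print(bfs(graph, 'A', 'G'))              # ['A', 'C', 'G']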
3 Depth-First Search (DFS)
Definition:
Depth-First Search (DFS) explores as far as possible along a branch before back-
tracking. It is used for scenarios where solutions are deep in the search tree.
Usage:
DFS is ideal for situations such as solving puzzles, analyzing game trees, and topo-
logical sorting in directed acyclic graphs (DAGs).
Mechanism:
Uses a stack data structure (either explicitly or via recursion) to explore nodes as
deep as possible along each branch before backtracking.
Properties:
– Completeness: Not guaranteed, as DFS may get stuck in loops if the graph
contains cycles.
– Optimality: Does not guarantee the shortest path.
– Time Complexity: O(V + E), where V is the number of vertices and E is the
number of edges.
– Space Complexity: O(V) in case of recursion.
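A corresponding minimal DFS sketch in Python; the example graph and function name are again illustrative assumptions.

def dfs(graph, start, goal, path=None, visited=None):
    """Depth-first search: follow one branch as deep as possible, then backtrack."""
    if path is None:
        path, visited = [start], {start}
    if start == goal:
        return path
    for neighbour in graph.get(start, []):
        if neighbour not in visited:          # visited set prevents looping on cycles
            visited.add(neighbour)
            result = dfs(graph, neighbour, goal, path + [neighbour], visited)
            if result is not None:
                return result
    return None

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
print(dfs(graph, 'A', 'E'))                   # ['A', 'B', 'E']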
4 A* Search
Definition:
A* Search is a combination of uniform-cost search and heuristics to prioritize nodes,
finding the least-cost path efficiently by considering both the cost to reach a node
and an estimated cost to the goal.
Usage:
Commonly used in pathfinding and graph traversal problems, such as video games
and AI for robotics.
Mechanism:
Uses a priority queue to explore nodes. The priority is determined by a cost function f(n) = g(n) + h(n), where g(n) is the cost to reach node n from the start, and h(n) is the heuristic estimate of the cost from n to the goal.
Properties:
– Completeness: Guaranteed on finite graphs.
– Optimality: Guaranteed when the heuristic h(n) is admissible, i.e. never overestimates the true cost.
– Time and Space Complexity: Exponential in the worst case, but typically much lower with a good heuristic.
Heuristics
Heuristics are rules of thumb or educated guesses that help guide the search process
toward a solution more quickly. They are used in algorithms to make decisions based
on approximate information. A heuristic provides an estimate of the cost to reach
the goal from a given state, which allows the search to prioritize certain paths.
Types of Heuristics
– Admissible Heuristics:
An admissible heuristic is one that never overestimates the cost to reach the
goal. This means the heuristic provides a lower bound on the actual cost,
ensuring that search algorithms like A* find the optimal solution.
∗ Example: In a pathfinding problem, the straight-line distance (Euclidean
distance) between the current position and the goal is an admissible heuris-
tic since it never overestimates the true distance.
– Inadmissible Heuristics:
An inadmissible heuristic may overestimate the actual cost to reach the goal.
While this can speed up the search process, it does not guarantee that the
solution found will be optimal.
Optimization
Optimization is the process of finding the best solution from a set of possibilities.
It involves maximizing or minimizing an objective function while satisfying certain
constraints.
Examples:
– Genetic Algorithms:
∗ Mimic natural selection to evolve better solutions over multiple genera-
tions.
∗ Use Case: Ideal for problems with large or complex search spaces.
– Simulated Annealing:
∗ A probabilistic technique that occasionally accepts worse solutions to es-
cape local optima and gradually focuses on better solutions.
∗ Use Case: Useful for optimization problems with many local optima.
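To illustrate the simulated-annealing idea above, here is a minimal Python sketch; the toy objective function, cooling schedule, and parameter values are illustrative assumptions.

import math
import random

def simulated_annealing(objective, x0, temp=10.0, cooling=0.95, steps=1000):
    """Minimise `objective`, occasionally accepting worse moves to escape local optima."""
    current, best = x0, x0
    for _ in range(steps):
        candidate = current + random.uniform(-1.0, 1.0)          # small random move
        delta = objective(candidate) - objective(current)
        # Always accept improvements; accept worse moves with probability e^(-delta/temp).
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        if objective(current) < objective(best):
            best = current
        temp *= cooling                                           # gradually reduce the temperature
    return best

# Toy objective with several local minima.
f = lambda x: x * x + 3 * math.sin(5 * x)
print(round(simulated_annealing(f, x0=4.0), 2))   # typically prints a value near the global minimum around x = -0.3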
Example: To solve a maze, the A* search algorithm can be applied, which eval-
uates each possible move based on both the actual cost to reach a point and an
estimated cost from that point to the goal, ensuring an efficient path to the exit.
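A small grid-maze version of this idea can be sketched in Python, using the Manhattan distance as an admissible heuristic; the maze layout and function names are illustrative assumptions.

import heapq

def astar(grid, start, goal):
    """A* on a grid maze: 0 = free cell, 1 = wall. Returns the least-cost path."""
    def h(cell):                                    # Manhattan distance, admissible
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]      # entries are (f = g + h, g, cell, path)
    best_g = {start: 0}
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                new_g = g + 1                       # each step costs 1
                if new_g < best_g.get((r, c), float('inf')):
                    best_g[(r, c)] = new_g
                    heapq.heappush(frontier, (new_g + h((r, c)), new_g, (r, c), path + [(r, c)]))
    return None

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(maze, (0, 0), (2, 0)))   # [(0,0),(0,1),(0,2),(1,2),(2,2),(2,1),(2,0)]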
Syllabus
• Problem Solving Methods
Search Strategies
Uninformed Search
Uninformed search strategies do not have any additional information about states beyond
the problem definition. They explore the search space blindly. Common uninformed
search methods include:
• Breadth-First Search (BFS): Explores all nodes at the present depth before
moving to the next level. Example: Navigating through a maze.
Informed Search
Informed search strategies use problem-specific knowledge, usually in the form of a heuristic function, to guide the search toward the goal. Common informed search methods include:
• A* Search: Combines the cost to reach the node and the estimated cost to the goal. Example: Pathfinding in maps.
• Greedy Best-First Search: Expands the node that is closest to the goal according to the heuristic.
Explanation: Greedy Best-First Search (GBFS) is a search algorithm that selects
the node that appears to be closest to the goal based on a given heuristic function
h(n). The heuristic function estimates the cost from the current node n to the
goal. GBFS is called ”greedy” because it always chooses the path that looks most
promising, based solely on the heuristic value. It does not consider the path cost
already incurred, only the estimated cost to reach the goal.
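A minimal Python sketch of greedy best-first search, assuming an explicit table of heuristic values h(n) for a toy graph; both the graph and the heuristic values are illustrative.

import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the node with the smallest heuristic h(n); the path cost so far is ignored."""
    frontier = [(h[start], [start])]
    visited = {start}
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                heapq.heappush(frontier, (h[neighbour], path + [neighbour]))
    return None

graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G']}
h = {'S': 5, 'A': 2, 'B': 4, 'G': 0}      # straight-line style estimates to the goal G
print(greedy_best_first(graph, h, 'S', 'G'))   # ['S', 'A', 'G']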
Constraint Satisfaction Problems (CSP): Backtracking Search
– Basic Concept: Starts with an empty assignment and makes a series of incremental assignments to variables. When a constraint is violated, the algorithm backtracks to the most recent decision point and tries a different path.
– Advantages: Guarantees finding a solution if one exists and is simpler to
implement compared to more advanced techniques.
– Heuristics:
∗ Minimum Remaining Values (MRV): Chooses the variable with the
fewest legal values.
∗ Degree Heuristic: Chooses the variable with the most constraints on
remaining variables.
∗ Least Constraining Value: Chooses the value that rules out the fewest
choices for neighboring variables.
– Limitations: Can be slow if the problem is large or if many constraints are
present.
– Improvements:
∗ Forward Checking: Keeps track of remaining legal values for the unas-
signed variables.
∗ Constraint Learning: Records constraints discovered during search to
avoid exploring the same failed paths.
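To tie these ideas together, here is a compact Python sketch of backtracking search with MRV ordering and forward checking, applied to a tiny map-colouring CSP; the regions, colours, and helper names are illustrative assumptions.

def backtrack(assignment, domains, neighbours):
    """Backtracking search with MRV variable ordering and forward checking."""
    if len(assignment) == len(domains):
        return assignment
    # MRV: pick the unassigned variable with the fewest remaining legal values.
    var = min((v for v in domains if v not in assignment), key=lambda v: len(domains[v]))
    for value in list(domains[var]):
        if all(assignment.get(n) != value for n in neighbours[var]):   # constraint check
            # Forward checking: remove `value` from the domains of unassigned neighbours.
            pruned = [n for n in neighbours[var]
                      if n not in assignment and value in domains[n]]
            for n in pruned:
                domains[n].remove(value)
            if all(domains[n] for n in pruned):        # no neighbour left with an empty domain
                result = backtrack({**assignment, var: value}, domains, neighbours)
                if result is not None:
                    return result
            for n in pruned:                           # undo pruning before trying the next value
                domains[n].add(value)
    return None

# Tiny map-colouring instance: three mutually adjacent regions, three colours.
domains = {r: {'red', 'green', 'blue'} for r in ('WA', 'NT', 'SA')}
neighbours = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA'], 'SA': ['WA', 'NT']}
print(backtrack({}, domains, neighbours))   # each region gets a different colour, e.g. {'WA': 'red', 'NT': 'green', 'SA': 'blue'}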
Game Playing
Game playing in Artificial Intelligence involves designing agents that can make optimal
decisions in competitive environments. These agents use search algorithms and heuristics
to evaluate possible moves and select the most advantageous one.
– Definition: Involves making the best possible moves to maximize the chance
of winning in a two-player zero-sum game.
– Minimax Algorithm: A decision rule for minimizing the possible loss for
a worst-case scenario. In games like chess, it aims to maximize a player’s
minimum payoff.
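A minimal Python sketch of minimax with alpha-beta pruning on a toy game tree follows; the tree, leaf payoffs, and function signature are illustrative assumptions.

import math

def minimax(state, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning; children(state) gives successors, evaluate(state) scores a leaf."""
    succ = children(state)
    if depth == 0 or not succ:
        return evaluate(state)
    if maximizing:
        best = -math.inf
        for child in succ:
            best = max(best, minimax(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, best)
            if beta <= alpha:        # prune: the minimizer will never allow this branch
                break
        return best
    best = math.inf
    for child in succ:
        best = min(best, minimax(child, depth - 1, alpha, beta, True, children, evaluate))
        beta = min(beta, best)
        if beta <= alpha:            # prune: the maximizer already has a better option
            break
    return best

# Toy game tree: leaves carry their payoff for the maximizing player.
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
leaf_values = {'D': 3, 'E': 5, 'F': 2, 'G': 9}
value = minimax('A', 3, -math.inf, math.inf, True,
                children=lambda s: tree.get(s, []),
                evaluate=lambda s: leaf_values[s])
print(value)   # 3: the maximizer picks B because the minimizer would choose 2 at C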
• Alpha-Beta Pruning: An optimization of the minimax algorithm that skips (prunes) branches which cannot affect the final decision, maintaining a lower bound α for the maximizer and an upper bound β for the minimizer (illustrated in the sketch above).
Syllabus
• Prolog Programming
• Unification
• Forward Chaining
• Backward Chaining
• Resolution
• Knowledge Representation
• Ontological Engineering
• Events
Components of FOPL:
• Objects: The entities in the domain of discourse. Example: Socrates, Cats, Hu-
mans.
Universal Instantiation (Universal Elimination):
If a statement holds for all objects in the domain, we can infer it for any specific object.
Example:
∀x P(x) → P(a)
If "All humans are mortal" is true, we can infer "Socrates is mortal."
Existential Introduction:
If a specific object has a property, we can infer that some object in the domain has this
property.
Example:
P(a) → ∃x P(x)
If ”Socrates is mortal,” we can infer that ”There exists a mortal.”
Existential Elimination:
If an existential statement is true, we can introduce a new symbol to represent that object.
Example:
∃x P(x) → P(c) (where c is a new constant symbol)
If ”There exists a mortal,” we can introduce a constant, say ”Socrates,” to represent
the mortal.
Universal Introduction:
If a statement holds for an arbitrary object in the domain, we can infer that the statement
holds for all objects in the domain.
Example:
P(a) → ∀x P(x) (valid only when a stands for an arbitrary object)
If mortality can be shown for an arbitrarily chosen human, we can conclude that "All humans are mortal."
Objects: The entities in the domain of discourse, such as Socrates or a particular cat.
Relations: Properties of objects or relationships among them, such as Mortal(x) or Brother(x, y).
Functions: Mappings from objects to objects, such as FatherOf(x).
Example:
From ∀x (Human(x) → Mortal(x)) and Human(Socrates), we can derive Mortal(Socrates).
FOPL allows for reasoning over these statements, which is more powerful than propositional logic.
Predicate Logic
Predicate Logic (also called First Order Logic) is a formal system in which we can express
statements about objects and their properties or relations.
Basic Components:
• Predicates: Represent properties or relationships between objects. Example: P(x) (where P is the predicate and x is the variable).
• Quantifiers:
– ∀ (universal quantifier: "for all")
– ∃ (existential quantifier: "there exists")
• Logical Connectives:
– ¬ (Negation or NOT)
– ∧ (Conjunction or AND)
– ∨ (Disjunction or OR)
– → (Implication)
Examples:
– ∀x (Human(x) → Mortal(x)): "All humans are mortal."
– ∃x (Human(x) ∧ Philosopher(x)): "Some human is a philosopher."
Prolog Programming
Prolog is a logic programming language based on first-order predicate logic. It is widely
used in AI for tasks such as symbolic reasoning, natural language processing, and knowl-
edge representation. Prolog works by defining facts, rules, and queries to infer logical
conclusions.
human(socrates).
mortal(X) :- human(X).
Explanation: This Prolog code states that Socrates is a human, and all humans
are mortal. If you query ‘mortal(socrates).‘, Prolog will answer ‘true‘.
• Expert Systems: Prolog is used to develop systems that mimic the decision-
making ability of a human expert.
Prolog Code:
parent(john, mary).
parent(mary, susan).
grandparent(X, Y) :- parent(X, Z), parent(Z, Y).
Explanation: This code defines parent relationships and a rule for determining if
someone is a grandparent. Querying ‘grandparent(john, susan).‘ will return ‘true‘.
Prolog Code:
connected(a, b).
connected(b, c).
path(X, Y) :- connected(X, Y).
path(X, Y) :- connected(X, Z), path(Z, Y).
Explanation: This Prolog program finds a path between two nodes in a graph.
Querying ‘path(a, c).‘ will return ‘true‘ since there is a path from ‘a‘ to ‘c‘ through
‘b‘.
Unification
Unification is the process of making two logical expressions identical by finding substitu-
tions. It plays a key role in reasoning algorithms like resolution.
Unification Algorithm
The unification algorithm determines if two terms can be unified and, if so, provides the
substitution needed to make them identical.
Steps:
1. If ‘term1‘ and ‘term2‘ are identical, return the empty substitution set.
2. If ‘term1‘ is a variable, return the substitution set that replaces ‘term1‘ with
‘term2‘ if ‘term2‘ does not contain ‘term1‘.
3. If ‘term2‘ is a variable, return the substitution set that replaces ‘term2‘ with
‘term1‘ if ‘term1‘ does not contain ‘term2‘.
4. If ‘term1‘ and ‘term2‘ are function symbols with different names or arities,
return failure.
5. If ‘term1‘ and ‘term2‘ are function symbols with the same name and arity,
recursively unify their arguments.
Example:
To unify knows(Richard, x) with knows(Richard, john), follow these steps:
Explanation:
• Both terms have the same function symbol knows and the same arity, so their arguments are unified in turn.
• The first arguments, Richard and Richard, are already identical, so no substitution is needed.
• Unifying the second arguments, x and john, binds the variable x. The result of unification is the substitution {x ↦ john}.
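The steps above can be written as a short recursive function. In the Python sketch below, variables are strings with a leading '?' (e.g. '?x') and compound terms are tuples; this encoding is an illustrative assumption.

def is_variable(term):
    """In this sketch, variables are strings written with a leading '?', e.g. '?x'."""
    return isinstance(term, str) and term.startswith('?')

def occurs_in(var, term):
    """Occurs check: does the variable appear inside the term?"""
    if var == term:
        return True
    return isinstance(term, tuple) and any(occurs_in(var, t) for t in term)

def substitute(term, subst):
    if is_variable(term):
        return substitute(subst[term], subst) if term in subst else term
    if isinstance(term, tuple):
        return tuple(substitute(t, subst) for t in term)
    return term

def unify(t1, t2, subst=None):
    """Return a substitution that makes t1 and t2 identical, or None on failure."""
    subst = {} if subst is None else subst
    t1, t2 = substitute(t1, subst), substitute(t2, subst)
    if t1 == t2:
        return subst                                                  # step 1: already identical
    if is_variable(t1):
        return None if occurs_in(t1, t2) else {**subst, t1: t2}       # step 2
    if is_variable(t2):
        return None if occurs_in(t2, t1) else {**subst, t2: t1}       # step 3
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):                                      # step 5: unify arguments in turn
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                                                       # step 4: mismatched symbols or arity

# Compound terms are tuples: ('knows', 'Richard', '?x') stands for knows(Richard, x).
print(unify(('knows', 'Richard', '?x'), ('knows', 'Richard', 'john')))   # {'?x': 'john'}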
Forward Chaining
Forward chaining is a data-driven inference technique that starts from known facts and
applies rules to infer new facts until a goal is reached. It is commonly used in production
systems and rule-based expert systems.
• Goal-Independent: Does not require a specific goal to start; it works until all
possible conclusions are drawn.
• Complete: Can derive all possible conclusions from the given facts and rules if the
system is finite.
• Efficiency: Can be inefficient if the number of rules and facts is large due to the
potentially high number of inferences.
• Incremental: New facts are added as they are derived, which can help in dynamic
situations.
Example:
Facts:
- It is raining.
Rules:
- If it rains, the ground gets wet.
Forward Chaining Process:
- The fact "It is raining" matches the condition of the rule, so we infer "The ground gets wet."
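A minimal Python sketch of this data-driven loop; the rule encoding and the extra second rule are illustrative assumptions.

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are all known until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)          # infer a new fact
                changed = True
    return facts

rules = [({'it is raining'}, 'the ground is wet'),
         ({'the ground is wet'}, 'the grass is slippery')]
print(forward_chain({'it is raining'}, rules))   # includes 'the ground is wet' and 'the grass is slippery'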
Backward Chaining
Backward chaining is a goal-driven inference technique that works backward from the goal
to determine the necessary conditions. It is used to find the steps or conditions required
to achieve a specific goal.
• Selective: Only focuses on deriving facts related to the goal, potentially making it
more efficient when the goal is specific.
• Incomplete: May not find all possible facts or solutions unless the goal is well-
defined and all possible conditions are considered.
Example:
Goal:
- Show that the ground is wet.
Rules:
- If it rains, the ground gets wet.
Backward Chaining Process:
- To prove "The ground is wet," find a rule whose conclusion matches the goal and try to establish its condition, "It is raining." Since "It is raining" is a known fact, the goal is proved.
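A minimal Python sketch of the goal-driven version, using the same illustrative rule encoding as the forward-chaining sketch.

def backward_chain(goal, facts, rules):
    """Work backwards from the goal: prove it directly or via a rule's premises."""
    if goal in facts:
        return True                          # the goal is already a known fact
    for premises, conclusion in rules:
        if conclusion == goal:
            # Try to prove every premise of a rule whose conclusion matches the goal.
            if all(backward_chain(p, facts, rules) for p in premises):
                return True
    return False

rules = [({'it is raining'}, 'the ground is wet')]
print(backward_chain('the ground is wet', {'it is raining'}, rules))   # True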
Resolution
Definition
In predicate logic, resolution is a process that involves combining two clauses to derive
a new clause, known as a resolvent. The resolution rule operates on pairs of clauses that
contain complementary literals, which are literals that are negations of each other.
Key Points:
– Clauses: Disjunctions of literals.
– Complementary Literals: A literal and its negation.
– Unification: The process of making literals identical through substitutions.
Resolution Process
The resolution process involves the following steps:
1. Convert to Conjunctive Normal Form (CNF): Transform the formulas into a standard form where they are represented as a conjunction of disjunctions of literals.
2. Select Pairs of Clauses: Choose pairs of clauses that contain complementary literals.
3. Unify the Literals: Find a substitution that makes the literals identical.
4. Resolve the Clauses: Eliminate the complementary literals and combine the remaining literals to form a new clause.
5. Repeat or Conclude: Continue the process until a contradiction is found or no more resolvents can be generated.
Example of Resolution
Consider the following clauses:
1. P(x) ∨ Q(x)
2. ¬P(a) ∨ R(a)
Unifying P(x) with P(a) using the substitution {x ↦ a} turns the first clause into P(a) ∨ Q(a). The complementary literals P(a) and ¬P(a) are eliminated, and the remaining literals are combined.
Thus, Q(a) ∨ R(a) is the result of resolving P(x) ∨ Q(x) with ¬P(a) ∨ R(a).
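At the propositional level, the resolution loop can be sketched in a few lines of Python. Clauses are encoded as frozensets of literals, a leading '~' marks negation, and deriving the empty clause signals a contradiction; this encoding is an illustrative assumption.

from itertools import combinations

def negate(literal):
    return literal[1:] if literal.startswith('~') else '~' + literal

def resolve(c1, c2):
    """Return all resolvents of two clauses (frozensets of literals)."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:                       # complementary pair found
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

def resolution_refutation(clauses):
    """Return True if the clause set is unsatisfiable (the empty clause is derivable)."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for resolvent in resolve(c1, c2):
                if not resolvent:                   # empty clause: contradiction
                    return True
                new.add(frozenset(resolvent))
        if new <= clauses:                          # no new clauses: cannot refute
            return False
        clauses |= new

# KB: P -> Q and P; negated query ~Q. Deriving a contradiction shows Q follows from the KB.
kb = [frozenset({'~P', 'Q'}), frozenset({'P'}), frozenset({'~Q'})]
print(resolution_refutation(kb))   # True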
Properties of a Good Knowledge Representation
A good knowledge representation scheme should satisfy the following properties:
1. Clarity
A knowledge representation should be clear and unambiguous, allowing users and systems
to understand and interpret the information without confusion.
2. Precision
The representation should accurately capture the details of the knowledge domain, avoid-
ing any loss of information or misrepresentation.
3. Consistency
The knowledge representation must be free from contradictions. Consistency ensures that
the information does not contain conflicting statements.
4. Expressiveness
It should be able to represent a wide range of concepts, relationships, and inferences.
Expressiveness allows the representation to capture complex and nuanced information.
5. Efficiency
The system should support efficient storage, retrieval, and processing of knowledge. Ef-
ficiency ensures that operations on the knowledge base do not become computationally
prohibitive.
6. Flexibility
Good knowledge representation should be adaptable to changes in the knowledge domain.
Flexibility allows the system to evolve with new information and updates.
7. Scalability
The representation should handle increasing amounts of knowledge without a significant
decrease in performance. Scalability ensures that the system can grow and manage larger
data sets effectively.
8. Usability
The representation should be user-friendly, facilitating easy interaction and manipulation
by users or systems. Usability includes intuitive interfaces and straightforward query
mechanisms.
Example: In a medical knowledge base, clarity and precision are crucial to ensure
that diagnoses and treatment recommendations are accurate and reliable.
Ontological Engineering
Ontological Engineering is concerned with the creation and management of ontologies,
which are structured frameworks for organizing knowledge. Here are the key points:
3. Objects: Specifies objects within these categories, detailing their properties and
attributes.
5. Semantic Clarity: Ensures that the ontology provides clear and consistent defini-
tions for all terms and relationships.
6. Reusability: Designed for reuse across different applications and systems, enhanc-
ing interoperability.
Example:
• Attributes: Both categories and objects have attributes that describe their prop-
erties (e.g., ”Dog” might have attributes like ”breed,” ”size”).
• Relationships: Objects can have relationships with other objects within or across
categories (e.g., ”Dog” might be related to ”Owner”).
Events
Events represent occurrences or actions that happen in the world and are used in knowl-
edge representation to model time-dependent phenomena.
• Definition: Events are actions or occurrences that have effects on the world.
• Temporal Aspect: Events are often time-dependent, meaning they occur at spe-
cific times or intervals.
• State Changes: Events can cause changes in the state of objects or the environ-
ment.
• Causality: Events can have causal relationships, where one event leads to another.
Mental Events and Mental Objects
• Mental Events: Occurrences within an agent's mind, such as believing, knowing, or desiring.
• Mental Objects: Concepts or entities within the mind, like beliefs or desires, that
an agent can think about.
• Representation: Mental events and objects are often used to model cognitive
processes in artificial intelligence.
Example: ”Believing it will rain” is a mental event that reflects an agent’s internal
state or thought process.
Reasoning with Categories
• Inference Rules: Apply rules based on the relationships between categories (e.g.,
”If an object belongs to category A and category A has property X, then the object
also has property X”).
• Consistency Checking: Ensure that inferences are consistent with the defined
relationships and properties within the category system.
Default Reasoning
• Applicability: Used when complete knowledge is not available, allowing for rea-
sonable assumptions based on general knowledge.
• Reasoning Process: Involves making initial assumptions and then updating them
as new information is acquired.
Example: Assuming that a bird can fly is a default reasoning approach. However,
when encountering a penguin, this assumption is revised based on specific knowledge
about penguins being flightless.
Syllabus:
• Agent communication
An intelligent agent operates in a continuous perceive-reason-act cycle:
1. Perceive: The agent observes its environment through sensors such as cameras, microphones, or data feeds.
2. Reason and plan: Based on its observations, the agent must reason and plan its actions by evaluating possible outcomes and determining the best course of action.
3. Take action: The agent acts upon the environment using actuators, such as
motors, software commands, or signals.
Example: A self-driving car uses its sensors (cameras, radar, etc.) to perceive
the environment, plans routes based on traffic data, takes actions like steering and
braking, and learns from driving experiences to improve its performance.
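The perceive-reason-act cycle described above can be expressed as a small program skeleton. The Python sketch below uses an invented thermostat-style environment and rule table purely for illustration.

class SimpleReflexAgent:
    """A minimal perceive-reason-act loop driven by condition-action rules."""

    def __init__(self, rules):
        self.rules = rules                      # maps a percept condition to an action
        self.history = []                       # simple memory of past steps

    def perceive(self, environment):
        return environment['temperature']       # read a sensor value

    def decide(self, percept):
        # Reason/plan: here, just look up the matching condition-action rule.
        for condition, action in self.rules:
            if condition(percept):
                return action
        return 'do_nothing'

    def act(self, action, environment):
        if action == 'turn_on_heater':
            environment['temperature'] += 1
        elif action == 'turn_off_heater':
            environment['temperature'] -= 1
        self.history.append((environment['temperature'], action))

agent = SimpleReflexAgent(rules=[(lambda t: t < 20, 'turn_on_heater'),
                                 (lambda t: t > 24, 'turn_off_heater')])
env = {'temperature': 17}
for _ in range(5):                              # run the agent cycle a few times
    percept = agent.perceive(env)
    action = agent.decide(percept)
    agent.act(action, env)
print(env['temperature'], agent.history[-1])    # the temperature rises towards the comfort band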
3. Learning: Agents have the ability to improve their performance over time
by learning from their experiences. They adapt their strategies or behavior
based on feedback or patterns detected in data.
4. Memory: Some agents need to store information about past actions, obser-
vations, or decisions. Memory helps in learning and can be used for long-term
strategy planning or adapting behavior in recurring situations.
5. Action: The physical or virtual actions that an agent takes to achieve its
goals. For a physical agent, this could involve moving objects or interacting
with devices. For a software agent, actions could include sending messages,
updating records, or triggering specific events.
• Perception: Sensors detect other vehicles, pedestrians, road signs, and lane
markings. The car uses LIDAR, cameras, and radar to perceive its surround-
ings.
• Learning: The car learns from past driving experiences to improve its nav-
igation and decision-making over time. For example, it can learn how to
handle specific intersections more efficiently.
• Memory: The car stores past data on routes, traffic patterns, and sensor
input, helping it refine its driving decisions and anticipate future conditions.
Agent Communication
2. Agent Communication:
Agents need to communicate with each other to share information, collaborate,
and coordinate their actions. Communication can be either verbal, using a shared
language, or non-verbal through signals and changes in the environment.
Key Communication Protocols:
3. Message Passing: Agents can send and receive messages to request actions
or provide information to other agents in the system.
Example: Two agents in a healthcare system may argue about the best course of
treatment for a patient, presenting evidence and counterarguments until they agree
on the most suitable option.
• Trust and reputation systems help agents decide whom to collaborate with
in uncertain environments.
Syllabus
• AI Applications
• Language Models
• Information Retrieval
• Information Extraction
• Machine Translation
• Speech Recognition
• Robots and Hardware
• Perception
• Planning and Moving
AI Applications
Artificial Intelligence (AI) has a vast range of applications across different sectors. The
following are some of the most significant applications of AI:
1. Language Models
Language models are algorithms designed to understand, generate, and process human
language. They play a crucial role in various tasks like speech recognition, language
translation, and text generation. One of the most powerful examples is GPT (Generative
Pretrained Transformer), which can generate human-like text.
Language models are essential components in the field of Natural Language Processing
(NLP). Their primary function is to predict the likelihood of a sequence of words. By
doing so, they help in understanding and generating meaningful text.
How Language Models Work:
Language models work by analyzing large datasets of text, learning the patterns and
structure of language. They create mathematical representations of words and phrases,
allowing them to predict the next word in a sentence based on the previous ones. This
predictive capability is foundational for various applications.
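As a toy illustration of this predictive idea, the Python sketch below builds bigram counts from a tiny invented corpus and predicts the most likely next word; real language models are vastly larger and use neural networks.

from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count word pairs so we can estimate P(next word | previous word)."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(model, word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = ["the cat sat on the mat",
          "the cat chased the mouse",
          "the dog sat on the rug"]
model = train_bigram_model(corpus)
print(predict_next(model, "the"))   # 'cat' - the most common word after 'the' in this corpus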
Applications:
• Speech Recognition: Language models improve the accuracy of recognizing spo-
ken language by predicting the next possible word in the sentence.
• Machine Translation: They help in translating languages by understanding con-
text and structure in the original text and generating accurate translations.
• Text Generation: Advanced models like GPT can generate human-like text,
enabling applications in content creation, chatbots, and virtual assistants.
2. Information Retrieval
Information retrieval refers to the process of obtaining relevant information from large
datasets. It is widely used in various applications, most notably search engines. Here are
key points about information retrieval:
• AI Algorithms: Advanced algorithms, often AI-based, help identify the most
pertinent information by analyzing the query and dataset.
• Relevance Ranking: Algorithms assess relevance by matching keywords, context,
and other factors to rank the results.
• Efficiency: The goal is to provide fast and accurate retrieval, minimizing irrelevant
or redundant information.
• Example: Google Search uses AI algorithms to retrieve the most relevant web
pages based on user input.
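A toy sketch of keyword-based relevance ranking in Python; the documents and the simple term-frequency score are illustrative, and real engines use far more sophisticated ranking.

def rank_documents(query, documents):
    """Score each document by how often it contains the query terms, highest first."""
    query_terms = set(query.lower().split())
    scored = []
    for doc_id, text in documents.items():
        words = text.lower().split()
        score = sum(words.count(term) for term in query_terms)   # simple term-frequency score
        scored.append((score, doc_id))
    return sorted(scored, reverse=True)

documents = {
    'doc1': "AI algorithms help rank search results",
    'doc2': "Cooking recipes for a quick dinner",
    'doc3': "Search engines use AI to rank relevant pages",
}
print(rank_documents("AI search rank", documents))   # doc3 and doc1 rank above doc2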
3. Information Extraction
Information extraction is the process of transforming unstructured data into structured,
meaningful information. Key points about information extraction include:
• Definition: Extracting specific, structured data (e.g., names, dates, entities) from
unstructured text or documents.
• Automation: Reduces the manual effort required to sift through large amounts of
text for specific information.
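As a minimal illustration, the Python sketch below pulls dates and email addresses out of free text with regular expressions; the sample text and patterns are invented, and real systems typically use trained NLP models rather than hand-written rules.

import re

def extract_entities(text):
    """Pull simple structured fields (dates, email addresses) out of unstructured text."""
    return {
        'dates': re.findall(r'\b\d{1,2}/\d{1,2}/\d{4}\b', text),
        'emails': re.findall(r'\b[\w.+-]+@[\w-]+\.[\w.]+\b', text),
    }

text = "The contract was signed on 12/05/2024; contact legal@example.com for details."
print(extract_entities(text))
# {'dates': ['12/05/2024'], 'emails': ['legal@example.com']}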
4. Natural Language Processing (NLP)
Natural Language Processing enables machines to understand, interpret, and generate human language. Key points include:
• Applications: Common tasks include text classification, chatbots, and sentiment analysis.
• Example: Siri and Alexa are powered by NLP to process voice commands and respond in natural language.
5. Machine Translation
Machine translation involves using AI to automatically translate text from one language
to another. Key points include:
• Neural Networks: The use of neural machine translation has greatly improved
the accuracy and fluency of translations.
• Context Handling: Modern systems take surrounding context into account, which helps them cope with idioms and cultural nuances.
• Example: Google Translate uses neural machine translation to translate text be-
tween different languages.
6. Speech Recognition
Speech recognition technology converts spoken language into text. Key points about
speech recognition include:
• Definition: AI-driven systems that recognize and transcribe spoken language into
written text.
• Training: AI models are trained using large datasets of speech to learn and recognize spoken language patterns.
• Accuracy: Accuracy has significantly improved with the use of deep learning and
neural networks.
7. Robots and Hardware
AI-powered robots combine hardware such as sensors and actuators with intelligent software. Key points include:
• Definition: Robots integrated with AI systems that allow them to perceive, pro-
cess, and interact with their surroundings.
• Capabilities: AI enables functions such as navigation, object recognition, and decision-making.
• Applications: Used in areas such as manufacturing, logistics, and household chores.
• Example: Boston Dynamics’ robots, like Spot, use AI to navigate complex terrains
and complete tasks autonomously.
A typical AI-based mobile robot integrates the following hardware components:
• Electric Power Scheme: This component provides the necessary electrical energy
required to power the entire system. It supplies electricity to the PC control board
and other connected devices.
• PC Control Board: The control board acts as the central processing unit of the
robot. It receives input signals from various sensors and peripherals, processes the
data, and sends commands to other components. It interfaces with sensors like the
LIDAR and camera for environmental awareness and vision.
• LIDAR: The Light Detection and Ranging (LIDAR) sensor is used for mapping
the environment and obstacle detection. It provides distance measurements that
help the robot understand its surroundings and navigate safely.
• Camera: The camera provides visual input to the PC control board, enabling
object recognition, video streaming, and image processing for tasks like navigation
or manipulation.
• Motor Drive: The motor drive system receives commands from the STM32 micro-
controller to control the motors. It is responsible for driving the wheels or actuators
of the robot, ensuring proper movement and speed control.
• Displayer: The displayer provides visual feedback to the user, showing important
system information or real-time data such as video feed, system status, or sensor
outputs.
• Sound Device: This device generates sound output based on the commands from
the PC control board. It can be used for communication with the user or for
providing audible alerts and notifications during operation.
In this system, the PC control board serves as the main interface that interacts with
both input devices (LIDAR, Camera, IMU) and output devices (Motor drive, Displayer,
Sound device). The STM32 microcontroller plays a crucial role in processing real-time
tasks and ensuring that motor control is executed properly. Together, these components
allow the robot to perceive its environment, make decisions, and execute movements
effectively.
8. Perception
Perception in AI refers to the ability of machines to interpret sensory inputs like images,
sounds, and touch. Key points about perception include:
• Definition: The capability of AI systems to process and interpret data from sen-
sory inputs.
• Sensory Inputs: Includes images, sounds, touch, and other environmental data.
• Computer Vision: Analyzes visual data from cameras to recognize and under-
stand objects and scenes.
• Audio Processing: Involves interpreting sounds and speech for applications like
voice assistants and audio analysis.
• Applications: Crucial in self-driving cars, facial recognition systems, and object
detection technologies.
• Challenges: Includes handling diverse data types, varying conditions, and high-
dimensional data.
• Example: Self-driving cars like Tesla use AI perception to detect obstacles, recog-
nize traffic signs, and navigate roads.
9. Planning and Moving
Planning and moving involves deciding on a sequence of actions and executing the corresponding motions to reach a goal. Key points include:
• Applications: Used in autonomous vehicles, drones, and robotic systems for effi-
cient operation and task completion.
• Example: Autonomous drones use AI to plan flight routes, avoid obstacles, and
adjust their path in real-time during flight.