Artificial Intelligence All 5 Units


ITECH WORLD AKTU


Subject Name: Artificial Intelligence
Subject Code: BCAI501

Syllabus
1. Introduction:

a. Definition
b. Future of Artificial Intelligence
c. Characteristics of Intelligent Agents
d. Typical Intelligent Agents
e. Problem Solving Approach to Typical AI Problems

1 Introduction
Artificial Intelligence (AI) is a branch of computer science that aims to create machines
capable of intelligent behavior. This includes learning, reasoning, problem-solving, perception, and language understanding.

1.1 Definition
AI can be defined as the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions), and self-correction.

1.2 Future of Artificial Intelligence


The future of AI is a dynamic and rapidly evolving landscape, with numerous advancements and potential applications across different fields. Below are seven key points highlighting the future prospects of AI:

1. Healthcare and Medicine: AI is revolutionizing healthcare through applications in predictive diagnostics, personalized medicine, and robotic surgery. For example, AI-powered tools can analyze medical images to detect early signs of diseases, such as cancer, with greater accuracy and speed.

2. Autonomous Vehicles: The development of self-driving cars and drones is one of the most prominent future applications of AI. These vehicles utilize AI algorithms for real-time decision-making, enabling safer and more efficient transportation.

3. Natural Language Processing (NLP): AI is advancing the field of NLP, enhancing the capabilities of virtual assistants like Siri and Alexa. Future applications include more intuitive chatbots, real-time translation services, and improved human-computer interactions.

4. Finance and Trading: AI is transforming the finance sector by providing tools for fraud detection, algorithmic trading, credit scoring, and personalized financial advice. Machine learning models are used to analyze vast amounts of data to identify market trends and optimize investment strategies.

5. Smart Cities and Urban Planning: AI will play a key role in designing smart cities with optimized traffic management, energy-efficient buildings, and improved public safety measures. AI-driven analytics can provide insights into urban development and help reduce environmental impact.

6. Education and Learning: AI-powered platforms are revolutionizing education by offering personalized learning experiences, automating administrative tasks, and providing real-time feedback. For instance, AI can analyze student performance data to adapt teaching methods and materials for individualized learning paths.

7. Ethics and Governance: The future of AI also involves addressing ethical concerns related to privacy, bias, and decision-making transparency. Developing robust AI governance frameworks and ethical guidelines is crucial to ensure AI technologies are aligned with societal values and human rights.

1.3 Characteristics of Intelligent Agents


An intelligent agent is an autonomous entity that perceives its environment and takes
actions to achieve specific goals. Key characteristics include:

• Autonomy: Acts independently without direct human control.

• Reactivity: Quickly responds to changes in the environment.

• Proactiveness: Initiates actions to achieve defined objectives.

• Social Ability: Interacts and communicates with other agents or humans.

• Adaptability: Learns from experiences and adapts to new situations.

• Rationality: Chooses actions that maximize performance or utility.

• Persistence: Continues pursuing goals despite challenges or obstacles.

1.4 Typical Intelligent Agents


Intelligent agents are designed to perform tasks autonomously by perceiving their environment and taking appropriate actions. Below are the typical types of intelligent agents:

• Simple Reflex Agents: These agents operate based on a predefined set of rules that map a specific situation to an action. They do not consider the history of previous states, making them effective in fully observable environments. However, they struggle with complex or partially observable environments. Example: A thermostat adjusting temperature based on current readings. (A minimal Python sketch of such an agent appears after this list of agent types.)


• Model-based Reflex Agents: Unlike simple reflex agents, model-based agents maintain an internal model of the world, which helps them handle partially observable environments. They use this internal state to keep track of aspects of the environment that are not immediately perceptible. The model helps predict the outcome of actions, improving decision-making. Example: A self-driving car keeping track of nearby vehicles and predicting their movements.

• Goal-based Agents: These agents go beyond immediate actions and are designed
to achieve specific goals. They choose actions based on a set of goals they aim to
accomplish, using goal information to guide decision-making. Goal-based agents
often use search and planning algorithms to find the best path to achieve their
objectives. Example: A delivery robot navigating a warehouse to pick up and
deliver packages to specific locations.

• Utility-based Agents: Utility-based agents are an extension of goal-based agents, designed to handle situations where multiple possible goals can be pursued. They evaluate different possible actions based on a utility function, which assigns a numerical value representing the desirability of each outcome. The agent chooses actions that maximize the expected utility. Example: An autonomous trading system selecting trades that maximize profit while minimizing risk.

• Learning Agents: These agents have the ability to learn from their experiences
and improve their performance over time. A learning agent has four components: a
learning element (improves the agent’s performance), a performance element (selects
external actions), a critic (provides feedback), and a problem generator (suggests
actions that will lead to new knowledge). Example: A recommendation system
learning from user behavior to improve future suggestions.

• Collaborative Agents: These agents work in a multi-agent environment where they must cooperate or compete with other agents. They are designed to achieve goals that require interaction and coordination with other agents. Example: Agents in an online multiplayer game collaborating to achieve common objectives or compete against each other.

• Hybrid Agents: These agents combine multiple agent architectures to leverage the strengths of different types. For instance, an agent might use both reactive and deliberative strategies to make decisions, balancing immediate response with long-term planning. Example: A rescue robot that uses reflex actions to avoid obstacles while also planning a path to the target.

1.5 Problem Solving Approach to Typical AI Problems


AI problem-solving often requires exploring a vast space of possible solutions to find the
optimal or satisfactory outcome. The following are common approaches used in AI to
tackle such problems:

• Search Algorithms: These algorithms systematically explore the possible states or configurations of a problem to find a solution. Examples include:


2 Breadth-First Search (BFS)

Definition:
Breadth-First Search (BFS) explores all nodes at the current depth level before
moving to the next, ensuring the shortest path is found in unweighted graphs.

Usage:
BFS is used when the goal is to find the shortest path or the closest solution. It is
applicable in scenarios like social network analysis, GPS navigation, and network
broadcasting.

Mechanism:
Uses a queue data structure to maintain the nodes to be explored. Nodes are visited
level by level, ensuring that nodes closer to the start node are visited first.

Properties:

– Completeness: BFS guarantees finding a solution if one exists.


– Optimality: It finds the shortest path in an unweighted graph.
– Time Complexity: O(V + E), where V is the number of vertices and E is the
number of edges.
– Space Complexity: O(V), due to storing all nodes in memory.


Breadth-First Search (BFS) Process

– Breadth-First Search (BFS) starts at the root node (for example, node A) and explores all its direct neighbors first before moving to nodes at the next level.
∗ Start at the Root: Begin at the root node (A).
∗ Expand Neighbors: Visit all direct neighbors of A, such as nodes B and C.
∗ Move to Next Level: Once all neighbors of the current level are visited,
move to the next level. For example, after visiting B and C, move to D,
E, F, and G in order.
– The process continues until all nodes have been visited or until the desired
node is found. For example, after reaching node G, the search stops because
all nodes have been explored.
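The level-by-level expansion just described can be sketched in Python as follows; the small adjacency list mirrors the A-G example and is an assumption made for illustration.

from collections import deque

# Hypothetical unweighted graph matching the A-G example above.
graph = {
    "A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
    "D": [], "E": [], "F": [], "G": [],
}

def bfs(start, goal):
    """Explore nodes level by level with a FIFO queue; return the first (shortest) path found."""
    frontier = deque([[start]])      # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

print(bfs("A", "G"))   # ['A', 'C', 'G']

The FIFO queue is what enforces the "closer nodes are visited first" property listed under Mechanism above.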


3 Depth-First Search (DFS)

Definition:
Depth-First Search (DFS) explores as far as possible along a branch before backtracking. It is used for scenarios where solutions are deep in the search tree.

Usage:
DFS is ideal for situations such as solving puzzles, analyzing game trees, and topological sorting in directed acyclic graphs (DAGs).

Mechanism:
Uses a stack data structure (either explicitly or via recursion) to explore nodes as deep as possible along each branch before backtracking.

Properties:

– Completeness: Not guaranteed, as DFS may get stuck in loops if the graph
contains cycles.
– Optimality: Does not guarantee the shortest path.
– Time Complexity: O(V + E), where V is the number of vertices and E is the
number of edges.
– Space Complexity: O(V) in case of recursion.

Depth-First Search (DFS) Process

– Depth-First Search (DFS) explores a graph by selecting a path and traversing it as deeply as possible before backtracking.
∗ Start at the Root: Begin at the root node (A).


∗ Expand One Branch: Explore a single branch as deeply as possible. For example, start at A, explore its successor B, and continue down to D, reaching a dead end.
∗ Backtrack: After reaching a dead end, backtrack to the most recent unexplored node. For example, backtrack from D to B, then explore remaining successors of B, such as E.
∗ Continue Exploration: Repeat the process of exploring as deeply as possible, then backtracking as needed. After exploring all nodes of B, move to the right side node C, then F, and finally G.
– The process continues until all nodes have been visited or until the search
terminates. In this case, after exploring node G, all nodes have been visited,
and the search terminates.
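The same A-G example can be traced with a minimal recursive Python sketch; the adjacency list is an assumption matching the walkthrough above, and the recursion stack plays the role of the explicit stack mentioned under Mechanism.

# Hypothetical graph matching the A-G example above (an assumption for illustration).
graph = {
    "A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
    "D": [], "E": [], "F": [], "G": [],
}

def dfs(node, goal, visited=None):
    """Explore one branch as deeply as possible, backtracking on dead ends;
    returns the order in which nodes were visited up to the goal."""
    if visited is None:
        visited = []
    visited.append(node)
    if node == goal:
        return visited
    for neighbour in graph[node]:
        if neighbour not in visited:
            result = dfs(neighbour, goal, visited)
            if result is not None:
                return result
    return None    # dead end: backtrack to the caller

print(dfs("A", "G"))   # ['A', 'B', 'D', 'E', 'C', 'F', 'G']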


4 A* Search

Definition:
A* Search is a combination of uniform-cost search and heuristics to prioritize nodes,
finding the least-cost path efficiently by considering both the cost to reach a node
and an estimated cost to the goal.

Usage:
Commonly used in pathfinding and graph traversal problems, such as video games
and AI for robotics.

Mechanism:
Uses a priority queue to explore nodes. The priority is determined by a cost function f(n) = g(n) + h(n), where g(n) is the cost to reach node n from the start, and h(n) is the heuristic estimate of the cost from n to the goal.

Properties:

– Completeness: Guarantees finding a solution if one exists.


– Optimality: Finds the least-cost path if the heuristic is admissible (never overestimates).
– Time Complexity: Depends on the heuristic; worst-case is exponential.
– Space Complexity: Can be high due to storing all nodes in memory.
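A minimal Python sketch of the mechanism above, using a priority queue ordered by f(n) = g(n) + h(n); the small weighted graph and heuristic table are assumptions chosen so that h never overestimates the true cost to the goal.

import heapq

# Hypothetical weighted graph and heuristic table; both are assumptions for
# illustration, with h chosen to stay admissible with respect to the goal G.
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 6)],
    "B": [("G", 3)],
    "G": [],
}
h = {"S": 5, "A": 4, "B": 2, "G": 0}

def a_star(start, goal):
    """Expand the node with the smallest f(n) = g(n) + h(n), kept in a priority queue."""
    frontier = [(h[start], 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, cost in graph[node]:
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier,
                               (new_g + h[neighbour], new_g, neighbour, path + [neighbour]))
    return None, float("inf")

print(a_star("S", "G"))   # (['S', 'A', 'B', 'G'], 6)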

Heuristics

Heuristics are rules of thumb or educated guesses that help guide the search process
toward a solution more quickly. They are used in algorithms to make decisions based
on approximate information. A heuristic provides an estimate of the cost to reach
the goal from a given state, which allows the search to prioritize certain paths.

Types of Heuristics

– Admissible Heuristics:
An admissible heuristic is one that never overestimates the cost to reach the
goal. This means the heuristic provides a lower bound on the actual cost,
ensuring that search algorithms like A* find the optimal solution.
∗ Example: In a pathfinding problem, the straight-line distance (Euclidean distance) between the current position and the goal is an admissible heuristic, since it never overestimates the true distance.
– Inadmissible Heuristics:
An inadmissible heuristic may overestimate the actual cost to reach the goal.
While this can speed up the search process, it does not guarantee that the
solution found will be optimal.

∗ Example: In the same pathfinding problem, if the heuristic assumes obstacles that don't exist, it could overestimate the distance, leading to a potentially suboptimal path.

Optimization

Optimization is the process of finding the best solution from a set of possibilities.
It involves maximizing or minimizing an objective function while satisfying certain
constraints.

Examples:

– Genetic Algorithms:
∗ Mimic natural selection to evolve better solutions over multiple generations.
∗ Use Case: Ideal for problems with large or complex search spaces.
– Simulated Annealing:
∗ A probabilistic technique that occasionally accepts worse solutions to escape local optima and gradually focuses on better solutions.
∗ Use Case: Useful for optimization problems with many local optima.

Example: To solve a maze, the A* search algorithm can be applied, which evaluates each possible move based on both the actual cost to reach a point and an estimated cost from that point to the goal, ensuring an efficient path to the exit.



Artificial Intelligence (BCAI501)

Unit 2: Problem Solving Methods

Syllabus
• Problem Solving Methods

• Search Strategies: Uninformed, Informed, Heuristics, Local Search

• Algorithms and Optimization Problems

• Searching with Partial Observations

• Constraint Satisfaction Problems: Constraint Propagation, Backtracking Search

• Game Playing: Optimal Decisions in Games, Alpha-Beta Pruning, Stochastic Games


Problem Solving Methods

Definition: Problem solving in AI involves using algorithms to find solutions to given problems, either by searching through a space of possible solutions or by employing heuristics to make informed decisions.

Search Strategies

Uninformed Search
Uninformed search strategies do not have any additional information about states beyond
the problem definition. They explore the search space blindly. Common uninformed
search methods include:

• Breadth-First Search (BFS): Explores all nodes at the present depth before
moving to the next level. Example: Navigating through a maze.

• Depth-First Search (DFS): Explores as far as possible along a branch before backtracking. Example: Solving a puzzle by trying all possible moves.


Informed Search (Heuristic Search)


Informed search uses problem-specific knowledge beyond the problem definition. It uses
heuristics to find solutions more efficiently:

• A* Search: Combines the cost to reach the node and the estimated cost to the
goal. Example: Pathfinding in maps.

• Greedy Best-First Search: Expands the node that is closest to the goal according
to the heuristic.
Explanation: Greedy Best-First Search (GBFS) is a search algorithm that selects
the node that appears to be closest to the goal based on a given heuristic function
h(n). The heuristic function estimates the cost from the current node n to the
goal. GBFS is called "greedy" because it always chooses the path that looks most
promising, based solely on the heuristic value. It does not consider the path cost
already incurred, only the estimated cost to reach the goal.
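A minimal Python sketch of the greedy strategy just described, expanding whichever frontier node has the smallest h(n); the toy graph and heuristic values are assumptions for illustration only.

import heapq

# Toy graph and heuristic values; assumptions for illustration only.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 6, "A": 2, "B": 4, "G": 0}   # estimated cost to the goal G

def greedy_best_first(start, goal):
    """Always expand the frontier node with the smallest h(n); the cost already
    incurred along the path is ignored."""
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                heapq.heappush(frontier, (h[neighbour], neighbour, path + [neighbour]))
    return None

print(greedy_best_first("S", "G"))   # ['S', 'A', 'G']

Unlike the A* sketch earlier, the priority here is h(n) alone, which is why GBFS can return a path that is not the cheapest one.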


Local Search Algorithms and Optimization Problems


Local search algorithms operate using a single current state and try to improve it iteratively. Common methods:

• Hill Climbing: An iterative optimization algorithm that starts with an arbitrary solution to a problem and attempts to find a better solution by incrementally changing a single element of the solution. At each step, Hill Climbing selects the neighboring state with the highest value according to an objective function. The process continues until it reaches a peak, where no neighbor has a higher value.
Explanation: Hill Climbing is a local search algorithm that operates by evaluating the neighboring states of the current state. It moves to the neighbor with the highest value (for maximization problems) or the lowest value (for minimization problems). The algorithm terminates when it reaches a state that is better than all of its neighbors, known as a "local maximum" or "local minimum." Hill Climbing can be quick but is prone to getting stuck in local optima, ridges, or plateaus. Variants like Stochastic Hill Climbing, Random Restart Hill Climbing, or Simulated Annealing are used to overcome these limitations. (A combined Python sketch of hill climbing and simulated annealing follows this list.)

• Simulated Annealing: A probabilistic optimization technique used to approximate the global optimum of a given function.
Key Points:
– Basic Idea: Mimics the annealing process in metallurgy where a material is
heated and then slowly cooled to remove defects, allowing it to reach a stable,
low-energy state.
– Exploration vs. Exploitation: Balances between exploring new solutions
and exploiting known good solutions. At higher temperatures, it is more likely
to accept worse solutions to avoid local minima.
– Probability of Acceptance: Accepts a new solution based on a probability
that decreases with time (temperature) and increases if the solution is better.


– Cooling Schedule: Gradually lowers the temperature according to a predefined schedule. A slower cooling rate increases the chances of finding the global optimum.
– Avoiding Local Minima: By allowing occasional moves to worse solutions,
the algorithm can escape from local minima and continue searching for a global
optimum.
– Termination: The process stops when the system reaches a stable state, or
the temperature reaches near zero, making further exploration unlikely.
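As referenced in the Hill Climbing item above, here is a combined Python sketch of hill climbing and simulated annealing on a toy one-dimensional objective; the objective function, step size, neighbourhood, and cooling schedule are all illustrative assumptions.

import math
import random

def objective(x):
    """Toy objective to maximize (peak at x = 3); chosen only for illustration."""
    return -(x - 3) ** 2 + 9

def hill_climbing(x=0.0, step=0.1, iterations=1000):
    """Move to the better neighbour each step; stop at a local maximum."""
    for _ in range(iterations):
        best = max((x + step, x - step), key=objective)
        if objective(best) <= objective(x):
            return x            # no neighbour is better: a peak (local maximum)
        x = best
    return x

def simulated_annealing(x=0.0, temperature=1.0, cooling=0.995, iterations=5000):
    """Accept worse moves with probability exp(delta / T); T falls with the cooling schedule."""
    for _ in range(iterations):
        candidate = x + random.uniform(-0.5, 0.5)
        delta = objective(candidate) - objective(x)
        if delta > 0 or random.random() < math.exp(delta / temperature):
            x = candidate
        temperature = max(temperature * cooling, 1e-6)
    return x

print(round(hill_climbing(), 2))         # approaches 3.0
print(round(simulated_annealing(), 2))   # also lands near 3.0 (stochastic)

The acceptance test exp(delta / T) is what lets simulated annealing occasionally take a worse move at high temperature, matching the "Avoiding Local Minima" point above.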

Constraint Satisfaction Problems (CSPs)


Constraint Satisfaction Problems (CSPs) are mathematical problems defined as a set of
objects whose state must satisfy a number of constraints or limitations.

• Constraint Propagation: A technique used to reduce the search space by deducing variable domains based on the constraints.

– Basic Concept: Propagates the implications of a constraint through the variables involved, thereby reducing the number of possible values that variables can take.
– Types:
∗ Node Consistency: Ensures each variable meets its unary constraints.
∗ Arc Consistency: Ensures that for every value of a variable, there is a
consistent value of a connected variable.

∗ Path Consistency: Extends arc consistency to triples of variables to ensure any consistent pair of assignments can be extended to a third variable.
– Advantages: Reduces the need for backtracking by narrowing down the search space before searching begins.
– Limitations: Might not completely eliminate the need for search; only simplifies the problem.

• Backtracking Search: A depth-first search algorithm that incrementally builds candidates for the solution and abandons each candidate ("backtracks") as soon as it determines that the candidate cannot possibly be completed to a valid solution.

– Basic Concept: Starts with an empty assignment and makes a series of incremental assignments to variables. When a constraint is violated, the algorithm backtracks to the most recent decision point and tries a different path.
– Advantages: Guarantees finding a solution if one exists and is simpler to
implement compared to more advanced techniques.
– Heuristics:
∗ Minimum Remaining Values (MRV): Chooses the variable with the
fewest legal values.
∗ Degree Heuristic: Chooses the variable with the most constraints on
remaining variables.
∗ Least Constraining Value: Chooses the value that rules out the fewest
choices for neighboring variables.
– Limitations: Can be slow if the problem is large or if many constraints are
present.
– Improvements:
∗ Forward Checking: Keeps track of remaining legal values for the unassigned variables.
∗ Constraint Learning: Records constraints discovered during search to
avoid exploring the same failed paths.
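The backtracking search described above, together with the MRV heuristic and forward checking, can be sketched as follows for a tiny map-colouring CSP; the variables, domains, and adjacency constraints are assumptions chosen purely for illustration.

# Tiny map-colouring CSP: adjacent regions must get different colours.
variables = ["WA", "NT", "SA", "Q"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbours = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
              "SA": ["WA", "NT", "Q"], "Q": ["NT", "SA"]}

def backtrack(assignment, domains):
    if len(assignment) == len(variables):
        return assignment
    # MRV heuristic: pick the unassigned variable with the fewest legal values left.
    var = min((v for v in variables if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in domains[var]:
        if all(assignment.get(n) != value for n in neighbours[var]):
            # Forward checking: remove this value from unassigned neighbours' domains.
            pruned = {n: [x for x in domains[n] if x != value]
                      for n in neighbours[var] if n not in assignment}
            if all(pruned.values()):     # no neighbour is left with an empty domain
                result = backtrack({**assignment, var: value},
                                   {**domains, **pruned, var: [value]})
                if result is not None:
                    return result
    return None   # every value failed: backtrack to the previous decision point

print(backtrack({}, domains))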

Game Playing
Game playing in Artificial Intelligence involves designing agents that can make optimal
decisions in competitive environments. These agents use search algorithms and heuristics
to evaluate possible moves and select the most advantageous one.

• Optimal Decisions in Games:

– Definition: Involves making the best possible moves to maximize the chance
of winning in a two-player zero-sum game.
– Minimax Algorithm: A decision rule for minimizing the possible loss for
a worst-case scenario. In games like chess, it aims to maximize a player’s
minimum payoff.


– Strategy: Considers all possible moves and countermoves by the opponent to choose the optimal move that leads to the most favorable outcome.
– Limitations: The computation can be very intensive for games with large
search spaces (e.g., Go, Chess).

• Alpha-Beta Pruning:

– Definition: An optimization technique for the minimax algorithm that reduces the number of nodes evaluated in the search tree.
– How it Works: Maintains two values, Alpha (the best value that the maximizer can guarantee) and Beta (the best value that the minimizer can guarantee).
– Pruning: If the value of a node is found to be worse than the current alpha
or beta value, it stops evaluating further, effectively ”pruning” the search tree
and reducing computational time.
– Advantage: Significantly decreases the number of nodes that are evaluated by the minimax algorithm without affecting the final result, making it more efficient. (A minimal Python sketch of minimax with alpha-beta pruning appears after this list.)
• Stochastic Games:
– Definition: Games that incorporate elements of chance or randomness, making them more complex to analyze and solve.


– Characteristics: Unlike deterministic games (e.g., Chess), stochastic games have outcomes that depend on both players' strategies and random events (e.g., rolling dice in Backgammon).
– Strategies: Use probabilistic methods and decision theory to compute the optimal strategy that maximizes the expected payoff, considering all possible outcomes.
– Approach: Monte Carlo simulations and Expectimax algorithms are often used to handle the uncertainty in these games.
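As referenced in the Alpha-Beta Pruning item above, here is a minimal Python sketch of minimax with alpha-beta pruning on a small hypothetical game tree; the tree shape and leaf payoffs are assumptions for illustration.

import math

# Hypothetical two-ply game tree: internal nodes map to children, leaves are payoffs.
tree = {"root": ["L", "R"], "L": [3, 5], "R": [2, 9]}

def alphabeta(node, maximizing=True, alpha=-math.inf, beta=math.inf):
    """Minimax value of `node`, skipping branches that cannot change the result."""
    if not isinstance(node, str):        # a leaf: its payoff is its value
        return node
    if maximizing:
        value = -math.inf
        for child in tree[node]:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:            # beta cut-off: the minimizer avoids this branch
                break
        return value
    value = math.inf
    for child in tree[node]:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:                # alpha cut-off (here the leaf 9 is never evaluated)
            break
    return value

print(alphabeta("root"))   # 3: the maximizer chooses the left subtree

In this toy tree the right subtree is cut off after its first leaf, which is exactly the pruning behaviour described above.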



Artificial Intelligence (BCAI501)
Unit 3: Knowledge Representation

Syllabus: Knowledge Representation


• First Order Predicate Logic

• Prolog Programming

• Unification

• Forward Chaining

• Backward Chaining

• Resolution

• Knowledge Representation

• Ontological Engineering

• Categories and Objects

• Events

• Mental Events and Mental Objects

• Reasoning Systems for Categories

• Reasoning with Default Information

First Order Predicate Logic


First Order Predicate Logic (FOPL) extends propositional logic by adding quantifiers
and predicates. It is useful for representing more complex statements involving objects,
relations, and functions.


Components of FOPL:
• Objects: The entities in the domain of discourse. Example: Socrates, Cats, Humans.

• Relations: Describes relationships between objects. Example: loves(Socrates, Philosophy).

• Functions: Maps objects to objects. Example: mother(John) returns the mother of John.

Inference Rules in FOPL:


Universal Elimination:
If a property holds for all objects, then it holds for any specific object in the domain.

Example:
∀x P (x) → P (a)
If ”All humans are mortal” is true, we can infer ”Socrates is mortal.”

Existential Introduction:
If a specific object has a property, we can infer that some object in the domain has this
property.

Example:
P (a) → ∃x P (x)
If ”Socrates is mortal,” we can infer that ”There exists a mortal.”

Existential Elimination:
If an existential statement is true, we can introduce a new symbol to represent that object.

Example:
∃x P (x) → P (c) (where c is a new constant symbol)
If ”There exists a mortal,” we can introduce a constant, say ”Socrates,” to represent
the mortal.

Universal Introduction:
If a statement holds for an arbitrary object in the domain, we can infer that the statement
holds for all objects in the domain.

Example:
P (a) → ∀x P (x)
If ”Socrates is mortal” applies to any human, then ”All humans are mortal.”


Examples of Objects, Relations, and Functions:

Objects:

• Socrates, Dogs, Humans.

Relations:

• loves(Socrates, Philosophy): Socrates loves Philosophy.

• parent(John, Mary): John is the parent of Mary.

Functions:

• mother(John): The function returns the mother of John.

• age(Socrates): Returns Socrates’ age.

Example:

Statement: All humans are mortal.


Predicate Logic Representation:
∀x (Human(x) → Mortal(x))

FOPL allows for reasoning over these statements, which is more powerful than propositional logic.

Predicate Logic
Predicate Logic (also called First Order Logic) is a formal system in which we can express
statements about objects and their properties or relations.

Basic Components:
• Predicates: Represent properties or relationships between objects. Example: P (x)
(where P is the predicate and x is the variable).

• Quantifiers:

– Universal Quantifier: ∀ (for all)


– Existential Quantifier: ∃ (there exists)

• Logical Connectives:

– ¬ (Negation or NOT)
– ∧ (Conjunction or AND)
– ∨ (Disjunction or OR)
– → (Implication)

• Variables: Represent objects or individuals in the domain.


Examples:

Example 1: Universal Quantifier


∀x (Human(x) → Mortal(x))
Meaning: For all x, if x is a human, then x is mortal.

Example 2: Existential Quantifier


∃x (Dog(x) ∧ Brown(x))
Meaning: There exists some x such that x is a dog and x is brown.

Causal Form Examples in Predicate Logic


Example 1: Rain Causes the Ground to be Wet

Statement: If it rains, the ground gets wet.


Causal Form:
∀x (Rain(x) → Wet(Ground))
Explanation: For all instances of x, if it rains, then the ground becomes wet.

Example 2: Fire Causes Heat

Statement: If there is a fire, the surrounding area becomes hot.


Causal Form:
∀x (Fire(x) → Hot(Area(x)))
Explanation: For all x, if there is a fire, the area becomes hot.

Example 3: Being a Student Causes Studying

Statement: If someone is a student, they study.


Causal Form:
∀x (Student(x) → Study(x))
Explanation: For all x, if x is a student, then x studies.

Example 4: Hunger Causes Eating

Statement: If someone is hungry, they will eat.


Causal Form:
∀x (Hungry(x) → Eat(x))
Explanation: For all x, if x is hungry, then x will eat.


Prolog Programming
Prolog is a logic programming language based on first-order predicate logic. It is widely
used in AI for tasks such as symbolic reasoning, natural language processing, and knowledge representation. Prolog works by defining facts, rules, and queries to infer logical
conclusions.

Example: Basic Prolog Code

Fact: Socrates is a human.


Prolog Code:

% Fact: socrates is a human.
human(socrates).
% Rule: X is mortal if X is a human.
mortal(X) :- human(X).

Explanation: This Prolog code states that Socrates is a human, and all humans
are mortal. If you query ‘mortal(socrates).‘, Prolog will answer ‘true‘.

Applications of Prolog Programming


Prolog is applied in various domains of artificial intelligence and computer science, including:

• Symbolic Reasoning: Prolog is used to represent and manipulate symbols and their relationships.

• Natural Language Processing (NLP): Prolog helps in parsing and interpreting human languages.

• Expert Systems: Prolog is used to develop systems that mimic the decision-making ability of a human expert.

• Knowledge Representation: Prolog allows complex relationships between entities to be modeled easily.

• Automated Theorem Proving: Prolog's logic-based structure allows it to automatically prove theorems in formal systems.

Examples of Prolog Applications


Example 1: Family Relationships

Prolog Code:

parent(john, mary).    % john is a parent of mary
parent(mary, susan).   % mary is a parent of susan
% X is a grandparent of Y if X is a parent of some Z who is a parent of Y.
grandparent(X, Y) :- parent(X, Z), parent(Z, Y).

Explanation: This code defines parent relationships and a rule for determining if
someone is a grandparent. Querying ‘grandparent(john, susan).‘ will return ‘true‘.


Example 2: Pathfinding in Graphs

Prolog Code:

connected(a, b).    % there is an edge from a to b
connected(b, c).    % there is an edge from b to c
path(X, Y) :- connected(X, Y).                 % direct connection
path(X, Y) :- connected(X, Z), path(Z, Y).     % or a connection through some Z

Explanation: This Prolog program finds a path between two nodes in a graph.
Querying ‘path(a, c).‘ will return ‘true‘ since there is a path from ‘a‘ to ‘c‘ through
‘b‘.


Unification
Unification is the process of making two logical expressions identical by finding substitu-
tions. It plays a key role in reasoning algorithms like resolution.

Conditions for Unification


Unification of two terms is possible if:

• Both terms are identical.

• Terms can be made identical through substitutions.

• The substitutions do not lead to any contradictions or inconsistencies.

Unification Algorithm
The unification algorithm determines if two terms can be unified and, if so, provides the
substitution needed to make them identical.
Pseudocode:


Algorithm: Unify(term1, term2)


Input: Two terms, ‘term1‘ and ‘term2‘
Output: A substitution set that makes ‘term1‘ and ‘term2‘ identical or failure

Steps:

1. If ‘term1‘ and ‘term2‘ are identical, return the empty substitution set.

2. If ‘term1‘ is a variable, return the substitution set that replaces ‘term1‘ with
‘term2‘ if ‘term2‘ does not contain ‘term1‘.

3. If ‘term2‘ is a variable, return the substitution set that replaces ‘term2‘ with
‘term1‘ if ‘term1‘ does not contain ‘term2‘.

4. If ‘term1‘ and ‘term2‘ are function symbols with different names or arities,
return failure.

5. If ‘term1‘ and ‘term2‘ are function symbols with the same name and arity,
recursively unify their arguments.

Example:
To unify ‘knows(Richard, x)‘ with ‘knows(Richard, john)‘, follow these steps:

Unification Example:

Unify(knows(Richard, x), knows(Richard, john))

Explanation:

• Both terms have the same function symbol ‘knows‘ and the same arity.

• The first arguments of both terms are identical (‘Richard‘).

• Unify the second arguments: ‘x‘ and ‘john‘. The result of unification is the substitution {x ↦ john}.

Therefore, the unification results in the substitution {x ↦ john}.
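The unification steps above can be sketched in Python with terms encoded as nested tuples; this encoding, and the omission of the occurs check, are simplifying assumptions for illustration rather than a standard implementation.

# Terms: a variable is a plain string such as "x"; a constant or compound term is a
# tuple ("functor", arg1, ...). The occurs check is omitted for brevity.
def is_variable(term):
    return isinstance(term, str)

def substitute(term, theta):
    """Apply the substitution theta throughout a term."""
    if is_variable(term):
        return substitute(theta[term], theta) if term in theta else term
    return (term[0],) + tuple(substitute(arg, theta) for arg in term[1:])

def unify(t1, t2, theta=None):
    """Return a substitution making t1 and t2 identical, or None on failure."""
    theta = {} if theta is None else theta
    t1, t2 = substitute(t1, theta), substitute(t2, theta)
    if t1 == t2:
        return theta
    if is_variable(t1):
        return {**theta, t1: t2}
    if is_variable(t2):
        return {**theta, t2: t1}
    if t1[0] != t2[0] or len(t1) != len(t2):   # different name or arity: failure
        return None
    for a, b in zip(t1[1:], t2[1:]):           # unify the arguments recursively
        theta = unify(a, b, theta)
        if theta is None:
            return None
    return theta

# knows(Richard, x) unified with knows(Richard, john) yields {x: john}.
print(unify(("knows", ("Richard",), "x"), ("knows", ("Richard",), ("john",))))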


Forward Chaining
Forward chaining is a data-driven inference technique that starts from known facts and
applies rules to infer new facts until a goal is reached. It is commonly used in production
systems and rule-based expert systems.

Properties of Forward Chaining


• Data-Driven: Starts with available facts and applies inference rules to generate
new facts.


• Goal-Independent: Does not require a specific goal to start; it works until all
possible conclusions are drawn.

• Complete: Can derive all possible conclusions from the given facts and rules if the
system is finite.

• Efficiency: Can be inefficient if the number of rules and facts is large due to the
potentially high number of inferences.

• Incremental: New facts are added as they are derived, which can help in dynamic
situations.

Example:

Known rule and fact:
- Rule: If it rains, the ground gets wet.
- Fact: It is raining.
Forward Chaining Process:
- From "It is raining," we infer "The ground gets wet."
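A minimal Python sketch of the data-driven loop described above; the rule base is an assumption (a second rule is added only so the loop has more than one step to perform).

# Rules are (premises, conclusion) pairs over simple string facts.
rules = [({"it_is_raining"}, "ground_is_wet"),
         ({"ground_is_wet"}, "ground_is_slippery")]
facts = {"it_is_raining"}

def forward_chain(facts, rules):
    """Fire every rule whose premises are all known, until no new fact can be added."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain(set(facts), rules))
# {'it_is_raining', 'ground_is_wet', 'ground_is_slippery'}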

Backward Chaining
Backward chaining is a goal-driven inference technique that works backward from the goal
to determine the necessary conditions. It is used to find the steps or conditions required
to achieve a specific goal.

Properties of Backward Chaining


• Goal-Driven: Starts with a specific goal and works backward to determine the
necessary facts or conditions.

• Selective: Only focuses on deriving facts related to the goal, potentially making it
more efficient when the goal is specific.

• Incomplete: May not find all possible facts or solutions unless the goal is well-
defined and all possible conditions are considered.

• Dynamic: Can handle changing goals and conditions effectively, as it focuses on achieving specific objectives.

• Recursive: Often involves recursive calls to determine the conditions needed to achieve the goal.


Example:

Goal: Is the ground wet?


- To determine this, we check if it rained. If yes, then the goal is achieved.
Backward Chaining Process:
- Start with the goal ”The ground is wet.”
- To satisfy this goal, check if ”It rains” is true.
- If ”It rains” is true (based on the known facts), then the goal ”The ground is wet”
is achieved.
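A matching Python sketch of the goal-driven procedure above, reusing the same toy rule base (an assumption for illustration).

# Same toy rule base as in the forward-chaining sketch.
rules = [({"it_is_raining"}, "ground_is_wet")]
facts = {"it_is_raining"}

def backward_chain(goal, facts, rules, seen=None):
    """Prove `goal` either directly from the facts or by recursively proving all
    premises of a rule that concludes it."""
    seen = set() if seen is None else seen
    if goal in facts:
        return True
    if goal in seen:          # guard against circular rule chains
        return False
    seen.add(goal)
    for premises, conclusion in rules:
        if conclusion == goal and all(backward_chain(p, facts, rules, seen)
                                      for p in premises):
            return True
    return False

print(backward_chain("ground_is_wet", facts, rules))   # True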


Comparison of Forward Chaining and Backward Chaining

Aspect: Approach
- Forward Chaining: Data-driven; starts from known facts and applies rules to infer new facts.
- Backward Chaining: Goal-driven; starts from a specific goal and works backward to determine necessary conditions.

Aspect: Starting Point
- Forward Chaining: Begins with available facts and applies inference rules.
- Backward Chaining: Begins with a specific goal and checks if conditions meet the goal.

Aspect: Focus
- Forward Chaining: Generates new facts and conclusions based on existing facts.
- Backward Chaining: Determines conditions or facts needed to achieve the goal.

Aspect: Efficiency
- Forward Chaining: Can be inefficient with a large number of rules and facts.
- Backward Chaining: Generally more efficient with specific goals due to selective focus.

Aspect: Completeness
- Forward Chaining: Can derive all possible facts if the system is finite.
- Backward Chaining: May not find all solutions unless the goal and conditions are well-defined.

Aspect: Usage
- Forward Chaining: Common in production systems and rule-based expert systems.
- Backward Chaining: Useful in querying, problem-solving, and diagnostic systems.

Aspect: Handling of New Facts
- Forward Chaining: Continuously adds new facts as they are derived.
- Backward Chaining: Focuses on specific goals and conditions; less dynamic.

Table 1: Comparison of Forward Chaining and Backward Chaining



Resolution in Predicate Logic


Resolution is a fundamental inference rule used in automated theorem proving and logic
programming. It is particularly useful in predicate logic for deriving contradictions from
a set of clauses, thus proving the unsatisfiability of the clauses. In essence, resolution
helps to determine whether a set of logical statements is contradictory.

Definition
In predicate logic, resolution is a process that involves combining two clauses to derive
a new clause, known as a resolvent. The resolution rule operates on pairs of clauses that
contain complementary literals, which are literals that are negations of each other.
Key Points:
- Clauses: Disjunctions of literals.
- Complementary Literals: A literal and its negation.
- Unification: The process of making literals identical through substitutions.

Resolution Process
The resolution process involves the following steps:

1. Convert to Conjunctive Normal Form (CNF): Transform the formulas into a standard form where they are represented as a conjunction of disjunctions of literals.
2. Select Pairs of Clauses: Choose pairs of clauses that contain complementary literals.
3. Unify the Literals: Find a substitution that makes the literals identical.
4. Resolve the Clauses: Eliminate the complementary literals and combine the remaining literals to form a new clause.
5. Repeat or Conclude: Continue the process until a contradiction is found or no more resolvents can be generated.

Example of Resolution
Consider the following clauses:

Clause 1: P(x) ∨ Q(x)
Clause 2: ¬P(a) ∨ R(a)

To resolve these clauses:

1. Identify Complementary Literals: The complementary literals are P(x) and ¬P(a), with substitution x → a.
2. Apply the Resolution Rule: Substitute x with a in Clause 1, giving P(a) ∨ Q(a), then combine with Clause 2, eliminating P(a) and ¬P(a).

Resolved Clause: Q(a) ∨ R(a)

Thus, Q(a) ∨ R(a) is the result of resolving P(x) ∨ Q(x) with ¬P(a) ∨ R(a).
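The resolution step above can be reproduced with a short Python sketch; the clause encoding (sets of signed literals) and the hard-coded substitution x → a are assumptions made to keep the example small.

# Literals are (positive, predicate, argument) triples; a clause is a frozenset of literals.
clause1 = frozenset({(True, "P", "x"), (True, "Q", "x")})    # P(x) ∨ Q(x)
clause2 = frozenset({(False, "P", "a"), (True, "R", "a")})   # ¬P(a) ∨ R(a)

def apply_substitution(clause, theta):
    """Rewrite literal arguments according to the substitution theta."""
    return frozenset((sign, pred, theta.get(arg, arg)) for sign, pred, arg in clause)

def resolve(c1, c2):
    """Drop one pair of complementary literals and union what remains."""
    for sign, pred, arg in c1:
        if (not sign, pred, arg) in c2:
            return (c1 - {(sign, pred, arg)}) | (c2 - {(not sign, pred, arg)})
    return None

theta = {"x": "a"}                                 # the unifier found above
resolvent = resolve(apply_substitution(clause1, theta), clause2)
print(resolvent)   # {(True, 'Q', 'a'), (True, 'R', 'a')}, i.e. Q(a) ∨ R(a)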


Properties of Good Knowledge Representation


Good knowledge representation is crucial for effective reasoning and decision-making in
artificial intelligence systems. The following properties are essential for a robust knowledge
representation system:

1. Clarity
A knowledge representation should be clear and unambiguous, allowing users and systems
to understand and interpret the information without confusion.

2. Precision
The representation should accurately capture the details of the knowledge domain, avoiding any loss of information or misrepresentation.

3. Consistency
The knowledge representation must be free from contradictions. Consistency ensures that
the information does not contain conflicting statements.

4. Expressiveness
It should be able to represent a wide range of concepts, relationships, and inferences.
Expressiveness allows the representation to capture complex and nuanced information.

5. Efficiency
The system should support efficient storage, retrieval, and processing of knowledge. Efficiency ensures that operations on the knowledge base do not become computationally prohibitive.

6. Flexibility
Good knowledge representation should be adaptable to changes in the knowledge domain.
Flexibility allows the system to evolve with new information and updates.

7. Scalability
The representation should handle increasing amounts of knowledge without a significant
decrease in performance. Scalability ensures that the system can grow and manage larger
data sets effectively.

8. Usability
The representation should be user-friendly, facilitating easy interaction and manipulation
by users or systems. Usability includes intuitive interfaces and straightforward query
mechanisms.


Example: In a medical knowledge base, clarity and precision are crucial to ensure
that diagnoses and treatment recommendations are accurate and reliable.

Ontological Engineering
Ontological Engineering is concerned with the creation and management of ontologies,
which are structured frameworks for organizing knowledge. Here are the key points:

1. Definition: Ontological Engineering involves designing and implementing ontologies to represent knowledge domains.

2. Categories: Identifies and defines categories (e.g., "Person," "Organization") that group related concepts.

3. Objects: Specifies objects within these categories, detailing their properties and attributes.

4. Relationships: Establishes relationships between categories and objects (e.g., "works for," "is a member of").

5. Semantic Clarity: Ensures that the ontology provides clear and consistent definitions for all terms and relationships.

6. Reusability: Designed for reuse across different applications and systems, enhancing interoperability.

7. Scalability: Capable of growing and adapting as new concepts and relationships are introduced.

Example: Defining categories like "Person" and "Organization" and relationships such as "works for" in an ontology for a corporate knowledge base.


Categories and Objects


Categories help organize objects into groups based on shared characteristics. Objects are
individual instances that belong to these categories.

• Categories: Groupings of entities with common properties.

• Objects: Specific entities or instances within a category.

• Hierarchy: Categories can be arranged in a hierarchical structure (e.g., "Animal" > "Mammal" > "Dog").

• Attributes: Both categories and objects have attributes that describe their prop-
erties (e.g., ”Dog” might have attributes like ”breed,” ”size”).


• Relationships: Objects can have relationships with other objects within or across
categories (e.g., ”Dog” might be related to ”Owner”).

• Inheritance: Objects inherit characteristics from their categories (e.g., a "Dog" inherits properties from "Animal").

• Examples: Categories help in organizing and retrieving information efficiently.

Example: A "Dog" is an instance of the "Animal" category. "Animal" is a category that includes various types of animals, and "Dog" is a specific example within this category.

Events
Events represent occurrences or actions that happen in the world and are used in knowl-
edge representation to model time-dependent phenomena.

• Definition: Events are actions or occurrences that have effects on the world.

• Temporal Aspect: Events are often time-dependent, meaning they occur at specific times or intervals.

• State Changes: Events can cause changes in the state of objects or the environ-
ment.

• Causality: Events can have causal relationships, where one event leads to another.

• Representation: Events are often represented in knowledge bases to capture the dynamics of a system.

• Tracking: Tracking events helps in understanding and predicting future states based on past occurrences.

• Examples: Events are used in various domains, including planning, scheduling, and simulations.

Example: ”Rain” is an event that causes the ”Ground Wet” event.

Mental Events and Mental Objects


Mental events and mental objects refer to internal states or occurrences within an agent’s
mind, such as beliefs, desires, or intentions.

• Mental Events: Internal occurrences or changes in an agent's mind, such as thoughts, beliefs, or intentions.

• Mental Objects: Concepts or entities within the mind, like beliefs or desires, that
an agent can think about.


• Beliefs: Representations of the agent's understanding or knowledge about the world.

• Desires: Internal states reflecting the agent’s wants or goals.

• Intentions: Plans or decisions made by the agent to achieve certain goals.

• Representation: Mental events and objects are often used to model cognitive
processes in artificial intelligence.

• Influence: Mental states can influence decision-making and behavior.

Example: ”Believing it will rain” is a mental event that reflects an agent’s internal
state or thought process.

Reasoning Systems for Categories


Reasoning systems for categories enable AI systems to make logical inferences based on
the hierarchical and relational structure of categories. These systems leverage the cate-
gorization of objects to deduce new information or make decisions.

• Categorization: Objects are grouped into categories based on shared characteristics.

• Inference Rules: Apply rules based on the relationships between categories (e.g., "If an object belongs to category A and category A has property X, then the object also has property X").

• Hierarchy Utilization: Use hierarchical relationships (e.g., subcategories) to make inferences (e.g., if a subcategory inherits properties from a parent category).

• Logical Deduction: Draw logical conclusions based on the known properties of categories and their relationships.

• Property Inheritance: Categories inherit properties from their parent categories (e.g., if "Animal" can move, then "Dog," as a type of "Animal," can move).

• Consistency Checking: Ensure that inferences are consistent with the defined relationships and properties within the category system.

• Application: Used in various AI applications, including classification systems, decision-making, and expert systems.

Example: If "Dog" is a subcategory of "Animal," and "Animal" can move, then it logically follows that a "Dog," as a member of the "Animal" category, can also move.


Reasoning with Default Information


Default reasoning involves making assumptions based on typical or default cases when
complete information is unavailable. These assumptions can be revised or retracted when
new or more specific information becomes available.

• Default Assumptions: Start with typical or common scenarios (e.g., assuming that a bird can fly).

• Revisability: Default assumptions can be overridden by more specific information (e.g., discovering that a specific bird does not fly).

• Applicability: Used when complete knowledge is not available, allowing for reasonable assumptions based on general knowledge.

• Flexibility: Provides a mechanism to handle incomplete or uncertain information while remaining adaptable to new data.

• Reasoning Process: Involves making initial assumptions and then updating them as new information is acquired.

• Handling Exceptions: Allows for the identification and processing of exceptions to default rules (e.g., exceptions like penguins that cannot fly).

• Examples in AI: Used in expert systems, decision-making processes, and problem-solving scenarios where all details are not known upfront.

Example: Assuming that a bird can fly is a default reasoning approach. However,
when encountering a penguin, this assumption is revised based on specific knowledge
about penguins being flightless.



Subject: Artificial Intelligence
Subject Code: BCAI501

UNIT 4: SOFTWARE AGENTS

Syllabus:

• Architecture for Intelligent Agents

• Agent communication

• Negotiation and Bargaining

• Argumentation among Agents

• Trust and Reputation in Multi-agent systems

What is an Intelligent Agent?


An intelligent agent is a system capable of perceiving its environment through
sensors and acting upon that environment using actuators to achieve specific goals.
The agent uses artificial intelligence techniques to process data, reason, learn, and
make decisions autonomously.
Example: A thermostat is a simple example of an intelligent agent. It senses the
temperature in a room and automatically adjusts the heating or cooling system to
maintain a desired temperature.


Four Rules for an AI Agent:


AI agents operate based on four key rules:

1. Perceive the environment: The agent continuously observes its environment using sensors to gather information.

2. Reason and plan: Based on its observations, the agent must reason and plan its actions by evaluating possible outcomes and determining the best course of action.

3. Take action: The agent acts upon the environment using actuators, such as motors, software commands, or signals.

4. Learn and adapt: A key component of an intelligent agent is the ability to learn from past actions and adapt its behavior over time to improve performance and decision-making.

Example: A self-driving car uses its sensors (cameras, radar, etc.) to perceive
the environment, plans routes based on traffic data, takes actions like steering and
braking, and learns from driving experiences to improve its performance.


Architecture for Intelligent Agents


1. Architecture for Intelligent Agents:
Intelligent agents are systems that perceive their environment and take actions
autonomously to achieve specific goals. The architecture of these agents defines
the structure, components, and behavior that allow them to operate efficiently. A
well-designed architecture ensures the agent can handle complex tasks, adapt to
changing environments, and learn from experience.
Components of Agent Architecture:

1. Perception: The ability of an agent to sense or perceive the environment. This is done using sensors (for physical agents) or input data (for software agents). It allows the agent to receive information about the world.

2. Reasoning: Agents need to analyze and make decisions based on input data. This includes logical reasoning, planning, and prediction, often using algorithms or AI techniques such as machine learning or rule-based systems.

3. Learning: Agents have the ability to improve their performance over time by learning from their experiences. They adapt their strategies or behavior based on feedback or patterns detected in data.

4. Memory: Some agents need to store information about past actions, observations, or decisions. Memory helps in learning and can be used for long-term strategy planning or adapting behavior in recurring situations.

5. Action: The physical or virtual actions that an agent takes to achieve its goals. For a physical agent, this could involve moving objects or interacting with devices. For a software agent, actions could include sending messages, updating records, or triggering specific events.

6. Communication: Many intelligent agents interact with other agents or humans. This requires a communication mechanism to share information, request help, or coordinate actions. Communication can be direct (e.g., through messages) or indirect (e.g., via environmental changes).

7. Autonomy: Agents can operate without direct human intervention. They are capable of taking decisions and actions based on their perception and reasoning capabilities, aiming to achieve set goals.

Example: Consider a self-driving car. Its architecture would include:

• Perception: Sensors detect other vehicles, pedestrians, road signs, and lane markings. The car uses LIDAR, cameras, and radar to perceive its surroundings.

• Reasoning: Algorithms analyze the best route based on real-time traffic data, road conditions, and safety requirements. The car predicts the behavior of nearby objects (e.g., cars, pedestrians).

• Learning: The car learns from past driving experiences to improve its navigation and decision-making over time. For example, it can learn how to handle specific intersections more efficiently.

• Memory: The car stores past data on routes, traffic patterns, and sensor input, helping it refine its driving decisions and anticipate future conditions.

Agent Communication
2. Agent Communication:
Agents need to communicate with each other to share information, collaborate,
and coordinate their actions. Communication can be either verbal, using a shared
language, or non-verbal through signals and changes in the environment.
Key Communication Protocols:

1. KQML (Knowledge Query and Manipulation Language): A high-level language used for agent communication, enabling agents to share knowledge, ask queries, and perform actions based on responses.

2. FIPA-ACL: A standard communication language developed by the Foundation for Intelligent Physical Agents (FIPA), used for exchanging information and messages between agents.

3. Message Passing: Agents can send and receive messages to request actions or provide information to other agents in the system.

4. Direct Communication: Agents communicate directly with each other using predefined protocols to share real-time information.

5. Indirect Communication (Stigmergy): Agents leave traces in the environment that influence the behavior of other agents, common in swarm intelligence systems.

Multi-agent Systems (MAS):


In multi-agent systems, multiple agents interact and cooperate to solve problems
or achieve goals that are too complex for a single agent. Each agent may have
specialized capabilities, and they communicate to share tasks, coordinate actions,
and avoid conflicts.
Example: In a multi-agent system for online shopping, one agent may request
product details from another agent, while a third agent processes payments. They
coordinate through communication protocols like FIPA-ACL to provide a seamless
shopping experience.


Negotiation and Bargaining


3. Negotiation and Bargaining:
Negotiation refers to the process where agents interact to reach a mutual agree-
ment. Bargaining is a specific form of negotiation where agents propose offers and
counteroffers to settle on a favorable outcome.
Types of Negotiation:

• Competitive Negotiation: Agents aim to maximize their own gain, often at the expense of the other party.

• Collaborative Negotiation: Agents work together to find a solution that benefits both parties equally, also known as a win-win scenario.

• Distributive Negotiation: A fixed amount of resources is divided between agents, leading to a win-lose outcome.

• Integrative Negotiation: Agents look for ways to create additional value, increasing the potential for both parties to benefit.

• Multi-party Negotiation: Involves more than two agents, where coordination and communication play a critical role in reaching agreements.

• Automated Negotiation: Agents use algorithms to negotiate autonomously, making decisions based on predefined strategies or learning mechanisms.

• Negotiation Tactics: Agents may use tactics such as persuasion, concessions, or threats to influence the outcome of the negotiation.

Example: In an e-commerce setting, buyer agents and seller agents negotiate the price of a product. Buyer agents propose lower prices, while seller agents respond with counteroffers at higher prices. They reach an agreement through a series of offers and counteroffers.
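A toy Python sketch of the alternating offers described in this example; the starting prices, concession steps, and the midpoint acceptance rule are all illustrative assumptions.

def negotiate(buyer_offer=50.0, seller_offer=100.0,
              buyer_step=5.0, seller_step=5.0, max_rounds=20):
    """Each round the buyer concedes upward and the seller concedes downward;
    the deal closes at the midpoint once the offers cross."""
    for round_number in range(1, max_rounds + 1):
        if buyer_offer >= seller_offer:
            return round_number, (buyer_offer + seller_offer) / 2
        buyer_offer += buyer_step      # buyer's counteroffer
        seller_offer -= seller_step    # seller's counteroffer
    return None                        # no agreement within the round limit

print(negotiate())   # (6, 75.0) with these illustrative parameters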

Argumentation among Agents


4. Argumentation among Agents:
Argumentation is the process where agents engage in structured dialogue to resolve
conflicts or establish the validity of a proposition.
Types of Argumentation:

• Persuasive Argumentation: Agents attempt to convince others to accept a particular claim or course of action.

• Deliberative Argumentation: Agents collaborate to explore multiple viewpoints before deciding what to do.

Example: Two agents in a healthcare system may argue about the best course of
treatment for a patient, presenting evidence and counterarguments until they agree
on the most suitable option.
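As an illustration only, the sketch below uses a deliberately simplified form of abstract argumentation (a hypothetical attack graph rather than a real dialogue protocol): an argument is accepted once all of its attackers have been defeated:

# Simplified abstract argumentation: an argument is accepted if all of its
# attackers end up rejected. The attack graph below is purely illustrative.

attacks = {
    "A": ["B"],   # A attacks B (e.g., "treatment X has serious side effects")
    "B": ["C"],   # B attacks C
    "C": [],
}

def grounded_accepted(attacks):
    arguments = set(attacks)
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for arg in arguments - accepted - rejected:
            attackers = {a for a, targets in attacks.items() if arg in targets}
            if attackers <= rejected:          # every attacker already defeated
                accepted.add(arg); changed = True
            elif attackers & accepted:         # attacked by an accepted argument
                rejected.add(arg); changed = True
    return accepted

print(grounded_accepted(attacks))   # A and C are accepted: A is unattacked, A defeats B, so C survives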


Trust and Reputation in Multi-Agent Systems


5. Trust and Reputation in Multi-agent Systems:
In multi-agent systems, trust refers to an agent’s confidence in another’s actions,
while reputation is the collective belief about an agent’s reliability based on its past
behavior.
Building Trust:

• Trust develops through consistent and reliable interactions.

• Reputation is built through feedback or ratings from other agents.

• Trust and reputation systems help agents decide whom to collaborate with
in uncertain environments.

Example: In a peer-to-peer network, agents with a high reputation are trusted more for data sharing. New agents can gain trust based on recommendations from others.
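A minimal sketch of a rating-based reputation score is shown below; the averaging rule and the neutral prior are assumptions made for illustration, not a standard trust model:

# Toy reputation model: each agent's reputation is the average of ratings it has
# received, pulled toward a neutral prior when there are few ratings.

class ReputationStore:
    def __init__(self, prior=0.5, prior_weight=2):
        self.ratings = {}                 # agent name -> list of ratings in [0, 1]
        self.prior = prior                # assumed neutral starting reputation
        self.prior_weight = prior_weight  # how strongly the prior counts

    def add_rating(self, agent, score):
        self.ratings.setdefault(agent, []).append(score)

    def reputation(self, agent):
        scores = self.ratings.get(agent, [])
        total = sum(scores) + self.prior * self.prior_weight
        return total / (len(scores) + self.prior_weight)

store = ReputationStore()
for s in (1.0, 0.9, 1.0):            # positive feedback from three interactions
    store.add_rating("peer-A", s)
store.add_rating("peer-B", 0.2)      # one poor interaction
print(store.reputation("peer-A"))    # high: trusted for data sharing
print(store.reputation("peer-B"))    # low: other agents may avoid it
print(store.reputation("new-peer"))  # unknown agent falls back to the neutral prior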

Artificial Intelligence (BCAI501)
UNIT 5

Syllabus

• AI Applications

• Language Models

• Information Retrieval

• Information Extraction

• Natural Language Processing

• Machine Translation

• Speech Recognition

• Robots and Hardware

• Perception

• Planning and Moving

AI Applications
Artificial Intelligence (AI) has a vast range of applications across different sectors. The
following are some of the most significant applications of AI:


1. Language Models
Language models are algorithms designed to understand, generate, and process human
language. They play a crucial role in various tasks like speech recognition, language
translation, and text generation. One of the most powerful examples is GPT (Generative
Pretrained Transformer), which can generate human-like text.
Language models are essential components in the field of Natural Language Processing
(NLP). Their primary function is to predict the likelihood of a sequence of words. By
doing so, they help in understanding and generating meaningful text.

How Language Models Work:
Language models work by analyzing large datasets of text, learning the patterns and structure of language. They create mathematical representations of words and phrases, allowing them to predict the next word in a sentence based on the previous ones. This predictive capability is foundational for various applications.

Applications:

• Speech Recognition: Language models improve the accuracy of recognizing spoken language by predicting the next possible word in the sentence.

• Machine Translation: They help in translating languages by understanding context and structure in the original text and generating accurate translations.

• Text Generation: Advanced models like GPT can generate human-like text, enabling applications in content creation, chatbots, and virtual assistants.
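To make the "predict the next word" idea concrete, here is a minimal bigram model sketch trained on a tiny made-up corpus; real language models such as GPT learn far richer representations with neural networks:

# Tiny bigram language model: count word pairs in a toy corpus and predict the
# most likely next word. Real models learn context far beyond the previous word.

from collections import defaultdict, Counter

corpus = "the robot sees the door . the robot opens the door .".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1            # count how often nxt follows prev

def predict_next(word):
    if word not in bigrams:
        return None
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))     # prints a likely follower of "the" in the toy corpus
print(predict_next("robot"))   # prints a likely follower of "robot"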


2. Information Retrieval
Information retrieval refers to the process of obtaining relevant information from large
datasets. It is widely used in various applications, most notably search engines. Here are
key points about information retrieval:

• Definition: The process of fetching relevant data or information from a large repository, typically in response to user queries.

• AI Algorithms: Advanced algorithms, often AI-based, help identify the most pertinent information by analyzing the query and dataset.

• Applications: Commonly used in search engines, library databases, and e-commerce platforms.

• Relevance Ranking: Algorithms assess relevance by matching keywords, context, and other factors to rank the results.

• Efficiency: The goal is to provide fast and accurate retrieval, minimizing irrelevant or redundant information.

• Example: Google Search uses AI algorithms to retrieve the most relevant web pages based on user input.
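A minimal retrieval sketch follows, ranking a few made-up documents against a query with hand-written TF-IDF scoring; production search engines combine many more signals:

# Minimal TF-IDF retrieval: score each document against a query and rank results.

import math
from collections import Counter

documents = {
    "doc1": "ai improves search engines and ranking",
    "doc2": "library databases store books and journals",
    "doc3": "e-commerce platforms use ai for product search",
}

tokenized = {d: text.split() for d, text in documents.items()}

def tf_idf_score(query, doc_tokens, all_docs):
    score = 0.0
    for term in query.split():
        tf = doc_tokens.count(term) / len(doc_tokens)        # term frequency in this document
        df = sum(1 for toks in all_docs.values() if term in toks)
        if df:
            idf = math.log(len(all_docs) / df)               # rarer terms weigh more
            score += tf * idf
    return score

query = "ai search"
ranking = sorted(tokenized, key=lambda d: tf_idf_score(query, tokenized[d], tokenized),
                 reverse=True)
print(ranking)   # documents ordered by estimated relevance to the query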

3. Information Extraction
Information extraction is the process of transforming unstructured data into structured,
meaningful information. Key points about information extraction include:

• Definition: Extracting specific, structured data (e.g., names, dates, entities) from
unstructured text or documents.

• Purpose: Helps in organizing large text datasets by identifying key information and making it accessible for further processing.

• Techniques: Utilizes natural language processing (NLP), machine learning, and pattern recognition techniques to extract relevant data.

• Applications: Common in fields such as legal document analysis, biomedical research, and data mining.

• Automation: Reduces the manual effort required to sift through large amounts of text for specific information.

• Example: AI systems used in legal document analysis extract key information, such as contract terms or important dates, from contracts or case files.
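The sketch below shows rule-based extraction with regular expressions over a made-up contract sentence; real systems typically combine such patterns with trained NLP models:

# Toy information extraction: use regular expressions to pull structured fields
# (dates, amounts, and a party name) out of unstructured contract text.

import re

text = ("This agreement is effective from 01/02/2024 and expires on 31/01/2025. "
        "The total contract value is $12,500 payable to Acme Ltd.")

dates = re.findall(r"\b\d{2}/\d{2}/\d{4}\b", text)
amounts = re.findall(r"\$[\d,]+", text)
parties = re.findall(r"\b[A-Z][a-z]+ Ltd\b", text)   # naive pattern for company names

record = {"dates": dates, "amounts": amounts, "parties": parties}
print(record)
# {'dates': ['01/02/2024', '31/01/2025'], 'amounts': ['$12,500'], 'parties': ['Acme Ltd']}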


4. Natural Language Processing (NLP)


Natural Language Processing (NLP) enables computers to understand and process human language. Key points about NLP include:

• Definition: A field of AI focused on the interaction between computers and human (natural) languages.

• Goal: To enable machines to interpret, analyze, and generate human language.

• Techniques: Involves text analysis, tokenization, part-of-speech tagging, and sentiment analysis.

• Applications: Widely used in virtual assistants, sentiment analysis, machine translation, and chatbots.

• Example: Siri and Alexa are powered by NLP to process voice commands and respond in natural language.
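A minimal sketch of two basic NLP steps, tokenization and lexicon-based sentiment analysis, is given below; the tiny positive and negative word lists are illustrative assumptions, not a trained model:

# Minimal NLP sketch: tokenize a sentence and score sentiment by counting
# words from small, hand-made positive/negative lexicons.

import re

POSITIVE = {"good", "great", "helpful", "fast"}
NEGATIVE = {"bad", "slow", "poor", "unhelpful"}

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())   # lowercase word tokens

def sentiment(text):
    tokens = tokenize(text)
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

review = "The assistant was fast and helpful, not slow at all."
print(tokenize(review))
print(sentiment(review))   # 'positive' (two positive hits, one negative hit)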


5. Machine Translation
Machine translation involves using AI to automatically translate text from one language
to another. Key points include:

• Definition: Automatic translation of text or speech from one language to another using AI models.

• Neural Networks: The use of neural machine translation has greatly improved the accuracy and fluency of translations.

• Applications: Used in global communication, content localization, and cross-language information retrieval.

• Challenges: Includes handling idiomatic expressions, context, and cultural nuances.

• Example: Google Translate uses neural machine translation to translate text between different languages.
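For contrast, the toy sketch below translates word by word with a small hand-made English-Spanish dictionary; its awkward output (wrong adjective order, no real grammar) illustrates why neural machine translation, which models whole sentences in context, performs far better:

# Naive word-for-word translation using a tiny hand-made dictionary.
# The wrong word order in the output shows why sentence-level neural models are needed.

EN_TO_ES = {
    "the": "el", "cat": "gato", "drinks": "bebe",
    "cold": "fría", "water": "agua",
}

def translate(sentence):
    words = sentence.lower().split()
    return " ".join(EN_TO_ES.get(w, f"<{w}?>") for w in words)

print(translate("the cat drinks cold water"))
# 'el gato bebe fría agua'  (a fluent translation would be 'el gato bebe agua fría')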

6. Speech Recognition
Speech recognition technology converts spoken language into text. Key points about speech recognition include:

• Definition: AI-driven systems that recognize and transcribe spoken language into written text.

• Training: AI models are trained using large datasets of speech to learn and recognize speech patterns.

• Applications: Common in virtual assistants, transcription services, and voice-activated controls.

• Accuracy: Accuracy has significantly improved with the use of deep learning and neural networks.

• Example: Dragon NaturallySpeaking is AI-based speech recognition software used for dictation and transcription.
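As a hedged example, the snippet below transcribes an audio file with the third-party SpeechRecognition package, assuming it is installed and that audio.wav is a placeholder file; the heavy lifting is done by a cloud recognizer backed by deep neural networks:

# Transcribing an audio file with the SpeechRecognition package
# (pip install SpeechRecognition). "audio.wav" is a placeholder file name;
# recognize_google sends the audio to a web-based recognition service.

import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("audio.wav") as source:
    audio = recognizer.record(source)          # read the whole file into memory

try:
    text = recognizer.recognize_google(audio)  # cloud-based recognizer
    print("Transcript:", text)
except sr.UnknownValueError:
    print("Speech was unintelligible")
except sr.RequestError as err:
    print("Recognition service unavailable:", err)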

7. Robots and Hardware


AI-powered robots can perceive their environment, make decisions, and autonomously
perform tasks. Key points about robots and hardware include:

• Definition: Robots integrated with AI systems that allow them to perceive, process, and interact with their surroundings.

• Capabilities: AI enables robots to perform complex tasks, such as navigation, object recognition, and decision-making.

• Autonomy: Robots can operate without human intervention, adapting to changes in their environment.

• Applications: Widely used in manufacturing, healthcare (e.g., surgery robots), logistics, and household chores.

• Advancements: AI advancements in machine learning and computer vision have improved robots’ efficiency and adaptability.

• Example: Boston Dynamics’ robots, like Spot, use AI to navigate complex terrains and complete tasks autonomously.

The robot hardware architecture is composed of multiple interconnected components that work in harmony to ensure proper functionality. The following key hardware components are involved:

• Electric Power Scheme: This component provides the necessary electrical energy
required to power the entire system. It supplies electricity to the PC control board
and other connected devices.

• PC Control Board: The control board acts as the central processing unit of the
robot. It receives input signals from various sensors and peripherals, processes the
data, and sends commands to other components. It interfaces with sensors like the
LIDAR and camera for environmental awareness and vision.

• LIDAR: The Light Detection and Ranging (LIDAR) sensor is used for mapping the environment and obstacle detection. It provides distance measurements that help the robot understand its surroundings and navigate safely.

• Camera: The camera provides visual input to the PC control board, enabling object recognition, video streaming, and image processing for tasks like navigation or manipulation.

• IMU (Inertial Measurement Unit): The IMU provides acceleration, angular velocity, and sometimes magnetic field information, which helps in tracking the robot’s orientation and movement. It is essential for maintaining balance and stability.

• STM32 Microcontroller: This microcontroller manages real-time control of the robot’s subsystems. It works in tandem with the PC control board to ensure smooth execution of motor commands, sensor readings, and overall control.

• Motor Drive: The motor drive system receives commands from the STM32 microcontroller to control the motors. It is responsible for driving the wheels or actuators of the robot, ensuring proper movement and speed control.

• Displayer: The displayer provides visual feedback to the user, showing important
system information or real-time data such as video feed, system status, or sensor
outputs.

• Sound Device: This device generates sound output based on the commands from
the PC control board. It can be used for communication with the user or for
providing audible alerts and notifications during operation.

In this system, the PC control board serves as the main interface that interacts with both input devices (LIDAR, Camera, IMU) and output devices (Motor drive, Displayer, Sound device). The STM32 microcontroller plays a crucial role in processing real-time tasks and ensuring that motor control is executed properly. Together, these components allow the robot to perceive its environment, make decisions, and execute movements effectively.
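A highly simplified perceive-decide-act loop is sketched below; the SensorSuite and MotorDrive classes are hypothetical stand-ins for the LIDAR/IMU drivers and the STM32-controlled motor drive described above, not a real robot API:

# Simplified robot control loop: read sensors, decide, then command the motors.
# SensorSuite and MotorDrive are hypothetical stand-ins for real hardware drivers.

import random

class SensorSuite:
    def front_distance_m(self):
        return random.uniform(0.1, 3.0)   # pretend LIDAR reading in metres

class MotorDrive:
    def set_speed(self, left, right):
        print(f"motor command: left={left:.1f} right={right:.1f}")

def control_step(sensors, motors, safe_distance=0.5):
    distance = sensors.front_distance_m()
    if distance < safe_distance:
        motors.set_speed(0.2, -0.2)       # obstacle ahead: turn in place
    else:
        motors.set_speed(0.5, 0.5)        # path clear: drive forward

sensors, motors = SensorSuite(), MotorDrive()
for _ in range(5):                        # a few iterations of the perceive-decide-act loop
    control_step(sensors, motors)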

8. Perception
Perception in AI refers to the ability of machines to interpret sensory inputs like images, sounds, and touch. Key points about perception include:

• Definition: The capability of AI systems to process and interpret data from sensory inputs.

• Sensory Inputs: Includes images, sounds, touch, and other environmental data.

• Computer Vision: Analyzes visual data from cameras to recognize and understand objects and scenes.

• Audio Processing: Involves interpreting sounds and speech for applications like voice assistants and audio analysis.

• Applications: Crucial in self-driving cars, facial recognition systems, and object detection technologies.

• Challenges: Includes handling diverse data types, varying conditions, and high-dimensional data.

• Example: Self-driving cars like Tesla use AI perception to detect obstacles, recognize traffic signs, and navigate roads.
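The snippet below is a small perception sketch using OpenCV edge detection, assuming the opencv-python package is installed and that scene.jpg is a placeholder image; real perception stacks in self-driving cars rely on deep networks rather than a single filter:

# Basic visual perception with OpenCV: load an image and detect edges, a common
# preprocessing step before recognizing objects or lane markings.
# "scene.jpg" is a placeholder file name.

import cv2

image = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("scene.jpg not found")

blurred = cv2.GaussianBlur(image, (5, 5), 0)   # reduce noise before edge detection
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

print("edge pixels:", int((edges > 0).sum()))
cv2.imwrite("scene_edges.jpg", edges)          # save the edge map for inspection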

9. Planning and Moving


Planning and moving in AI involve formulating action sequences to achieve goals and enabling robots to navigate environments. Key points include:

• Definition of Planning: The process of creating a sequence of actions to accomplish a specific objective.

• Action Formulation: Involves determining the necessary steps and decisions to reach the goal.

• Decision-Making: AI systems assess various options to choose the most effective action sequence.

• Definition of Moving: Refers to the robot’s ability to physically navigate and maneuver within an environment.

• Navigation: Includes path planning, obstacle avoidance, and real-time adjustments during movement.

• Applications: Used in autonomous vehicles, drones, and robotic systems for efficient operation and task completion.

• Example: Autonomous drones use AI to plan flight routes, avoid obstacles, and
adjust their path in real-time during flight.
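A minimal grid path-planning sketch using breadth-first search is given below; the grid layout and obstacles are made up, and practical planners add heuristics (e.g., A*) and real-time replanning:

# Grid path planning with breadth-first search: find a shortest obstacle-free
# route from start to goal. 0 = free cell, 1 = obstacle. The layout is illustrative.

from collections import deque

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def plan_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        current = queue.popleft()
        if current == goal:
            path = []
            while current is not None:           # walk back from goal to start
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
               and (nr, nc) not in came_from:
                came_from[(nr, nc)] = current
                queue.append((nr, nc))
    return None                                   # no obstacle-free route exists

print(plan_path(grid, start=(0, 0), goal=(3, 3)))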
