
Artificial Intelligence

Prepared By:
Reeta Mishra
[email protected]
CSE-3 yrs-C
Unit-1

• AI Definition, Problems
• Foundations of Artificial Intelligence
• Techniques, Models
• Defining a problem as a state space search
• Production system, Intelligent Agents
• Agents and Environments, Characteristics of agents
• Search methods
• Issues in the design of search problems
• Real-time example of handling a search problem
• Revision, Quiz-1
AI Definition:

• Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn
like humans. The goal of AI is to create systems that can perform tasks that typically require human intelligence, such as
visual perception, speech recognition, decision-making, and language translation. AI can be categorized into two main types:
Narrow AI, which is designed for a specific task, and General AI, which has the ability to perform any intellectual task that a
human being can.

Artificial + Intelligence = AI (man-made thinking)


Real-Time Example: Virtual Personal Assistants

• One prevalent real-time example of AI is virtual personal assistants. These AI systems are designed to understand natural
language and perform various tasks, helping users with daily activities. Let's take the example of "Siri" by Apple:
• Siri:
• Functionality: Siri is a virtual personal assistant that operates on Apple devices, such as iPhones, iPads, and Macs.
• Natural Language Processing (NLP): Siri utilizes advanced NLP algorithms to understand and respond to user commands
and questions in everyday language.
• Task Execution: Users can ask Siri to set reminders, send messages, make phone calls, schedule appointments, provide
weather updates, and more.
• Learning and Adaptation: Siri learns from user interactions and adapts its responses over time, improving its ability to
understand individual preferences and speech patterns.
• Context Awareness: Siri demonstrates a level of context awareness by remembering previous queries and maintaining a
conversational context during interactions.
• Integration with Third-Party Apps: Siri integrates with various third-party applications, allowing users to control and interact
with a wide range of services using voice commands.
Problems and Challenges in AI:

• Bias and Fairness:


• AI systems can inherit and perpetuate biases present in the data used for training. This can result in biased outcomes,
discriminatory decisions, and unfair treatment.
• Lack of Transparency:
• Many AI models, especially deep learning models, are considered black boxes because it's challenging to understand
how they reach specific decisions. Lack of transparency can lead to distrust and limit the adoption of AI in critical
areas.
• Ethical Concerns:
• The use of AI raises ethical questions, especially in areas such as privacy, surveillance, and the potential for job
displacement. Decisions made by AI systems can have significant social and ethical implications.
• Security Risks:
• AI systems can be vulnerable to attacks and manipulation. Adversarial attacks, where input data is subtly modified to
deceive the AI, are a growing concern. Ensuring the security of AI systems is crucial.
• Data Privacy:
• AI systems often rely on vast amounts of data. The collection and use of personal data raise concerns about privacy.
Unauthorized access or misuse of sensitive data can lead to serious consequences.
• Lack of Standardization:
• The absence of standardized protocols and guidelines for developing and deploying AI systems makes it
challenging to ensure consistency, interoperability, and accountability across the AI industry.
• Job Displacement:
• The automation of tasks through AI and robotics has the potential to displace certain jobs, leading to
unemployment and a need for reskilling the workforce.
• Regulatory Challenges:
• Developing appropriate regulations for AI is challenging due to the rapidly evolving nature of the technology.
Striking a balance between encouraging innovation and ensuring responsible use is a complex task for
policymakers.
• Robustness and Reliability:
• AI systems may not always perform reliably in real-world, dynamic environments. Ensuring the robustness of
AI models and their ability to handle unexpected situations is a significant challenge.
• Explainability:
• Understanding and explaining the decisions made by AI systems is crucial, especially in critical applications
like healthcare and finance. Achieving explainability in complex models remains a significant challenge.
Foundations of Artificial Intelligence

• The foundations of Artificial Intelligence (AI) are built upon several key principles, concepts, and technologies. Here's an
overview of the foundational elements of AI:
• Machine Learning (ML):
• Definition: Machine learning is a subset of AI that involves the development of algorithms that enable machines to
learn from data.
• Importance: ML algorithms allow systems to improve their performance on a specific task as they are exposed to
more data, without being explicitly programmed.
• Data:
• Role: Data is the fuel for AI systems. High-quality, diverse, and relevant data is essential for training accurate and
effective AI models.
• Importance: AI models rely on data to identify patterns, make predictions, and perform various tasks.
• Algorithms:
• Definition: Algorithms are step-by-step instructions or procedures for solving a specific problem or accomplishing a
particular task.
• Importance: AI algorithms enable machines to process information, make decisions, and learn from data.
• Neural Networks:
• Definition: Neural networks are computational models inspired by the structure and function of the human
brain. They are used in deep learning.
• Importance: Neural networks excel at tasks like image recognition, natural language processing, and
pattern recognition.
• Natural Language Processing (NLP):
• Definition: NLP is a subfield of AI that focuses on the interaction between computers and human language.
• Importance: NLP enables machines to understand, interpret, and generate human-like language,
facilitating communication between humans and machines.
• Computer Vision:
• Definition: Computer vision involves teaching machines to interpret and make decisions based on visual
data, such as images and videos.
• Importance: Computer vision is crucial for tasks like image recognition, object detection, and facial
recognition.
• Expert Systems:
• Definition: Expert systems are AI programs designed to mimic the decision-making abilities of a human
expert in a specific domain.
• Importance: Expert systems can provide specialized knowledge and recommendations, making them
valuable in fields like medicine and finance.
• Robotics:
• Definition: Robotics involves the design, construction, and operation of robots that can perform tasks
autonomously or semi-autonomously.
• Importance: AI-powered robots can navigate and interact with the physical world, contributing to fields such
as manufacturing, healthcare, and exploration.
• Ethics and Responsible AI:
• Importance: As AI systems become more prevalent, addressing ethical considerations, bias, transparency,
and accountability is crucial to ensure responsible development and deployment.
• Cognitive Computing:
• Definition: Cognitive computing is an interdisciplinary area that aims to simulate human thought processes
using AI algorithms.
• Importance: Cognitive computing systems can understand, reason, and learn, making them suitable for
complex problem-solving.
AI Techniques, Models

• Machine Learning (ML):


• Definition: ML is a subset of AI that focuses on the development of algorithms allowing systems to learn patterns and
make decisions without explicit programming.
• Types:
• Supervised Learning: Learn from labeled data with input-output pairs.
• Unsupervised Learning: Extract patterns from unlabeled data.
• Reinforcement Learning: Learn by interacting with an environment and receiving feedback.
• Deep Learning:
• Definition: Deep learning is a subset of ML that involves neural networks with multiple layers (deep neural networks).
• Models:
• Convolutional Neural Networks (CNNs): Excellent for image and video analysis.
• Recurrent Neural Networks (RNNs): Suitable for sequential data like time series and natural language.
• Natural Language Processing (NLP):
• Definition: NLP involves the interaction between computers and human language.
• Models:
• Transformer Models: E.g., BERT (Bidirectional Encoder Representations from Transformers), GPT
(Generative Pre-trained Transformer).
• Recurrent Neural Networks (RNNs): Often used for sequence-to-sequence tasks.
• Computer Vision:
• Definition: Computer vision enables machines to interpret and understand visual information.
• Models:
• Faster R-CNN (Region-based Convolutional Neural Network): Object detection model.
• YOLO (You Only Look Once): Another popular object detection model.
• Recommender Systems:
• Definition: Recommender systems predict and suggest items or content based on user preferences.
• Models:
• Collaborative Filtering: Recommends items based on user behavior and preferences.
• Content-Based Filtering: Recommends items similar to those the user has liked before.
• Expert Systems:
• Definition: Expert systems emulate the decision-making abilities of a human expert in a specific domain.
• Models:
• Rule-Based Systems: Use a set of predefined rules to make decisions.
• Knowledge Graphs: Represent knowledge in a structured format.
• Genetic Algorithms:
• Definition: Genetic algorithms are optimization algorithms inspired by the process of natural selection (see the code sketch after this list).
• Application: Used for search and optimization tasks.
• Swarm Intelligence:
• Definition: Swarm intelligence models are inspired by the collective behavior of decentralized, self-organized
systems.
• Models:
• Ant Colony Optimization (ACO): Used for optimization problems.
• Particle Swarm Optimization (PSO): Applied to optimization and search problems.
• Reinforcement Learning:
• Definition: Reinforcement learning involves agents learning by interacting with an environment and receiving
feedback in the form of rewards or penalties.
• Models:
• Deep Q Network (DQN): Used in discrete action spaces.
• Proximal Policy Optimization (PPO): Suitable for continuous action spaces.
• Adversarial Models:
• Definition: Adversarial models involve the creation of adversarial examples to test the robustness of AI
systems.
• Models:
• Generative Adversarial Networks (GANs): Used to generate realistic synthetic data.
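• Illustrative code (genetic algorithm): the minimal Python sketch below evolves a bit string toward an all-ones target. It is an illustration only; the population size, mutation rate, and fitness function are arbitrary choices for the example, not part of any standard library.

import random

# Minimal genetic algorithm: evolve a bit string toward all ones.
# Population size, mutation rate, and generation count are illustrative choices.
TARGET_LEN = 20
POP_SIZE = 30
MUTATION_RATE = 0.02
GENERATIONS = 100

def fitness(individual):
    # Fitness = number of 1s; the optimum is a string of all ones.
    return sum(individual)

def crossover(parent_a, parent_b):
    # Single-point crossover.
    point = random.randint(1, TARGET_LEN - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(individual):
    # Flip each bit with a small probability.
    return [1 - bit if random.random() < MUTATION_RATE else bit
            for bit in individual]

def select(population):
    # Tournament selection of size 2.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]
    best = max(population, key=fitness)
    if fitness(best) == TARGET_LEN:
        break

print(f"Best individual after {generation + 1} generations:", best)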
Defining a Problem as a State Space Search

• Defining a problem as a state space search is a common approach in artificial intelligence and computer science. This framework is particularly useful for problems that involve reaching a goal state through a sequence of actions.
• Problem Representation:
• State Space: A state space is a set of all possible states that a system can be in. Each state represents a particular
configuration or situation.
• Initial State: The starting point of the problem.
• Goal State: The desired or target state that the system aims to reach.
• Operators or Actions:
• Operators: These are the actions or transitions that can be applied to move from one state to another. Operators
define the permissible moves or changes in the system.
• Path:
• Path: A path is a sequence of states connected by a series of operators. It represents a solution or a trajectory from
the initial state to the goal state.
• Search Space:
• Search Space: The search space encompasses all possible paths and states that can be explored to find a solution. It
is the set of all potential sequences of actions.
• Search Algorithm:
• Search Algorithm: An algorithm is employed to systematically explore the search space, evaluating paths
and states to find a solution efficiently.
• Example: The 8-Puzzle Problem:
• Let's illustrate the concept using the classic 8-puzzle problem, where you have a 3x3 grid with eight numbered
tiles and one empty space. The goal is to rearrange the tiles from the initial configuration to a specified goal
configuration.
• State Space: All possible arrangements of the 8 tiles and the empty space.
• Initial State: The starting arrangement of the tiles.
• Goal State: The desired configuration of the tiles.
• Operators/Actions: Possible moves, such as sliding a tile into the empty space.
• Path: A sequence of moves from the initial state to the goal state.
• Search Algorithm: A search algorithm (e.g., A* search) is used to explore possible moves and find the optimal
path from the initial state to the goal state.
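• Illustrative code (8-puzzle as state space search): a minimal Python sketch that represents each state as a tuple of tile positions and solves the puzzle with breadth-first search; the start state shown is an arbitrary example two moves from the goal.

from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 denotes the empty space

def neighbors(state):
    # Generate every state reachable by sliding one tile into the blank.
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            swap = r * 3 + c
            new_state = list(state)
            new_state[blank], new_state[swap] = new_state[swap], new_state[blank]
            yield tuple(new_state)

def bfs(start):
    # Breadth-first search over the state space: explores level by level,
    # so the first solution found uses the fewest moves.
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == GOAL:
            return path
        for nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # some configurations are unreachable from the goal

start = (1, 2, 3, 4, 5, 6, 0, 7, 8)  # example initial state, two moves from the goal
path = bfs(start)
print(f"Solved in {len(path) - 1} moves")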
Production System

• A production system is a model used in artificial intelligence and computer science to represent the knowledge-based
components of an intelligent system. It consists of a set of production rules, which are conditional statements that describe
how the system should behave based on its current state and the incoming information. The key components of a production
system include:
• Production Rules:
• These rules are in the form of condition-action pairs. If certain conditions are met, then a specific action is taken.
• Working Memory:
• Working memory holds the current state or information about the system. It is a dynamic storage area where the
system can access and modify data.
• Inference Engine:
• The inference engine is responsible for matching the conditions of production rules with the contents of the working
memory. When a match is found, the associated action is triggered.
• Control Strategy:
• The control strategy defines the order in which production rules are applied. It determines how the system prioritizes
and selects rules for execution.
Example

• Example: Diagnostic System


• Let's consider a diagnostic system as an example.
• The production rules might be in the form of
• "IF symptoms are A, B, and C THEN diagnose as condition X."
• The working memory would contain information about the observed symptoms, and the inference engine would match this
information with the conditions in the rules to suggest a diagnosis.
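• Illustrative code (production system): a minimal Python sketch of the diagnostic system described above; the symptoms, rules, and diagnosis names are invented purely for illustration.

# Minimal production system: rules are condition-action pairs, working memory
# holds observed facts, and a simple inference engine fires matching rules.
# The symptoms and diagnoses below are invented for illustration only.

RULES = [
    ({"fever", "cough", "fatigue"}, "diagnose: flu"),
    ({"sneezing", "runny_nose"}, "diagnose: common cold"),
    ({"headache", "nausea"}, "diagnose: migraine"),
]

def inference_engine(working_memory):
    # Forward chaining: fire every rule whose conditions are all present
    # in working memory (control strategy here: simple rule order).
    conclusions = []
    for conditions, action in RULES:
        if conditions.issubset(working_memory):
            conclusions.append(action)
    return conclusions

working_memory = {"fever", "cough", "fatigue", "headache"}
print(inference_engine(working_memory))   # ['diagnose: flu']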
Intelligent Agents:

• An intelligent agent is a system that perceives its environment, reasons about it, and takes actions to achieve goals.
Intelligent agents are a fundamental concept in artificial intelligence and are designed to operate autonomously in dynamic
and unpredictable environments. Key components of intelligent agents include:
• Perception:
• The agent receives information about its environment through sensors. This input is used to build a representation of
the current state.
• Reasoning/Cognition:
• The agent processes the information, reasons about its current state, and decides on a course of action. This may
involve problem-solving, decision-making, and planning.
• Actuation/Action:
• The agent executes actions in the environment through actuators. These actions are chosen based on the agent's
goals and the perceived state of the environment.
• Learning:
• Intelligent agents often have the ability to learn from experience. This can involve adapting to changing environments,
improving performance over time, and acquiring new knowledge.
• Goal-Oriented:
• Intelligent agents have goals or objectives that guide their actions. They operate with the aim of achieving
specific outcomes in their environment.

• Example: Autonomous Driving System


• Consider an autonomous driving system as an example of an intelligent agent. The system perceives the
environment through sensors (such as cameras and lidar), reasons about the current traffic situation and road
conditions, and takes actions (steering, accelerating, braking) to navigate safely to a destination. The system may
learn from past experiences, adapting its behavior to different driving scenarios.
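• Illustrative code (intelligent agent loop): a minimal perceive-reason-act loop in Python using a toy thermostat agent; the simulated environment and the 21-degree goal are assumptions for the example and stand in for real sensors and actuators.

import random

# Toy intelligent agent: a thermostat that perceives a temperature,
# reasons against a goal, and acts to move the environment toward it.

class ThermostatAgent:
    def __init__(self, goal_temp=21.0):
        self.goal_temp = goal_temp          # goal-oriented behaviour

    def perceive(self, environment):
        return environment["temperature"]   # sensor reading

    def decide(self, temperature):
        # Simple reasoning: compare the perception with the goal.
        if temperature < self.goal_temp - 0.5:
            return "heat"
        if temperature > self.goal_temp + 0.5:
            return "cool"
        return "idle"

    def act(self, action, environment):
        # Actuation: the chosen action changes the environment state.
        delta = {"heat": 0.5, "cool": -0.5, "idle": 0.0}[action]
        environment["temperature"] += delta + random.uniform(-0.1, 0.1)

environment = {"temperature": 17.0}
agent = ThermostatAgent()
for step in range(20):
    reading = agent.perceive(environment)
    action = agent.decide(reading)
    agent.act(action, environment)
    print(f"step {step:2d}: temp={reading:5.2f}  action={action}")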
Agents and Environments

• In the context of artificial intelligence, agents and environments are fundamental concepts that describe the components and
interactions of intelligent systems. Let's explore the characteristics of agents and environments:
• Agents and Environments:
• Agent:
• Definition: An agent is an entity that perceives its environment, reasons about it, and takes actions to achieve goals.
Agents can be hardware or software entities capable of autonomous decision-making.
• Environment:
• Definition: The environment is the external system or context in which the agent operates. It includes everything
outside the agent that can potentially influence or be influenced by the agent's actions.
• Perception (Agent):
• Characteristic: Agents have sensors or perceptual mechanisms to gather information from their environment. This
information is used to build an internal representation of the external world.
• Action (Agent):
• Characteristic: Agents have actuators or effectors that allow them to perform actions in the environment.
These actions are chosen based on the agent's goals and its interpretation of the current state.
• State (Environment):
• Characteristic: The environment has a state that represents the current configuration or situation. This
state is influenced by the agent's actions and external factors.
• Dynamic (Environment):
• Characteristic: Environments can be dynamic, meaning they can change over time. The state of the
environment may evolve due to the agent's actions or external events.
• Observable (Environment):
• Characteristic: Environments may be fully observable or partially observable. In a fully observable
environment, the agent has complete information about the current state. In a partially observable
environment, some aspects may be hidden.
• Deterministic or Stochastic (Environment):
• Characteristic: Environments may be deterministic, where the next state is completely determined by the
current state and agent action, or stochastic, where there is some degree of randomness in state
transitions.
• Episodic or Sequential (Environment):
• Characteristic: Environments may be episodic, where each episode is a self-contained interaction without
influence from previous episodes, or sequential, where actions have consequences and influence future
states.
• Rationality (Agent):
• Characteristic: Agents are designed to be rational, meaning they choose actions that maximize the
expected utility or achieve their goals in the given environment.
• Example: Robotic Vacuum Cleaner
• Consider a robotic vacuum cleaner as an example. The vacuum cleaner (agent) perceives the environment through
sensors that detect dirt and obstacles. It reasons about its current state, decides on actions such as moving and
cleaning, and executes these actions using actuators. The environment, in this case, includes the room with its
layout, furniture, dirt, and obstacles.
• Understanding the characteristics of agents and environments is crucial for designing intelligent systems that can
operate effectively in a variety of contexts. These concepts provide a framework for modeling and analyzing the
interactions between intelligent entities and their surroundings.
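• Illustrative code (agent and environment): a minimal two-location vacuum world in Python, a standard textbook toy, showing the split between the environment (which holds the state) and a simple reflex agent (which perceives only its own square); the layout and the reflex rule are assumptions for the example.

# Two-location vacuum world: the environment holds the state (dirt and the
# agent's location); the agent perceives only its current square and acts.

class VacuumEnvironment:
    def __init__(self):
        self.dirt = {"A": True, "B": True}   # both squares start dirty
        self.location = "A"

    def percept(self):
        # Partial observability: the agent sees only its own square.
        return self.location, self.dirt[self.location]

    def execute(self, action):
        if action == "Suck":
            self.dirt[self.location] = False
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"

def reflex_vacuum_agent(percept):
    # Simple reflex rule: clean if dirty, otherwise move to the other square.
    location, dirty = percept
    if dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

env = VacuumEnvironment()
for _ in range(4):
    p = env.percept()
    a = reflex_vacuum_agent(p)
    print(f"percept={p} -> action={a}")
    env.execute(a)
print("Remaining dirt:", env.dirt)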
Search Methods
• In the field of Artificial Intelligence (AI), various methods and techniques are used to solve problems, learn
from data, and make intelligent decisions. Here are some common search methods in AI:
• Brute Force Search: This method involves systematically examining all possible solutions until a satisfactory
solution is found. It is typically used for small search spaces due to its high computational complexity.
• Breadth-First Search (BFS): BFS explores all neighbor nodes at the present depth before moving on to nodes at the next depth level. It guarantees the shortest path to the goal in an unweighted graph (see the code sketch after this list).
• Depth-First Search (DFS): DFS explores as far as possible along each branch before backtracking. It uses
less memory compared to BFS but may get trapped in infinite loops in graphs with cycles.
• Iterative Deepening Depth-First Search (IDDFS): IDDFS is a combination of BFS and DFS. It performs DFS
to a certain depth and iteratively increases the depth limit until the goal is found.
• A* Search Algorithm: A* is a heuristic search algorithm that finds the least-cost path from the initial node to the goal node. It evaluates nodes by combining the cost to reach the node and the estimated cost to reach the goal through that node.
• Greedy Best-First Search: Greedy Best-First Search selects the path that appears to be the best, based on
heuristic information, without considering the overall cost to reach the goal.
• Beam Search: Beam Search is a heuristic search algorithm that explores a graph by expanding the most promising node in
a limited set of nodes called the beam width.
• Constraint Satisfaction Problems (CSP): CSP is a technique for solving problems where variables need to be assigned
values subject to constraints. Backtracking and constraint propagation are common methods used in CSP.
• Genetic Algorithms (GA): GA is a metaheuristic search algorithm inspired by the process of natural selection. It uses
techniques such as mutation, crossover, and selection to evolve solutions to optimization and search problems.
• Simulated Annealing: Simulated Annealing is a probabilistic optimization algorithm that mimics the annealing process in
metallurgy. It is used to find near-optimal solutions by allowing uphill moves with decreasing probability as the algorithm
progresses.
• Tabu Search: Tabu Search is a metaheuristic search method that guides a local heuristic search procedure to explore the
solution space beyond local optimality.
• Ant Colony Optimization (ACO): ACO is a metaheuristic inspired by the foraging behavior of ants. It is used to find good
paths through graphs by simulating the way ants find paths between their colonies and food sources.
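• Illustrative code (breadth-first search): the minimal Python sketch below, referenced in the BFS entry above, finds the shortest path in an unweighted graph; the example graph is invented for the sketch.

from collections import deque

# Breadth-first search on an unweighted graph: explores nodes level by level,
# so the first time the goal is reached, the path found is the shortest.
GRAPH = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [],
    "E": ["F"],
    "F": [],
}

def bfs_shortest_path(start, goal):
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in GRAPH[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

print(bfs_shortest_path("A", "F"))   # ['A', 'C', 'F']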
Issues in the design of search problems in AI

• The design of search problems in AI involves various considerations and challenges. Here are some of the key issues:
• Problem Representation: One of the fundamental aspects of designing a search problem is how the problem is
represented. The representation should capture all relevant information about the problem domain while also being efficient
for search algorithms to operate on.
• State Space Complexity: The size and complexity of the state space impact the efficiency of search algorithms. Problems
with large state spaces require more sophisticated search techniques to explore the space effectively.
• Search Space Structure: The structure of the search space, including branching factors and depth, affects the choice of
search algorithm. Irregular or complex structures may require specialized search techniques or heuristics to guide the search
effectively.
• Search Algorithm Selection: Choosing the appropriate search algorithm depends on various factors such as problem
characteristics, available computational resources, and desired solution quality. Different algorithms have different strengths
and weaknesses in terms of time complexity, space complexity, and optimality guarantees.
• Heuristic Information: In many search problems, heuristic information can be used to guide the search process by estimating the cost or value of states (a small example follows this list). Designing effective heuristics requires domain knowledge and understanding of the problem structure.
• Optimality vs. Efficiency: There is often a trade-off between finding an optimal solution and the computational resources
required to do so. In some cases, it may be acceptable to sacrifice optimality for efficiency, especially in problems with large
state spaces.
• Incomplete Information: Some search problems involve incomplete or uncertain information about the environment.
Dealing with uncertainty requires specialized search techniques, such as probabilistic search algorithms or methods for
handling partial observability.
• Dynamic Environments: In dynamic environments where the state of the system changes over time, search algorithms
must be able to adapt to changes and make decisions in real-time. This may involve techniques such as online search or
replanning.
• Multiple Objectives: In multi-objective search problems, there may be conflicting goals or objectives that need to be
optimized simultaneously. Designing effective search algorithms for multi-objective optimization requires considering trade-
offs between competing objectives.
• Scalability: As the size of the problem increases, the scalability of search algorithms becomes a critical issue. Scalable
search techniques are necessary to handle large-scale problems efficiently.
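• Illustrative code (heuristic design): a minimal Manhattan-distance heuristic for the 8-puzzle, matching the state encoding of the earlier 8-puzzle sketch; it is admissible because every misplaced tile must move at least that many squares, so it never overestimates the true cost.

# Manhattan-distance heuristic for the 8-puzzle: the sum, over all tiles,
# of the grid distance between a tile's current position and its goal
# position. Because it never overestimates, A* remains optimal with it.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 denotes the empty space

def manhattan_distance(state):
    total = 0
    for index, tile in enumerate(state):
        if tile == 0:
            continue          # the blank does not contribute
        goal_index = GOAL.index(tile)
        row, col = divmod(index, 3)
        goal_row, goal_col = divmod(goal_index, 3)
        total += abs(row - goal_row) + abs(col - goal_col)
    return total

print(manhattan_distance((1, 2, 3, 4, 5, 6, 0, 7, 8)))   # 2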
Real-time example of handling a search problem: route planning

• Search Problem: The search problem in route planning involves finding the shortest or fastest path from a starting location
to a destination while considering various factors such as distance, traffic conditions, road closures, and user preferences.
• Solution Approach:
• Problem Representation: The map data is represented as a graph, where nodes represent intersections or locations, and
edges represent roads connecting them. Each edge has associated attributes such as distance, speed limit, and current
traffic conditions.
• Search Algorithm Selection: A popular algorithm for solving route planning problems is A* (A-star) search algorithm. A*
combines the benefits of both breadth-first and heuristic search, efficiently exploring the search space while using heuristics
to guide the search towards the goal.
• Heuristic Information: In route planning, heuristics such as the straight-line distance (or estimated travel time) between the
current node and the goal node are used to guide the search. This heuristic helps A* prioritize nodes that are closer to the
goal.
• Dynamic Environment Handling: Mapping applications like Google Maps continuously receive real-time updates about
traffic conditions, accidents, road closures, and other events that affect travel time. The search algorithm dynamically adjusts
its pathfinding based on this information to provide users with the most up-to-date routes.
• Multiple Objectives: Some mapping applications allow users to specify preferences such as avoiding toll roads, highways,
or preferring scenic routes. The search algorithm incorporates these preferences while finding the optimal path.
• Scalability: Mapping applications need to handle large-scale maps and millions of route requests simultaneously. Efficient
data structures and algorithms are used to ensure scalability and responsiveness.
• User Interaction: Mapping applications provide user interfaces where users can input their starting point, destination, and
preferences. The application then uses the search algorithm to compute and display the optimal route on the map.
• Feedback Loop: Mapping applications often incorporate user feedback and historical data to improve route
recommendations over time. This feedback loop helps the system learn from user behavior and adapt its route planning
strategies accordingly.
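• Illustrative code (A* route planning): a minimal A* sketch over a toy road graph in the spirit of the discussion above; the graph, edge costs, and straight-line-distance heuristic values are invented for the example and are not real map data.

import heapq

# Toy road network: edges carry travel costs; HEURISTIC approximates the
# straight-line distance to the goal "F". All numbers are invented.
ROADS = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("D", 8), ("E", 3)],
    "D": [("F", 3)],
    "E": [("F", 6)],
    "F": [],
}
HEURISTIC = {"A": 7, "B": 6, "C": 5, "D": 3, "E": 5, "F": 0}

def a_star(start, goal):
    # Priority queue ordered by f = g (cost so far) + h (heuristic estimate).
    frontier = [(HEURISTIC[start], 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in ROADS[node]:
            new_g = g + cost
            if new_g < best_cost.get(neighbor, float("inf")):
                best_cost[neighbor] = new_g
                f = new_g + HEURISTIC[neighbor]
                heapq.heappush(frontier, (f, new_g, neighbor, path + [neighbor]))
    return None, float("inf")

path, cost = a_star("A", "F")
print(path, cost)   # ['A', 'C', 'E', 'F'] 11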
