Introduction To AI


When and Who proposed AI?

• John McCarthy, an American computer scientist, pioneer, and inventor, coined the term Artificial Intelligence (AI) in his 1955 proposal for the 1956 Dartmouth Conference, the first artificial intelligence conference.

• According to him, AI is "the science and engineering of making intelligent machines, especially intelligent computer programs."

TEXTBOOKS/LEARNING RESOURCES:
a) T. M. Mitchell, Machine Learning (1st ed.), McGraw Hill, 2017. ISBN 978-1259096952.
b) E. Alpaydin, Introduction to Machine Learning (4th ed.), PHI, 2020. ISBN 978-8120350786.

REFERENCE BOOKS/LEARNING RESOURCES:
a) Oswald Campesato, Artificial Intelligence, Machine Learning, and Deep Learning (1st ed.), Mercury Learning & Information, 2020. ISBN 9781683924665.

Artificial Intelligence

Artificial Intelligence = Man-made + Thinking power

AI is a branch of information technology by which we can create intelligent machines that can think like a human, behave like a human, and make decisions on their own.

Brief History of AI

• 1943: McCulloch & Pitts: Boolean circuit model of the brain
• 1950: Turing's "Computing Machinery and Intelligence"
• 1952–69: "Look, Ma, no hands!"
• 1950s: Early AI programs, including Samuel's checkers program, Newell & Simon's Logic Theorist, Gelernter's Geometry Engine
• 1956: Dartmouth meeting: the term "Artificial Intelligence" adopted

Brief History of AI

• 1965: Robinson's complete algorithm for logical reasoning
• 1966–74: AI discovers computational complexity; neural network research almost disappears
• 1969–79: Early development of knowledge-based systems
• 1980–88: Expert systems industry booms
• 1988–93: Expert systems industry busts: "AI Winter"
• 1985–95: Neural networks return to popularity
• 1988–: Resurgence of probability; general increase in technical depth; "Nouvelle AI": ALife, GAs, soft computing
• 1995–: Agents…

Why Learn AI? & Intelligence?

• AI is capable of learning through data
• AI is capable of teaching itself
• AI can respond in real time
• AI can achieve a greater degree of accuracy
• Ability to take decisions
• Ability to prove results
• Ability to think logically
• Ability to learn and improve


Forms of Intelligence

• Linguistic–verbal intelligence: writers, journalists, etc.
• Musical intelligence: musicians, singers, etc.
• Logical–mathematical intelligence: mathematicians, engineers, etc.
• Visual–spatial intelligence: artists, architects, etc.
• Bodily–kinesthetic intelligence: players, dancers, etc.
• Intra-personal intelligence: having knowledge about the self, such as scientists and writers.
• Inter-personal intelligence: skills of analyzing others, such as psychologists and philosophers.

Applications of AI

Artificial Intelligence:
• Natural Language Processing (NLP).
• Computer Vision for object detection.
• Reinforcement Learning for optimizing control systems.
• AI-powered recommendation systems for personalized content suggestions.

Cloud Computing:
• AI-based resource allocation and workload optimization in cloud infrastructure.
• Anomaly detection to identify unusual activities and potential security breaches.
• Predictive maintenance for cloud servers and data centers.

Applications of AI

• Education
• Healthcare
• Automobiles
• E-Commerce, etc.

Blockchain:
• AI for analyzing and improving blockchain consensus protocols.
• Smart contracts with AI components for self-executing agreements.
• ML-based fraud detection and security enhancement in blockchain networks.

Cyber Security:
• AI-powered threat detection and intrusion prevention systems.
• ML for analyzing patterns of cyber attacks and identifying potential vulnerabilities.
• Anomaly detection to identify unusual behavior in network traffic or user activities.

Applications of AI

Data Science:
• Predictive modeling and forecasting based on large datasets.
• AI-driven data analysis to extract valuable insights and patterns from complex data.
• Automated data cleaning and preprocessing using AI techniques.

Gaming:
• AI opponents and NPCs (Non-Player Characters) with adaptive behaviors and decision-making.
• Content generation to create dynamic game environments.
• AI-based game testing and debugging to identify potential issues in real time.

Full Stack:
• AI-powered personalization in web applications to enhance user experience.
• ML for predictive analytics and recommendations in e-commerce platforms.
• Smart chatbots and virtual assistants for customer support.

Drones:
• AI-based object detection and tracking for autonomous navigation.
• ML for optimizing flight paths and improving battery efficiency.
• Computer vision and AI-enabled payload analysis for specific applications.


How AI Machines Learn

Supervised Learning
The machine learns from training data and labels and makes predictions.

Unsupervised Learning
No labeled data is present; the machine draws inferences from datasets and assigns them class labels.

Reinforcement Learning
The machine learns on its own, receiving rewards and punishments and determining from these what it should do.

Machine Learning (ML): Supervised Learning

Supervised learning is a type of machine learning method in which we provide sample labelled data to the machine learning system in order to train it; on that basis, it predicts the output.

The system creates a model using the labelled data to understand the dataset and learn about each example. Once training and processing are done, we test the model by providing sample data to check whether it predicts the correct output.

The goal of supervised learning is to map input data to output data.

Supervised learning is based on supervision, just as a student learns under the supervision of a teacher. An example of supervised learning is spam filtering.

Supervised learning can be grouped further into two categories of algorithms: Classification and Regression.
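
As a minimal sketch of this train-then-test workflow, the example below uses scikit-learn with one of its built-in datasets; the library, dataset, and choice of classifier are illustrative assumptions, and any classification algorithm would fit the same pattern:

```python
# A minimal supervised-learning sketch with scikit-learn (an assumed
# dependency; dataset and model choice are illustrative only).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Labelled training data: feature vectors X and class labels y.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train (fit) a classification model on the labelled examples.
model = DecisionTreeClassifier().fit(X_train, y_train)

# Test the model on held-out samples to check its predictions.
print("accuracy:", model.score(X_test, y_test))
```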

Machine Learning (ML): Unsupervised Learning
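
Since no labels are provided here, the machine must draw inferences from the dataset itself and assign cluster labels. A minimal sketch using k-means clustering from scikit-learn (the library and the toy data are illustrative assumptions):

```python
# A minimal unsupervised-learning sketch: k-means clustering with
# scikit-learn (library and sample data are illustrative assumptions).
import numpy as np
from sklearn.cluster import KMeans

# Unlabelled data: two loose groups of 2-D points, no class labels given.
X = np.array([[1.0, 2.0], [1.5, 1.8], [1.2, 2.2],
              [8.0, 8.0], [8.5, 7.8], [7.9, 8.3]])

# The algorithm infers structure and assigns each point a cluster label.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("inferred labels:", kmeans.labels_)
print("cluster centers:", kmeans.cluster_centers_)
```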

Machine Learning (ML): Reinforcement Learning
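
The reward-and-punishment loop described earlier can be sketched with tabular Q-learning, one common reinforcement learning algorithm; the corridor environment, reward values, and hyperparameters below are invented purely for illustration:

```python
# A toy reinforcement-learning sketch: tabular Q-learning on a 5-cell
# corridor. Everything specific here is an illustrative assumption;
# only the reward/punishment learning loop reflects the definition above.
import random

N_STATES, ACTIONS = 5, [-1, +1]          # positions 0..4; move left/right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(200):
    s = 0                                 # start at the left end
    while s != N_STATES - 1:              # goal: reach the right end
        # Explore occasionally; otherwise act greedily on current Q-values.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else -0.1   # reward vs. small punishment
        # Q-learning update: learn from the reward signal.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy: the best action for each position.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```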


AI Agents and Environments

AI Agent:
• An agent may be defined as anything that can perceive its environment through sensors and act upon that environment through effectors.
• An agent, having mental properties such as knowledge, belief, intention, and so on, runs in the cycle of perceiving, thinking, and acting.

Examples:
• A human agent has sensory organs like eyes, ears, tongue, skin, and nose, which work as sensors. On the other hand, it has hands, legs, and a vocal tract, which work as effectors.
• A robotic agent has cameras and infrared range finders, which act as sensors. On the other hand, it has various motors acting as effectors.
• A software agent has keystrokes, files, received network packets, and encoded bit strings, which work as sensors. On the other hand, it has sent network packets and content displayed on the screen, which work as effectors.

For an AI agent, the following are the four important rules:
Rule 1: It must have the ability to perceive the environment.
Rule 2: It must use observation to make decisions.
Rule 3: The decisions it makes should result in an action.
Rule 4: Every action it takes must be a rational action.
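
A minimal sketch of the perceive-think-act cycle; the class, environment representation, and all names in it are hypothetical placeholders rather than any standard API:

```python
# A minimal sketch of the agent cycle: perceive -> think -> act.
# Environment, sensors, and effectors here are hypothetical placeholders.

class Agent:
    def perceive(self, environment):
        """Sensors: read the current state of the environment (Rule 1)."""
        return environment["percept"]

    def think(self, percept):
        """Use the observation to decide on an action (Rules 2 and 3)."""
        return "act_on" if percept == "something_detected" else "no_op"

    def act(self, action, environment):
        """Effectors: apply the chosen action back to the environment."""
        environment["last_action"] = action

env = {"percept": "something_detected", "last_action": None}
agent = Agent()
action = agent.think(agent.perceive(env))  # one pass through the cycle
agent.act(action, env)
print(env["last_action"])  # -> act_on
```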

Vacuum-Cleaner World Example

• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp
• The agent function maps from percept histories to actions: f: P* → A
• The agent's function can be given as a look-up table; for many agents this is a very large table.
• The agent program runs on the physical architecture to produce f:
  agent = architecture + program
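
The look-up-table idea can be sketched as follows, using the percepts and actions listed above; as a simplifying assumption, this toy keys the table on single percepts rather than full percept histories, which keeps the table small:

```python
# A minimal look-up-table agent program for the two-location vacuum
# world. A full table-driven agent would key on entire percept
# histories, which for many agents is a very large table.
TABLE = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def vacuum_agent(percept):
    """Agent program: map the current percept to an action."""
    return TABLE.get(percept, "NoOp")

print(vacuum_agent(("A", "Dirty")))   # -> Suck
print(vacuum_agent(("A", "Clean")))   # -> Right
```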


Agent Terminologies

1. The performance measure of an agent: It may be defined as the criteria determining how successful an agent is.
2. The behavior of an agent: It may be defined as the action that an agent performs after any given sequence of percepts.
3. Percept: An agent's perceptual inputs at a given instant are called a percept.
4. Percept sequence: It may be defined as the history of all that an agent has perceived till now.
5. Sensor: A sensor, through which an agent observes its environment, is a device detecting changes in the environment and sending the information to other devices.
6. Effectors: They are the devices affecting the environment. They can be hands, legs, arms, fingers, a display screen, a sent network packet, wings, fins, and so on.
7. Actuators: Actuators, responsible only for moving and controlling a system, are the components of machines that convert energy into motion. Examples of actuators are electric motors, gears, rails, and so on.

Rationality of an Agent

• The rationality of an agent is concerned with the performance measure of that agent.
• The rationality of any agent depends upon the following four factors:
  – The Performance Measure (PM) of the agent.
  – The agent's Percept Sequence (PS).
  – The agent's Prior Knowledge (PK) about the environment.
  – The Actions (A) the agent can carry out.
  In short: (PM, PS, PK, A)

P.E.A.S Representation

P.E.A.S representation is a type of model in which the properties of an AI agent or rational agent can be grouped. It consists of four words:
P: Performance measure
E: Environment
A: Actuators
S: Sensors
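
PEAS Examples

As an illustration, here is the standard PEAS description of an automated taxi driver (the classic example from Russell and Norvig; the specific entries are one common way of filling in the four components):
P: Safe, fast, legal, comfortable trip; maximize profits.
E: Roads, other traffic, pedestrians, customers.
A: Steering, accelerator, brake, signal, horn.
S: Cameras, sonar, speedometer, GPS, odometer.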

Types of Agents


• Simple reflex agent
• Model-based reflex agent
• Goal-based agent
• Utility-based agent
• Learning agent
• Hybrid agent
• Hierarchical agent

Simple reflex agent

1. They choose actions based only on the current percept and ignore the rest of the percept history.
2. They work based on the condition–action rule, which is a rule that maps a state (condition) to an action. If the condition is true, the action is taken, otherwise not. For example, a room-cleaner agent works only if there is dirt in the room.
3. Their environment is fully observable.

Model-based reflex agent

1. In order to choose their actions, they use a model of the world.
2. They must keep track of an internal state, adjusted by each percept, that depends on the percept history.
3. They can handle partially observable environments.
4. In order to update the agent's state, they require the following information:
   • How does the world evolve?
   • How do the actions of agents affect the world?
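
A minimal sketch of a model-based reflex agent that tracks internal state across percepts so it can act on dirt it can no longer see; the state representation and the rules are illustrative assumptions:

```python
# A minimal model-based reflex agent sketch for a partially observable
# room: the agent tracks an internal state updated by each percept.
# All names and the state layout here are illustrative assumptions.

class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {"dirt_seen_at": set(), "position": "A"}

    def update_state(self, percept):
        """Adjust the internal state from the percept history (the
        agent's model of how the world evolves)."""
        location, status = percept
        self.state["position"] = location
        if status == "Dirty":
            self.state["dirt_seen_at"].add(location)

    def choose_action(self, percept):
        self.update_state(percept)
        location, status = percept
        # Condition-action rules applied to the state, not just the percept.
        if status == "Dirty":
            self.state["dirt_seen_at"].discard(location)
            return "Suck"
        if self.state["dirt_seen_at"]:
            return "MoveTo:" + next(iter(self.state["dirt_seen_at"]))
        return "NoOp"

agent = ModelBasedReflexAgent()
print(agent.choose_action(("A", "Dirty")))  # -> Suck
print(agent.choose_action(("B", "Clean")))  # -> NoOp
```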


Goal-based agent

1. They choose their actions and make decisions based on how far they currently are from their goals, a description of a desirable situation.
2. Every action of such agents is intended to reduce the distance from the goal.
3. This goal-based approach is more flexible than reflex agents because the knowledge supporting a decision is explicitly modeled, which allows for modifications.

Utility-based agent

1. Utility-based agents are used in order to decide which is the best among multiple possible alternatives.
2. They choose their actions and make decisions based on a preference (utility) for every state.
3. Sometimes, achieving the desired goal is not enough, because goals are inadequate when:
   • We have conflicting goals and only a few among them can be achieved.
   • Goals have some uncertainty of being achieved.
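
A minimal sketch of utility-based action selection, picking the action that leads to the highest-utility state; the states, utility values, and transition model below are invented for illustration:

```python
# A minimal utility-based choice: pick the action whose resulting
# state has the highest utility. States, utilities, and the
# transition model are illustrative assumptions.

utility = {"on_time": 1.0, "late": -0.5, "very_late": -1.0}

# Hypothetical model mapping each action to the state it leads to.
transition = {
    "take_highway": "on_time",
    "take_side_roads": "late",
    "wait": "very_late",
}

def choose_action(actions):
    """Prefer the action leading to the state with the highest utility."""
    return max(actions, key=lambda a: utility[transition[a]])

print(choose_action(list(transition)))  # -> take_highway
```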

Learning agent

1. Observation: The learning agent observes its environment through sensors or other inputs.
2. Learning: The agent analyzes data using algorithms and statistical models, learning from feedback on its actions and performance.

Hybrid agent

Hybrid agents combine features from different types of agents to leverage multiple strategies. For example, an agent might use both reflex actions for certain situations and goal-based planning for more complex tasks.

Hierarchical agent

Hierarchical agents are structured in a hierarchy, with high-level agents overseeing lower-level agents. The levels may differ based on the complexity of the system.

Types of Environments

• Fully observable (vs. partially observable)
• Deterministic (vs. stochastic)
• Episodic (vs. sequential)
• Static (vs. dynamic)
• Discrete (vs. continuous)
• Single agent (vs. multiagent)

Fully observable (vs. partially observable)

• Fully Observable Environment: In this type of environment, the agent can directly observe the complete state of the environment at each point in time.
  – Example: Chess is a fully observable environment because the agent can see the entire board and pieces.

• Partially Observable Environment: Here, the agent cannot directly observe the complete state of the environment. It might have to maintain an internal state or memory to keep track of past observations.
  – Example: Self-driving cars navigating a city. The car's sensors provide partial information (like nearby objects and traffic lights), but it must maintain an internal state to account for objects that might be out of view.


Deterministic (vs. stochastic)

• Deterministic Environment: In these environments, the next state is completely determined by the current state and the agent's actions.
  – Example: Chess, again, is deterministic. The result of a move is entirely based on the rules of the game.

• Stochastic Environment: The next state can have some randomness or uncertainty, even with the same action in the same state.
  – Example: A robot navigating a room with moving objects. Even if the robot takes the same path twice, the result may vary due to the movement of objects.

Episodic (vs. sequential)

• Episodic Environment: Here, the agent's experience is divided into episodes, where each episode is independent of the others.
  – Example: Playing a single hand of poker. The agent makes decisions in one hand, and then the game ends, starting a new episode.

• Sequential Environment: The agent's actions have a lasting impact on future states, and the agent's goal may require a sequence of actions.
  – Example: A customer service chatbot assisting users. Each conversation is part of a sequence, and the bot's actions in one conversation influence future interactions.

Static (vs. dynamic)

• Static Environment: The environment does not change while the agent is deciding on its actions.
  – Example: Tic-Tac-Toe. The board remains the same until the agent makes a move.

• Dynamic Environment: The environment can change even if the agent is not acting.
  – Example: Traffic control in a city. Traffic patterns change over time, and the agent (the traffic light system) must adapt to these changes.

Discrete (vs. continuous)

• Discrete Environment: The state space, action space, or both are discrete.
  – Example: Board games like Tic-Tac-Toe or Chess. The possible states and moves are discrete and finite.

• Continuous Environment: The state space and/or action space are continuous, often requiring approximations or specialized algorithms.
  – Example: Controlling a robotic arm. The positions and velocities of the arm's joints form a continuous space.

Single agent (vs. multiagent)

• Single-Agent System: In a single-agent environment, there is only one autonomous agent interacting with the environment to achieve its objectives.
  – Example: An autonomous vacuum cleaner cleaning a single room.

• Multi-Agent System (MAS): In a multi-agent environment, there are multiple autonomous agents, each with its own goals, observations, and actions, interacting within the same environment.
  – Example: Traffic simulation with autonomous vehicles navigating roads and intersections.

END
