Introduction To AI
TEXTBOOKS/LEARNING RESOURCES:
a) T. M. Mitchell, Machine Learning (1st ed.), McGraw Hill, 2017. ISBN 978-1259096952.
b) E. Alpaydin, Introduction to Machine Learning (4th ed.), PHI Learning, 2020. ISBN 978-8120350786.
Artificial Intelligence
Artificial Intelligence = Man-made + Thinking power
AI is a branch of information technology by which we can create intelligent machines that can think like a human, behave like a human, and are also able to make decisions on their own.

Brief History of AI
• 1943: McCulloch & Pitts: Boolean circuit model of brain
• 1950: Turing's “Computing Machinery and Intelligence”
• 1952–69: “Look, Ma, no hands!”
• 1950s: Early AI programs, including Samuel's checkers program, Newell & Simon's Logic Theorist, Gelernter's Geometry Engine
• 1956: Dartmouth meeting: “Artificial Intelligence” adopted
07-May-24
Applications of AI

Data Science:
• AI-driven data analysis to extract valuable insights and patterns from data.
• Automated data cleaning and preprocessing using AI techniques.

Gaming:
• AI opponents and NPCs (Non-Player Characters) with adaptive behaviors and decision-making.
• Content generation to create dynamic game environments.
• AI-based game testing and debugging to identify potential issues in real time.

Full Stack:
• AI-powered personalization in web applications to enhance user experience.
• ML for predictive analytics and recommendations in e-commerce.
• Smart chatbots and virtual assistants for customer support.

Drones:
• AI-based object detection and tracking for autonomous navigation.
• ML for optimizing flight paths and improving battery efficiency.
• Computer vision and AI-enabled payload analysis for specific applications.

Blockchain:
Supervised Learning
The machine learns from training data and labels and makes predictions.

Unsupervised Learning
No labeled data is present; the machine draws inferences from datasets and assigns them class labels.

Reinforcement Learning
The machine learns on its own, receiving rewards and punishments and determining from these what it should do.
Machine Learning (ML): Supervised Learning
Supervised learning is a machine learning method in which we provide sample labelled data to the machine learning system in order to train it, and on that basis it predicts the output.
The system creates a model using labelled data to understand the dataset and learn about each data point. Once training and processing are done, we test the model by providing sample data and checking whether it predicts the correct output.
The goal of supervised learning is to map input data to output data.
Supervised learning is based on supervision; it is like a student learning things under the supervision of a teacher. An example of supervised learning is spam filtering.
Supervised learning algorithms can be grouped into two categories: Classification and Regression.
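The idea above can be sketched with a toy classifier in plain Python: a 1-nearest-neighbour model trained on labelled examples, then tested on new samples. The features and labels below are illustrative (loosely modelled on spam filtering), not from the slides.

```python
# Minimal supervised learning sketch: 1-nearest-neighbour classification.
# Training data: (feature_vector, label) pairs.

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(train, x):
    # Predict the label of x as the label of its closest training point.
    _, label = min(((distance(f, x), lbl) for f, lbl in train),
                   key=lambda t: t[0])
    return label

# Labelled training data: [exclamation_count, link_count] -> "spam"/"ham"
train = [([5, 3], "spam"), ([4, 2], "spam"), ([0, 0], "ham"), ([1, 0], "ham")]

print(predict(train, [4, 3]))  # spam: nearest neighbours are spam messages
print(predict(train, [0, 1]))  # ham
```

The "training" here is just storing the labelled data; the mapping from input to output is computed at prediction time, which keeps the sketch short.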
Machine Learning (ML): Unsupervised Learning
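Unsupervised learning can be sketched with a small k-means clustering loop: the data carry no labels, and the algorithm groups points by similarity on its own. The 1-D data values below are illustrative.

```python
import random

def kmeans(points, k, iters=10, seed=0):
    # Unsupervised learning sketch: group unlabelled 1-D points into k clusters.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centre's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        # Update step: each centre moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]   # no class labels are given
print(kmeans(data, 2))  # two cluster centres, near 1.0 and 9.07
```

No labels are ever provided; the two groups emerge purely from the structure of the data, which is the defining property of unsupervised learning.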
Machine Learning (ML): Reinforcement Learning
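The reward-and-punishment idea can be sketched with tabular Q-learning on a tiny corridor world. The environment, rewards, and hyperparameters are illustrative, not from the slides.

```python
import random

# Reinforcement learning sketch: tabular Q-learning on a 5-state corridor.
# States 0..4; action 0 = left, 1 = right; reward +1 only on reaching state 4.

def step(state, action):
    nxt = max(0, min(4, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == 4 else 0.0
    return nxt, reward

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(5)]  # Q-value table: q[state][action]
    for _ in range(episodes):
        s = 0
        while s != 4:
            # Explore with probability eps, otherwise exploit the best estimate.
            a = (rng.randrange(2) if rng.random() < eps
                 else max(0, 1, key=lambda x: q[s][x]))
            nxt, r = step(s, a)
            # Reward/punishment feedback updates the estimate for (s, a).
            q[s][a] += alpha * (r + gamma * max(q[nxt]) - q[s][a])
            s = nxt
    return q

q = train()
print([max(0, 1, key=lambda a: q[s][a]) for s in range(4)])  # learned policy
```

The agent is never told which action is correct; it discovers that moving right is best purely from the delayed reward signal.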
Simple reflex agents:
1. They choose actions based only on the current percept and ignore the rest of the percept history.
2. They work based on the condition–action rule, a rule that maps a state (condition) to an action. If the condition is true, the action is taken; otherwise not. For example, a room-cleaner agent works only if there is dirt in the room.
3. Their environment is fully observable.

Model-based reflex agents:
1. In order to choose their actions, they use a model of the world.
2. They must keep track of an internal state, adjusted by each percept, that depends on the percept history.
3. They can handle partially observable environments.
4. In order to update the agent's state, they require the following information: how the world evolves, and how the actions of agents affect the world.
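The room-cleaner example can be sketched as a condition–action rule in a few lines. The percept format and action names here are assumptions for illustration.

```python
# Sketch of a simple reflex agent: a room cleaner driven by a
# condition-action rule, acting only on the current percept.

def reflex_cleaner(percept):
    # percept: (location, status) with location "A" or "B",
    # status "dirty" or "clean". No percept history is kept.
    location, status = percept
    if status == "dirty":      # condition: dirt present -> action: suck
        return "suck"
    # Otherwise move to the other room.
    return "move_right" if location == "A" else "move_left"

print(reflex_cleaner(("A", "dirty")))  # suck
print(reflex_cleaner(("A", "clean")))  # move_right
```

Note that the agent has no memory: the same percept always produces the same action, which is exactly why such agents need a fully observable environment.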
Goal-based agents:
1. They choose their actions and take decisions based on how far they currently are from their goals – a description of a desirable situation.
2. Every action of such agents is intended to reduce the distance from the goal.
3. This goal-based approach is more flexible than reflex agents because the knowledge supporting a decision is explicitly modeled, which allows for modifications.

Utility-based agents:
1. In order to decide which is the best among multiple possible alternatives, utility-based agents are used.
2. They choose their actions and take decisions based on a preference (utility) for every state.
3. Sometimes achieving the desired goal is not enough, because goals are inadequate when:
• We have conflicting goals and only a few among them can be achieved.
• Goals have some uncertainty of being achieved.
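Choosing among uncertain alternatives by preference can be sketched as maximising expected utility. The route names, probabilities, and utility values below are illustrative.

```python
# Sketch of a utility-based agent: among candidate actions it picks the
# one with the highest expected utility, weighing uncertain outcomes.

def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one action.
    return sum(p * u for p, u in outcomes)

def choose_action(actions):
    # actions: {action_name: [(probability, utility), ...]}
    return max(actions, key=lambda a: expected_utility(actions[a]))

routes = {
    "highway":  [(0.9, 8), (0.1, -5)],   # fast, but a small risk of a jam
    "backroad": [(1.0, 5)],              # slower but certain
}
print(choose_action(routes))  # highway: 0.9*8 + 0.1*(-5) = 6.7 > 5
```

A pure goal-based agent would treat both routes as "reaches the destination" and could not distinguish them; the utility function is what encodes the preference.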
Hybrid agent
Hybrid agents combine features from different types of agents to leverage multiple strategies. For example, an agent might use both reflex actions for certain situations and goal-based planning for more complex tasks.

Hierarchical agent
Hierarchical agents are structured in a hierarchy, with high-level agents overseeing lower-level agents. The levels may differ based on the complexity of the system.

Learning agent
1. Observation: The learning agent observes its environment through sensors or other inputs.
2. Learning: The agent analyzes data using algorithms and statistical models, learning from feedback on its actions and performance.
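The observe-and-learn cycle can be sketched as an agent that keeps a running estimate of how good each action is and updates it from feedback. The action names and feedback values are illustrative.

```python
# Sketch of a learning agent: it improves its action-value estimates
# from feedback on its performance, then acts on what it has learned.

class LearningAgent:
    def __init__(self, actions):
        self.estimates = {a: 0.0 for a in actions}  # learned value per action
        self.counts = {a: 0 for a in actions}

    def act(self):
        # Performance element: pick the action with the best current estimate.
        return max(self.estimates, key=self.estimates.get)

    def learn(self, action, feedback):
        # Learning element: move the estimate toward the observed feedback
        # (incremental running average).
        self.counts[action] += 1
        n = self.counts[action]
        self.estimates[action] += (feedback - self.estimates[action]) / n

agent = LearningAgent(["slow", "fast"])
for _ in range(5):
    agent.learn("fast", 1.0)   # feedback observed after acting
    agent.learn("slow", 0.2)
print(agent.act())  # fast
```

Initially all actions look equally good; only the accumulated feedback changes the agent's behaviour, which is the defining trait of a learning agent.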
• Stochastic Environment: The next state can have some randomness or uncertainty, even with the same action in the same state.
– Example: A robot navigating a room with moving objects. Even if the robot takes the same path twice, the result may vary due to the movement of objects.
• Dynamic Environment: The environment can change even if the agent is not acting.
– Example: Traffic control in a city. Traffic patterns change over time, and the agent (traffic light system) must adapt to these changes.
• Sequential Environment: The agent's actions have a lasting impact on future states, and the agent's goal may require a sequence of actions.
– Example: A customer service chatbot assisting users. Each conversation is part of a sequence, and the bot's actions in one conversation influence future interactions.
• Continuous Environment: State space and/or action space are continuous, often requiring approximations or specialized algorithms.
– Example: Controlling a robotic arm. The positions and velocities of the arm's joints form a continuous space.
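The deterministic/stochastic distinction can be sketched in a few lines: the same action in the same state always yields the same next state in a deterministic environment, but not in a stochastic one. The slip probability below is an illustrative assumption.

```python
import random

# Sketch contrasting deterministic and stochastic transitions.

def deterministic_step(state, action):
    # Same state and action -> always the same next state.
    return state + action

def stochastic_step(state, action, rng):
    # With probability 0.2 the action "slips" and has no effect,
    # modelling randomness such as moving objects in a robot's path.
    return state if rng.random() < 0.2 else state + action

rng = random.Random(42)
print(deterministic_step(0, 1))                           # always 1
print(sorted({stochastic_step(0, 1, rng) for _ in range(20)}))  # varies
```

Repeating the stochastic step with identical inputs produces different next states, which is exactly what forces agents in such environments to reason with probabilities.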