AI & ML Unit 1 Notes
Introduction to AI:
Artificial Intelligence:
• Artificial Intelligence is a wide-ranging branch of computer science.
• It is concerned with building smart machines capable of performing tasks that
typically require human intelligence.
• It uses complex algorithms and methods to build machines that can make
decisions on their own.
• Narrow AI: Also referred to as weak AI, it focuses on one specific automated task.
• General AI: It has the ability to understand, learn, and apply knowledge across tasks.
• Super AI: It is a hypothetical software system with an intellectual scope beyond
human intelligence.
• Purely Reactive: These machines do not have any memory or data to work with,
specializing in just one field of work.
• Limited Memory: These machines collect previous data and continue adding it
to their memory.
• Theory of Mind: This kind of AI can understand thoughts and emotions, as well
as interact socially.
1943: Warren McCulloch and Walter Pitts carried out the first work that is now
generally recognized as AI.
1949: Donald Hebb demonstrated a simple updating rule for modifying the
connection strengths between neurons.
1958: John McCarthy defined the high-level language Lisp.
1959: At IBM, Nathaniel Rochester and his colleagues produced some of the first AI
programs.
1980s: Neural networks, which use the backpropagation algorithm to train themselves,
became widely used in AI applications.
2011: IBM's Watson beats Jeopardy! champions Ken Jennings and Brad Rutter.
2016: Sophia, a humanoid robot later recognized as the first "robot citizen," is created.
2018: Google releases BERT, a natural language processing engine.
Advantages of AI:
• Reduced Human error
• Risk Avoidance
• Replacing repetitive jobs
• Digital Assistance
Disadvantages of AI:
• High cost of creation
• No emotions.
• Can’t think for itself.
AI Applications:
AI Applications in E-Commerce:
• Artificial Intelligence technology is used to create recommendation engines.
• These recommendations are made in accordance with the user's browsing history,
preferences, and interests.
• It helps in improving the relationship with the customers and the loyalty towards
the brand.
AI Applications in Education:
• Artificial Intelligence can help educators with non-educational tasks.
• Content such as video lectures, conferences, and textbook guides can be
digitized using Artificial Intelligence.
• Artificial Intelligence helps create a rich learning experience.
AI Applications in Lifestyle:
• Automobile manufacturing companies like Toyota, Audi, Volvo, and Tesla use
machine learning to train computers to think and evolve like humans.
AI Applications in Robotics:
• Robotics is another field where artificial intelligence applications are commonly
used.
• Robots powered by AI can sense obstacles in their path and plan their journeys
in real time.
AI Applications in Healthcare:
• AI uses the combination of historical data and medical intelligence.
• AI applications are used in healthcare to detect diseases and assist diagnosis.
AI Applications in Agriculture:
• Artificial Intelligence is used to identify defects in soil.
• This is done using computer vision, robotics, and machine learning applications.
AI Applications in Gaming:
• AI can be used to create smart, human-like NPCs to interact with the players.
AI Application in Automobiles:
• Artificial Intelligence is used to build self-driving vehicles.
AI Applications in Social Media:
• Instagram
• Facebook
• Twitter
AI Applications in Chatbots:
• As AI continues to improve, these chatbots can effectively resolve customer issues.
Problem Solving Agents:
Agent:
• An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators.
• Example: A human agent has eyes, ears, and other organs for sensors, and hands,
legs, vocal tract, and so on for actuators.
• Agent Function: The agent function for an agent specifies the action taken by
the agent in response to any percept sequence.
• Task Environment: A task environment specification includes the performance
measure, the external environment, the actuators, and the sensors.
Steps in designing an agent:
• First step must always be to specify the task environment as fully as possible.
• The agent program implements the agent function.
• There exists a variety of basic agent program designs reflecting the kind of
information made explicit and used in the decision process.
• The designs vary in efficiency, compactness, and flexibility.
• The appropriate design of the agent program depends on the nature of the
environment.
• All agents can improve their performance through learning.
Types of Agent:
Simple reflex agents: They respond directly to percepts.
Utility-based agents: They try to maximize their own expected “happiness.”
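As a rough illustration of a simple reflex agent, here is a minimal Python sketch using the classic two-location vacuum world; the percept format and condition-action rules are assumptions made for this example, not something specified in these notes:

```python
# A simple reflex agent: the action depends only on the current percept.
# Vacuum-world example (locations "A" and "B"); rules are illustrative.
def simple_reflex_vacuum_agent(percept):
    location, status = percept        # e.g. ("A", "Dirty")
    if status == "Dirty":             # condition-action rule 1
        return "Suck"
    if location == "A":               # rule 2: move toward the other square
        return "Right"
    return "Left"                     # rule 3

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # -> Left
```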
Problem-Solving Agent:
Definition:
• A problem-solving agent is a goal-based agent.
• It focuses on goals, using a group of algorithms and techniques to solve a well-
defined problem.
Steps Performed by Problem-solving agent:
Goal Formulation:
• It is the first and simplest step in problem-solving.
• It organizes the steps/sequence required to formulate one goal.
Problem Formulation:
1. Initial state: It is the starting state or initial step of the agent towards its goal.
2. Actions: It is the description of the possible actions available to the agent.
3. Transition Model: It describes what each action does.
4. Goal Test: It determines if the given state is a goal state.
5. Path cost: It assigns a numeric cost to each path; the agent prefers the path with
the lowest cost (a minimal code sketch of these components follows below).
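The five components above can be captured in a small Python interface; the class and method names here are assumptions for this sketch, not a standard API:

```python
# A hedged sketch of the five problem-formulation components as a Python
# base class; names are illustrative assumptions, not a standard API.
class Problem:
    def __init__(self, initial, goal):
        self.initial = initial            # 1. initial state
        self.goal = goal
    def actions(self, state):             # 2. actions available in `state`
        raise NotImplementedError
    def result(self, state, action):      # 3. transition model
        raise NotImplementedError
    def goal_test(self, state):           # 4. goal test
        return state == self.goal
    def step_cost(self, state, action):   # 5. path cost, added up per step
        return 1
```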
Types of problem Approaches:
• Toy Problem
• Real-world Problem
Toy Problem: 8 Puzzle problem:
• Consider a 3x3 matrix with movable tiles numbered from 1 to 8 with a blank
space.
• The tile adjacent to the blank space can slide into that space.
• The objective is to reach the specified goal state.
• The task is to convert the current state into goal state by sliding digits into the
blank space.
States: It describes the location of each numbered tile and the blank tile.
Initial State: Any state can be taken as the initial state.
Actions: The actions are the moves of the blank space: left, right, up, or down.
Goal Test: It checks whether the goal state has been reached.
Path Cost: The path cost is the number of steps in the path where the cost of each step
is 1.
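A hedged sketch of this formulation in Python, assuming the state is a tuple of nine numbers read row by row, with 0 standing for the blank:

```python
# 8-puzzle formulation: states, actions, transition model, goal test.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)       # 0 is the blank tile

def actions(state):
    """Legal moves of the blank space: Left, Right, Up, Down."""
    row, col = divmod(state.index(0), 3)
    moves = []
    if col > 0: moves.append("Left")
    if col < 2: moves.append("Right")
    if row > 0: moves.append("Up")
    if row < 2: moves.append("Down")
    return moves

def result(state, action):
    """Transition model: slide the adjacent tile into the blank space."""
    i = state.index(0)
    j = i + {"Left": -1, "Right": 1, "Up": -3, "Down": 3}[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL                  # each move has a path cost of 1

print(actions(GOAL))                      # -> ['Left', 'Up']
```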
Real World Problem: Route-Finding Problem:
• Consider the airline travel problem that must be solved by a travel-planning
Web site:
States: Each state obviously includes a location (e.g., an airport) and the current time.
Initial State: The user’s home airport.
Actions: Take any flight from the current location, in any seat class, leaving after the
current time.
Goal Test: Whether we are at the destination city specified by the user.
Search Algorithms:
• A search algorithm takes a search problem as input and returns a solution.
Types of search algorithms:
Uninformed (blind) search:
• These algorithms can only generate the successors and differentiate between a
goal state and a non-goal state.
i)Depth First Search:
• Depth-first search expands the deepest node in the current frontier first,
backtracking when a node has no successors (a minimal sketch follows this list).
Advantages of DFS:
• Uses little memory
• Less time to reach goal state
Disadvantages of DFS:
• No certainty
• It performs deep searching
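A minimal DFS sketch in Python; the adjacency-dict graph below is an illustrative assumption, not a problem from these notes:

```python
# Depth-first search with an explicit LIFO stack.
def dfs(graph, start, goal):
    stack, visited = [(start, [start])], set()
    while stack:
        node, path = stack.pop()          # LIFO: always go deeper first
        if node == goal:
            return path
        if node not in visited:
            visited.add(node)
            for nbr in graph.get(node, []):
                stack.append((nbr, path + [nbr]))
    return None                           # goal not reachable

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(dfs(graph, "A", "E"))               # -> ['A', 'C', 'E']
```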
ii)Depth Limited Search:
• A depth-limited search algorithm is similar to depth-first search with a
predetermined limit.
• Depth-limited search can solve the drawback of the infinite path in the Depth-
first search.
• In this algorithm, the node at the depth limit is treated as if it has no further
successor nodes.
Steps:
• Set a variable NODE to the initial state, i.e., the root node.
• Set a variable GOAL which contains the value of the goal state.
• Set a variable LIMIT which carries a depth-limit value.
• Loop each node by traversing in DFS manner till the depth-limit value.
• While performing the looping, start removing the elements from the stack in
LIFO order.
• If the goal state is found, return goal state. Else terminate the search.
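The steps above can be sketched recursively in Python (same illustrative graph format as the DFS sketch); the key difference from plain DFS is the cut-off when the limit reaches zero:

```python
# Depth-limited search: DFS that treats nodes at the limit as leaf nodes.
def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:                        # depth limit: no successors here
        return None
    for nbr in graph.get(node, []):
        sub = dls(graph, nbr, goal, limit - 1)
        if sub is not None:
            return [node] + sub
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(dls(graph, "A", "E", 2))            # -> ['A', 'C', 'E']
print(dls(graph, "A", "E", 1))            # -> None (goal lies beyond limit)
```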
Performance measure:
Completeness: Not complete; it does not guarantee reaching the goal node if the
goal lies beyond the depth limit.
Optimality: Not optimal.
Space complexity: O(bl), where b is the branching factor and l is the depth limit.
Time complexity: O(b^l)
Advantages:
• Memory Efficient
Disadvantages:
• Incompleteness
• It may not be optimal if problem has more than one solution.
iii)Breadth First Search:
• It starts at the tree root and explores all of the neighbor nodes at the present
depth before moving on to the nodes at the next depth level.
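A minimal BFS sketch in Python using a FIFO queue, with the same illustrative adjacency-dict format as the earlier sketches:

```python
# Breadth-first search: expand all nodes at one depth before the next.
from collections import deque

def bfs(graph, start, goal):
    queue, visited = deque([(start, [start])]), {start}
    while queue:
        node, path = queue.popleft()      # FIFO: shallowest node first
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                queue.append((nbr, path + [nbr]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(bfs(graph, "A", "E"))               # -> ['A', 'C', 'E']
```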
Performance measure:
Completeness: Complete; it finds the goal state if one exists.
Optimality: Optimal when all step costs are equal.
Space complexity: O(b^d)
Time complexity: O(b^d)
Advantages:
• Simple solution
• Easy to understand
Disadvantages:
• Large amount of memory
• Takes long time
iv)Iterative Deepening Search:
• This search is a combination of BFS and DFS, as BFS guarantees to reach the
goal node and DFS occupies less memory space.
• Therefore, iterative deepening search combines these two advantages of BFS
and DFS to reach the goal node.
• It gradually increases the depth limit from 0, 1, 2, and so on until the goal node
is reached.
Algorithm:
• Explore the nodes in DFS order.
• Set a LIMIT variable with a limit value.
• Loop each node up to the limit value and further increase the limit value
accordingly.
• Terminate the search when the goal state is found.
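A hedged sketch of these steps: iterative deepening simply re-runs depth-limited search with the limit raised 0, 1, 2, ... (the dls() helper is repeated here so the example stands alone):

```python
# Iterative deepening = repeated depth-limited DFS with a growing limit.
def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for nbr in graph.get(node, []):
        sub = dls(graph, nbr, goal, limit - 1)
        if sub is not None:
            return [node] + sub
    return None

def ids(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):    # raise the limit 0, 1, 2, ...
        path = dls(graph, start, goal, limit)
        if path is not None:
            return path                   # found at the shallowest depth
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(ids(graph, "A", "E"))               # -> ['A', 'C', 'E']
```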
Performance Measure:
Completeness: Complete when the branching factor is finite.
Optimality: Optimal when all step costs are equal.
Space complexity: O(bd)
Time complexity: O(b^d)
Advantages:
• Low cost
• Complete and give optimal solution
Disadvantages:
• It repeats all the work of the previous phases; nodes at shallow depths are
expanded again in every iteration.
vi)Bidirectional Search:
• The strategy behind bidirectional search is to run two searches
simultaneously: one forward search from the initial state and the other
backward from the goal.
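A hedged sketch on an undirected graph: two breadth-first frontiers, one from the start and one from the goal, stopping as soon as they meet (the graph format and example are illustrative assumptions):

```python
# Bidirectional search: two BFS frontiers that meet in the middle.
from collections import deque

def bidirectional(graph, start, goal):
    if start == goal:
        return [start]
    front, back = {start: [start]}, {goal: [goal]}  # path to each visited node
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        # expand one node on the forward side, then one on the backward side
        for q, seen, other in ((qf, front, back), (qb, back, front)):
            if not q:
                return None
            node = q.popleft()
            for nbr in graph.get(node, []):
                if nbr in other:                    # the two frontiers meet
                    if seen is front:
                        return seen[node] + other[nbr][::-1]
                    return other[nbr] + seen[node][::-1]
                if nbr not in seen:
                    seen[nbr] = seen[node] + [nbr]
                    q.append(nbr)
    return None

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(bidirectional(graph, "A", "D"))               # -> ['A', 'B', 'C', 'D']
```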
Performance Measures:
Completeness: Complete
Optimality: Optimal
Time & Space complexity: O(b^(d/2))
Disadvantages: It requires a lot of memory space.
Local Search Algorithms:
i)Hill Climbing Search:
• Hill climbing is a local search algorithm that continually moves in the direction
of increasing value until it reaches a peak (a sketch follows the Ridges item below).
Advantages:
• Works well with informed search problems.
• Fewer steps to reach goals
Disadvantages:
• Turn into unguided DFS in worst case
Plateau: It is a flat area of the search space where no uphill direction exists. It
becomes difficult for the climber to decide in which direction to move to reach the
goal point.
Ridges: It is a challenging region where the person commonly finds two or more
local maxima of the same height.
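A minimal sketch of steepest-ascent hill climbing on a one-dimensional objective; the function and the ±1 neighbourhood are illustrative assumptions:

```python
# Hill climbing: keep moving to the best neighbour while it goes uphill.
def hill_climb(f, x, step=1):
    while True:
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):               # no uphill neighbour: stop here
            return x                      # may only be a local maximum
        x = best

f = lambda x: -(x - 7) ** 2               # single peak at x = 7
print(hill_climb(f, 0))                   # -> 7
```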
ii)Simulated Annealing:
• Simulated annealing is similar to the hill climbing algorithm.
• It works on the current state, but it picks a random move instead of always
picking the best move.
• This search technique was first used in 1980 to solve VLSI layout problems.
• It is also applied for factory scheduling and other large optimization tasks.
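A minimal sketch of the idea, assuming the standard exp(delta/T) acceptance rule and a geometric cooling schedule (the objective function and move set are illustrative):

```python
# Simulated annealing: sometimes accept downhill moves, less often as T falls.
import math, random

def simulated_annealing(f, x, t0=10.0, cooling=0.95, steps=200):
    t = t0
    for _ in range(steps):
        nxt = x + random.choice([-1, 1])          # pick a random move
        delta = f(nxt) - f(x)
        if delta > 0 or random.random() < math.exp(delta / t):
            x = nxt                               # accept the move
        t *= cooling                              # lower the temperature
    return x

random.seed(0)
f = lambda x: -(x - 7) ** 2
print(simulated_annealing(f, 0))                  # typically ends near 7
```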
Advantages:
• Easy implementation
• Optimization
Disadvantages:
• Takes long time to find optimal solution
iii)Local Beam Search:
• The local beam search algorithm keeps track of k states rather than just one.
• It begins with k randomly generated states and expands all of their successors
at each step (see the sketch after this list).
• If any state is a goal state, the search stops with success.
• Else it selects the best k successors from the complete list and repeats the same
process.
• In local beam search, the necessary information is shared between the parallel
search processes.
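A rough sketch with k states on a one-dimensional objective; the goal test, move set, and all parameters are illustrative assumptions:

```python
# Local beam search: keep the best k of all successors at every step.
import random

def local_beam_search(f, k, goal_value, steps=100):
    states = [random.randint(-20, 20) for _ in range(k)]  # k random states
    for _ in range(steps):
        if any(f(s) >= goal_value for s in states):
            return max(states, key=f)             # a goal state was found
        successors = [s + d for s in states for d in (-1, 1)]
        states = sorted(successors, key=f, reverse=True)[:k]  # best k overall
    return max(states, key=f)

random.seed(1)
f = lambda x: -(x - 7) ** 2
print(local_beam_search(f, k=3, goal_value=0))    # -> 7
```

Because all k states compete in one pool, information is shared between the parallel searches, but the beam can also collapse onto near-identical states, which is the lack-of-diversity drawback noted below.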
Disadvantages:
• Lack of diversity
• Expensive
Adversarial Search:
• Adversarial search is a game-playing technique in which the agents are
surrounded by a competitive environment and have conflicting goals.
• Such conflicting goals give rise to the adversarial search.
Elements of Game Playing Search:
• S0: It is the initial state from where a game begins.
• PLAYER (s): It defines which player has the current turn to make a move in
the state.
• ACTIONS (s): It defines the set of legal moves to be used in a state.
• RESULT (s, a): It is a transition model which defines the result of a move.
• TERMINAL-TEST (s): It returns true if the game has ended; otherwise false.
• UTILITY (s, p): It defines the final numeric value for a game that ends in state s
for player p, i.e., the prize which the winner will get:
o (-1): If the PLAYER loses.
o (+1): If the PLAYER wins.
o (0): If there is a draw between the PLAYERS.
Example:
• The game tic-tac-toe has three possible outcomes for a player:
• Either to win, to lose, or to draw the match, with values +1, -1, or 0.
i)Minimax Algorithm:
Working Steps:
Step 1: The algorithm generates the entire game tree and applies the utility function
to the terminal states.
Step 2: Now, we first find the utility value for the Maximizer; its initial value is -∞.
Step 3: In the next step, it is the Minimizer's turn, so it will compare all node values
with +∞ and find the third-layer node values: for node B = min(4, 6) = 4, and for
node C = min(-3, 7) = -3.
Step 4: Now it is the Maximizer's turn again: for node A, max(4, -3) = 4.
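The walkthrough above corresponds to this minimal sketch, where the two-ply tree is written as nested lists of leaf utilities (node B = [4, 6], node C = [-3, 7]):

```python
# Minimax on a nested-list game tree; integers are terminal utilities.
def minimax(node, maximizing):
    if isinstance(node, int):             # terminal state: its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[4, 6], [-3, 7]]                  # A -> B = [4, 6], C = [-3, 7]
print(minimax(tree, True))                # -> 4 (B = 4, C = -3, A = 4)
```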
Drawbacks:
• Explores each node in tree
• Increases time complexity
ii)Alpha-Beta Pruning:
• Alpha-beta pruning is an advanced version of the MINIMAX algorithm.
• Alpha-beta pruning reduces this drawback of the minimax strategy.
• The method used in alpha-beta pruning is to cut off the search.
• Alpha-beta pruning works on two threshold values, α (alpha) and β (beta).
• α (alpha): It is the best (highest) value that the MAX player can guarantee. It is
the lower bound.
• β (beta): It is the best (lowest) value that the MIN player can guarantee. It is the
upper bound.
• The main condition required for alpha-beta pruning is: α >= β.
Working Steps:
Step 1: At the first step, the Max player will make the first move from node A, where
α = -∞ and β = +∞.
Step 2: At node D, the value of α will be calculated, as it is Max's turn. The value of
α is compared first with 2 and then with 3, and max(2, 3) = 3.
Step 3: Now the algorithm backtracks to node B, where the value of β will change, as
it is Min's turn: β = min(+∞, 3) = 3.
Step 4: At node E, Max will take its turn, and the value of alpha will change. The
current value of alpha is compared with 5, so max(-∞, 5) = 5.
Step 5: In the next step, the algorithm again backtracks the tree, from node B to node A.
Step 6: At node F, the value of α is again compared with its left child, which is 0, and
max(3, 0) = 3.
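A hedged sketch of the same idea on a nested-list tree. The leaf values 2, 3, 5 and 0 come from the steps above; the remaining leaves are filled in only as assumptions to make the example runnable:

```python
# Alpha-beta pruning: stop exploring a branch as soon as alpha >= beta.
def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, int):             # terminal state: its utility
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:             # cut-off: MIN will avoid this branch
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:                 # cut-off: MAX will avoid this branch
            break
    return value

# A(MAX) -> B, C (MIN); B -> D, E (MAX); C -> F, G (MAX); leaves below.
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 3
```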
Map Coloring Example:
• If the coloring starts with the state SA, then any of the 3 colors can be chosen,
so there are 3 possibilities.
• Moving to the next state, say WA, only 2 colors can be chosen, since it must
differ from SA; that means 2 possibilities.
• The colors of the remaining mainland states are then forced by their neighbours.
• So the number of possibilities for the mainland is 3 x 2 = 6.
• Tasmania is adjacent to no other state and can take any of the 3 colors, so the
number of solutions is 6 x 3 = 18.
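The count of 18 can be verified by brute force over the standard Australia map-coloring constraint graph; the region and adjacency lists below follow the usual textbook formulation:

```python
# Count all 3-colorings of the Australia map where neighbours differ.
from itertools import product

regions = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]   # T has no neighbours
edges = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
         ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]

count = 0
for colours in product(range(3), repeat=len(regions)):
    assignment = dict(zip(regions, colours))
    if all(assignment[a] != assignment[b] for a, b in edges):
        count += 1
print(count)                              # -> 18
```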
Comparison of Data Science, Artificial Intelligence and Machine Learning: