AI CT1 Notes
Part B
1.Explore the horizons of AI and explain in detail.
Firstly, AI may be defined as the branch of computer science that is concerned with the
automation of intelligent behaviour.
8. Social Media – AI moderates content, detects fake news, and personalizes user
experiences through recommendation algorithms.
9. Finance – AI detects fraud, automates trading, and provides predictive analytics for
better financial decision-making.
10. Gaming – AI enhances realistic graphics, NPC behavior, and personalized gaming
experiences through adaptive AI systems.
A cognitive model is a framework used to design AI systems that can think and learn similarly
to humans. It is inspired by cognitive psychology and neuroscience, enabling AI to replicate
mental processes such as perception, memory, reasoning, and learning.
Key application areas include:
a) Natural Language Processing (NLP)
Cognitive models enable chatbots, voice assistants (Siri, Alexa), and language translation
tools to understand and generate human-like responses.
b) Computer Vision
AI systems use cognitive models to recognize images, detect objects, and analyze visual data
(e.g., self-driving cars, facial recognition).
c) Robotics
AI-powered robots use cognitive models to navigate environments, learn tasks, and interact
with humans.
d) Decision-Making
AI in finance, healthcare, and business uses cognitive models for intelligent decision-making.
Future Applications:
Neuro-Symbolic AI: Combining neural networks and logic-based reasoning for better AI
interpretability.
AI-Driven Consciousness Research: Exploring AI systems that can self-reflect and improve
their own learning.
Brain-Computer Interfaces: Using AI to connect directly with the human brain for enhanced
cognitive abilities.
A fully observable environment provides:
Complete Information: The agent can see all relevant aspects of the environment.
No Hidden Variables: There are no unknown factors affecting decision-making.
Perfect State Knowledge: The AI system always knows where it is and what actions to
take.
A partially observable environment has:
Limited Information: The agent can only observe part of the environment at a time.
Uncertainty: Some aspects of the world are hidden, requiring the AI to infer missing
details.
Need for Memory or Probabilistic Reasoning: AI may use history, probability, or models
to predict unknown states.
If we ask “What is an apple?”, the answer will be that an apple is “a fruit,” “has red,
yellow, or green color,” or “has a roundish shape.” These descriptions are symbolic
because we use symbols (color, shape, kind) to describe an apple.
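The idea of describing an object by symbols can be made concrete with a small sketch; the attribute names and the `matches` helper below are illustrative, not from any particular system:

```python
# Symbolic representation: an apple described by attribute-value pairs.
apple = {"kind": "fruit", "color": {"red", "yellow", "green"}, "shape": "roundish"}

def matches(entity, kind=None, color=None):
    # A symbolic query: compare the stored symbols against the asked-for ones.
    if kind is not None and entity.get("kind") != kind:
        return False
    if color is not None and color not in entity.get("color", set()):
        return False
    return True
```

For example, `matches(apple, kind="fruit", color="red")` is True, while `matches(apple, color="blue")` is False: the system reasons purely over symbols, not over any image of an apple.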
8.Explain in detail the characteristics to be analysed for solving problems in AI
The seven characteristics to analyse problem-solving in AI are:
→ Some problems can be broken down into smaller sub-problems, solved separately,
and combined to form the final solution.
→ If a problem is decomposable, AI can use divide-and-conquer techniques.
→ If a problem is not decomposable, AI must solve it as a whole.
→ Absolute Solution: Any solution that satisfies the goal is equally acceptable.
→ Relative Solution: The best solution depends on the situation and constraints.
→ State-based problems: The final state matters, not how AI got there.
→ Path-based problems: The steps taken to reach the goal are important.
Some problems require prior knowledge, while others can be solved purely by reasoning
or search.
This is the water jug problem, a classic example of a state-space search problem in AI. The
objective is to measure exactly 4 gallons of water using only a 5-gallon jug and a 3-gallon jug.
Problem Representation
• State: (x, y), where x is the amount of water in the 5-gallon jug and y the amount in
the 3-gallon jug.
• Allowed Actions:
1. Fill a jug completely from the pump.
2. Empty a jug completely onto the ground.
3. Transfer water from one jug to another until either the receiving jug is full or the
pouring jug is empty.
(0,0)
↓ Fill 5-gallon
(5,0)
↓ Transfer to 3-gallon
(2,3)
↓ Empty 3-gallon
(2,0)
↓ Transfer to 3-gallon
(0,2)
↓ Fill 5-gallon
(5,2)
↓ Transfer to 3-gallon
(4,3) → Goal reached: the 5-gallon jug holds exactly 4 gallons.
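The state-space search above can be automated with breadth-first search over (x, y) states; this is a minimal sketch with the jug capacities (5 and 3 gallons) hard-coded:

```python
from collections import deque

# A state is (x, y): gallons currently in the 5-gallon and 3-gallon jugs.
def successors(state):
    x, y = state
    return {
        (5, y),                                   # fill 5-gallon
        (x, 3),                                   # fill 3-gallon
        (0, y),                                   # empty 5-gallon
        (x, 0),                                   # empty 3-gallon
        (x - min(x, 3 - y), y + min(x, 3 - y)),   # pour 5-gallon into 3-gallon
        (x + min(y, 5 - x), y - min(y, 5 - x)),   # pour 3-gallon into 5-gallon
    }

def solve(start=(0, 0), goal_amount=4):
    """Breadth-first search; returns a shortest sequence of states to the goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if goal_amount in path[-1]:
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None
```

Because BFS explores states level by level, the first solution it returns uses the fewest moves; the six-move trace above is one such shortest solution.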
Yes, the problem can be decomposed into smaller subproblems, such as:
→ Feature extraction: Identifying measurable properties of each shape (e.g., number
of corners, side lengths).
→ Learning: Using labelled data (shapes with known categories) to teach the toy
to recognize new shapes.
→ Classification: Assigning a category (circle, square, or triangle) to a new shape
based on its features.
These subproblems are manageable and can be solved individually to form a
complete solution.
No, each step in teaching the toy is important. Ignoring or undoing previous steps could lead
to incorrect recognition or misclassification of shapes. For example, skipping the feature
extraction phase may prevent the toy from distinguishing between different shapes.
Yes, but with some assumptions. The problem's universe is largely predictable if we assume
the shapes presented are well-formed and their features do not change between observations.
Relative, because the definition of a "good solution" is relative here. The toy doesn't need to
achieve perfect recognition, but rather recognition that is accurate enough within a given
threshold (e.g., 90% accurate), so what counts as a good solution varies with the accuracy
the application requires.
State, the solution is primarily a state—once the toy recognizes a shape (e.g., circle, square,
or triangle), it has reached a final classification state. There isn’t a need for a continuous
"path" of intermediate steps beyond the initial recognition; once a shape is identified, the
task is complete.
High importance, knowledge plays a crucial role in the shape recognition task:
→ Feature knowledge: Understanding the properties that define each shape (e.g., a
square has four equal sides, a triangle has three sides, and a circle has no corners).
→ Training data: The toy needs a dataset of labelled shapes (with corresponding
features) to learn the patterns and recognize new shapes.
→ Algorithmic knowledge: The toy needs knowledge of how to apply machine learning
algorithms (e.g., decision trees, neural networks, etc.) for classification.
Yes, during the learning phase. Initially, the toy will need to interact with a person to learn
from labelled examples (supervised learning). The person might provide the toy with
different shapes, ensuring it learns to classify them correctly. However, once the toy is
trained, it may not need interaction unless it requires corrections or additional training data.
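One way the learning phase described above could work is nearest-neighbour classification over simple numeric features; the feature choices (corner count, side-length ratio) and the labelled training examples below are hypothetical:

```python
# Hypothetical feature vectors: (number of corners, side-length ratio).
# The labelled examples stand in for the toy's supervised learning phase.
training_data = [
    ((0, 1.00), "circle"),
    ((4, 1.00), "square"),
    ((3, 1.00), "triangle"),
    ((0, 0.98), "circle"),
    ((4, 1.02), "square"),
    ((3, 0.97), "triangle"),
]

def classify(features):
    """1-nearest-neighbour: return the label of the closest training example."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    nearest = min(training_data, key=lambda ex: dist(ex[0], features))
    return nearest[1]
```

A new shape with four corners and roughly equal sides, e.g. `classify((4, 1.01))`, is labelled `"square"`; adding corrected examples to `training_data` corresponds to the retraining interaction mentioned above.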
a. Vacuum cleaner agent without sensors
→ The environment is partially observable because the vacuum cleaning agent cannot
sense the state of the environment (whether a specific space is dirty or clean).
→ The agent's actions are deterministic (e.g., moving left or right, cleaning a spot), but
without sensors, it lacks real-time feedback to adjust its actions.
→ The agent cannot see where it has already cleaned, leading to multiple possible
states. Hence, it operates with incomplete information.
b. Vacuum cleaner agent with sensors
→ The environment is now observable because the agent has sensors to detect dirt or
cleanliness in its surroundings.
→ The state is deterministic because the agent can predict the outcome of its actions (if
it cleans a dirty spot, it becomes clean).
→ The agent can make decisions based on its observations, so there's a single state to
determine the next action (if dirty, clean; if clean, move).
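The sensing agent's decision rule (if dirty, clean; if clean, move) is exactly a simple reflex agent; a sketch for a hypothetical two-location world with squares A and B might look like this:

```python
def reflex_vacuum_agent(location, is_dirty):
    """Simple reflex agent: acts only on the current percept (location, dirt status)."""
    if is_dirty:
        return "Suck"
    # The square is clean, so move toward the other square.
    return "Right" if location == "A" else "Left"
```

Because the rule uses only the current percept, the agent needs no memory of where it has already cleaned, which is precisely what the sensors make possible.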
c. Google Search
d. Travelling Salesman Problem
→ The environment is fully observable because the cities and distances between them
are known upfront.
→ The problem is deterministic because, given a set of cities and distances, the path
taken by the salesman is predictable (although there are many possible paths).
→ The state of the problem is well-defined: the salesman’s current location and the
cities yet to be visited, with no ambiguity in the environment.
→ Once the cities and distances are provided, there's a single state space to explore.
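Because the environment is fully observable and deterministic, a small instance can be solved by exhaustively exploring the state space; the four-city distance table below is made up for illustration:

```python
from itertools import permutations

def tsp_brute_force(distances, start):
    """Enumerate every tour starting and ending at `start`; return the cheapest."""
    cities = [c for c in distances if c != start]
    best_tour, best_cost = None, float("inf")
    for perm in permutations(cities):
        tour = (start,) + perm + (start,)
        # Sum the distance of each leg of the round trip.
        cost = sum(distances[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

# Hypothetical symmetric distance table for four cities.
d = {
    "A": {"B": 10, "C": 15, "D": 20},
    "B": {"A": 10, "C": 35, "D": 25},
    "C": {"A": 15, "B": 35, "D": 30},
    "D": {"A": 20, "B": 25, "C": 30},
}
```

Exhaustive enumeration checks (n-1)! tours, so it only illustrates the single, well-defined state space; realistic instances need heuristics or approximation.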
State Representation:
A state is represented as a 3x3 grid. The grid consists of 9 positions (indexed from 0 to 8),
with each position holding a value from 0 to 8. The value 0 represents the blank space, and
the tiles numbered 1-8 represent the numbered tiles in the puzzle.
Initial State:
The initial state is a random configuration of the tiles (except the goal state configuration).
Actions:
The actions consist of sliding one of the tiles adjacent to the blank space (up, down, left, or
right) into the blank space. A tile can only move if it is adjacent to the blank space.
There are 4 possible actions from any position, but some may be restricted based on the
position of the blank space.
Successor Function:
The successor function generates all possible states that can be reached from a given state
by sliding a tile into the blank space. For each configuration of the board, the possible moves
can be computed based on the position of the blank space.
Goal Test:
The goal is reached when the state of the puzzle matches the goal configuration.
Path Cost:
The cost of each move is typically considered uniform (e.g., each move has a cost of 1).
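The successor function described above can be sketched directly; here a state is a 9-tuple in row-major order, with 0 standing for the blank:

```python
def successors(state):
    """Generate all states reachable by sliding one tile into the blank (0).

    Index i of the 9-tuple maps to row i // 3, column i % 3.
    """
    blank = state.index(0)
    row, col = divmod(blank, 3)
    moves = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:  # stay on the board
            swap = r * 3 + c
            nxt = list(state)
            nxt[blank], nxt[swap] = nxt[swap], nxt[blank]
            moves.append(tuple(nxt))
    return moves
```

With the blank in the centre all 4 actions apply, while a corner blank allows only 2, matching the restriction noted above; each generated move has uniform cost 1.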
5.Give PEAS description for online bus reservation system
The PEAS (Performance Measure, Environment, Actuators, and Sensors) framework is used
to define the working of an intelligent agent. Here’s how it applies to an Online Bus
Reservation System:
1. Performance Measure:
Number of successful bookings, booking accuracy (correct seat, date, and route),
response time, payment security, and customer satisfaction.
2. Environment:
The system operates in an environment consisting of:
Users (passengers), bus operators, bus schedules and routes, seat-availability
databases, and payment gateways.
3. Actuators:
Displaying search results and seat maps, generating and issuing e-tickets, updating
the seat-availability database, and sending confirmation messages or emails.
4. Sensors:
User input through keyboard, mouse, or touchscreen (selected route, date, seat, and
payment details), and database queries for schedules and seat status.
6.Formulate the problem for the following and analyse their problem types
a.CHESS game ; b.Blocks world
a. Chess Game
1. State Representation:
→ A state in chess is just the arrangement of pieces on the board at any point in
time.
→ For example, a state would show where each piece (like a pawn, knight, or
queen) is placed on the 8x8 board. It also keeps track of whose turn it is (white or
black).
2. Initial State:
→ The initial state is the starting position in a chess game, where the pieces are in
their standard places at the beginning of the game.
→ White pieces are in the first two rows, black pieces are in the last two rows, and
both players have their pieces in the correct spots to start.
3. Actions:
The actions are the moves players make. For example, a pawn can move one square
forward, or a knight can move in an "L" shape. Each piece has specific rules for how it
can move.
4. Transition Function:
The transition function is how the board changes after a move. If a player moves a
piece, the board updates to reflect that move. For example, if a knight moves, the
new board layout will show the knight in its new position.
5. Goal State:
The goal in chess is to checkmate the opponent’s king, which means you put their
king in a position where it can't escape capture. When that happens, the game is
won.
6. Path Cost:
Path cost is how we count the number of moves in the game. In chess, each move
counts as one action, so the cost is simply the number of moves made to reach a goal
state (e.g., checkmate).
Problem Type:
→ Fully Observable: Both players can see the entire state of the board at all times.
→ Multi-agent and Adversarial: The outcome also depends on the opponent, whose
future moves cannot be predicted.
b. Blocks World
1. State Representation:
A state in Blocks World shows where each block is and whether it's stacked on
another block or sitting on the table. It also tells whether a block is clear (meaning
nothing is on top of it).
2. Initial State:
The initial state is when all the blocks are on the table, and no blocks are stacked on
top of each other. The blocks are also "clear," meaning you can stack things on top of
them.
3. Actions:
The actions are Stack (put one clear block on top of another clear block), Unstack
(remove a clear block from the top of another), and moving a block onto the table.
4. Transition Function:
The transition function describes what happens when you perform an action. For
example, if you stack one block on top of another, the state changes, and the new
configuration shows that one block is now stacked on top of the other.
5. Goal State:
The goal state is a specific arrangement of blocks. For example, the goal might be to
have Block A on top of Block B and Block B on top of Block C, or any other
arrangement specified at the start.
6. Path Cost:
Path cost refers to how many actions you need to take to get to the goal state. Each
move, like stacking or unstacking a block, has a cost of one. So, the total path cost is
the total number of actions needed to reach the goal.
Problem Type:
→ Fully Observable: The positions of all blocks are known at all times.
→ Deterministic: Each action has a single, predictable outcome.
→ Single-agent: Only one agent manipulates the blocks.
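A Stack action together with its preconditions can be sketched as follows; the state encoding, mapping each block to what it rests on, is one possible choice (moving a block onto the table would be a separate Putdown action):

```python
def stack(state, block, target):
    """Apply Stack(block, target) if its preconditions hold, else return None.

    `state` maps each block to what it rests on ("table" or another block).
    A block is clear when no other block in the state rests on it.
    """
    def is_clear(b):
        return all(on != b for on in state.values())

    # Preconditions: both blocks are clear and distinct.
    if block == target or not (is_clear(block) and is_clear(target)):
        return None
    new_state = dict(state)      # states are immutable snapshots; copy, don't mutate
    new_state[block] = target
    return new_state
```

Each successful call is one unit-cost step, so a plan's path cost is simply the number of `stack`/unstack applications needed to reach the goal arrangement.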