What Is Artificial Intelligence
Artificial Intelligence, or AI, means making machines or computers smart so they can do tasks the way humans do. With the help of AI, machines can think, learn, understand, and make decisions. AI is broadly divided into two types:
1. Narrow AI – This AI does only one specific task, like face detection or translation.
2. General AI – This is future AI that will be able to do anything a human can do.
Foundations of AI
The foundation of Artificial Intelligence (AI) rests on several subjects that help machines think and act smart, including mathematics and logic, computer science, psychology, linguistics, and neuroscience.
History of AI
1. 1950 – Alan Turing introduced the idea of a machine thinking like a human (Turing Test).
2. 1956 – The term "Artificial Intelligence" was first used at the Dartmouth Conference.
3. 1960s-70s – Early AI programs were developed for solving math problems and playing games like chess.
4. 1980s – Expert systems were created to solve real-life problems using rules.
5. 1997 – IBM’s Deep Blue defeated world chess champion Garry Kasparov.
6. 2010s – Machine Learning and Deep Learning became popular, leading to smart
assistants and self-driving cars.
7. Now – AI is used in almost every field – healthcare, transport, mobile apps, education,
electronics, and more.
AI Agent – Notes
Definition:
An AI agent is a program that perceives its environment (through sensors or other inputs) and takes actions to achieve its goals. For example, an online-shopping agent can:
o Recommend products
o Process orders
Technologies Used:
o Rule-based systems
Problem-Solving Agents in AI
A problem-solving agent is a type of AI agent that takes a given problem, searches for solutions,
and selects the best one based on its goals. These agents are designed to solve problems step
by step by applying logical reasoning or algorithms.
1. Goal-Oriented:
o The agent works toward a clearly defined goal and chooses actions that move it closer to that goal.
2. Perception and Action:
o The agent perceives the environment using sensors and performs actions based on its analysis.
o Example: In chess, the agent evaluates the game state (perception) and makes the best move (action).
3. Search:
o The agent searches through possible states to find a sequence of actions leading to the goal.
o Example: A navigation system finding the shortest path between two locations.
Components of Problem-Solving:
1. Initial State:
o The starting situation of the agent (e.g., the starting position in a maze).
2. Actions:
o The possible steps or moves the agent can take (e.g., moving forward, turning left
or right).
3. State Space:
o The collection of all possible states the agent can reach from the initial state.
4. Goal State:
o The desired end condition the agent is trying to achieve (e.g., reaching the goal in
a maze).
Search Strategies Used:
1. Uninformed Search:
o Breadth-First Search: Explores all possible moves at one level before moving deeper.
2. Informed Search:
o A* Search: Uses a heuristic to find the optimal path by considering the cost so far and the estimated distance to the goal.
3. Local Search:
o This involves moving from one state to a neighbouring state, using limited memory; it is often used in optimization problems.
Applications:
Puzzle Solvers: Solving puzzles like the 8-puzzle or Rubik’s cube using problem-solving strategies.
Game AI: AI in board games like chess or tic-tac-toe, where the agent searches for the
best possible moves.
Conclusion:
Problem-solving agents are key to many AI applications where a specific goal must be reached,
and they rely on structured methods like search algorithms to find solutions. By using the right
search techniques, these agents can efficiently solve complex problems.
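The components above (initial state, actions, state space, goal test) can be sketched as a small breadth-first problem solver. The 3x3 grid "maze" below is an illustrative assumption, not an example from the notes:

```python
from collections import deque

def bfs_solve(initial, goal, actions):
    """Breadth-first search over a state space.
    `actions(state)` returns the states reachable in one step."""
    frontier = deque([[initial]])          # paths to expand, shortest first
    visited = {initial}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:                  # goal test
            return path
        for nxt in actions(state):         # successor function
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                            # no solution exists

# Toy maze on a 3x3 grid: states are (row, col); moves are up/down/left/right.
def moves(state):
    r, c = state
    steps = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(nr, nc) for nr, nc in steps if 0 <= nr < 3 and 0 <= nc < 3]

print(bfs_solve((0, 0), (2, 2), moves))   # a shortest path: 5 states, 4 moves
```

Because BFS expands states level by level, the first path that reaches the goal is guaranteed to be a shortest one.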
Problem Formulation and Search Strategies in AI
When we want to solve a problem using AI, we need to break it down into smaller parts that the
AI can understand and act on. This process is called problem formulation. It includes:
1. Initial State:
2. Actions:
3. State Space:
4. Goal State:
o Example: In the puzzle, the goal is to arrange the pieces in the correct order.
5. Path Cost:
o The cost (like distance or time) to go from the starting point to the goal.
To find a solution, the AI agent needs a search strategy to explore all possible actions. There are
different types:
1. Uninformed (Blind) Search:
These strategies explore without extra information about the goal (e.g., Breadth-First and Depth-First Search).
2. Informed (Heuristic) Search:
These strategies use extra information (called heuristics) to make better decisions faster.
1. A* Search:
o Combines the path cost already travelled with a heuristic estimate of the remaining distance to find the best path.
3. Local Search:
1. Hill-Climbing:
o Keeps moving to a neighbouring state that looks better, like climbing uphill step by step.
o Cons: Can get stuck in a local solution (not the best overall).
2. Simulated Annealing:
o Like hill-climbing, but sometimes accepts worse moves (with a probability that decreases over time) so it can escape local solutions.
Conclusion:
Problem formulation breaks the problem down so the AI can solve it. Search strategies help the
AI explore different solutions and find the best one.
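Hill-climbing can be sketched in a few lines: propose a random neighbour and keep it only if it scores higher. The one-peak function below is an assumed toy example, chosen so the climb cannot get trapped:

```python
import random

def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy hill-climbing: keep a random neighbour only if it scores higher."""
    for _ in range(iters):
        neighbour = x + random.uniform(-step, step)
        if f(neighbour) > f(x):   # accept only improvements
            x = neighbour
    return x

# Maximise f(x) = -(x - 3)^2, which has a single peak at x = 3.
random.seed(0)                    # fixed seed so the run is repeatable
best = hill_climb(lambda x: -(x - 3) ** 2, x=0.0)
print(round(best, 2))             # close to 3.0
```

With a multi-peak function the same loop can stall on a local maximum, which is exactly the "Cons" noted above and the motivation for simulated annealing.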
Knowledge-based Agents in AI
A knowledge-based agent is an AI that uses stored knowledge to make decisions and take
actions. Instead of just reacting to immediate information, it uses facts and rules it has learned
to figure out what to do.
Main Components:
1. Knowledge Base:
o This is where the agent stores information about the world (facts and rules).
2. Inference Engine:
o This part of the agent makes decisions by using the information in the knowledge
base. It applies rules and makes conclusions.
o Example: If the knowledge base says “It’s raining,” the inference engine can
decide that the agent should take an umbrella.
3. Actions:
o The agent acts on the conclusions drawn by the inference engine.
o Example: The agent will grab the umbrella if it knows it’s raining.
How It Works:
1. Perception:
o The agent observes its environment (like reading sensors or getting input from
users).
2. Knowledge Representation:
o The agent stores facts about the world in its knowledge base.
3. Reasoning:
o The inference engine combines stored rules with what the agent perceives to reach a conclusion.
o Example: If it knows “If it rains, take an umbrella,” and it senses that it’s raining, it will decide to take the umbrella.
4. Action:
o The agent takes action based on the decision. In this case, it will grab the
umbrella.
Applications:
1. Expert Systems:
o These AI systems help solve complex problems by using a large knowledge base.
2. Virtual Assistants:
o These assistants use stored knowledge to answer questions or help with tasks.
o Example: If you ask Alexa for the weather, it uses its knowledge to answer.
Advantages:
o These agents can learn new things and improve over time.
o They can solve harder problems that need more than just basic rules.
Conclusion:
Knowledge-based agents use information stored in a knowledge base to make smarter decisions
and take actions. They are good at handling complex problems and can improve over time.
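A minimal sketch of a knowledge-based agent, using the umbrella example from above: a knowledge base of facts plus if-then rules, and an inference engine that keeps applying rules until nothing new can be derived. The fact and rule names are illustrative assumptions:

```python
# Knowledge base: facts the agent currently believes, plus if-then rules.
facts = {"raining"}
rules = [
    ({"raining"}, "wet_outside"),       # if raining then it is wet outside
    ({"wet_outside"}, "take_umbrella"), # if wet outside then take an umbrella
]

def infer(facts, rules):
    """Inference engine: if all of a rule's conditions hold, add its conclusion.
    Repeat until no rule adds anything new."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer(facts, rules))  # the agent concludes it should take the umbrella
```

Note how the agent chains two rules: "raining" first yields "wet_outside", and only then does the umbrella rule fire.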
Representation, Reasoning, and Logic in AI
In AI, representation is how we store information, reasoning is how the AI makes decisions based on that information, and logic is the set of rules the AI follows to make sure its decisions are correct.
1. Representation in AI:
Representation is how we organize and store information about the world in a way that
the AI can use. This helps the AI understand and work with the information it has.
Types of Representation:
o Propositional Logic: Stores facts in simple statements (like "The sky is blue").
o Frames: A structure that stores information about objects or events (like a set of
facts about a car—its color, model, and owner).
o Semantic Networks: A graph where concepts (like "dog") are connected to other
concepts (like "animal").
o Rules: If-then statements that describe relationships (like "If it rains, then take an
umbrella").
2. Reasoning in AI:
Reasoning is the process of using the information stored in the representation to make
decisions or solve problems. The AI uses reasoning to figure out what action to take
next.
Types of Reasoning:
o Deductive Reasoning: Starts with general rules and applies them to specific
situations.
Example: “All dogs have four legs. Rex is a dog. Therefore, Rex has four legs.”
o Inductive Reasoning: Generalizes from specific observations to a general rule.
Example: “Every time I see a dog, it has four legs. Therefore, all dogs probably have four legs.”
o Abductive Reasoning: Infers the most likely explanation for an observation.
Example: If a person has a fever, the AI might reason that the person could have the flu.
3. Logic in AI:
Logic is the set of rules AI follows to reason correctly. It ensures that AI can make sound
decisions based on the information it has.
Types of Logic:
o Propositional Logic (Boolean Logic): Deals with simple true or false statements.
o Predicate Logic: Deals with more complex statements involving objects and their
properties (like "Socrates is a human").
o Fuzzy Logic: Used when information is uncertain or vague. For example, "The
temperature is a little hot."
o Modal Logic: Deals with possibilities, necessities, and time (like "It is possible
that it will rain tomorrow").
How They Work Together:
Representation stores facts about the world (like "The sky is blue").
Reasoning uses these facts to make decisions (like "If it's blue, it's clear outside").
Logic helps the AI reason correctly (like "If it's clear, I can go outside").
Conclusion:
Representation stores knowledge about the world, reasoning uses that knowledge to make decisions, and logic makes sure the AI’s decisions are correct and valid.
First-Order Logic (FOL)
FOL extends propositional logic by incorporating quantifiers and predicates, making it more expressive.
Functions: Map objects to other objects (Example: MotherOf(x) denotes the mother of x).
Quantifiers: Define the scope of variables:
o Universal (∀): “for all” (Example: ∀x Human(x) → Mortal(x)).
o Existential (∃): “there exists” (Example: ∃x Likes(x, IceCream), meaning “someone likes ice cream”).
FOL's syntax defines how to construct valid expressions, while semantics assigns meaning to
them. An interpretation provides a domain of discourse and assigns meaning to constants,
functions, and predicates.
For example, in the domain of natural numbers, the predicate GreaterThan(x, y) holds if x is
greater than y.
Natural Language Processing (NLP): Structuring and understanding language for tasks
like machine translation and question answering.
Expert Systems: Encoding knowledge to infer decisions, such as legal rule-based AI.
This demonstrates how FOL enables logical reasoning to derive new knowledge from given
facts.
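Over a small finite domain, FOL-style statements can be checked directly: Python's `all()` plays the role of the universal quantifier and `any()` the existential one. The GreaterThan predicate matches the natural-numbers example above; the 0..9 domain is an assumption for illustration:

```python
# Checking FOL-style statements over a small finite domain.
domain = range(10)                    # natural numbers 0..9

def GreaterThan(x, y):                # a predicate, as in the example above
    return x > y

# ∃x GreaterThan(x, 5): "some number is greater than 5"
print(any(GreaterThan(x, 5) for x in domain))       # True

# ∀x GreaterThan(x + 1, x): "every number's successor is greater than it"
print(all(GreaterThan(x + 1, x) for x in domain))   # True

# ∀x GreaterThan(x, 5): false, since not every number exceeds 5
print(all(GreaterThan(x, 5) for x in domain))       # False
```

Real FOL allows infinite domains, which is precisely where this brute-force checking stops working and why FOL inference needs rules like resolution.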
Resolution: A rule of inference for theorem proving, used to derive contradictions and
validate statements.
Logic Programming: Used in languages like Prolog for declarative AI applications in NLP
and expert systems.
Expressiveness vs. Decidability: While powerful, FOL is undecidable, meaning not every statement can be resolved algorithmically.
Handling Uncertainty: FOL lacks probabilistic reasoning, requiring extensions like fuzzy
logic or probabilistic logic.
Belief Networks (Bayesian Networks)
Definition:
A Belief Network (also known as a Bayesian Network) is a probabilistic graphical model that
represents a set of random variables and their conditional dependencies using a directed
acyclic graph (DAG). It is used to model uncertainty in decision-making, reasoning, and
prediction by representing relationships between different variables.
Components of a Belief Network:
1. Nodes:
o Each node represents a random variable (for example, Rain, Traffic, or Accident).
2. Edges:
o The edges (arrows) show the dependencies between the nodes. An edge from
one node to another means that the value of one node (random variable)
influences the other.
3. Conditional Probability Tables (CPTs):
o Each node has a CPT which specifies the probability of that node's outcome based on the values of its parent nodes. It describes how likely each value of the node is, depending on the information coming from the connected nodes.
4. Directed Acyclic Graph (DAG) Structure:
o The network is directed (edges point in one direction) and acyclic (no loops or cycles). This ensures clear one-way dependencies between nodes.
Working of a Belief Network:
1. Build the structure:
o Define the variables (nodes) and the relationships between them (edges).
o Example: Rain → Traffic, Traffic → Accident
2. Assign probabilities:
o Each node gets a CPT. For Traffic, we might have a 70% chance of traffic if it rains, and 30% if it doesn’t.
3. Perform inference:
o If it rains, the chance of traffic increases, and this in turn affects the accident probability.
4. Make decisions:
o The network can now help make decisions. For example, if the probability of an
accident is high due to rain and traffic, the system might recommend an alternate
route.
Rain affects the Traffic, and Traffic affects the probability of an Accident.
The edges represent how the variables are dependent on each other.
Advantages of Belief Networks:
1. Handles Uncertainty:
o They use probabilities, so they work well even when information is incomplete or uncertain.
2. Flexible Representation:
o They can represent complex relationships between variables. For example, one
node can depend on several others, and those nodes can be interconnected.
3. Supports Reasoning:
o Allows reasoning about the values of variables based on the observed data. This
is useful for decision-making, predictions, and diagnosis.
4. Probabilistic Inference:
o Given observed evidence, the network can compute the probability of unobserved variables.
Disadvantages of Belief Networks:
1. Computational Complexity:
o As the number of nodes and relationships grows, the network becomes more complex, and inference can become slow and computationally expensive.
2. Difficulty in Building:
o Designing the structure and filling in the CPTs requires expert knowledge.
3. Data Requirements:
o Estimating accurate probabilities needs large amounts of reliable data.
4. Limited to Probabilistic Reasoning:
o Belief networks are limited to handling probabilistic reasoning and might not perform well in situations where deterministic reasoning is required.
Conclusion:
Belief networks are powerful tools for making decisions under uncertainty. They provide a way
to model complex relationships between variables and use probability to make inferences.
Despite their advantages, they can be complex to build and may require significant
computational resources for large networks.
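The Rain → Traffic → Accident chain above can be computed by hand: summing the joint probability over every combination of parent values (marginalisation). The Traffic CPT uses the 70%/30% figures from the notes; P(Rain) and the Accident CPT are assumed numbers for illustration:

```python
# Rain -> Traffic -> Accident with illustrative probabilities.
P_rain = 0.3                                  # assumed prior P(Rain)
P_traffic = {True: 0.7, False: 0.3}           # P(Traffic | Rain) from the notes
P_accident = {True: 0.2, False: 0.05}         # assumed P(Accident | Traffic)

# Marginalise: P(Accident) = sum over rain, traffic of the joint probability.
p_accident = 0.0
for rain in (True, False):
    p_r = P_rain if rain else 1 - P_rain
    for traffic in (True, False):
        p_t = P_traffic[rain] if traffic else 1 - P_traffic[rain]
        p_accident += p_r * p_t * P_accident[traffic]

print(round(p_accident, 4))  # 0.113
```

The DAG structure is what keeps this cheap: Accident only needs its parent Traffic's value, not Rain's, so each CPT stays small even as the network grows.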
Learning in Neural Networks
What is it?
Learning in neural networks is the process by which a computer system (the neural network)
learns to recognize patterns and make predictions by processing data. The system gets better
over time by adjusting its "connections" (called weights) based on the data it sees.
Key Components:
1. Neurons:
o Neurons are like tiny decision-making units in the network. They process information and send it to other neurons.
2. Weights:
o Weights are like the strength of the connection between neurons. The stronger
the weight, the more impact one neuron has on the next one.
3. Bias:
o Bias helps to adjust the output of a neuron. It ensures the neuron can give the
right result even if the input is zero.
4. Activation Function:
o This is the function that decides whether a neuron should be activated or not. It
helps the neural network make decisions based on the input.
5. Layers:
o Neurons are arranged in layers: an input layer, one or more hidden layers, and an output layer.
How Learning Works:
1. Forward Propagation:
o The input data goes through the network and gets processed. Each neuron does
a simple calculation, and the result moves to the next layer until the final output
is produced.
2. Error Calculation:
o The network’s output is compared with the correct answer to measure the error.
3. Backpropagation:
o The network works backward from the output layer to adjust the weights to
reduce the error. It learns from its mistakes and tries to improve.
4. Epochs:
o The network repeats this process many times (called epochs) to keep improving.
Types of Learning:
1. Supervised Learning:
o The network learns from labeled data (data where the correct answer is already
provided).
o Example: Predicting the price of a house based on its features like size and
location.
2. Unsupervised Learning:
o The network learns from data that doesn't have correct answers. It tries to find
patterns or groups in the data.
3. Reinforcement Learning:
o The network learns by trial and error, receiving rewards or punishments for its
actions.
Advantages:
1. Handles Complex Data:
o Neural networks can process complex data like images, sounds, and text.
2. Improves with Data:
o The more data the network gets, the better it becomes at making predictions.
Disadvantages:
1. Needs Lots of Data:
o To train a neural network effectively, you need a lot of data. If there isn’t enough data, the network won’t perform well.
2. Slow to Train:
o Training neural networks can be slow and require a lot of computing power (especially for complex networks).
3. Can Overfit:
o If the network learns too much from the training data, it might perform poorly on
new data. This is called overfitting.
4. Hard to Understand:
o It can be difficult to figure out how the network makes decisions because it's like
a "black box."
Conclusion:
Neural networks are powerful tools that can learn to make predictions by looking at data and
adjusting their internal settings. They are great at tasks like recognizing images, predicting
trends, and even playing games. However, they require lots of data and time to train, and
sometimes it’s hard to understand how they make their decisions.
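The full learning loop (forward propagation, error calculation, backward weight update, repeated over epochs) fits in a few lines for a single neuron. The task, learning y = 2x, is an assumed toy example:

```python
# A single neuron (one weight, one bias) learning y = 2x by gradient descent.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, correct answer)
w, b, lr = 0.0, 0.0, 0.05                     # weight, bias, learning rate

for epoch in range(200):                      # epochs: repeat over the data
    for x, target in data:
        y = w * x + b                         # forward propagation
        error = y - target                    # error calculation
        w -= lr * error * x                   # backpropagation step: adjust
        b -= lr * error                       #   weight and bias to cut error

print(round(w, 2), round(b, 2))               # w near 2, b near 0
```

A real network repeats exactly this update for millions of weights across many layers, with an activation function after each neuron; the principle of nudging each weight against its error gradient is unchanged.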
Sampling Rate Conversion
Sampling rate conversion means changing how often a signal is sampled (i.e., how many times per second it is measured). There are two types of sampling rate conversion:
1. Up-sampling: Increasing the number of samples per second (higher sampling rate).
2. Down-sampling: Decreasing the number of samples per second (lower sampling rate).
This is important for systems where signals need to be adjusted to match the required rate, like
in audio processing or telecommunications.
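The two operations can be sketched directly: down-sampling by an integer factor M keeps every M-th sample, and up-sampling by a factor L here fills the gaps with simple linear interpolation. This is a bare-bones illustration; a real converter would also apply an anti-aliasing filter before down-sampling and a proper interpolation filter when up-sampling:

```python
# Integer-factor sampling rate conversion, simplest possible form.
def downsample(signal, M):
    """Keep every M-th sample (no anti-aliasing filter in this sketch)."""
    return signal[::M]

def upsample_linear(signal, L):
    """Insert L-1 linearly interpolated samples between each original pair."""
    out = []
    for a, b in zip(signal, signal[1:]):
        for k in range(L):                    # L points per original gap
            out.append(a + (b - a) * k / L)
    out.append(signal[-1])
    return out

x = [0.0, 2.0, 4.0, 6.0]
print(downsample(x, 2))        # [0.0, 4.0]
print(upsample_linear(x, 2))   # [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```

Skipping the anti-aliasing filter is exactly what causes the distortion mentioned under "Challenges": any frequency content above half the new rate folds back into the signal.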
Challenges in Sampling Rate Conversion
1. Computation: Changing the sample rate can take a lot of calculations, especially when
converting by large factors.
2. Quality: If not done carefully, converting the sample rate can cause distortions or loss of
information.
To make the process more efficient and less computationally expensive, several techniques are
used:
1. Polyphase Filters
What It Does:
Polyphase filters help in up-sampling and down-sampling by splitting the filter into parts (phases), reducing the number of calculations needed.
Why It's Good:
Avoids wasted work: samples that would be discarded, or that are known zeros, are never computed.
2. CIC (Cascaded Integrator-Comb) Filters
What It Does:
CIC filters are a special type of filter that can quickly change the sampling rate, especially when the ratio is an integer (like 2x, 3x, etc.).
Why It's Good:
Very simple and fast, since they don’t require any multiplications.
3. FIR Filters
What It Does:
FIR filters are used for both up-sampling and down-sampling. They insert extra samples (for up-sampling) or discard samples (for down-sampling) while applying a filter to smooth the signal.
Why It's Good:
Works well for both integer and non-integer sampling rate changes.
4. Lagrange Interpolation
What It Does:
This method fits a smooth polynomial through the original data points when up-sampling, estimating the intermediate values.
Why It's Good:
Gives smooth estimates without needing a separately designed filter.
5. FFT-Based Conversion
What It Does:
It works by transforming the signal into the frequency domain (where the signal's frequencies are represented) and adjusting the sampling rate before transforming it back to the time domain.
Why It's Good:
Handles awkward conversion ratios and long signals efficiently, since the whole spectrum is processed at once.
Conclusion
Efficient sampling rate converters are essential for systems that need to adjust signal rates
quickly and with high quality. Methods like polyphase filters, CIC filters, and FFT-based
methods offer a balance between speed and quality. Choosing the right method depends on
how much the sampling rate needs to change, the type of signal, and the resources available.
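The Lagrange interpolation mentioned above can be sketched directly: the interpolating polynomial is a sum of basis polynomials, one per data point. The sample points (taken from y = x squared) are an assumption for illustration:

```python
def lagrange(points, x):
    """Evaluate the Lagrange interpolating polynomial through `points` at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)  # basis polynomial L_i(x)
        total += term
    return total

# Three samples of y = x^2; estimate the in-between value at x = 1.5.
pts = [(0, 0.0), (1, 1.0), (2, 4.0)]
print(lagrange(pts, 1.5))  # 2.25, exact here since the data really is quadratic
```

In an up-sampler, this is evaluated at each new sample instant between the original samples; low orders (3 or 4 points) are typical, since high-order polynomials can oscillate.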
Spline Interpolation
Spline interpolation is a method used to draw smooth curves through a set of data points. Unlike simple straight lines, spline interpolation makes sure the curve is smooth at every point. It's like connecting dots with a smooth, curvy line instead of sharp corners.
How It Works:
1. Breaking the Curve into Pieces: Instead of drawing one curve for all the points, spline interpolation breaks the curve into small sections, each between two points, and uses a curve (polynomial) for each section.
2. Smooth Joining: The curves at each point are joined in a way that there are no sharp
turns or sudden jumps. The line flows smoothly from one point to the next.
Types of Splines:
1. Linear Spline: Connects each pair of points with straight lines; simple, but the joins are not smooth.
2. Cubic Spline: The most common and smooth type. It uses a special kind of curve (called a cubic polynomial) between each pair of points, making the overall curve look smooth and nice.
Key Terms:
Knots: The points where the curves meet (your data points).
Polynomial: The kind of curve (like cubic) used to connect the points.
Boundary Conditions: These set the behavior of the curve at the ends (for example, how
steep the curve should start or end).
Advantages:
1. Smooth Curves: Spline interpolation creates smooth curves instead of jagged ones, making the result look more natural.
2. Better Fit: It gives a more accurate curve that better follows the data compared to
simple straight lines.
3. Flexible: You can change how the curve behaves at the ends and adjust it for different
needs.
Disadvantages:
1. More Complex: It takes more time and computing power than just drawing straight lines between points.
2. Can Overdo It: Sometimes the curve might get too "wiggly" and not fit the general trend
of the data.
3. Edge Effects: The curve's behavior at the ends can be tricky and might not always look
good if not handled properly.
Simple Example
If you have the points (1, 2), (2, 3), (3, 5), and (4, 4), a spline interpolation will smoothly connect
these points with a smooth curve. It won't just connect them with straight lines but will make
sure the curve flows smoothly between each point.
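A natural cubic spline (second derivative zero at both ends, one common choice of boundary condition) can be built for exactly these four points. This is a from-scratch sketch using the standard tridiagonal (Thomas) solve for the interior second derivatives:

```python
from bisect import bisect_right

def natural_cubic_spline(xs, ys):
    """Natural cubic spline through the knots (xs, ys); returns an evaluator."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal system for interior second derivatives M[1..n-1].
    b = [1.0] * (n + 1)
    c = [0.0] * (n + 1)
    d = [0.0] * (n + 1)
    for i in range(1, n):
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
        w = h[i - 1] / b[i - 1]          # forward elimination (Thomas algorithm)
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * (n + 1)                  # natural ends: M[0] = M[n] = 0
    for i in range(n - 1, 0, -1):        # back substitution
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]

    def spline(x):
        i = min(max(bisect_right(xs, x) - 1, 0), n - 1)   # which segment?
        t = x - xs[i]
        C = (ys[i + 1] - ys[i]) / h[i] - h[i] * (2 * M[i] + M[i + 1]) / 6
        return ys[i] + C * t + M[i] / 2 * t**2 + (M[i + 1] - M[i]) / (6 * h[i]) * t**3
    return spline

s = natural_cubic_spline([1, 2, 3, 4], [2, 3, 5, 4])
print([round(s(v), 2) for v in (1, 2, 3, 4)])  # hits every knot: [2.0, 3.0, 5.0, 4.0]
print(round(s(2.5), 2))                        # smooth in-between value: 4.15
```

The curve passes exactly through all four knots, and both the slope and the curvature match where neighbouring cubic pieces join, which is what makes it look smooth.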
Conclusion
Spline interpolation is a way to make smooth curves through points, creating a much nicer-
looking curve than just connecting the dots with straight lines. It's especially useful when you
need a smooth, natural curve, and the most common type is the cubic spline.
Quadrature Mirror Filter Banks (QMF) are tools used in signal processing to break a signal (like
audio or images) into two parts – one with low frequencies and the other with high frequencies.
This helps in processing each part separately, making the overall system more efficient.
How It Works:
1. Splitting the Signal:
o Low-Pass Filter: Lets through the low-frequency parts (like bass sounds in music).
o High-Pass Filter: Lets through the high-frequency parts (like treble sounds in music).
2. Processing:
o After splitting, you can process (compress, analyze, or modify) each part
separately.
3. Recombining:
o Once each part is processed, they are put back together (combined) to form the
original signal or a modified version.
Why QMF Is Useful:
Better Compression: It helps compress data, making it smaller without losing important details.
Avoids Distortion: By carefully splitting and recombining, QMF avoids signal distortions.
Advantages of QMF
1. Faster Processing: Splitting the signal makes each part easier to handle.
2. Better Compression: It helps reduce the size of data, making it easier to store or
transmit.
3. Prevents Errors: It avoids errors (aliasing) that can happen in other methods.
Disadvantages of QMF
1. Complex Design: Designing the filters to split and recombine signals can be tricky.
2. More Work: It takes more effort than simpler methods, so it’s not always the easiest
solution.
Applications of QMF
1. Audio Compression: In formats like MP3, QMF-style filter banks are used to separate sound frequencies for efficient compression.
Example in Audio:
1. Step 1: An audio signal (like a song) is split into low and high-frequency parts.
2. Step 2: Each part is compressed separately.
3. Step 3: After compression, the parts are put back together to form the final audio.
This makes the song smaller in size without losing much quality!
Conclusion
QMF is a technique used to break a signal into low and high-frequency parts, making it easier to
process, store, or transmit. It’s widely used in audio and image compression for better
efficiency.
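The split-then-recombine idea can be shown with the simplest two-band filter pair, the Haar filters (sum and difference of neighbouring samples), which reconstruct the signal perfectly. The test signal is an assumed example:

```python
import math

# Two-band Haar filter bank: split into low/high halves, then reconstruct.
INV_SQRT2 = 1 / math.sqrt(2)

def analyze(x):
    """Split x (even length) into half-rate low and high frequency bands."""
    low  = [(a + b) * INV_SQRT2 for a, b in zip(x[0::2], x[1::2])]  # averages
    high = [(a - b) * INV_SQRT2 for a, b in zip(x[0::2], x[1::2])]  # details
    return low, high

def synthesize(low, high):
    """Recombine the two bands back into the full-rate signal."""
    out = []
    for l, h in zip(low, high):
        out.append((l + h) * INV_SQRT2)    # even sample
        out.append((l - h) * INV_SQRT2)    # odd sample
    return out

x = [1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0]
low, high = analyze(x)
y = synthesize(low, high)
print(all(math.isclose(a, b) for a, b in zip(x, y)))  # True: perfect reconstruction
```

In a compressor, the low and high bands would each be quantized with different precision between `analyze` and `synthesize`; longer QMF filters separate the bands more sharply but follow the same split/process/recombine pattern.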
FIR (Finite Impulse Response) and IIR (Infinite Impulse Response) are types of digital filters used
to modify signals (like sound or images). They remove unwanted noise or allow certain
frequencies to pass through.
1. FIR (Finite Impulse Response) Filters
What It Is: An FIR filter processes a signal by using only the current and previous input values (not the past outputs).
How It Works: It takes the input signal and combines it with the filter’s coefficients
(numbers that determine the filter's behavior).
Mathematical Formula:
y[n] = b0·x[n] + b1·x[n−1] + b2·x[n−2] + …
Where:
x[n] is the input signal, y[n] is the output signal, and b0, b1, b2, … are the filter coefficients.
Key Points:
It has a finite response to an impulse: the output settles back to zero after a finite number of samples.
Advantages:
Always stable (no feedback), so there is no risk of instability.
Can be designed to have exactly linear phase (no phase distortion).
Disadvantages:
Requires more processing power, because it needs more coefficients to achieve sharp filtering.
2. IIR (Infinite Impulse Response) Filters
What It Is: An IIR filter uses both the current and past input values, plus past output values (this feedback makes the response "infinite").
How It Works: It takes input signals, processes them, and then feeds the result (the output) back to influence future outputs.
Mathematical Formula:
y[n] = b0·x[n] + b1·x[n−1] + … − a1·y[n−1] − a2·y[n−2] − …
Where:
the b coefficients weight the inputs and the a coefficients weight the past outputs (the feedback).
Key Points:
It has an infinite response to impulses (it keeps going after being triggered).
Advantages:
Very efficient: achieves sharp filtering with far fewer coefficients than an FIR filter.
Disadvantages:
Can become unstable if not designed carefully, and usually has non-linear phase.
FIR vs IIR Comparison:
Feature | FIR Filters | IIR Filters
Phase Response | Can have linear phase | May have non-linear phase
Implementation | Simple to design and implement | More complex to design and implement
Summary:
FIR Filters are simple, always stable, and preserve the signal shape but need more
resources (more calculations).
IIR Filters are more efficient, need fewer resources, but can be tricky to design and may
cause problems like distortion or instability.
Both types of filters are useful in different situations depending on what you need to do with
the signal.
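Both difference equations can be run directly. Feeding each filter a single impulse makes the finite/infinite distinction visible; the coefficient values are assumed toy examples:

```python
# FIR: output depends only on current and past inputs (the b coefficients).
def fir(x, b):
    y = []
    for n in range(len(x)):
        y.append(sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0))
    return y

# IIR: output also feeds back through the a coefficients (a[0] assumed 1).
def iir(x, b, a):
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc)
    return y

impulse = [1.0, 0.0, 0.0, 0.0, 0.0]
print(fir(impulse, b=[0.5, 0.5]))            # finite: [0.5, 0.5, 0.0, 0.0, 0.0]
print(iir(impulse, b=[1.0], a=[1.0, -0.5]))  # infinite: [1.0, 0.5, 0.25, 0.125, 0.0625]
```

The FIR output dies out after two samples; the IIR output keeps decaying forever because each output is recycled into the next one, and with a feedback coefficient of magnitude 1 or more it would grow instead, which is the instability risk.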
Cascaded lattice structures are a way to build FIR (Finite Impulse Response) and IIR (Infinite
Impulse Response) filters by breaking them into small sections called stages. Each stage does a
simple task, and by connecting them together, we can create a more complex filter.
FIR Cascaded Lattice
What It Is: An FIR lattice filter is built from stages that use only the input signal, with no feedback.
How It Works: Each stage performs simple math on the input, like adding or multiplying. All stages together make the full filter.
Example: If we want to filter a signal, we use several stages, each one slightly modifying the signal. The final output is the combined effect of all stages.
Advantages:
Always stable, and less sensitive to small rounding errors in the coefficients.
IIR Cascaded Lattice
What It Is: An IIR lattice filter also has multiple stages, but it uses feedback, meaning the output of one stage affects future stages.
How It Works: Each stage not only uses the input signal but also takes feedback from its
own output. This helps the filter do more complex tasks.
Example: For an IIR filter, each stage takes the input signal and the output from previous
stages to create the final result.
Advantages:
Efficient, and easy to check for stability: keeping each stage coefficient smaller than 1 in magnitude keeps the whole filter stable.
Key Points:
1. Stages: Both FIR and IIR filters are divided into smaller stages.
2. FIR: Each stage uses only the input signal (no feedback).
3. IIR: Each stage uses feedback, meaning the past output affects the future output.
Summary:
FIR Cascaded Lattice: Simple and always stable, but may need more stages for sharp filtering.
IIR Cascaded Lattice: More efficient, can handle complex tasks with fewer stages, but needs careful design to avoid instability.
Both types of lattice structures are used to make filters work better and faster in digital signal
processing.
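The cascade idea itself, each stage's output becoming the next stage's input, can be shown with two tiny FIR stages. This is a sketch of cascading only; a true lattice stage also cross-couples two internal signal paths through its reflection coefficient, which is omitted here:

```python
# Two simple FIR stages in cascade: stage 1's output feeds stage 2.
def stage(x, coeffs):
    """One direct-form FIR stage (illustrative, not a lattice section)."""
    return [sum(c * x[n - k] for k, c in enumerate(coeffs) if n - k >= 0)
            for n in range(len(x))]

impulse = [1.0, 0.0, 0.0, 0.0]
s1 = stage(impulse, [1.0, 1.0])   # first stage
s2 = stage(s1, [1.0, 1.0])        # second stage, fed by the first
print(s2)  # [1.0, 2.0, 1.0, 0.0], same as one filter with coefficients 1, 2, 1
```

The combined effect of the two simple stages equals one more complex filter, which is the whole point of cascaded structures: build complicated responses out of small, easy-to-analyse pieces.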
An allpass filter is a type of filter that changes the phase (the timing of the signal) but does not
change the amplitude (the loudness or strength of the signal). So, it keeps the signal strength
the same, but changes how the signal behaves over time.
When we want to create a complex IIR filter (a type of filter that has both feedback and
memory) using allpass filters, we can combine several allpass filters together in parallel (side
by side). This is called parallel allpass realization.
How It Works:
1. Allpass Filters in Parallel: Instead of using just one big allpass filter, we use multiple
allpass filters that work together.
2. Combine Outputs: Each filter in the parallel setup processes the signal slightly
differently, and when we combine their outputs, we get the desired filter behavior.
3. Creates IIR Filter: The result of combining these allpass filters is a complex IIR filter that
can modify the signal in the way we need.
Simple Example:
Think of parallel allpass realization like mixing colors: If you want a specific color (which
represents your filter's behavior), you can mix different primary colors (which represent the
allpass filters). Each color changes the overall result in its own way, but together they create the
final color (or in this case, the final filter).
Advantages:
1. Simple Design: We break down a complicated filter into simple pieces (allpass filters),
making it easier to design.
2. Stable: Allpass filters are very stable, so the final filter stays stable as well.
3. Easy to Adjust: It's easier to make changes to the design because we're working with
simpler pieces.
Disadvantages:
1. More Computation: Since we use multiple allpass filters, the calculations can become a
bit more complex and need more processing power.
2. Hard to Design: Creating the exact allpass filters to match the desired behavior of the
complex filter can be tricky.
Summary:
Parallel allpass realization is a way to build complex IIR filters by combining simple
allpass filters.
This method helps create filters that change the timing of a signal (phase) without affecting
how loud the signal is (amplitude).
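The defining allpass property, amplitude untouched while phase changes, can be verified numerically for a first-order allpass section H(z) = (a + z⁻¹) / (1 + a·z⁻¹); the coefficient a = 0.5 is an assumed example:

```python
import cmath
import math

# First-order allpass H(z) = (a + z^-1) / (1 + a*z^-1): unit magnitude
# at every frequency, only the phase varies.
a = 0.5

def H(w):
    """Frequency response at w radians/sample."""
    z_inv = cmath.exp(-1j * w)
    return (a + z_inv) / (1 + a * z_inv)

for w in (0.0, 0.5, 1.0, math.pi / 2):
    print(round(abs(H(w)), 6), round(cmath.phase(H(w)), 3))
# the magnitude column is 1.0 at every frequency; the phase column varies with w
```

In a parallel allpass realization, two (or more) such sections are run side by side and their outputs added; since each branch has unit magnitude, the sum's magnitude response is shaped purely by how the branch phases align or cancel at each frequency.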
Breadth-First Search (BFS) and Depth-First Search (DFS)
1. Breadth-First Search (BFS):
BFS visits nodes level by level: it starts from a node, visits all its neighbours first, then goes to the next level.
BFS is good for finding the shortest path in unweighted graphs.
Example:
If we start from node A in a graph with edges A–B, A–C, B–D, B–E:
Order = A → B → C → D → E
2. Depth-First Search (DFS):
DFS visits nodes by going deep into one branch first, before coming back (backtracking).
DFS is good for tasks like finding paths, solving puzzles, and checking for cycles.
Example:
If we start from node A:
Order = A → B → D → E → C
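Both traversals can be run on the example graph; the only real difference is the bookkeeping: BFS takes the next node from the front of a queue, DFS dives into each neighbour recursively before backtracking. Assuming the graph with edges A–B, A–C, B–D, B–E:

```python
from collections import deque

# Neighbours listed in visiting order: A connects to B and C, B to D and E.
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": [], "D": [], "E": []}

def bfs(start):
    order, queue, seen = [], deque([start]), {start}
    while queue:
        node = queue.popleft()          # take from the FRONT: level by level
        order.append(node)
        for nb in graph[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return order

def dfs(node, seen=None):
    if seen is None:
        seen = set()
    seen.add(node)
    order = [node]                      # go DEEP into each branch first
    for nb in graph[node]:
        if nb not in seen:
            order += dfs(nb, seen)      # backtracking happens on return
    return order

print(bfs("A"))   # ['A', 'B', 'C', 'D', 'E']
print(dfs("A"))   # ['A', 'B', 'D', 'E', 'C']
```

The two printed orders match the hand-traced examples above: BFS finishes A's whole level (B, C) before descending, while DFS follows A → B → D to the bottom before backtracking to E and finally C.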
✅ Summary:
BFS explores level by level (using a queue) and finds shortest paths; DFS goes deep first (using a stack or recursion) and is useful for path-finding, puzzles, and cycle detection.
Human Intelligence vs Artificial Intelligence:
Feature | Human Intelligence | Artificial Intelligence
Learning Method | Learns from experience, senses, emotions | Learns from data, patterns, and training algorithms
Accuracy | May make mistakes due to fatigue or bias | High accuracy in repetitive and data-heavy tasks
🔷 1. Forward Chaining
Definition:
Forward chaining is a data-driven reasoning approach. It starts from known facts and uses
inference rules to derive new facts until the goal is reached.
Working:
1. Start with the known facts.
2. Find rules whose conditions match the current facts.
3. Fire those rules and add their conclusions as new facts.
4. Repeat until the goal is derived or no new facts can be added.
Example: Rules: "If A and B, then C" and "If C, then D", with facts A and B. Forward chaining first derives C, then uses C to derive D.
Advantages:
Works well when many facts are available and can derive every possible conclusion.
Disadvantages:
May generate many facts that are irrelevant to the goal.
🔷 2. Backward Chaining
Definition:
Backward chaining is a goal-driven reasoning method. It starts with a goal and works backward
to check if known facts support it.
Working:
1. Start with the goal to be proved.
2. Find rules whose conclusion matches the goal.
3. Treat those rules' conditions as new sub-goals.
4. Repeat until the sub-goals match known facts (goal proved) or no rules apply (goal fails).
Advantages:
Efficient when the goal is known, since only relevant rules are explored.
Disadvantages:
Needs the goal stated in advance, and can revisit the same sub-goals many times.
🔁 Key Differences:
Forward chaining is data-driven: it starts from facts and derives all reachable conclusions. Backward chaining is goal-driven: it starts from the goal and proves only what is asked.
✅ Conclusion:
Forward and backward chaining are essential inference techniques in AI. The choice depends on
whether we are starting from facts (forward) or a goal (backward).
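Both directions can be sketched over the same rule base; the "A, B, C, D" rules are hypothetical placeholders matching the generic if-then pattern discussed above:

```python
# Hypothetical rule base: (conditions, conclusion) pairs.
rules = [({"A", "B"}, "C"), ({"C"}, "D")]
facts = {"A", "B"}

def forward_chain(facts, rules):
    """Data-driven: apply rules to known facts until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for cond, concl in rules:
            if cond <= derived and concl not in derived:
                derived.add(concl)
                changed = True
    return derived

def backward_chain(goal, facts, rules):
    """Goal-driven: prove the goal by recursively proving rule conditions.
    (Assumes the rules have no cycles, otherwise this would recurse forever.)"""
    if goal in facts:
        return True
    return any(all(backward_chain(c, facts, rules) for c in cond)
               for cond, concl in rules if concl == goal)

print(forward_chain(facts, rules))        # derives C from A and B, then D from C
print(backward_chain("D", facts, rules))  # True: D needs C, C needs A and B
```

Note the difference in what gets computed: `forward_chain` materialises every derivable fact, while `backward_chain` touches only the rules on the path from the goal back to the known facts.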