Characteristics
● They work only if the environment is fully observable.
● Lacking history, they easily get stuck in infinite loops.
● One solution is to randomize their actions (see the sketch below).
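A minimal sketch of such an agent in Python, assuming a vacuum-world style environment; the condition-action rules and the random fallback below are illustrative, not a fixed specification:

import random

def simple_reflex_agent(percept):
    # Map the current percept directly to an action via condition-action rules.
    location, status = percept
    if status == "Dirty":
        return "Suck"
    # Without history the agent can bounce between clean squares forever;
    # choosing the move at random is one way to escape such loops.
    return random.choice(["Left", "Right"])

print(simple_reflex_agent(("A", "Dirty")))   # Suck
print(simple_reflex_agent(("A", "Clean")))   # Left or Right, chosen at random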
20. Explain the A* algorithm. Consider the map of Romania and the hSLD value
table given. Use the A* algorithm to find a path from Arad to Bucharest.
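No worked solution is given in the notes; the sketch below shows the standard A* procedure on a subset of the Romania road map, using the textbook road costs and straight-line-distance (hSLD) values to Bucharest. The exact numbers should be checked against the table supplied with the question.

import heapq

# Subset of the Romania road map (costs in km) and hSLD values to Bucharest.
graph = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80, "Oradea": 151},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101, "Craiova": 138},
    "Craiova": {"Rimnicu Vilcea": 146, "Pitesti": 138},
    "Timisoara": {"Arad": 118}, "Zerind": {"Arad": 75}, "Oradea": {"Sibiu": 151},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}
h_sld = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Craiova": 160, "Timisoara": 329, "Zerind": 374,
         "Oradea": 380, "Bucharest": 0}

def a_star(start, goal):
    # Frontier is ordered by f(n) = g(n) + h(n).
    frontier = [(h_sld[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph[node].items():
            new_g = g + cost
            if new_g < best_g.get(nbr, float("inf")):
                best_g[nbr] = new_g
                heapq.heappush(frontier, (new_g + h_sld[nbr], new_g, nbr, path + [nbr]))
    return None, float("inf")

print(a_star("Arad", "Bucharest"))
# Expected: Arad -> Sibiu -> Rimnicu Vilcea -> Pitesti -> Bucharest, cost 418.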
UNIT_3
[2m] 26. What are universal quantifiers? Give an example.
Universal quantifiers are logical symbols used in formal logic to indicate that a
particular property or predicate applies to all elements within a specified domain. The
universal quantifier is usually denoted by the symbol "∀" (an upside-down "A"), which
can be read as "for all" or "for every."
Example:
Consider the statement:
∀x (P(x))
This statement reads as "For all x, P(x) is true," meaning that the property P holds for
every element x in the domain of discourse.
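Over a finite domain, the same statement can be checked mechanically; the small Python sketch below is illustrative, with a made-up domain and predicate:

# ∀x P(x) over a finite domain: the property must hold for every element.
domain = [2, 4, 6, 8]
P = lambda x: x % 2 == 0          # P(x): "x is even"
print(all(P(x) for x in domain))  # True, so ∀x P(x) holds on this domain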
6. Consider the given KB and, using inference rules, draw the conclusion that neither [1,2]
nor [2,1] contains a pit.
8. Explain the various logical connectives used in propositional logic.
9. Explain entailment in the Wumpus World problem with an example.
10. Explain the Wumpus World game with a diagram and its agent program.
First Percept:
Moving to [2,1]:
Exploring [1,2]:
Further Inferences:
Agent program:
The agent's program is a set of rules and inferences based on percepts and the
agent's current knowledge. Here is a simplified version in pseudocode:
# Initial knowledge base: the start square [1,1] is known to be safe.
kb = {(1, 1): 'OK'}
# Percept at [1,1] in the order [Stench, Breeze, Glitter, Bump, Scream]; all absent.
percepts = {(1, 1): [None, None, None, None, None]}
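Continuing the sketch, one illustrative inference step is shown below; the helper names are assumptions for the example rather than part of the original program. When the percept at a visited square shows no stench and no breeze, every adjacent square can be marked OK in the knowledge base.

def neighbours(square):
    x, y = square
    cells = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(i, j) for i, j in cells if 1 <= i <= 4 and 1 <= j <= 4]

def update_kb(kb, square, percept):
    stench, breeze = percept[0], percept[1]
    if stench is None and breeze is None:
        # No stench and no breeze: no pit or wumpus can be adjacent.
        for n in neighbours(square):
            kb.setdefault(n, 'OK')
    return kb

update_kb(kb, (1, 1), percepts[(1, 1)])
print(kb)   # [2,1] and [1,2] are now marked OK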
Use the inference procedure to derive facts of interest from axioms and
problem-specific facts.
7. Debug the knowledge base.
Fix errors by identifying missing, weak, or incorrect axioms.
Correct statements that do not match real-world facts and ensure logical
consistency.
The substitution θ = {John/x} is a unifier for these atoms; applying this substitution
makes both expressions identical.
o The UNIFY algorithm is used for unification; it takes two atomic sentences
and returns a unifier for them (if one exists).
o Unification is a key component of all first-order inference algorithms.
o It returns failure if the expressions do not match each other.
o The simplest substitution that makes the expressions identical is called the
Most General Unifier (MGU).
E.g., let's say there are two different expressions, P(x, y) and P(a, f(z)).
In this example, we need to make both expressions identical. For this, we perform
a substitution.
o Substitute x with a and y with f(z) in the first expression; these
substitutions are written as a/x and f(z)/y.
o With both substitutions, the first expression becomes identical to the second,
and the substitution set is [a/x, f(z)/y].
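A compact Python sketch of the UNIFY idea is given below. Terms are encoded as tuples, variables as strings starting with '?', and the occurs check is omitted; this encoding is an assumption for illustration, not the textbook pseudocode verbatim.

def is_variable(t):
    # Convention for this sketch: strings starting with '?' are variables.
    return isinstance(t, str) and t.startswith("?")

def substitute(t, subst):
    # Apply the substitution to a term, following chains of bindings.
    if is_variable(t) and t in subst:
        return substitute(subst[t], subst)
    if isinstance(t, tuple):
        return tuple(substitute(a, subst) for a in t)
    return t

def unify(x, y, subst=None):
    # Return an MGU as a dict {variable: term}, or None (fail) if none exists.
    subst = {} if subst is None else subst
    x, y = substitute(x, subst), substitute(y, subst)
    if x == y:
        return subst
    if is_variable(x):
        return {**subst, x: y}
    if is_variable(y):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# P(x, y) and P(a, f(z)) unify with MGU {x: a, y: f(z)}.
print(unify(("P", "?x", "?y"), ("P", "a", ("f", "?z"))))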
Example Facts:
1. American(West)
2. Missile(M1)
3. Owns(Nono, M1)
4. Sells(West, M1, Nono)
5. Enemy(Nono, America)
Example Rules:
1. American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) → Criminal(x)
2. Missile(x) → Weapon(x)
3. Enemy(x, America) → Hostile(x)
Query:
● Is Criminal(West) true?
Diagram:
Pseudocode:
def forward_chaining(facts, rules, goal):
    # facts: set of known ground atoms; rules: list of (premises, conclusion) pairs.
    working_memory = set(facts)
    while True:
        new_facts = set()
        for premises, conclusion in rules:
            # A rule fires when all its premises are known and its conclusion is new.
            if premises <= working_memory and conclusion not in working_memory:
                if conclusion == goal:
                    return True
                new_facts.add(conclusion)
        if not new_facts:
            return False   # fixed point reached without deriving the goal
        working_memory.update(new_facts)
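As an illustrative run, the example facts and rules can be written as ground instances (with the variables already bound to the constants from the example) and passed to the function:

facts = {"American(West)", "Missile(M1)", "Owns(Nono,M1)",
         "Sells(West,M1,Nono)", "Enemy(Nono,America)"}
rules = [
    ({"Missile(M1)"}, "Weapon(M1)"),
    ({"Enemy(Nono,America)"}, "Hostile(Nono)"),
    ({"American(West)", "Weapon(M1)", "Sells(West,M1,Nono)", "Hostile(Nono)"},
     "Criminal(West)"),
]
print(forward_chaining(facts, rules, "Criminal(West)"))   # True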
UNIT_4
[2m] 25. List the different types of layers in a Convolutional Neural Network.
A CNN consists of four main types of layers: the input layer, convolution
layer, pooling layer, and fully connected layer.
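A minimal Keras-style sketch showing these layer types in order; the input shape, filter counts, and sizes are arbitrary choices for illustration:

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),               # input layer
    layers.Conv2D(16, (3, 3), activation="relu"),  # convolution layer
    layers.MaxPooling2D((2, 2)),                   # pooling layer
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),        # fully connected layer
])
model.summary()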
[2m] 26. List the components of an LSTM.
• Input gate layer
• Forget gate layer
• Output gate layer
• Memory cell state vector
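A rough numpy sketch of a single LSTM step, showing where each of these components appears; the parameter layout and variable names are illustrative, while the gate equations follow the standard formulation:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # W, U, b are dicts holding the weight matrices and biases of each gate.
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])   # input gate layer
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])   # forget gate layer
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])   # output gate layer
    g = np.tanh(W["g"] @ x_t + U["g"] @ h_prev + b["g"])   # candidate values
    c_t = f * c_prev + i * g        # memory cell state vector
    h_t = o * np.tanh(c_t)          # hidden state for the next time step
    return h_t, c_t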
1. Data storage
Facilities for storing and retrieving huge amounts of data are an important
component of the learning process. Humans and computers alike utilize data storage
as a foundation for advanced reasoning.
• In a human being, the data is stored in the brain and data is retrieved using
electrochemical signals.
• Computers use hard disk drives, flash memory, random access memory and similar
devices to store data and use cables and other technology to retrieve data.
2. Abstraction
The second component of the learning process is known as abstraction. Abstraction
is the process of extracting knowledge about stored data. This involves creating
general concepts about the data as a whole. The creation of knowledge involves
application of known models and creation of new models.
The process of fitting a model to a dataset is known as training. When the model has
been trained, the data is transformed into an abstract form that summarizes the
original information.
3. Generalization
The third component of the learning process is known as generalization. The term
generalization describes the process of turning the knowledge about stored data into
a form that can be utilized for future action. These actions are to be carried out on
tasks that are similar, but not identical, to those that have been seen before. In
generalization, the goal is to discover those properties of the data that will be most
relevant to future tasks.
4. Evaluation
Evaluation is the last component of the learning process. It is the process of giving
feedback to the user to measure the utility of the learned knowledge. This feedback is
then utilized to effect improvements in the whole learning process.
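To make the abstraction (training), generalization, and evaluation components concrete, a small scikit-learn sketch is shown below; the dataset and model choice are arbitrary examples:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)  # abstraction: fitting a model (training)
predictions = model.predict(X_test)                     # generalization: acting on unseen data
print(accuracy_score(y_test, predictions))              # evaluation: feedback on learned knowledge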
Acquisition of image: The initial stage is image acquisition, in which a sensor
captures the image and transforms it into a usable format.
Enhancement of image: Image enhancement is the technique of bringing out and
emphasising specific interesting characteristics which are hidden in an image.
Restoration of image: Image restoration is the process of improving the appearance of
an image. Unlike image enhancement, image restoration is carried out using specific
mathematical or probabilistic models.
Colour image processing: A variety of digital colour models, such as HSI
(Hue-Saturation-Intensity), CMY (Cyan-Magenta-Yellow) and RGB (Red-Green-Blue),
are used in colour image processing.
Compression and decompression of image: This enables adjustments to image
resolution and size, whether for image reduction or restoration depending on the
situation, without lowering image quality below a desirable level. Lossy and lossless
compression are the two main types of image file compression employed in this stage.
Morphological processing: Digital images are processed based on their shapes using an
image processing technique known as morphological operations. These operations
depend on the relative ordering of pixel values rather than on their numerical values,
and are well suited to the processing of binary images. Morphological processing helps
remove imperfections in the structure of the image.
Segmentation, representation and description: The segmentation process divides a
picture into segments, and each segment is represented and described in such a way
that it can be processed further by a computer. The image's quality and regional
characteristics are covered by representation. The description's job is to extract
quantitative data that helps distinguish one class of items from another.
Recognition of image: A label is assigned to an object through recognition based on its
description. Some of the often-employed algorithms for recognizing images include the
Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), and
Principal Component Analysis (PCA).
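A few of these stages can be illustrated with a short OpenCV sketch; the file names and kernel size are placeholders, and the stages shown are enhancement, segmentation, and morphological processing:

import cv2
import numpy as np

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)            # acquisition from file
enhanced = cv2.equalizeHist(image)                                # enhancement: stretch contrast
_, segmented = cv2.threshold(enhanced, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU) # segmentation by thresholding
kernel = np.ones((3, 3), np.uint8)
cleaned = cv2.morphologyEx(segmented, cv2.MORPH_OPEN, kernel)     # morphological processing
cv2.imwrite("output.png", cleaned)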