Unit 4 (Micro) (AI)
Forward Chaining: Definition: In forward chaining, the system starts with known facts and uses them to infer new
conclusions until the desired goal is reached.
Process: It begins with the available data and applies inference rules to derive new information. This new
information is then added to the knowledge base, and the process continues iteratively until the system reaches
the desired conclusion.
- Example: In a medical diagnosis system, forward chaining might start from the patient's reported symptoms and apply diagnostic rules to infer possible conditions, adding each new conclusion to the knowledge base until a diagnosis is reached.
Backward Chaining: Definition: In backward chaining, the system starts with the desired goal and works backward
to determine what facts must be true in order to reach that goal.
- Process: It begins with the goal or query and attempts to find evidence in the knowledge base to support that
goal. If the evidence is not directly available, it recursively seeks to prove the conditions necessary to establish the
goal until it reaches known facts.
- Example: In a planning system for a robot, backward chaining might start with the goal of reaching a particular
location. The system would work backward, determining what actions the robot needs to take to reach that
location, considering obstacles and available paths.
Factors determining the choice:
1. Goal-driven vs. data-driven: Forward chaining is suitable when a large amount of data is available and the aim is to derive new conclusions from that data. Backward chaining is preferable when the goal is known upfront and the system must determine how to achieve it.
2. Complexity of rules: Forward chaining is typically simpler to implement and understand, making it suitable for systems with straightforward rules. Backward chaining can handle more complex goals and dependencies but may require more sophisticated reasoning mechanisms.
3. Efficiency: Forward chaining can waste effort deriving facts that are irrelevant to the query, while backward chaining explores only rules relevant to the goal but may re-prove the same subgoals; which approach is faster depends on the rule base and the shape of the query.
4. Resource constraints: Forward chaining may be more suitable in resource-constrained environments where precomputing conclusions can save time during inference.
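A minimal Python sketch of both strategies over a toy set of Horn rules (the rule contents and fact names are illustrative assumptions, not from a real diagnosis system):

```python
# Toy Horn-rule base: each rule is (set_of_premises, conclusion).
RULES = [
    ({"fever", "cough"}, "flu"),
    ({"flu"}, "rest_needed"),
]

def forward_chain(facts, rules):
    """Data-driven: apply rules to known facts until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # new conclusion joins the knowledge base
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Goal-driven: prove the goal by recursively proving rule premises.
    (No cycle detection; assumes an acyclic rule base like the one above.)"""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

print(forward_chain({"fever", "cough"}, RULES))                  # derives flu, rest_needed
print(backward_chain("rest_needed", {"fever", "cough"}, RULES))  # True
```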
Q.What are the reasoning patterns in propositional logic? Explain them in detail. [UNIT 5]
Reasoning patterns in propositional logic refer to systematic methods used to derive conclusions from a set of
premises using formal rules. These patterns ensure that the conclusions are logically valid based on the given
premises.
1. Modus Ponens (Affirming the Antecedent): Pattern: If \(P \rightarrow Q\) and \(P\) are both true, then \(Q\)
must be true. Example: If it rains (P), then the ground is wet (Q). It rains (P). Therefore, the ground is wet (Q).
2. Modus Tollens (Denying the Consequent): Pattern: If \(P \rightarrow Q\) and \(\neg Q\) are both true, then \(\neg P\) must be true. Example: If it rains (P), then the ground is wet (Q). The ground is not wet (\(\neg Q\)). Therefore, it is not raining (\(\neg P\)).
3. Disjunctive Syllogism: Pattern: If \(P \lor Q\) and \(\neg P\) are both true, then \(Q\) must be true. Example: It is
either raining (P) or snowing (Q). It is not raining (\(\neg P\)). Therefore, it is snowing (Q).
4. Hypothetical Syllogism: Pattern: If \(P \rightarrow Q\) and \(Q \rightarrow R\) are both true, then \(P \rightarrow R\) must be true. Example: If it rains (P), then the ground gets wet (Q). If the ground gets wet (Q), then the plants grow (R). Therefore, if it rains (P), then the plants grow (R).
5. Conjunction: Pattern: If \(P\) and \(Q\) are both true, then \(P \land Q\) must be true. Example: It is raining (P). It is cold (Q). Therefore, it is raining and cold (\(P \land Q\)).
6. Simplification: Pattern: If \(P \land Q\) is true, then \(P\) and \(Q\) are each true. Example: It is raining and cold (\(P \land Q\)). Therefore, it is raining (P). Also, it is cold (Q).
7. Addition: Pattern: If \(P\) is true, then \(P \lor Q\) must be true for any \(Q\). Example: It is raining (P). Therefore, it is either raining or snowing (\(P \lor Q\)).
8. Resolution: Pattern: If \(P \lor Q\) and \(\neg P \lor R\) are both true, then \(Q \lor R\) must be true. Example: It is either raining or snowing (\(P \lor Q\)). It is not raining or it is windy (\(\neg P \lor R\)). Therefore, it is either snowing or windy (\(Q \lor R\)).
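Each of these patterns is a tautology and can be verified mechanically by enumerating every truth assignment; a small Python check (the helper names here are my own) for modus ponens and resolution:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def valid(formula, n_vars):
    """A pattern is valid iff it holds under every truth assignment."""
    return all(formula(*vals) for vals in product([False, True], repeat=n_vars))

# Modus ponens: ((P -> Q) and P) -> Q
print(valid(lambda p, q: implies(implies(p, q) and p, q), 2))  # True

# Resolution: ((P or Q) and (not P or R)) -> (Q or R)
print(valid(lambda p, q, r: implies((p or q) and (not p or r), q or r), 3))  # True
```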
Q.Explain unification algorithm with an example.
The unification algorithm is a process used to find a substitution that makes two predicates or terms identical. It is
commonly used in first-order logic (FOL) and plays a crucial role in theorem proving, automated reasoning, and
artificial intelligence.
Algorithm Steps:
1. Initialize: Start with two terms or predicates that need to be unified.
2. Check Structure: Compare the structures of the terms or predicates to identify variables, constants, and
functions.
3. Matching: Match corresponding components of the terms or predicates. If two components are identical, they
match. If one or both components are variables, they can be unified.
4. Substitution: Generate a substitution that makes the terms or predicates identical. This substitution consists of
variable assignments.
5. Apply Substitution: Apply the generated substitution to both terms or predicates.
6. Check Consistency: Ensure that the substitution does not lead to conflicts or contradictions.
Example: Consider the following predicates:
- P(x, f(y))
- P(z, f(g(a)))
We want to unify these two predicates using the unification algorithm.
Steps:
1. Check Structure: Both predicates are of the form P(., .). The first predicate has variables x and y, while the second has the variable z.
2. Matching: - Match x with z since they are both variables. - Match f(y) with f(g(a)). This requires further
unification.
3. Substitution: The resulting substitution is { x/z, y/g(a) }.
4. Apply Substitution: - Apply the substitution to both predicates:
- First predicate becomes: P(z, f(g(a)))
- Second predicate remains: P(z, f(g(a)))
5. Check Consistency: - The substitution is consistent and does not lead to conflicts.
Result: The unification algorithm yields the substitution { x/z, y/g(a) }, which makes the two predicates identical. Therefore,
the predicates can be unified as P(z, f(g(a))).
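A minimal Python sketch of the algorithm, using a home-grown term representation (strings starting with '?' are variables, tuples are function applications, other strings are constants) and omitting the occurs check for brevity:

```python
def is_var(t):
    """Variables are strings starting with '?'; other strings are constants."""
    return isinstance(t, str) and t.startswith("?")

def bind(var, term, subst):
    # NOTE: the occurs check is omitted for brevity; a full unifier
    # would reject binding ?x to a term that contains ?x.
    return {**subst, var: term}

def unify(t1, t2, subst=None):
    """Return a substitution (dict) that unifies t1 and t2, or None."""
    if subst is None:
        subst = {}
    # Follow any existing bindings before comparing.
    while is_var(t1) and t1 in subst:
        t1 = subst[t1]
    while is_var(t2) and t2 in subst:
        t2 = subst[t2]
    if t1 == t2:
        return subst
    if is_var(t1):
        return bind(t1, t2, subst)
    if is_var(t2):
        return bind(t2, t1, subst)
    # Compound terms: ('functor', arg1, arg2, ...)
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None  # mismatched constants or functors

# P(x, f(y)) vs P(z, f(g(a))) yields {'?x': '?z', '?y': ('g', 'a')}
print(unify(("P", "?x", ("f", "?y")), ("P", "?z", ("f", ("g", "a")))))
```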
4. **Possibility Predicates:** - The predicate \(Poss(A, S)\) is used to indicate whether an action \(A\) is possible in
situation \(S\).
- Preconditions for actions are expressed using these predicates.
5. **Successor State Axioms:** These axioms describe how fluents change as a result of actions. They define the conditions under which a fluent holds in the situation resulting from an action. For example, a successor state axiom for a fluent \(F\) might look like this: \( F(do(A, S)) \leftrightarrow [ A = TurnOn \lor (F(S) \land A \neq TurnOff) ] \)
### Example
Consider a simple example involving a robot and a light switch. Let the fluent \(LightOn(S)\) mean the light is on in situation \(S\). The successor state axiom \( LightOn(do(A, S)) \leftrightarrow [ A = TurnOn \lor (LightOn(S) \land A \neq TurnOff) ] \) states that the light is on after action \(A\) exactly when \(A\) turned it on, or it was already on and \(A\) did not turn it off.
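A tiny Python sketch of this axiom, treating a situation as the history of actions performed since \(S_0\) (this representation, and the light starting off, are illustrative assumptions):

```python
S0 = ()  # the initial situation, represented as an empty action history

def do(action, situation):
    """do(A, S): the situation resulting from performing A in S."""
    return situation + (action,)

def light_on(situation):
    """LightOn fluent via the successor state axiom, unrolled over the history.
    Assumes the light is off in S0."""
    if situation == S0:
        return False
    *earlier, last = situation
    return last == "TurnOn" or (light_on(tuple(earlier)) and last != "TurnOff")

s1 = do("TurnOn", S0)
print(light_on(s1))                 # True: TurnOn made the fluent hold
print(light_on(do("TurnOff", s1)))  # False: TurnOff made it stop holding
```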
Resolution: Resolution is a fundamental rule of inference used in propositional and first-order logic to derive conclusions from a set of premises. It is particularly important in automated theorem proving and logic programming.
1. Concept: Resolution works by refuting the negation of the statement to be proved. If the negation of the statement leads to a contradiction, then the original statement is considered proven. It involves combining pairs of clauses that contain complementary literals to produce a new clause. This process is called the resolution step.
2. Resolution Rule:The resolution rule states that if you have two clauses, one containing a literal \(L\) and the
other containing its negation \(\neg L\), you can infer a new clause that contains all the literals of the original
clauses except for \(L\) and \(\neg L\). Formally, if \( (A \lor L) \) and \( (\neg L \lor B) \) are two clauses, the
resolvent is \( (A \lor B) \).
3. Example:Given the clauses: \( (P \lor Q) \) and \( (\neg P \lor R) \)
-The literal \(P\) in the first clause and \(\neg P\) in the second clause are complementary.
- Applying the resolution rule, we obtain the resolvent: \( (Q \lor R) \).
4. Properties: Completeness: Resolution is refutation-complete for propositional logic, meaning that if a set of clauses is unsatisfiable, resolution will eventually derive the empty clause, i.e., a contradiction.
- Soundness: Resolution is sound, meaning that any derived clause is logically entailed by the original set of clauses.
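A compact Python sketch of resolution by refutation, proving a statement by contradicting its negation (the integer encoding of literals and the function names are my own assumptions):

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses; clauses are frozensets of literals."""
    resolvents = []
    for lit in c1:
        if -lit in c2:  # complementary pair found
            resolvents.append((c1 - {lit}) | (c2 - {-lit}))
    return resolvents

def unsatisfiable(clauses):
    """Saturate with resolution; True iff the empty clause is derived."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True  # empty clause: contradiction found
                new.add(r)
        if new <= clauses:
            return False  # no progress: the set is satisfiable
        clauses |= new

# Literals as signed integers: 1 = P, -1 = not P, 2 = Q, ...
# Prove P from (P or Q) and (not Q): add the negated goal (not P) and refute.
print(unsatisfiable([{1, 2}, {-2}, {-1}]))  # True: P is entailed
```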
Unification: Unification is the process of determining a substitution that makes two logical expressions identical. It
is a key operation in automated reasoning and logic programming, particularly in the context of first-order logic.
1. Concept: Unification aims to find a substitution of variables that, when applied, makes two terms identical. The substitution is a set of variable bindings that can be applied to the terms.
2. Unification Algorithm: The unification algorithm systematically compares the structure of two terms to determine if a substitution exists.
- Steps of the unification algorithm:
1. Initialize: Start with the two terms to be unified.
2. Check Structure: Compare the structures of the terms to identify variables, constants, and functions.
3. Matching: Match corresponding components of the terms. If they are identical or one is a variable, proceed
with unification.
4. Generate Substitution: Create a substitution that makes the terms identical.
5. Apply Substitution: Apply the substitution to the terms.
6. Check Consistency: Ensure the substitution does not lead to conflicts.
Example: Terms to unify: \( P(x, g(y)) \) and \( P(a, g(b)) \).
- Matching \( x \) with \( a \) and \( y \) with \( b \), the substitution is \( \{ x/a, y/b \} \).
- Applying the substitution turns both terms into \( P(a, g(b)) \), making them identical.
3. Properties: Most General Unifier (MGU): The unification algorithm finds the most general unifier, the simplest substitution that unifies the terms; every other unifier can be obtained from the MGU by applying a further substitution.
- Soundness: The unification algorithm is sound, meaning that the unifier produced is correct and makes the terms identical.
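For instance, when unifying \( P(x) \) and \( P(y) \), both \( \{ x/y \} \) and \( \{ x/a, y/a \} \) are unifiers, but \( \{ x/y \} \) is the MGU: the more specific unifier \( \{ x/a, y/a \} \) can be recovered from it by the additional substitution \( \{ y/a \} \).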
Q.What is a symmetric network?
Symmetric networks in artificial intelligence (AI) refer to a specific architecture in neural networks where the
connections and weights between neurons exhibit a form of symmetry. This symmetry can manifest in various ways
depending on the specific application and the desired properties of the network. Here are some key points to
understand about symmetric networks:
1. **Weight Symmetry**: In symmetric networks, the weights between neurons may be constrained to be
symmetric. This means that if there is a connection from neuron \(i\) to neuron \(j\) with weight \(w_{ij}\),
there is also a connection from neuron \(j\) to neuron \(i\) with the same weight \(w_{ji} = w_{ij}\). This is
common in certain types of recurrent neural networks (RNNs), such as Hopfield networks.
2. **Hopfield Networks**: Hopfield networks are a type of recurrent neural network where the weights are
symmetric and the network is used primarily for associative memory. The symmetry ensures that the
network converges to a stable state, which can be interpreted as the network recalling a stored pattern.
3. **Boltzmann Machines**: Boltzmann machines are another example where symmetry is applied to the
connections. They are stochastic recurrent neural networks that can learn probability distributions over
their input data. In restricted Boltzmann machines (RBMs), the symmetry is imposed in a bipartite graph
structure, meaning there are two layers (visible and hidden) with connections only between, but not
within, the layers.
4. **Energy-Based Models**: Symmetric networks are often associated with energy-based models, where
the network’s goal is to minimize an energy function. The symmetry in connections helps in defining a
clear energy landscape, aiding in the stability and convergence properties of the network.
5. **Benefits of Symmetry**: Imposing symmetry in the network can simplify learning algorithms and
improve convergence properties. It also helps in ensuring certain theoretical properties, like the existence
of a well-defined energy function in Hopfield networks or Boltzmann machines, leading to stable solutions.
6. **Applications**: Symmetric networks are used in various applications including optimization problems,
associative memory, pattern recognition, and generative models. For example, Hopfield networks are
useful in recalling corrupted versions of patterns they were trained on, while Boltzmann machines can be
used for generative tasks in machine learning.
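A minimal NumPy sketch of a Hopfield network illustrating points 1, 2, and 4 above (the stored pattern is illustrative; the Hebbian storage rule and threshold update are the standard textbook forms):

```python
import numpy as np

# Store one bipolar pattern with the Hebbian rule: w_ij = x_i * x_j.
pattern = np.array([1, -1, 1, -1, 1])        # illustrative stored pattern
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)                       # no self-connections
assert np.allclose(W, W.T)                   # weight symmetry: w_ij == w_ji

def energy(state):
    # E = -1/2 * s^T W s; well defined because W is symmetric.
    return -0.5 * state @ W @ state

def recall(state, sweeps=5):
    # Asynchronous updates; with symmetric W the energy never increases,
    # so the network settles into a stable state (a stored pattern).
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

corrupted = np.array([1, -1, -1, -1, 1])     # stored pattern with one bit flipped
print(energy(corrupted), energy(pattern))    # corrupted state has higher energy
print(recall(corrupted))                     # recovers [ 1 -1  1 -1  1]
```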
Q.Explain categories and objects in an ontology.
In the context of artificial intelligence and knowledge representation, ontology refers to a formal specification of a
set of concepts and relationships within a domain. Ontology helps in organizing information and enabling machines
to understand and process complex data. Within an ontology, two key components are categories and objects.
Here’s an explanation of each:
### Categories
Categories in an ontology represent abstract groupings or classes of entities that share common characteristics.
They are used to define and structure the domain knowledge at a high level. Categories help in classifying objects,
enabling the grouping of similar items for easier analysis and processing.
- **Definition**: A category is a class or a group of entities that share certain attributes or properties.
- **Purpose**: Categories are used to simplify the understanding of a domain by grouping similar entities together,
making it easier to define relationships and rules that apply to the entire group.
- **Example**: In a medical ontology, “Diseases” might be a category that includes sub-categories such as
“Infectious Diseases” and “Genetic Disorders”.
### Objects
Objects in an ontology represent individual instances or entities that belong to one or more categories. They are
specific, concrete items that can be described by their attributes and their relationships with other objects.
- **Definition**: An object is an instance of a category, representing a specific entity with unique attributes.
- **Purpose**: Objects provide the detailed, granular data within the framework of the categories, allowing for the
representation of real-world entities in the ontology.
- **Example**: In the same medical ontology, “COVID-19” would be an object that belongs to the “Infectious
Diseases” category.
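As a small illustration, the category/object distinction maps naturally onto classes and instances in code; all names below are hypothetical:

```python
# Categories as classes, objects as instances.
class Disease:                       # category: an abstract grouping
    pass

class InfectiousDisease(Disease):    # sub-category of "Diseases"
    def __init__(self, name, pathogen):
        self.name = name             # attributes of the individual object
        self.pathogen = pathogen

# Object: a concrete instance belonging to the InfectiousDisease category.
covid19 = InfectiousDisease("COVID-19", pathogen="SARS-CoV-2")
print(isinstance(covid19, Disease))  # True: membership flows up the hierarchy
```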
### Mental Objects
- **Definition**: A mental object is an entity or construct within the mind that can be thought about, perceived, or remembered.
- **Example**: The image of a tree in your mind, a memory of your last vacation, or the concept of justice.