Code – 23CSC201
(CBCS Scheme)
Computer Science
Time: 3 Hours]    [Max. Marks: 70
1. Answer in Brief:
A variable is said to be a bound variable in a formula if it occurs within the scope of the quantifier that binds it.
Weak or Narrow AI, also known as Soft AI, refers to artificial intelligence
systems designed to perform a specific task or set of tasks, typically within a
limited domain or context.
(c) State Bayes' rule.
Bayes' Rule (Bayes' Theorem):

P(H|E) = P(E|H) × P(H) / P(E)

Where:
- P(H|E) = Posterior probability (probability of hypothesis H given evidence E)
- P(E|H) = Likelihood (probability of evidence E given hypothesis H)
- P(H) = Prior probability (probability of hypothesis H before considering evidence E)
- P(E) = Normalizing constant (probability of evidence E under all possible hypotheses)
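As a quick numerical illustration (a hypothetical diagnostic-test example with assumed numbers, not taken from the paper), Bayes' rule can be evaluated directly in Python:

```
# Hypothetical numbers for a diagnostic test
p_h = 0.01               # P(H): prior probability of the hypothesis (disease)
p_e_given_h = 0.95       # P(E|H): likelihood (test sensitivity)
p_e_given_not_h = 0.05   # P(E|~H): false-positive rate

# P(E): normalizing constant, summing over both hypotheses
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior P(H|E) by Bayes' rule
p_h_given_e = p_e_given_h * p_h / p_e
print(round(p_h_given_e, 3))  # 0.161
```

Even with a 95% accurate test, the posterior stays low because the prior P(H) is small, which is exactly what the normalizing constant P(E) captures.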
*Optimization Challenges*
*Integration Challenges*
*Resource-Specific Challenges*
*Domain-Specific Challenges*
*Solution Approaches*
1. Constraint Programming (CP)
*Best Practices*
1. Statements of fact
2. Descriptive
3. Propositional
4. Explicit
5. Factual
Examples:
1. Historical events
2. Scientific facts
3. Definitions
4. Geographic information
5. Procedures
Types:
2. Understanding concepts
3. Recognizing relationships
4. Recalling information
2. Performing tasks
3. Executing procedures
Acquisition:
2. Reading
3. Experience
4. Observation
Representation:
1. Propositional networks
2. Semantic networks
3. Frames
4. Ontologies
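For illustration only (this example is not from the original answer), a frame or semantic-network fragment can be sketched as nested Python dictionaries of slots and fillers:

```
# Hypothetical frame for the concept "Bird", with slots and default fillers
bird_frame = {
    "is_a": "Animal",   # taxonomic link, as in a semantic network
    "has_wings": True,
    "can_fly": True,    # a default that sub-frames may override
}

# A sub-frame inherits the defaults and overrides the exceptions
penguin_frame = {**bird_frame, "is_a": "Bird", "can_fly": False}
print(penguin_frame["can_fly"])  # False
```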
Importance:
2. Enables reasoning
3. Supports decision-making
4. Facilitates communication
Applications:
1. Expert systems
2. Knowledge graphs
Notable Theories:
3. Frame Theory
1. Knowledge representation
2. Reasoning
3. Decision-making
4. Natural Language Understanding
3. Enhancing decision-making
*Definition:* A Bayesian network is a directed acyclic graph (DAG) whose nodes represent random variables and whose edges represent conditional dependencies, quantified by conditional probability tables (CPTs).
*Key Components:*
1. Nodes:
- Hypotheses or events (random variables)
2. Edges:
- Directed (representing conditional dependence between parent and child)
3. CPTs:
- Conditional probability table for each node given its parents
*Properties:*
2. Conditional independence
*Applications:*
2. Risk analysis
3. Expert systems
4. Machine learning
6. Image recognition
7. Bioinformatics
8. Finance
*Advantages:*
4. Flexibility in modeling
*Inference Techniques:*
3. Bayesian learning
1. Naive Bayes
1. BayesNet (Matlab)
2. PyMC3 (Python)
3. BNLearn (R)
4. SMILE (Java)
5. GeNIe (C++)
*Real-World Examples:*
5. Recommendation systems
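As a minimal sketch (probabilities invented for illustration), the factorization a Bayesian network encodes, e.g. P(Rain, WetGrass) = P(Rain) × P(WetGrass|Rain), can be evaluated from CPTs stored as plain dictionaries:

```
# Hypothetical two-node network: Rain -> WetGrass
p_rain = {True: 0.2, False: 0.8}            # CPT for Rain
p_wet_given_rain = {True: 0.9, False: 0.1}  # P(WetGrass=True | Rain)

def joint(rain, wet):
    # Chain rule: P(Rain, WetGrass) = P(Rain) * P(WetGrass | Rain)
    p_wet = p_wet_given_rain[rain]
    return p_rain[rain] * (p_wet if wet else 1 - p_wet)

# Inference by marginalization: P(WetGrass=True) = sum over Rain
p_wet_true = sum(joint(r, True) for r in (True, False))
print(p_wet_true)  # 0.2*0.9 + 0.8*0.1 = 0.26
```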
*Definition:*
*Characteristics:*
*Properties:*
*Advantages:*
1. Efficient reasoning
4. Scalability
*Applications:*
1. Expert systems
2. Knowledge management
5. Ontology-based systems
*Examples:*
*Formal Representation:*
P = (R, F)
where:
R = set of rules
F = set of facts
*Inference Procedure:*
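A minimal sketch of one possible inference procedure, assuming a forward-chaining reading of P = (R, F) with each rule stored as a (premises, conclusion) pair (the animal facts below are invented):

```
# Hypothetical rule set R and fact base F
R = [({"has_fur", "gives_milk"}, "mammal"),
     ({"mammal", "eats_meat"}, "carnivore")]
F = {"has_fur", "gives_milk", "eats_meat"}

# Forward chaining: repeatedly fire any rule whose premises all hold
changed = True
while changed:
    changed = False
    for premises, conclusion in R:
        if premises <= F and conclusion not in F:
            F.add(conclusion)
            changed = True

print(F)  # now also contains "mammal" and "carnivore"
```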
*Definition:*
*Problem Statement:*
Given a knowledge base and a set of actions or events, how can we:
1. Represent the effects of those actions or events in the knowledge base?
1. Top-Down Parsing
2. Bottom-Up Parsing
3. Chart Parsing
4. Shift-Reduce Parsing
5. Dependency Parsing
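To make one of the strategies above concrete, here is a small chart-parsing sketch using NLTK's CFG support (the toy grammar and sentence are invented for illustration):

```
import nltk

# Toy context-free grammar, invented for this example
grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> Det N
VP -> V NP
Det -> 'the'
N -> 'dog' | 'cat'
V -> 'chased'
""")

parser = nltk.ChartParser(grammar)  # chart-parsing strategy
for tree in parser.parse("the dog chased the cat".split()):
    print(tree)
```

Each printed tree is a full constituency parse; a shift-reduce or recursive-descent parser would accept the same grammar but explore the search space in a different order.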
*Techniques Used:*
5. Statistical Parsing
*Applications:*
1. Sentiment Analysis
2. Text Summarization
3. Machine Translation
4. Question Answering
5. Text Classification
8. Speech Recognition
*Benefits:*
1. Improved Accuracy
2. Enhanced Understanding
3. Efficient Processing
4. Better Decision-Making
5. Increased Automation
*Challenges:*
1. Ambiguity Resolution
5. Contextual Understanding
*Real-World Examples:*
*Key Researchers:*
1. Noam Chomsky
2. Richard Montague
3. Joan Bresnan
4. Ronald Kaplan
5. Mark Steedman
*Influential Papers:*
Breadth-first Search:
o Breadth-first search is the most common search strategy for traversing
a tree or graph. This algorithm searches breadthwise in a tree or graph,
so it is called breadth-first search.
o BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
o The breadth-first search algorithm is an example of a general-graph
search algorithm.
o Breadth-first search is implemented using a FIFO queue data structure.
Advantages:
o BFS will provide a solution if any solution exists.
o If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e., the one requiring the fewest steps.
Disadvantages:
o It requires lots of memory, since each level of the tree must be saved into memory to expand the next level.
o BFS needs lots of time if the solution is far away from the root node.
Example:
In the following example, we show the traversal of a tree using the BFS algorithm from the root node S to the goal node K. BFS traverses level by level, so the traversed path will be:
1. S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K
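A minimal BFS sketch in Python using a FIFO queue, with the example tree encoded as an adjacency dictionary (one tree consistent with the level order shown above):

```
from collections import deque

# One tree consistent with the level order S, A, B, C, D, G, H, E, F, I, K
graph = {
    "S": ["A", "B"], "A": ["C", "D"], "B": ["G", "H"],
    "C": ["E", "F"], "D": ["I", "K"],
}

def bfs(start, goal):
    # FIFO queue of paths; the shallowest unexpanded node is dequeued first
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            queue.append(path + [child])
    return None

print(bfs("S", "K"))  # ['S', 'A', 'D', 'K']
```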
*Key Components:*
*Mathematical Representation:*
Let Q = {q1, q2, …, qN} be the set of hidden states, O = {o1, o2, …, oM} be
the set of observations, and T be the number of time steps.
1. *λ* = (A, B, π), where A is the state-transition probability matrix, B is the observation (emission) probability matrix, and π is the initial state distribution.
*HMM Inference:*
*Applications:*
*Advantages:*
1. *Flexibility*: Handles sequential data with varying lengths.
*Disadvantages:*
*Real-World Example:*
*Speech Recognition*:
*Code Implementation:*
```
import numpy as np

O = [0, 1, 0, 1, 0]  # example observation sequence (indices into B)

def forward(A, B, pi, O):
    # Forward algorithm: alpha[t, j] = P(o_1..o_t, state j at time t)
    T, N = len(O), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, O[0]]
    for t in range(1, T):
        for j in range(N):
            alpha[t, j] = np.dot(alpha[t - 1], A[:, j]) * B[j, O[t]]
    return alpha

def viterbi(A, B, pi, O):
    # Viterbi algorithm: most likely hidden-state sequence for O
    T, N = len(O), len(pi)
    delta, psi = np.zeros((T, N)), np.zeros((T, N), dtype=int)
    delta[0], psi[0] = pi * B[:, O[0]], -1
    for t in range(1, T):
        for j in range(N):
            probs = delta[t - 1] * A[:, j]
            psi[t, j] = np.argmax(probs)
            delta[t, j] = probs[psi[t, j]] * B[j, O[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):  # backtrack through psi
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```
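For example, with made-up parameters for a two-state, two-symbol model, the functions above can be exercised as follows:

```
A = np.array([[0.7, 0.3], [0.4, 0.6]])  # assumed state-transition matrix
B = np.array([[0.9, 0.1], [0.2, 0.8]])  # assumed emission matrix
pi = np.array([0.6, 0.4])               # assumed initial distribution

alpha = forward(A, B, pi, O)
print(alpha[-1].sum())       # P(O | lambda), likelihood of the observations
print(viterbi(A, B, pi, O))  # most likely hidden-state sequence
```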
4. Explain the state space approach to solving any AI problem. Discuss this for the water jug problem.
*Key Components:*
*Problem Statement:*
You have two water jugs, one with a capacity of 3 liters and the other with a
capacity of 5 liters. How can you measure exactly 4 liters of water using
these two jugs?
- State: (x, y), where x is the amount of water in the 3-liter jug and y is the
amount of water in the 5-liter jug.
- Transition Model:
- Pour from 3-liter to 5-liter: (x, y) → (x-z, y+z), where z = min(x, 5-y) is the amount poured
3. A* Search
*Solution:*
*Steps:*
3. Pour from the 3-liter jug to the 5-liter jug until the 5-liter jug has 4 liters.
*Advantages:*
*Disadvantages:*
*Code Implementation:*
```
from collections import deque

def water_jug_problem():
    # BFS over states (x, y): x = 3-liter jug, y = 5-liter jug
    start = (0, 0)
    queue = deque([start])
    visited = {start}
    while queue:
        x, y = queue.popleft()
        if y == 4:  # goal: exactly 4 liters (only the 5-liter jug can hold 4)
            return True
        successors = [
            (3, y),                                  # fill the 3-liter jug
            (x, 5),                                  # fill the 5-liter jug
            (0, y),                                  # empty the 3-liter jug
            (x, 0),                                  # empty the 5-liter jug
            (x - min(x, 5 - y), y + min(x, 5 - y)),  # pour 3L -> 5L, z = min(x, 5-y)
            (x + min(y, 3 - x), y - min(y, 3 - x)),  # pour 5L -> 3L, z = min(y, 3-x)
        ]
        for new_state in successors:
            if new_state not in visited:
                visited.add(new_state)
                queue.append(new_state)
    return False

print(water_jug_problem())  # True
```
The State Space Approach provides a systematic way to solve problems like
the Water Jug Problem. By representing the problem as a set of states and
transitions, we can find a solution using search strategies like BFS or DFS.
*Text Preprocessing*
*Syntactic Analysis*
*Semantic Analysis*
1. Named Entity Recognition (NER): Identify entities (people, places,
organizations).
*Discourse Analysis*
*Pragmatic Analysis*
*Language Generation*
1. Text Generation: Generate text from scratch.
*NLP Applications*
1. Sentiment Analysis
2. Language Translation
3. Text Summarization
4. Question Answering
5. Chatbots
6. Speech Recognition
7. Information Retrieval
8. Text Classification
*NLP Tools and Technologies*
1. NLTK (Natural Language Toolkit)
2. spaCy
3. Stanford CoreNLP
4. TensorFlow
5. PyTorch
6. Keras
7. Gensim
*Challenges in NLP*
1. Ambiguity
2. Contextual Understanding
4. Multi-Language Support
5. Domain Adaptation
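As a minimal sketch of the preprocessing, syntactic, and semantic stages above using NLTK (assuming the required NLTK data packages, e.g. punkt and the tagger/NER models, have been downloaded):

```
import nltk

text = "Barack Obama was born in Hawaii."

# Text preprocessing: tokenization
tokens = nltk.word_tokenize(text)

# Syntactic analysis: part-of-speech tagging
tagged = nltk.pos_tag(tokens)

# Semantic analysis: named entity recognition (NER)
tree = nltk.ne_chunk(tagged)
for subtree in tree:
    if hasattr(subtree, "label"):
        entity = " ".join(word for word, tag in subtree.leaves())
        print(subtree.label(), entity)  # e.g. PERSON Barack Obama, GPE Hawaii
```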
8-Puzzle solution using branch and bound (best-first search on the number of misplaced tiles):
```
import copy
from heapq import heappush, heappop

n = 3
# Bottom, left, top, right moves of the blank tile
row = [1, 0, -1, 0]
col = [0, -1, 0, 1]

class priorityQueue:
    def __init__(self):
        self.heap = []

    def push(self, node):
        heappush(self.heap, node)

    def pop(self):
        return heappop(self.heap)

    def empty(self):
        return len(self.heap) == 0

class node:
    def __init__(self, parent, mat, empty_tile_pos, cost, level):
        self.parent = parent
        self.mat = mat
        self.empty_tile_pos = empty_tile_pos
        self.cost = cost
        self.level = level

    def __lt__(self, other):
        # Order nodes in the heap by cost (misplaced-tile count)
        return self.cost < other.cost

def calculateCost(mat, final):
    # Number of non-blank tiles out of place
    count = 0
    for i in range(n):
        for j in range(n):
            if mat[i][j] and mat[i][j] != final[i][j]:
                count += 1
    return count

def newNode(mat, empty_tile_pos, new_empty_tile_pos, level, parent, final):
    # Copy the board and slide the blank tile to its new position
    new_mat = copy.deepcopy(mat)
    x1, y1 = empty_tile_pos
    x2, y2 = new_empty_tile_pos
    new_mat[x1][y1], new_mat[x2][y2] = new_mat[x2][y2], new_mat[x1][y1]
    cost = calculateCost(new_mat, final)
    return node(parent, new_mat, new_empty_tile_pos, cost, level)

def printMatrix(mat):
    for i in range(n):
        for j in range(n):
            print(mat[i][j], end=" ")
        print()

def isSafe(x, y):
    return 0 <= x < n and 0 <= y < n

def printPath(root):
    if root is None:
        return
    printPath(root.parent)
    printMatrix(root.mat)
    print()

def solve(initial, empty_tile_pos, final):
    pq = priorityQueue()
    cost = calculateCost(initial, final)
    root = node(None, initial, empty_tile_pos, cost, 0)
    pq.push(root)
    while not pq.empty():
        minimum = pq.pop()
        if minimum.cost == 0:
            printPath(minimum)
            return
        for i in range(4):
            new_tile_pos = [
                minimum.empty_tile_pos[0] + row[i],
                minimum.empty_tile_pos[1] + col[i],
            ]
            if isSafe(new_tile_pos[0], new_tile_pos[1]):
                child = newNode(minimum.mat, minimum.empty_tile_pos,
                                new_tile_pos, minimum.level + 1,
                                minimum, final)
                pq.push(child)

initial = [[1, 2, 3],
           [5, 6, 0],
           [7, 8, 4]]
final = [[1, 2, 3],
         [5, 8, 6],
         [0, 7, 4]]
empty_tile_pos = [1, 2]
solve(initial, empty_tile_pos, final)
```
OUTPUT:
1 2 3
5 6 0
7 8 4

1 2 3
5 0 6
7 8 4

1 2 3
5 8 6
7 0 4

1 2 3
5 8 6
0 7 4
III. Answer any THREE questions of the following: (3 x 10 = 30)
(b) (i) What is the mini-Max search technique? What is alpha-beta cut-off? (5)
o The most common search technique in game playing is the Minimax search procedure. It is a depth-first, depth-limited search procedure. It is used for games like chess and tic-tac-toe.
MOVEGEN : It generates all the possible moves that can be generated from
the current position.
This algorithm assumes a two-player game, so we call the first player PLAYER1 and the second player PLAYER2. The value of each node is backed up from its children: for PLAYER1 the backed-up value is the maximum value of its children, and for PLAYER2 it is the minimum value of its children. It provides the most promising move to PLAYER1, assuming that PLAYER2 makes the best move. It is a recursive algorithm, as the same procedure occurs at each level.
a. Alpha: The best (highest-value) choice we have found so far at any point along the path of Maximizer. The initial value of alpha is -∞.
b. Beta: The best (lowest-value) choice we have found so far at any point along the path of Minimizer. The initial value of beta is +∞.
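A minimal sketch of Minimax with alpha-beta cut-off on a hand-built game tree (the leaf values are invented; a real implementation would use MOVEGEN to expand positions and a static evaluation function at the depth limit):

```
def alphabeta(node, is_max, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):  # leaf: static evaluation value
        return node
    if is_max:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:  # cut-off: Minimizer already has a better option
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, True, alpha, beta))
        beta = min(beta, best)
        if beta <= alpha:      # cut-off: Maximizer already has a better option
            break
    return best

tree = [[3, 5], [2, [9, 1]]]  # invented game tree
print(alphabeta(tree, True))  # 3; the [9, 1] subtree is pruned
```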
(ii) Build a virtual assistant for Wikipedia using Wolfram Alpha and Python. (5)
1. Python 3.x
2. *Natural Language Processing*:
- Extract keywords
- Parse responses
3. *Wikipedia Integration*:
4. *Speech Recognition*:
5. *Text-to-Speech (TTS)*:
```
import os
import nltk
import wolframalpha
import wikipedia
import speech_recognition as sr
from gtts import gTTS  # assumed TTS library; the original did not name one

APP_ID = "YOUR_APP_ID"  # Wolfram|Alpha App ID placeholder

# Speech recognition
r = sr.Recognizer()

def process_input(input_text):
    # Tokenize input
    tokens = nltk.word_tokenize(input_text.lower())
    # Identify intent by simple keyword matching
    intent = ""
    if "search" in tokens:
        intent = "search"
    elif "summary" in tokens:
        intent = "summary"
    # Extract keywords (everything except the intent word)
    keywords = [t for t in tokens if t not in ("search", "summary")]
    return intent, keywords

def wolfram_alpha_query(keywords):
    query = " ".join(keywords)
    wa = wolframalpha.Client(APP_ID)
    response = wa.query(query)
    # Parse response: take the text of the first result pod, if any
    try:
        answer = next(response.results).text
    except StopIteration:
        answer = ""
    return answer

def wikipedia_search(keywords):
    # Search Wikipedia and summarize the top-ranked article
    search_results = wikipedia.search(" ".join(keywords))
    article_title = search_results[0]
    return wikipedia.summary(article_title, sentences=2)

def speech_to_text():
    # Recognize speech from the default microphone
    with sr.Microphone() as source:
        audio = r.listen(source)
    try:
        input_text = r.recognize_google(audio)
    except sr.UnknownValueError:
        input_text = ""
    return input_text

def text_to_speech(text):
    tts = gTTS(text)
    tts.save("output.mp3")
    os.system("start output.mp3")  # Windows; use "open"/"mpg123" elsewhere

def main():
    while True:
        input_text = speech_to_text()
        # Process input
        intent, keywords = process_input(input_text)
        if intent == "search":
            answer = wolfram_alpha_query(keywords)
        else:
            answer = wikipedia_search(keywords)
        text_to_speech(answer)

if __name__ == "__main__":
    main()
```
*Future Development:*
(c) List and explain the various methods used to solve traveling salesman
problem.
*Exact Methods*
*Metaheuristics*
*Approximation Algorithms*
1. *Christofides Algorithm*: Guarantees a solution within 1.5 times the
optimal tour length.
*Hybrid Methods*
*Other Methods*
*Comparison of Methods*
| Method | Time Complexity | Solution Quality |
1. Problem size
3. Computational resources
4. Time constraints
Note that no single method performs best for all instances of TSP.
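As one concrete illustration, here is a minimal sketch of the nearest-neighbour constructive heuristic, a simple greedy method often used as a baseline or as a starting tour for metaheuristics (the distance matrix is invented):

```
# Symmetric distance matrix for 4 cities (invented values)
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def nearest_neighbour(start=0):
    # Greedily visit the closest unvisited city, then return to the start
    tour, unvisited = [start], set(range(len(dist))) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[last][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)
    return tour, sum(dist[a][b] for a, b in zip(tour, tour[1:]))

print(nearest_neighbour())  # ([0, 1, 3, 2, 0], 80)
```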
1. Forward Pass:
- Input data flows through the network, layer by layer.
- Output is calculated.
2. Error Calculation:
3. Backward Pass:
- Gradients of the loss function with respect to each weight and bias are
calculated.
4. Weight Update:
- Weights and biases are adjusted based on gradients and learning rate.
*Mathematical Formulation:*
Given:
- Input x, output y
- Target output t
- Loss function E
- Learning rate η
Forward Pass:
1. Input layer: a0 = x
2. Each layer l: zl = Wl * a(l-1) + bl, al = f(zl)
3. Output layer: y = aL
Error Calculation:
E = (1/2) * (y - t)^2
Backward Pass:
1. Output layer: δL = (y - t) * f'(zL)
2. Hidden layers: δl = (W(l+1))^T * δ(l+1) * f'(zl)
3. Gradients: ∂E/∂Wl = δl * (a(l-1))^T, ∂E/∂bl = δl
Weight Update:
Wl = Wl - η * ∂E/∂Wl
bl = bl - η * ∂E/∂bl
*Key Components:*
3. Learning Rate: η
*Types of Backpropagation:*
*Advantages:*
1. Efficient training
2. Robust to noise
*Disadvantages:*
1. Slow convergence
2. Local minima
*Real-World Applications:*
1. Image classification
2. Speech recognition
*Code Implementation:*
```
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(y):
    # Derivative of the sigmoid expressed via its output y
    return y * (1 - y)

# One training example and a single sigmoid unit (minimal sketch)
x = np.array([1.0, 2.0])
t = np.array([0.5])
eta = 0.1
W = np.random.randn(1, 2)
b = np.zeros(1)

# Train network
for i in range(1000):
    # Forward pass
    y = sigmoid(W @ x + b)
    # Backward pass: gradient of E = 0.5 * (y - t)^2
    dE_dz = (y - t) * sigmoid_derivative(y)
    # Weight update
    W -= eta * np.outer(dE_dz, x)
    b -= eta * dE_dz

print(y)  # close to the target 0.5 after training
```
(5+5)
_Advantages:_
2. Functional programming
3. Dynamic typing
_Applications in AI:_
1. Expert Systems
3. Computer Vision
4. Robotics
5. Machine Learning
1. Common Lisp
2. Scheme
3. Clojure
4. Emacs Lisp
_Resources:_
_Code Example:_
```
(defun add-two (n) (+ n 2))
(add-two 3) ; returns 5
(defun factorial (n)
  (if (zerop n) 1 (* n (factorial (- n 1)))))
(factorial 5) ; returns 120
```
*Key Features:*
1. Control systems
2. Image processing
3. Pattern recognition
4. Decision-making
5. Artificial intelligence
6. Data mining
*Fuzzy Logic:*
*Advantages:*
*Disadvantages:*
1. Computational complexity
3. Limited interpretability
*Real-World Examples:*
1. Image segmentation
2. Speech recognition
3. Expert systems
4. Autonomous vehicles
5. Medical diagnosis
*Mathematical Representation:*
A = {(x, μA(x)) | x ∈ X}, where μA(x) ∈ [0, 1] is the membership function giving the degree to which x belongs to A.
*Code Implementation:*
```
import numpy as np
import matplotlib.pyplot as plt

def fuzzy_set(x, a, b, c):
    # Triangular membership function: feet at a and c, peak at b
    if x <= a or x >= c:
        return 0
    elif x <= b:
        return (x - a) / (b - a)
    else:
        return (c - x) / (c - b)

x = np.linspace(0, 10, 100)
y = [fuzzy_set(i, 2, 5, 8) for i in x]
plt.plot(x, y)
plt.show()
```