# Exercises in Artificial Intelligence

Here are some exercises in Artificial Intelligence (AI) along with their solutions. These exercises cover various AI topics, including search algorithms, machine learning, and logic.

---

### **Exercise 1: Search Algorithms (A* Search)**

**Problem:**
Consider the following graph where each node represents a city, and the edges represent roads
with associated costs. The heuristic values (h(n)) represent the estimated cost from each city to
the goal city (G).

```
Cities: A, B, C, D, G
Edges and Costs:
- A -> B (cost = 4)
- A -> C (cost = 2)
- B -> D (cost = 5)
- C -> D (cost = 8)
- D -> G (cost = 3)

Heuristic Values (h(n)):
- h(A) = 7
- h(B) = 6
- h(C) = 3
- h(D) = 2
- h(G) = 0
```

Use the **A* search algorithm** to find the shortest path from city **A** to city **G**. Show the
steps and the final path.

---

**Solution:**

1. **Initialization:**
- Start at node **A**.
- **f(n) = g(n) + h(n)**, where:
- **g(n)** is the cost from the start node to the current node.
- **h(n)** is the heuristic cost from the current node to the goal.

2. **Step 1: Expand A**
- **A** has two neighbors: **B** and **C**.
- Calculate **f(n)** for each:
- **f(B) = g(B) + h(B) = 4 + 6 = 10**
- **f(C) = g(C) + h(C) = 2 + 3 = 5**
- Add **B** and **C** to the open list.
- **Open List**: [C (f=5), B (f=10)]
- **Closed List**: [A]

3. **Step 2: Expand C**
- **C** has one neighbor: **D**.
- Calculate **f(D) = g(D) + h(D) = (2 + 8) + 2 = 12**
- Add **D** to the open list.
- **Open List**: [B (f=10), D (f=12)]
- **Closed List**: [A, C]

4. **Step 3: Expand B**
- **B** has one neighbor: **D**.
- Calculate **f(D) = g(D) + h(D) = (4 + 5) + 2 = 11**
- Update **D** in the open list with the lower cost.
- **Open List**: [D (f=11)]
- **Closed List**: [A, C, B]

5. **Step 4: Expand D**
- **D** has one neighbor: **G**.
- Calculate **f(G) = g(G) + h(G) = (4 + 5 + 3) + 0 = 12**
- Add **G** to the open list.
- **Open List**: [G (f=12)]
- **Closed List**: [A, C, B, D]

6. **Step 5: Goal Reached**
- **G** is the goal node.
- The shortest path is **A -> B -> D -> G** with a total cost of **12**.
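
The same trace can be reproduced in code. Below is a minimal A* sketch in Python; the function name `a_star` and the dictionary encoding of the graph are illustrative choices, not part of the original exercise.

```python
import heapq

def a_star(graph, h, start, goal):
    """Minimal A*: returns (path, cost) from start to goal, or None."""
    # Priority queue ordered by f = g + h; each entry carries the path so far.
    open_list = [(h[start], 0, start, [start])]
    best_g = {start: 0}  # cheapest known g(n) per node
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, []):
            g_new = g + cost
            if g_new < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = g_new  # better route found: (re)insert neighbor
                heapq.heappush(open_list,
                               (g_new + h[neighbor], g_new, neighbor, path + [neighbor]))
    return None

graph = {"A": [("B", 4), ("C", 2)], "B": [("D", 5)], "C": [("D", 8)], "D": [("G", 3)]}
h = {"A": 7, "B": 6, "C": 3, "D": 2, "G": 0}
print(a_star(graph, h, "A", "G"))  # (['A', 'B', 'D', 'G'], 12)
```

As in the hand trace, expanding B improves the route to D (f drops from 12 to 11), and the search returns the path A -> B -> D -> G with cost 12.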

---

### **Exercise 2: Machine Learning (Linear Regression)**

**Problem:**
You are given the following dataset:

| X (Feature) | Y (Target) |
|-------------|------------|
| 1           | 2          |
| 2           | 4          |
| 3           | 6          |
| 4           | 8          |

Use **linear regression** to predict the value of **Y** when **X = 5**. Assume the linear model
is of the form:

\[
Y = \theta_0 + \theta_1 X
\]

Use **gradient descent** to find the values of \(\theta_0\) and \(\theta_1\). Initialize \(\theta_0 = 0\) and \(\theta_1 = 0\), and use a learning rate (\(\alpha\)) of **0.1**.

---

**Solution:**

1. **Initialize Parameters:**
- \(\theta_0 = 0\), \(\theta_1 = 0\)
- Learning rate (\(\alpha\)) = 0.1

2. **Gradient Descent Update Rules:**
- \(\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} (h(X^{(i)}) - Y^{(i)})\)
- \(\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} (h(X^{(i)}) - Y^{(i)}) X^{(i)}\)
- Where \(h(X) = \theta_0 + \theta_1 X\) is the hypothesis function.

3. **Iteration 1:**
- Compute predictions:
- For \(X = 1\): \(h(1) = 0 + 0 \times 1 = 0\)
- For \(X = 2\): \(h(2) = 0 + 0 \times 2 = 0\)
- For \(X = 3\): \(h(3) = 0 + 0 \times 3 = 0\)
- For \(X = 4\): \(h(4) = 0 + 0 \times 4 = 0\)
- Compute gradients:
- \(\frac{\partial}{\partial \theta_0} = \frac{1}{4} [(0-2) + (0-4) + (0-6) + (0-8)] = -5\)
- \(\frac{\partial}{\partial \theta_1} = \frac{1}{4} [(0-2) \times 1 + (0-4) \times 2 + (0-6) \times 3 + (0-8) \times 4] = -15\)
- Update parameters:
- \(\theta_0 := 0 - 0.1 \times (-5) = 0.5\)
- \(\theta_1 := 0 - 0.1 \times (-15) = 1.5\)

4. **Iteration 2:**
- Compute predictions:
- For \(X = 1\): \(h(1) = 0.5 + 1.5 \times 1 = 2\)
- For \(X = 2\): \(h(2) = 0.5 + 1.5 \times 2 = 3.5\)
- For \(X = 3\): \(h(3) = 0.5 + 1.5 \times 3 = 5\)
- For \(X = 4\): \(h(4) = 0.5 + 1.5 \times 4 = 6.5\)
- Compute gradients:
- \(\frac{\partial}{\partial \theta_0} = \frac{1}{4} [(2-2) + (3.5-4) + (5-6) + (6.5-8)] = -0.75\)
- \(\frac{\partial}{\partial \theta_1} = \frac{1}{4} [(2-2) \times 1 + (3.5-4) \times 2 + (5-6) \times 3 + (6.5-8) \times 4] = -2.5\)
- Update parameters:
- \(\theta_0 := 0.5 - 0.1 \times (-0.75) = 0.575\)
- \(\theta_1 := 1.5 - 0.1 \times (-2.5) = 1.75\)

5. **Final Model:**
- After enough iterations, the parameters converge to:
- \(\theta_0 \approx 0\), \(\theta_1 \approx 2\) (the exact fit, since the data lie on the line \(Y = 2X\))
- The linear model is:
\[
Y = 2X
\]
- For \(X = 5\), the predicted value is:
\[
Y = 2 \times 5 = 10
\]
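
The loop above can be written out as a short Python sketch; its first two iterations reproduce the numbers computed by hand, and the 1,000-iteration budget and variable names are arbitrary choices for this illustration.

```python
# Batch gradient descent for h(X) = theta0 + theta1 * X on the dataset above.
X = [1, 2, 3, 4]
Y = [2, 4, 6, 8]
theta0, theta1 = 0.0, 0.0
alpha = 0.1
m = len(X)

for _ in range(1000):
    errors = [theta0 + theta1 * x - y for x, y in zip(X, Y)]
    grad0 = sum(errors) / m                            # partial derivative w.r.t. theta0
    grad1 = sum(e * x for e, x in zip(errors, X)) / m  # partial derivative w.r.t. theta1
    theta0 -= alpha * grad0
    theta1 -= alpha * grad1

print(theta0, theta1)       # approaches 0.0 and 2.0
print(theta0 + theta1 * 5)  # prediction for X = 5: approximately 10
```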

---

### **Exercise 3: Logic (Propositional Logic)**

**Problem:**
Given the following logical statements:
1. If it rains, then the ground will be wet. (\(R \rightarrow W\))
2. The ground is wet. (\(W\))

Can we conclude that it rained? Use logical reasoning to justify your answer.

---

**Solution:**

1. **Logical Statements:**
- \(R \rightarrow W\): If it rains, then the ground will be wet.
- \(W\): The ground is wet.

2. **Logical Reasoning:**
- The statement \(R \rightarrow W\) means that rain is a sufficient condition for the ground to
be wet, but it is not the only possible cause. The ground could be wet for other reasons (e.g.,
someone spilled water).
- Therefore, knowing that the ground is wet (\(W\)) does not necessarily imply that it rained
(\(R\)).

3. **Conclusion:**
- We **cannot** conclude that it rained based solely on the given statements; inferring \(R\) from \(R \rightarrow W\) and \(W\) is the fallacy of **affirming the consequent**.
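
This can also be verified mechanically by enumerating truth assignments, as in the short Python sketch below (the enumeration approach is an illustration, not part of the original exercise):

```python
from itertools import product

# Enumerate all truth assignments of (R, W) and keep those that satisfy
# both premises: R -> W (encoded as "not R or W") and W.
for R, W in product([True, False], repeat=2):
    if ((not R) or W) and W:
        print(f"R={R}, W={W}")
# Prints both R=True, W=True and R=False, W=True:
# the premises are consistent with R being false, so R is not entailed.
```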

---

These exercises cover fundamental AI concepts and provide step-by-step solutions to help
reinforce understanding.
