DeepLearningLab.ipynb - Colab

The document outlines an experiment to implement a basic linear regression model using a feedforward neural network. It details the prerequisites, experimental setup, and step-by-step procedure, including data generation, model definition, training, and evaluation. It also discusses precautions and potential sources of error, along with expected results and observations from the experiment.



Experiment 1
Creating a basic network and analyzing its performance.

Objective:

To implement a basic linear regression model, analyze its performance, and evaluate its
effectiveness in predicting outcomes based on a given dataset.

Prerequisites:

Basic understanding of linear regression.

Knowledge of Python programming.

Familiarity with libraries such as NumPy, pandas, matplotlib, and scikit-learn.

Fundamental concepts of data preprocessing and evaluation metrics (e.g., Mean Squared Error, R² score).

Experimental Setup

Software Requirements:

Python (preferably version 3.x)

Jupyter Notebook or any Python IDE (e.g., PyCharm, VS Code)

Required libraries: numpy, matplotlib, tensorflow

Hardware Requirements:

Any standard computer with a working Python environment.

Theory and Application

Neural Network Model (Feedforward with 1 Hidden Layer)

A simple feedforward neural network with one hidden layer:

Input Layer: Takes the feature $X$.

Hidden Layer: Uses ReLU activation to introduce non-linearity.

Output Layer: Predicts continuous values (regression task).

$\hat{Y} = W_2 \cdot f(W_1 X + b_1) + b_2$

where:

$W_1, W_2$ are the weights.

$b_1, b_2$ are the biases.

$f(x)$ is the activation function (ReLU).
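
As a concrete illustration, a minimal Keras sketch of this architecture is shown below; the hidden-layer width of 16 units is an illustrative assumption, not something specified above:

import tensorflow as tf

# One hidden layer with ReLU, one linear output unit for regression
# (the hidden width of 16 is an illustrative choice)
hidden_model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),                    # input layer: feature X
    tf.keras.layers.Dense(16, activation='relu'),  # hidden layer: f(W1 X + b1)
    tf.keras.layers.Dense(1)                       # output layer: W2 f(.) + b2
])

Note that the experiment below deliberately uses a single neuron with no hidden layer (pure linear regression); the sketch above shows the full one-hidden-layer variant described by the equation.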

Experimental Procedure


Step 1: Import Libraries

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

Step 2: Generate Synthetic Data

np.random.seed(42)
num_samples = 100 # Number of data points
x_train = np.random.rand(num_samples, 1) * 10 # Random X values between 0 and 10
y_train = 3 * x_train + 2 + np.random.randn(num_samples, 1) # Linear relation with noise

Step 3: Define the Linear Regression Model

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),  # Explicit input layer (avoids the Keras input_shape deprecation warning)
    tf.keras.layers.Dense(1)     # Single neuron for regression
])


Step 4: Compile the Model

model.compile(optimizer='sgd', loss='mse') # 'mse' (Mean Squared Error) for regression
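
The string 'sgd' uses Keras's default learning rate. If training diverges or stalls (see Precautions below), the optimizer can be configured explicitly; a minimal sketch with an illustrative rate of 0.01:

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),  # explicit, illustrative rate
              loss='mse')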

Step 5: Train the Model

history = model.fit(x_train, y_train, epochs=200, verbose=0) # Train quietly
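
Since the fit runs with verbose=0, the loss curve is worth inspecting to confirm that training converged; a minimal sketch using the returned history object:

plt.plot(history.history['loss'])  # MSE recorded after each epoch
plt.xlabel("Epoch")
plt.ylabel("Training loss (MSE)")
plt.show()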

Step 6: Predict and Evaluate

y_pred = model.predict(x_train) # Get model predictions

4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step
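
The objective calls for evaluation metrics such as Mean Squared Error and the R² score, which the notebook does not compute explicitly; a minimal NumPy sketch:

mse = np.mean((y_train - y_pred) ** 2)            # Mean Squared Error
ss_res = np.sum((y_train - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_train - y_train.mean()) ** 2)  # total sum of squares
r2 = 1 - ss_res / ss_tot                          # R² score
print(f"MSE: {mse:.4f}, R2: {r2:.4f}")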

Step 7: Plot the Results

plt.scatter(x_train, y_train, label="True Data")                # Original data
plt.plot(x_train, y_pred, color='red', label="Predicted Line")  # Model prediction
plt.title("Linear Regression using Neural Network")
plt.xlabel("X values")
plt.ylabel("Y values")


plt.legend()
plt.show()

# Without using TensorFlow

# Import required libraries
import numpy as np
import matplotlib.pyplot as plt

# Generate sample data
np.random.seed(42)
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)  # Linear relation with some noise

# Initialize parameters
m = 0 # Slope
b = 0 # Intercept
learning_rate = 0.1
epochs = 1000
n = len(X)

# Cost function (Mean Squared Error, halved to simplify the gradients)
def compute_cost(y, y_pred):
    return np.mean((y - y_pred) ** 2) / 2
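
The gradients used in the training loop below follow from this halved cost. With $\hat{y}_i = m x_i + b$ and $J(m, b) = \frac{1}{2n} \sum_i (y_i - \hat{y}_i)^2$:

$\frac{\partial J}{\partial m} = -\frac{1}{n} \sum_i x_i (y_i - \hat{y}_i), \qquad \frac{\partial J}{\partial b} = -\frac{1}{n} \sum_i (y_i - \hat{y}_i)$

The factor of 1/2 cancels the exponent when differentiating, which is why dm and db below carry no extra constant.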

# Training loop
for epoch in range(epochs):
    # Predictions
    y_pred = m * X + b

    # Compute cost
    cost = compute_cost(y, y_pred)

    # Compute gradients
    dm = (-1 / n) * np.sum(X * (y - y_pred))
    db = (-1 / n) * np.sum(y - y_pred)

    # Update parameters
    m -= learning_rate * dm
    b -= learning_rate * db

    # Print cost every 100 epochs
    if epoch % 100 == 0:
        print(f"Epoch {epoch}, Cost: {cost:.4f}")

# Final parameters
print(f"Final Parameters: Slope = {m:.2f}, Intercept = {b:.2f}")

# Plot training data
plt.scatter(X, y, color="blue", label="Training Data")

# Plot regression line
X_line = np.linspace(0, 2, 100).reshape(-1, 1)  # Generate X values for line
y_line = m * X_line + b                         # Compute corresponding y values
plt.plot(X_line, y_line, color="red", linewidth=2, label="Regression Line")

plt.xlabel("X (Feature)")
plt.ylabel("y (Target)")
plt.title("Linear Regression using Gradient Descent")
plt.legend()
plt.show()


Epoch 0, Cost: 25.0042
Epoch 100, Cost: 0.4082
Epoch 200, Cost: 0.4035
Epoch 300, Cost: 0.4033
Epoch 400, Cost: 0.4033
Epoch 500, Cost: 0.4033
Epoch 600, Cost: 0.4033
Epoch 700, Cost: 0.4033
Epoch 800, Cost: 0.4033
Epoch 900, Cost: 0.4033
Final Parameters: Slope = 2.77, Intercept = 4.22

Precautions and Sources of Error

Precautions:

Choose an appropriate learning rate: Too high leads to divergence, too low causes slow
learning.

Ensure data is normalized: Large feature values can lead to unstable training (a standardization sketch follows this list).

Avoid overfitting: Use regularization techniques (dropout, L2).
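
A minimal standardization sketch for the synthetic data from Step 2 (plain NumPy; the variable names mirror those used above):

x_mean, x_std = x_train.mean(), x_train.std()
x_train_norm = (x_train - x_mean) / x_std  # zero mean, unit variance
# Apply the same statistics to any new inputs before predicting:
# x_new_norm = (x_new - x_mean) / x_std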

Sources of Error:

Overfitting: If the model learns noise instead of trends.

Underfitting: If the network is too simple to capture data patterns.

Unstable Training: Poor weight initialization or high learning rate.


Results and Observations

The loss function should decrease over epochs, showing successful training.

The final plot should show the neural network fitting the sine function.

The animation should display how the predictions evolve over time.

Short Questions

1. What is deep learning?

2. How does a neural network work?

