Experiment 3 FDL - Jupyter Notebook

In [1]: import numpy as np
        import pandas as pd

In [2]: df = pd.read_csv("Admission_Predict.csv")
        df.head()

Out[2]:
   Serial No.  GRE Score  TOEFL Score  University Rating  SOP  LOR  CGPA  Research  Chance of Admit
0           1        337          118                  4  4.5  4.5  9.65         1             0.92
1           2        324          107                  4  4.0  4.5  8.87         1             0.76
2           3        316          104                  3  3.0  3.5  8.00         1             0.72
3           4        322          110                  3  3.5  2.5  8.67         1             0.80
4           5        314          103                  2  2.0  3.0  8.21         0             0.65

In [3]: df.columns = df.columns.str.strip()

In [4]: df.shape

Out[4]: (400, 9)

In [5]: df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 400 entries, 0 to 399
Data columns (total 9 columns):
 #   Column             Non-Null Count  Dtype
---  ------             --------------  -----
 0   Serial No.         400 non-null    int64
 1   GRE Score          400 non-null    int64
 2   TOEFL Score        400 non-null    int64
 3   University Rating  400 non-null    int64
 4   SOP                400 non-null    float64
 5   LOR                400 non-null    float64
 6   CGPA               400 non-null    float64
 7   Research           400 non-null    int64
 8   Chance of Admit    400 non-null    float64
dtypes: float64(4), int64(5)
memory usage: 28.2 KB

In [6]: df.duplicated().sum()

Out[6]: 0

In [7]: df['Serial No.'].value_counts()

Out[7]: Serial No.
1      1
264    1
274    1
273    1
272    1
      ..
131    1
130    1
129    1
128    1
400    1
Name: count, Length: 400, dtype: int64
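
The info() listing already reports 400 non-null values for every column, and duplicated().sum() returns 0, so the dataset looks clean. To verify this explicitly and glance at the feature ranges before modelling, a follow-up cell along these lines could be added (a minimal sketch, not part of the original notebook):

In [ ]: # Hypothetical inspection cell: confirm no missing values and summarise feature ranges
        print(df.isnull().sum())   # expected to print 0 for every column
        print(df.describe())       # min/mean/max of GRE Score, CGPA, Chance of Admit, etc.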

In [8]: if 'Chance of Admit' in df.columns:
            X = df.drop(columns=['Chance of Admit'])
            y = df['Chance of Admit']
        else:
            print("Column 'Chance of Admit' not found in the dataset.")
            exit()
        print("Shape of X:", X.shape)
        print("Shape of y:", y.shape)

Shape of X: (400, 8)
Shape of y: (400,)

In [9]: from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from tensorflow.keras.models import Sequential
        from tensorflow.keras.layers import Dense
        from tensorflow.keras.optimizers import Adam

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

        scaler = StandardScaler()
        X_train = scaler.fit_transform(X_train)
        X_test = scaler.transform(X_test)

        model = Sequential()
        model.add(Dense(units=64, activation='relu', input_dim=X_train.shape[1]))
        model.add(Dense(units=32, activation='relu'))
        model.add(Dense(units=1))

        model.compile(optimizer=Adam(), loss='mean_squared_error')

        history = model.fit(X_train, y_train, epochs=100, validation_split=0.2, verbose=1)

        test_loss = model.evaluate(X_test, y_test, verbose=0)
        print(f"Test Loss: {test_loss}")

        y_pred = model.predict(X_test)

Epoch 92/100
8/8 [==============================] - 0s 7ms/step - loss: 9.3117e-04 - val_loss: 0.0055
Epoch 93/100
8/8 [==============================] - 0s 6ms/step - loss: 8.9376e-04 - val_loss: 0.0054
Epoch 94/100
8/8 [==============================] - 0s 7ms/step - loss: 8.9198e-04 - val_loss: 0.0056
Epoch 95/100
8/8 [==============================] - 0s 7ms/step - loss: 8.6865e-04 - val_loss: 0.0054
Epoch 96/100
8/8 [==============================] - 0s 6ms/step - loss: 8.6995e-04 - val_loss: 0.0056
Epoch 97/100
8/8 [==============================] - 0s 7ms/step - loss: 8.7003e-04 - val_loss: 0.0054
Epoch 98/100
8/8 [==============================] - 0s 6ms/step - loss: 7.9774e-04 - val_loss: 0.0056
Epoch 99/100
8/8 [==============================] - 0s 6ms/step - loss: 8.2290e-04 - val_loss: 0.0056
Epoch 100/100
8/8 [==============================] - 0s 7ms/step - loss: 8.0702e-04 - val_loss: 0.0055
Test Loss: 0.006245521362870932
3/3 [==============================] - 0s 2ms/step
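
Because the target ('Chance of Admit') is a continuous value between 0 and 1, the mean-squared-error test loss of about 0.0062 can be complemented with more interpretable regression metrics. A minimal sketch, assuming the y_test and y_pred arrays produced above:

In [ ]: from sklearn.metrics import mean_absolute_error, r2_score
        # y_pred from model.predict() has shape (n_samples, 1); flatten it to line up with y_test
        print("Test MAE:", mean_absolute_error(y_test, y_pred.ravel()))
        print("Test R^2:", r2_score(y_test, y_pred.ravel()))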

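The fit() call stores the per-epoch training and validation losses in history.history, so the learning curves can be plotted to check that both have flattened out by epoch 100. A minimal sketch, assuming matplotlib is available:

In [ ]: import matplotlib.pyplot as plt
        # Keras records 'loss' and 'val_loss' per epoch when validation_split is used
        plt.plot(history.history['loss'], label='training loss')
        plt.plot(history.history['val_loss'], label='validation loss')
        plt.xlabel('Epoch')
        plt.ylabel('MSE loss')
        plt.legend()
        plt.show()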
