AAIEXP@5
EXPERIMENT-05
AIM:
To explore the working and prediction process of VGG16, a deep learning model
pre-trained on the ImageNet dataset, on an image classification task.
THEORY:
1. Pre-Trained Models
Pre-trained models are deep learning architectures that have been previously trained on
large benchmark datasets like ImageNet. These models learn rich feature representations
and can be reused for various downstream tasks such as classification, detection, and
segmentation through transfer learning.
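As an illustrative sketch of the transfer-learning idea described above (this setup and the 10-class task are assumptions for illustration, not part of this experiment), the pre-trained convolutional base can be frozen and a new classification head trained on top:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Load only the convolutional base (no ImageNet classifier head)
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained feature extractor

# Attach a new head for a hypothetical 10-class downstream task
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```

Only the new Dense layers are trained, which is why transfer learning needs far less data and compute than training from scratch.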
2. VGG16
● VGG16 is a 16-weight-layer convolutional neural network developed by the Visual
Geometry Group (VGG) at Oxford.
● It uses only 3×3 convolutional layers, stacked on top of each other, followed by
max-pooling layers and fully connected layers.
Mitesh Singh / B22 / 2101111
● It is trained on the ImageNet dataset, which contains over 1.2 million images across
1,000 classes.
● The final layer produces a probability distribution over these 1,000 categories.
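The architectural claims above can be checked directly by inspecting the model in Keras (a short sketch; weights are not needed just to examine the layer structure):

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Conv2D

# Architecture only; no pre-trained weights required for inspection
model = VGG16(weights=None)

# VGG16 has 13 convolutional layers (plus 3 dense layers = 16 weight layers)
conv_layers = [layer for layer in model.layers if isinstance(layer, Conv2D)]
print(len(conv_layers))                                           # 13
print(all(layer.kernel_size == (3, 3) for layer in conv_layers))  # True: every kernel is 3x3

# The final softmax layer covers the 1,000 ImageNet classes
print(model.output_shape)  # (None, 1000)
```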
Pre-trained models like VGG16 can be used directly to make predictions or fine-tuned for
specific tasks. These models are efficient as they reduce training time and require less data.
CODE:
import numpy as np
import requests
from io import BytesIO
from PIL import Image
import matplotlib.pyplot as plt
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

model = VGG16(weights='imagenet')  # load pre-trained ImageNet weights

img_url = "https://upload.wikimedia.org/wikipedia/commons/6/6e/Golde33443.jpg"  # Golden retriever
response = requests.get(img_url)
img = Image.open(BytesIO(response.content)).resize((224, 224))  # VGG16 expects 224x224 input

img_array = image.img_to_array(img)
img_batch = np.expand_dims(img_array, axis=0)  # add a batch dimension
img_preprocessed = preprocess_input(img_batch)

predictions = model.predict(img_preprocessed)
for _, label, prob in decode_predictions(predictions, top=3)[0]:
    print(f"{label}: {prob:.4f}")

plt.imshow(img)
plt.axis('off')
plt.show()
CONCLUSION:
The experiment demonstrated how the pre-trained VGG16 model generates accurate
predictions for real-world images. Because the model was trained on the extensive
ImageNet dataset, it classified objects in a new image without any additional training.
This highlights the power of transfer learning and pre-trained models in reducing
computational cost while still delivering accurate results. Such models serve as
foundational tools in many AI applications, from image classification to feature
extraction in more complex pipelines.
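The feature-extraction use mentioned above can be sketched as follows (a minimal illustration, not part of the experiment; the random input stands in for a real preprocessed image): the model is re-wired to output the activations of its penultimate 'fc2' layer, giving a 4096-dimensional feature vector per image.

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model

base = VGG16(weights='imagenet')
# Re-wire the model to output the penultimate 'fc2' layer instead of class scores
extractor = Model(inputs=base.input, outputs=base.get_layer('fc2').output)

# Placeholder batch of one 224x224 RGB image (a real image would go here)
dummy = preprocess_input(np.random.rand(1, 224, 224, 3) * 255.0)
features = extractor.predict(dummy)
print(features.shape)  # (1, 4096)
```

These fixed-length vectors can then feed simpler downstream models such as an SVM or a nearest-neighbour image search.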