Lab Manual
Smt. Indira Gandhi College of Engineering, Ghansoli
VISION
MISSION
1. Serve and help transform society by graduating talented, broadly educated engineers.
2. Encourage students to engage in collaborative multidisciplinary activities.
3. Enable students to develop into ethical professionals.
4. Provide a conducive environment for teaching and learning.
Program Outcomes
PO2  Problem analysis: Identify, formulate, review research literature, and analyze complex engineering problems, reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.
PO5  Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools, including prediction and modeling, to complex engineering activities with an understanding of the limitations.
PO6  The engineer and society: Apply reasoning informed by contextual knowledge to assess societal, health, safety, legal, and cultural issues and the consequent responsibilities relevant to professional engineering practice.
PO12 Life-long learning: Recognize the need for, and have the preparation and ability to engage in, independent and life-long learning in the broadest context of technological change.
Lab Objectives
1. To articulate basic knowledge of fuzzy set theory through programming.
2. To design Associative Memory Networks.
3. To apply unsupervised learning to network design.
4. To demonstrate special networks and their applications in soft computing.
5. To implement hybrid computing systems.
Lab Outcomes
Lab Outcome                                                              Bloom's Taxonomy Level
CSL801.1  Implement fuzzy operations and functions towards fuzzy-rule creation.    Apply
CSL801.2  Build and train Associative Memory Networks.                             Create
CSL801.3  Build unsupervised-learning-based networks.                              Create
CSL801.4  Design and implement architectures of special networks.                  Create
CSL801.5  Implement neuro-fuzzy hybrid computing applications.                     Apply
LO-PO Mapping
LO        PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2
CSL801.1   2   -   -   -   3   -   -   -   -   -    -    2    -    -
CSL801.2   2   -   -   -   3   -   -   -   -   -    -    2    -    -
CSL801.3   -   -   3   -   -   -   -   -   -   -    -    2    3    -
CSL801.4   -   -   3   -   3   -   -   -   -   -    -    2    3    -
CSL801.5   -   3   -   -   -   -   -   -   -   -    -    2    -    3
Avg        2   3   3   -   3   -   -   -   -   -    -    2    3    3
Lab Plan
Practical No 1
Objective:
The objective of this lab is to introduce students to the concept of Hidden Markov Models (HMMs) and
guide them through the process of designing and implementing an HMM for outcome prediction. The
lab will cover the basic principles of HMMs, their components, and how to apply them to a simple
outcome prediction problem.
Prerequisites:
Python (3.x)
hmmlearn library (with NumPy and scikit-learn for data handling and evaluation)
Lab Steps:
1. Introduction to Markov Models and Hidden Markov Models
Markov Models are mathematical models used to describe systems that undergo transitions between
different states over discrete time steps. The key principle of Markov Models is the Markov property,
which states that the future state of the system depends only on its current state and is independent of
the sequence of events that preceded it. Common applications include:
Weather Prediction: Modeling transitions between weather states (sunny, rainy, cloudy) over
time.
Queueing Systems: Analyzing the flow of entities through a queue or network of queues.
Stock Price Movements: Modeling the stochastic nature of stock price changes.
Natural Language Processing: Capturing the sequence of words in language processing tasks.
Biological Systems: Modeling molecular and genetic processes.
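As a quick illustration of the Markov property, the short sketch below simulates a toy weather chain in Python (the states and transition probabilities are illustrative, not taken from this manual):

import numpy as np

# Toy 3-state weather chain; transition[i][j] = P(next state j | current state i)
states = ["sunny", "rainy", "cloudy"]
transition = np.array([[0.7, 0.1, 0.2],
                       [0.3, 0.4, 0.3],
                       [0.3, 0.3, 0.4]])

rng = np.random.default_rng(0)
state = 0  # start in "sunny"
sequence = [states[state]]
for _ in range(9):
    # Markov property: the next state depends only on the current state
    state = rng.choice(3, p=transition[state])
    sequence.append(states[state])
print(" -> ".join(sequence))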
Hidden Markov Models (HMMs) are an extension of basic Markov Models designed to model systems
with unobservable or hidden states. In an HMM, while the underlying system undergoes transitions
between hidden states, only observable emissions or symbols are directly visible. HMMs are particularly
useful in situations where the true states are not directly measurable but can be inferred from observed
data.
Speech Recognition: HMMs are widely used in speech recognition systems. Hidden states
represent phonemes, and observation symbols are acoustic features.
Bioinformatics: HMMs are applied in genomics for gene prediction, sequence alignment, and
protein structure prediction.
Financial Modeling: HMMs can model financial time series data, where hidden states represent
market regimes and observed symbols represent asset prices.
Natural Language Processing (NLP): HMMs are used in language modeling, part-of-speech
tagging, and named entity recognition.
Gesture Recognition: HMMs can be employed to recognize gestures in computer vision
applications, where hidden states represent different gestures.
Robotics: HMMs are used for localization and mapping in robotics, where hidden states
represent the robot's location.
Healthcare: HMMs find applications in predicting disease progression, analyzing medical time
series data, and personalized medicine.
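To make "inferring hidden states from observations" concrete, the following minimal sketch implements the forward algorithm, which computes the likelihood of an observation sequence under a 2-state HMM (all probability values are illustrative):

import numpy as np

start = np.array([0.6, 0.4])                # initial state probabilities
trans = np.array([[0.7, 0.3], [0.4, 0.6]])  # state transition probabilities
emit = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission probabilities per state
obs = [0, 0, 1]                             # observed symbol indices

# Forward recursion: alpha[i] = P(observations so far, current state = i)
alpha = start * emit[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ trans) * emit[:, o]
print("P(observation sequence) =", alpha.sum())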
2. Components of an HMM
States: Define the states that represent the underlying system. In this lab, consider two states: "Healthy"
and "Sick."
Observation Symbols: Define the observable outcomes or symbols associated with each state. For
example, "Normal" and "Abnormal."
Initial State Probabilities: Specify the probability of the system starting in each state.
Transition Probabilities: Specify the probabilities of moving from one hidden state to another at each time step.
Emission Probabilities: Specify the probabilities of observing different symbols in each state.
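Written out for the Healthy/Sick example, these components are small probability tables; the numbers below are illustrative placeholders that also match the solution later in this practical:

import numpy as np

states = ["Healthy", "Sick"]
observations = ["Normal", "Abnormal"]

initial_probabilities = np.array([0.6, 0.4])      # P(first state)
transition_probabilities = np.array([[0.7, 0.3],  # rows: current state
                                     [0.4, 0.6]]) # columns: next state
emission_probabilities = np.array([[0.9, 0.1],    # "Healthy" mostly emits "Normal"
                                   [0.2, 0.8]])   # "Sick" mostly emits "Abnormal"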
3. Synthetic Data Generation
Use the defined HMM parameters to generate synthetic data. This will involve simulating a sequence of
observations and hidden states.
4. Data Splitting
Split the synthetic data into training and testing sets. This step is crucial for evaluating the performance
of the HMM.
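For sequential data a contiguous split is usual, so the order of observations is preserved; a minimal sketch (assuming X and Z are the observation and hidden-state arrays from the previous step):

# Keep the first 80% of the sequence for training, the rest for testing
split = int(0.8 * len(X))
X_train, X_test = X[:split], X[split:]
Z_train, Z_test = Z[:split], Z[split:]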
5. Model Training
Utilize the hmmlearn library to train the HMM model on the training set.
6. Prediction and Evaluation
Apply the trained HMM model to predict hidden states on the test set.
Evaluate the accuracy of predictions using appropriate metrics (e.g., accuracy_score from scikit-learn).
7. Conclusion
Discuss the results and insights gained from the HMM model.
Explore potential improvements or extensions to the HMM for more realistic scenarios.
Conclusion:
In this lab, you've gained hands-on experience in designing and implementing a Hidden Markov Model
for outcome prediction. You've learned about the key components of an HMM, how to generate
synthetic data, split it for training and testing, and train the model using the hmmlearn library. The
evaluation of the model's accuracy provides insights into its predictive capabilities.
Hidden Markov Models are powerful tools with applications in various fields, including speech
recognition, bioinformatics, and finance. Understanding their principles and implementation is valuable
for anyone working on prediction tasks where the underlying system may not be directly observable.
Solution:
A reconstructed, runnable sketch of the solution (it assumes hmmlearn 0.3+, whose CategoricalHMM models discrete observations; older hmmlearn versions used MultinomialHMM for the same purpose):

import numpy as np
from hmmlearn import hmm
from sklearn.metrics import accuracy_score

np.random.seed(42)

# Initial state probabilities ("Healthy", "Sick")
initial_probabilities = np.array([0.6, 0.4])
# Transition probabilities
transition_probabilities = np.array([[0.7, 0.3], [0.4, 0.6]])
# Emission probabilities ("Normal", "Abnormal" in each state)
emission_probabilities = np.array([[0.9, 0.1], [0.2, 0.8]])
# Number of samples
num_samples = 1000

model = hmm.CategoricalHMM(n_components=2)
model.startprob_ = initial_probabilities
model.transmat_ = transition_probabilities
model.emissionprob_ = emission_probabilities

# Generate samples: X are observed symbols, Z the true hidden states
X, Z = model.sample(num_samples)

# Split into training and testing sets (80/20, contiguous)
X_train, X_test = X[:800], X[800:]
Z_test = Z[800:]

model.fit(X_train)
predicted_states = model.predict(X_test)
print("Accuracy:", accuracy_score(Z_test, predicted_states))
Output Interpretation:
1. We define the states, observation symbols, transition probabilities, emission probabilities, and
initial state probabilities.
2. Generate synthetic data using the defined HMM parameters.
3. Split the data into training and testing sets.
4. Train the HMM model on the training set.
5. Predict the hidden states on the test set.
6. Evaluate the accuracy of the prediction.
Practical No 2
Objective:
The objective of this lab is to build and train a generative multi-layer network model using an
appropriate dataset. Generative models aim to learn the underlying probability distribution of the data
in order to generate new samples that resemble the training data.
Materials Required:
Python (3.x) with TensorFlow or PyTorch
A suitable dataset for the chosen task (e.g., MNIST, CIFAR-10, Fashion-MNIST)
Procedure:
Setup Environment:
Ensure that Python and necessary libraries like TensorFlow or PyTorch are installed. You can install them
via Anaconda or pip.
Load Dataset:
Load the chosen dataset using appropriate functions provided by TensorFlow or PyTorch. If necessary,
preprocess the data (e.g., normalization, reshaping).
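As a concrete sketch with TensorFlow's built-in MNIST loader (scaling to [-1, 1] to suit a tanh generator output; substitute your chosen dataset):

import tensorflow as tf

# Load MNIST and scale pixel values from [0, 255] to [-1, 1]
(train_images, train_labels), _ = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(-1, 784).astype("float32")
train_images = (train_images - 127.5) / 127.5
print(train_images.shape)  # (60000, 784)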
Define Model Architecture:
Choose a suitable architecture for your generative model. Popular choices include Variational
Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Autoencoder architectures.
Implement the chosen architecture using TensorFlow or PyTorch modules. Define the layers, activation
functions, loss functions, and any other necessary components.
Train the Model:
Define training parameters such as batch size, learning rate, and number of epochs.
Train the generative model using the training data. Monitor the training process by observing the loss
values and any other relevant metrics.
Periodically evaluate the model's performance on the validation set to prevent overfitting.
Generate Samples:
Once the model is trained, use it to generate new samples from the learned distribution.
Generate a sufficient number of samples and visualize them to assess the quality of the generated data.
Analysis:
Evaluate the quality of the generated samples. Consider metrics such as visual fidelity, diversity, and
similarity to the original dataset.
Compare the performance of different generative models if you experimented with multiple
architectures.
Conclusion:
In this lab, we successfully built and trained a generative multi-layer network model using an
appropriate dataset. By implementing the chosen architecture and optimizing its parameters, we were
able to train a model that learned the underlying probability distribution of the data. Through the
generation of new samples, we observed the model's ability to produce data resembling the training
dataset. This exercise provides valuable insights into the capabilities and limitations of generative
models, which are widely used in various applications such as image generation, data augmentation, and
anomaly detection.
Solution
A minimal runnable sketch of a simple fully connected GAN on MNIST (TensorFlow 2.x and matplotlib assumed; the input-layer sizes missing from the listing are illustrative choices):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import matplotlib.pyplot as plt

# Load MNIST and scale pixel values to [-1, 1] to match the tanh generator output
(train_images, _), _ = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(-1, 784).astype('float32')
train_images = (train_images - 127.5) / 127.5

epochs = 50
batch_size = 128
noise_dim = 100
num_examples_to_generate = 16

dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(60000).batch(batch_size)

# Generator model: maps random noise to a 784-dimensional (28x28) image
def build_generator():
    model = tf.keras.Sequential([
        layers.Dense(256, activation='relu', input_shape=(noise_dim,)),
        layers.Dense(512, activation='relu'),
        layers.Dense(784, activation='tanh'),
    ])
    return model

# Discriminator model: classifies 784-dimensional images as real or fake
def build_discriminator():
    model = tf.keras.Sequential([
        layers.Dense(512, activation='relu', input_shape=(784,)),
        layers.Dense(256, activation='relu'),
        layers.Dense(1)
    ])
    return model

# Loss function
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

# Discriminator loss: real images should score 1, fakes 0
def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    total_loss = real_loss + fake_loss
    return total_loss

# Generator loss: fool the discriminator into scoring fakes as 1
def generator_loss(fake_output):
    return cross_entropy(tf.ones_like(fake_output), fake_output)

# Optimizers
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)

# Initialize models
generator = build_generator()
discriminator = build_discriminator()

@tf.function
def train_step(images):
    noise = tf.random.normal([tf.shape(images)[0], noise_dim])
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated = generator(noise, training=True)
        real_output = discriminator(images, training=True)
        fake_output = discriminator(generated, training=True)
        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)
    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator,
                                                discriminator.trainable_variables))

# Training loop with periodic image generation
seed = tf.random.normal([num_examples_to_generate, noise_dim])
for epoch in range(epochs):
    for images_batch in dataset:
        train_step(images_batch)
    print(f'Epoch {epoch+1}/{epochs}')
    if (epoch + 1) % 10 == 0:
        generated_images = generator(seed, training=False).numpy().reshape(-1, 28, 28)
        generated_images = (generated_images + 1) / 2  # rescale to [0, 1] for display
        plt.figure(figsize=(4, 4))
        for i in range(generated_images.shape[0]):
            plt.subplot(4, 4, i+1)
            plt.imshow(generated_images[i], cmap='gray')
            plt.axis('off')
        plt.savefig(f'gan_generated_image_epoch_{epoch+1}.png')
        plt.show()
Output Interpretation:
The code iterates over epochs, displaying the current epoch number during training.
Every 10 epochs, it generates a batch of synthetic images using the generator model and saves them as
PNG files.
These images represent the current progress of the generator in generating handwritten digits similar to
those in the MNIST dataset.
When you run this code, you'll observe the training progress in the console, and after each set of 10
epochs, you'll find PNG files named gan_generated_image_epoch_{epoch_number}.png saved in your
working directory, displaying the generated images at that epoch.
Practical No 3
Objective: The objective of this lab is to understand and implement a Deep Convolutional Generative
Adversarial Network (DCGAN) for generating images based on a given dataset.
Materials Required:
Python (3.x) with TensorFlow and matplotlib
A suitable image dataset (e.g., MNIST)
Procedure:
Understanding DCGAN:
Familiarize yourself with the concept of DCGANs. They consist of two neural networks: the generator
and the discriminator. The generator tries to create realistic images, while the discriminator tries to
distinguish between real and fake images.
Setting Up, Building, and Training:
Set up the environment, load and preprocess the dataset, build the generator and discriminator models,
and train the DCGAN (see the solution below for a complete sketch).
Generating Images:
Once training is complete, use the trained generator to produce new images from random noise vectors.
Evaluation:
Evaluate the quality of generated images (e.g., visual inspection, metrics like Inception Score, Frechet
Inception Distance, etc.).
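As one example, the Frechet Inception Distance compares the mean and covariance of feature vectors extracted from real and generated images. A minimal sketch of the distance computation itself (in practice the (N, D) feature arrays come from a pre-trained Inception network; random arrays stand in here):

import numpy as np
from scipy import linalg

def frechet_distance(act_real, act_fake):
    mu1, mu2 = act_real.mean(axis=0), act_fake.mean(axis=0)
    sigma1 = np.cov(act_real, rowvar=False)
    sigma2 = np.cov(act_fake, rowvar=False)
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from sqrtm
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(sigma1 + sigma2 - 2 * covmean))

rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(256, 64)),
                       rng.normal(0.1, 1.0, size=(256, 64))))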
Fine-Tuning and Experimentation:
Experiment with different architectures, hyperparameters, and training techniques to improve the
performance of the DCGAN.
Conclusion:
In this lab, we successfully built and trained a Deep Convolutional Generative Adversarial Network
(DCGAN) for generating images based on a given dataset. We started by understanding the concept of
DCGANs and then proceeded to set up the environment, load and preprocess the dataset, build the
generator and discriminator models, and train the DCGAN. We then generated new images using the
trained generator and evaluated the quality of the generated images. Additionally, we explored fine-
tuning and experimentation to improve the performance of the DCGAN further.
By completing this lab, we gained valuable hands-on experience in implementing DCGANs and learned
how to generate realistic images using deep learning techniques. This knowledge can be further applied
to various domains such as image synthesis, data augmentation, and generative art.
Solution
A minimal runnable sketch of the DCGAN on MNIST (TensorFlow 2.x and matplotlib assumed; the convolutional layer parameters restore layers missing from the listing and follow the shapes of the standard TensorFlow DCGAN tutorial):

import tensorflow as tf
import numpy as np
from tensorflow.keras import layers, models
import matplotlib.pyplot as plt

# Load MNIST and scale pixel values to [-1, 1] for the tanh generator output
(train_images, _), _ = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(-1, 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5

EPOCHS = 50
BATCH_SIZE = 256
noise_dim = 100
num_examples_to_generate = 16

dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(60000).batch(BATCH_SIZE)

# Generator: upsample a noise vector to a 28x28x1 image with transposed convolutions
def build_generator():
    model = models.Sequential([
        layers.Dense(7*7*256, use_bias=False, input_shape=(noise_dim,)),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Reshape((7, 7, 256)),
        layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False,
                               activation='tanh'),
    ])
    return model

# Discriminator: downsample the image with strided convolutions to a single logit
def build_discriminator():
    model = models.Sequential([
        layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same', input_shape=(28, 28, 1)),
        layers.LeakyReLU(),
        layers.Dropout(0.3),
        layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'),
        layers.LeakyReLU(),
        layers.Dropout(0.3),
        layers.Flatten(),
        layers.Dense(1)
    ])
    return model

generator = build_generator()
discriminator = build_discriminator()

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

# Generator loss
def generator_loss(fake_output):
    return cross_entropy(tf.ones_like(fake_output), fake_output)

# Discriminator loss
def discriminator_loss(real_output, fake_output):
    return cross_entropy(tf.ones_like(real_output), real_output) + \
           cross_entropy(tf.zeros_like(fake_output), fake_output)

# Optimizers
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(images):
    noise = tf.random.normal([tf.shape(images)[0], noise_dim])
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)
        real_output = discriminator(images, training=True)
        fake_output = discriminator(generated_images, training=True)
        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)
    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator,
                                                discriminator.trainable_variables))

# Training loop with periodic visualization
seed = tf.random.normal([num_examples_to_generate, noise_dim])
for epoch in range(EPOCHS):
    for batch_images in dataset:
        train_step(batch_images)
    if epoch % 10 == 0:
        generated_images = generator(seed, training=False)
        plt.figure(figsize=(4, 4))
        for j in range(generated_images.shape[0]):
            plt.subplot(4, 4, j+1)
            plt.imshow(generated_images[j, :, :, 0] * 0.5 + 0.5, cmap='gray')
            plt.axis('off')
        plt.show()
Output Interpretation:
After training, the model will generate images similar to the MNIST dataset. Every 10 epochs, a grid of
16 generated images is displayed. These images progressively resemble handwritten digits as training
proceeds.
Practical No 4
Objective:
The objective of this lab is to develop a Conditional Generative Adversarial Network (CGAN) to direct the
image generation process of the generator model. The CGAN framework extends the traditional GAN by
conditioning both the generator and discriminator networks on additional information, allowing for the
generation of images based on specific conditions.
Materials Required:
Python (3.x) with TensorFlow and matplotlib
A labeled image dataset (e.g., MNIST)
Procedure:
Dataset Preparation:
Select an appropriate dataset for training the CGAN model. Common choices include MNIST, CIFAR-10,
or a custom dataset relevant to your application.
Preprocess the dataset as necessary, including normalization and resizing, to ensure compatibility with
the network architecture.
Model Architecture:
Design the architecture for both the generator and discriminator networks.
Incorporate conditional inputs into both networks to enable conditional image generation, as illustrated
in the sketch below.
Utilize appropriate activation functions, normalization layers, and other components as needed.
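One common way to inject the condition is to embed the class label and concatenate it with the noise vector; a hedged sketch using the Keras functional API (the embedding size of 50 and the dense stack are illustrative choices, not prescribed by this manual):

import tensorflow as tf
from tensorflow.keras import layers

noise_dim, num_classes = 100, 10

# Embed the integer label and concatenate it with the noise vector
noise_in = layers.Input(shape=(noise_dim,))
label_in = layers.Input(shape=(1,), dtype='int32')
label_vec = layers.Flatten()(layers.Embedding(num_classes, 50)(label_in))
merged = layers.Concatenate()([noise_in, label_vec])
x = layers.Dense(7 * 7 * 256, activation='relu')(merged)
x = layers.Reshape((7, 7, 256))(x)
conditioned_stub = tf.keras.Model([noise_in, label_in], x)
conditioned_stub.summary()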
Model Implementation:
Define the loss functions for both the generator and discriminator, incorporating adversarial loss along
with any additional losses required for conditioning.
Set up the training loop to optimize the network parameters using techniques such as stochastic
gradient descent or Adam optimization.
Training:
Initialize the model parameters and set hyperparameters such as learning rate, batch size, and number
of training epochs.
Train the CGAN model on the prepared dataset, monitoring the training progress by evaluating the
generated images and loss values.
Evaluation:
Evaluate the trained CGAN model by generating images conditioned on different input conditions.
Assess the quality of the generated images qualitatively and, if applicable, quantitatively using metrics
such as Inception Score or Frechet Inception Distance.
Compare the generated images against ground truth images from the dataset to gauge the model's
performance.
Conclusion:
In this lab, we successfully developed and trained a Conditional Generative Adversarial Network (CGAN)
for image generation. By conditioning both the generator and discriminator networks on additional
information, we were able to direct the image generation process based on specific conditions. Through
iterative training, the model learned to generate realistic images that adhere to the specified conditions.
The effectiveness of the trained CGAN model was evaluated through qualitative and, if applicable,
quantitative assessments, demonstrating its capability to generate high-quality images in a controlled
manner.
Solution
A minimal runnable sketch of the CGAN on MNIST (TensorFlow 2.x and matplotlib assumed). The listing does not show the conditioning layers, so this sketch conditions the generator by concatenating a one-hot label with the noise vector, and the discriminator by appending a broadcast label map to the image channels; both are common, illustrative choices rather than the only option:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
import matplotlib.pyplot as plt

# Load MNIST with labels; scale images to [-1, 1]
(train_images, train_labels), _ = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(-1, 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5

EPOCHS = 50
BATCH_SIZE = 128
noise_dim = 100
num_classes = 10

dataset = tf.data.Dataset.from_tensor_slices(
    (train_images, train_labels.astype('int32'))).shuffle(60000).batch(BATCH_SIZE)

# Conditional generator: input is the noise vector concatenated with a one-hot label
def build_generator():
    model = models.Sequential([
        layers.Dense(7*7*256, use_bias=False, input_shape=(noise_dim + num_classes,)),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Reshape((7, 7, 256)),
        layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False,
                               activation='tanh'),
    ])
    return model

# Conditional discriminator: image channels plus a broadcast one-hot label map
def build_discriminator():
    model = models.Sequential([
        layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',
                      input_shape=(28, 28, 1 + num_classes)),
        layers.LeakyReLU(),
        layers.Dropout(0.3),
        layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'),
        layers.LeakyReLU(),
        layers.Dropout(0.3),
        layers.Flatten(),
        layers.Dense(1)
    ])
    return model

generator = build_generator()
discriminator = build_discriminator()

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(fake_output):
    return cross_entropy(tf.ones_like(fake_output), fake_output)

generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)

# Tile a one-hot label into a (28, 28, num_classes) map for the discriminator
def label_maps(labels):
    one_hot = tf.one_hot(labels, num_classes)
    return tf.tile(one_hot[:, None, None, :], [1, 28, 28, 1])

@tf.function
def train_step(images, labels):
    noise = tf.random.normal([tf.shape(images)[0], noise_dim])
    gen_input = tf.concat([noise, tf.one_hot(labels, num_classes)], axis=1)
    maps = label_maps(labels)
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(gen_input, training=True)
        real_output = discriminator(tf.concat([images, maps], axis=-1), training=True)
        fake_output = discriminator(tf.concat([generated_images, maps], axis=-1), training=True)
        gen_loss = generator_loss(fake_output)
        disc_loss = cross_entropy(tf.ones_like(real_output), real_output) + \
                    cross_entropy(tf.zeros_like(fake_output), fake_output)
    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator,
                                                discriminator.trainable_variables))

# Training loop
for epoch in range(EPOCHS):
    for images, labels in dataset:
        train_step(images, labels)
    # Print progress
    print(f'Epoch {epoch+1}/{EPOCHS}')

# Generate one image conditioned on a given digit label
def generate_images(label):
    noise = tf.random.normal([1, noise_dim])
    gen_input = tf.concat([noise, tf.one_hot([label], num_classes)], axis=1)
    image = generator(gen_input, training=False)[0, :, :, 0]
    plt.imshow(image * 0.5 + 0.5, cmap='gray')
    plt.title(f'Digit: {label}')
    plt.axis('off')
    plt.show()

for i in range(10):
    generate_images(i)
Output Interpretation:
The output consists of generated images corresponding to each digit (0-9). These images are generated
by conditioning the generator network on the specified label.
The output will display 10 images, each corresponding to a different digit from 0 to 9, generated by the
CGAN model. These images represent synthetic digit images created by the generator conditioned on
the respective digit labels. Due to the simplified nature of this example, the image quality may not be
perfect, but it demonstrates the basic functionality of a Conditional Generative Adversarial Network
(CGAN) for image generation.
Practical No 5
Introduction:
Variational Autoencoders (VAEs) are a type of generative model that learns to encode and decode data.
They are particularly useful for generating new data samples similar to the training data. In this lab, we
will train a Variational Autoencoder using TensorFlow on the Fashion MNIST dataset. Fashion MNIST is a
dataset consisting of grayscale images of 10 different clothing items.
Objective:
The objective of this lab is to understand the implementation of a Variational Autoencoder using
TensorFlow and to generate new samples of fashion items.
Materials:
Python (3.x) with TensorFlow and matplotlib
Fashion MNIST dataset (bundled with TensorFlow/Keras)
Procedure:
Setup Environment:
Ensure that Python, TensorFlow, and matplotlib are installed.
Model Definition:
Define the encoder and decoder networks using TensorFlow's Keras API.
Define the loss function, which is a combination of reconstruction loss and KL divergence.
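For a diagonal Gaussian posterior and a standard normal prior, the KL term has a closed form; a small sketch (mean and logvar are the two halves of the encoder output):

import tensorflow as tf

# KL( N(mean, exp(logvar)) || N(0, I) ), summed over latent dims, averaged over the batch
def kl_divergence(mean, logvar):
    return -0.5 * tf.reduce_mean(
        tf.reduce_sum(1.0 + logvar - tf.square(mean) - tf.exp(logvar), axis=1))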
Training:
Train the model on the Fashion MNIST dataset for a certain number of epochs.
Generation:
After training, use the decoder part of the VAE to generate new samples.
Randomly sample from the latent space and decode to generate new images.
Visualization:
Visualize the original images along with their reconstructed versions to evaluate the performance of the
VAE.
Visualize the generated samples to see the variety of images produced by the VAE.
Conclusion:
In this lab, we successfully trained a Variational Autoencoder using TensorFlow on the Fashion MNIST
dataset. We implemented the encoder and decoder networks, defined the loss function incorporating
both reconstruction loss and KL divergence, and trained the model. After training, we were able to
generate new samples by sampling from the latent space and decoding them using the decoder
network.
By visualizing the reconstructed images, we observed that the VAE is capable of reconstructing the input
images reasonably well. Moreover, the generated samples exhibited diverse fashion items,
demonstrating the generative capability of the VAE.
Overall, this lab provided hands-on experience with implementing and training a Variational
Autoencoder using TensorFlow, which is a fundamental technique in the field of unsupervised learning
and generative modeling.
Solution:
A minimal runnable sketch of the VAE (TensorFlow 2.x and matplotlib assumed; the hidden-layer sizes and the Conv2DTranspose decoder tail restore layers missing from the listing and are illustrative):

import tensorflow as tf
import numpy as np
from tensorflow.keras import layers, Model
import matplotlib.pyplot as plt

# Load Fashion MNIST and scale pixel values to [0, 1]
(train_images, _), _ = tf.keras.datasets.fashion_mnist.load_data()
train_images = train_images.reshape(-1, 28, 28, 1).astype('float32') / 255.0

latent_dim = 2

class VAE(Model):
    def __init__(self):
        super(VAE, self).__init__()
        self.encoder = tf.keras.Sequential([
            layers.Input(shape=(28, 28, 1)),
            layers.Flatten(),
            layers.Dense(256, activation='relu'),
            layers.Dense(latent_dim + latent_dim),  # outputs mean and log-variance
        ])
        self.decoder = tf.keras.Sequential([
            layers.Input(shape=(latent_dim,)),
            layers.Dense(7*7*32, activation='relu'),
            layers.Reshape(target_shape=(7, 7, 32)),
            layers.Conv2DTranspose(64, 3, strides=2, padding='same', activation='relu'),
            layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu'),
            layers.Conv2DTranspose(1, 3, strides=1, padding='same', activation='sigmoid'),
        ])

    def reparameterize(self, mean, logvar, eps=None):
        if eps is None:
            eps = tf.random.normal(shape=tf.shape(mean))
        return mean + tf.exp(0.5 * logvar) * eps

    def call(self, x):
        mean, logvar = tf.split(self.encoder(x), num_or_size_splits=2, axis=1)
        z = self.reparameterize(mean, logvar)
        return self.decoder(z)

vae = VAE()
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)

# Training loop
epochs = 10
batch_size = 128
num_examples_to_generate = 16

dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(60000).batch(batch_size)

for epoch in range(epochs):
    for batch in dataset:
        with tf.GradientTape() as tape:
            mean, logvar = tf.split(vae.encoder(batch), 2, axis=1)
            z = vae.reparameterize(mean, logvar)
            recon_batch = vae.decoder(z)
            # Per-image binary cross-entropy reconstruction loss
            recon_loss = tf.reduce_mean(tf.reduce_sum(
                tf.keras.losses.binary_crossentropy(batch, recon_batch), axis=[1, 2]))
            # Closed-form KL divergence to the standard normal prior
            kl_loss = -0.5 * tf.reduce_mean(tf.reduce_sum(
                1 + logvar - tf.square(mean) - tf.exp(logvar), axis=1))
            loss = recon_loss + kl_loss
        grads = tape.gradient(loss, vae.trainable_variables)
        optimizer.apply_gradients(zip(grads, vae.trainable_variables))
    print(f"Loss: {loss.numpy():.4f}")

# Sample random latent vectors and decode them into new fashion items
random_latent_vectors = tf.random.normal(shape=(num_examples_to_generate, latent_dim))
generated_images = vae.decoder(random_latent_vectors)
plt.figure(figsize=(10, 10))
for i in range(num_examples_to_generate):
    plt.subplot(4, 4, i + 1)
    plt.imshow(generated_images[i, :, :, 0], cmap='gray')
    plt.axis('off')
plt.show()
Output Interpretation:
After training the VAE for 10 epochs, the loss decreases over time, indicating the model's learning
progress. The generated images will vary based on the random latent vectors, but you should observe
fashion-like patterns similar to those in the Fashion MNIST dataset.
A 4x4 grid of generated images will be displayed, showcasing the variety of fashion items the VAE has
learned to generate.
This code provides a basic implementation of a Variational Autoencoder using TensorFlow on the
Fashion MNIST dataset. Depending on your computational resources and time, you may adjust the
number of epochs, batch size, and other hyperparameters for better performance.
Practical No 6
Objective:
The objective of this lab is to explore the working of pre-trained models for outcome generation. We will
utilize a pre-trained model to generate outcomes based on input data.
Materials Required:
Python (3.x)
openai library (or the library for your chosen pre-trained model)
An API key for the chosen service, if required
Procedure:
Setup Environment:
Depending on the pre-trained model you choose, install the corresponding library. For example, for
GPT-3, you can use OpenAI's openai library:
pip install openai
Choose a Pre-trained Model:
Select the pre-trained model you want to use for outcome generation. Consider factors such as the type
of data it supports (text, images, etc.), the quality of generated outcomes, and any usage restrictions.
Prepare Input Data:
Collect or generate sample input data that you will use to feed into the pre-trained model for outcome
generation. This could be text, images, or any other format supported by the model.
Generate Outcomes:
Write a Python script or Jupyter Notebook to interact with the pre-trained model and generate
outcomes based on the input data.
Follow the documentation of the chosen pre-trained model library to understand how to use it for
outcome generation.
Input your sample data into the model and observe the outcomes generated.
Experiment with different input data to see how the pre-trained model responds.
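For instance, a hedged sketch that varies the prompt and the sampling temperature (legacy openai library, pre-1.0; the engine name is an illustrative choice):

import openai

openai.api_key = "your_openai_api_key_here"

prompts = ["Once upon a time,", "The future of artificial intelligence is"]
for p in prompts:
    # Higher temperature gives more diverse, less deterministic outputs
    response = openai.Completion.create(
        engine="text-davinci-003",
        prompt=p,
        max_tokens=50,
        temperature=0.9,
    )
    print(p, "->", response.choices[0].text.strip())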
Conclusion:
In this lab, we explored the working of pre-trained models for outcome generation. We selected a pre-
trained model, prepared input data, and generated outcomes based on that data. Through
experimentation, we observed how the model responds to different inputs and analyzed the quality of
the outcomes generated.
Pre-trained models offer a powerful tool for various natural language processing and computer vision
tasks. They can generate outcomes that are often contextually relevant and coherent. However, it's
essential to understand the limitations of these models, including biases in the training data and
occasional inaccuracies in the generated outcomes.
Overall, this lab provided valuable insights into the capabilities of pre-trained models for outcome
generation and demonstrated their potential applications in various domains. Further experimentation
and research in this area can lead to advancements in artificial intelligence and machine learning
technologies.
Solution
A minimal sketch using the legacy Completions API of the openai Python library (pre-1.0 versions; the engine name is an illustrative choice):

import openai

api_key = "your_openai_api_key_here"
openai.api_key = api_key

prompt = "Once upon a time, in a land far far away,"

response = openai.Completion.create(
    engine="text-davinci-003",
    prompt=prompt,
    max_tokens=50  # You can adjust the number of tokens for the desired length of output
)

print(response.choices[0].text.strip())
Output Interpretation:
For the given prompt "Once upon a time, in a land far far away," the generated outcome could be
something like:
There lived a brave knight named Sir Cedric. He embarked on a quest to rescue the princess from the
clutches of an evil dragon.
This is just an example, and the actual outcome may vary based on the randomness and diversity of the
GPT-3 model's responses. You can try different prompts and experiment with the parameters to observe
various outcomes generated by the model.