GenAI Assignment 3 & 4
---
Objective:
In this assignment, you will implement two state-of-the-art generative AI models: a
Generative Adversarial Network (GAN) for image generation and a Diffusion Model
for progressive image synthesis. This task will help you understand how generative
models work by learning the underlying data distribution and generating new
samples.
---
Assignment Details:
Task 1: Generative Adversarial Network (GAN)
Requirements:
- Build both the generator and discriminator using fully connected or
convolutional layers.
- Train the GAN model for a reasonable number of epochs (e.g., 50 to 100),
ensuring the generator improves over time.
- Evaluate and visualize the performance:
- Plot the generated images at different stages of training to show the
progression of learning.
- Compare the generated images with real samples to assess the quality.
Expected Outputs:
- Generated images from the GAN at the end of training.
- Loss curves for both generator and discriminator during the training process.
- Explanation of any challenges encountered during training (e.g., mode collapse,
unstable training) and how you mitigated them.
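For reference, the standard GAN objective referred to above can be sketched framework-agnostically. This is a minimal numpy illustration of the discriminator loss and the non-saturating generator loss; the function name is illustrative, and your actual implementation should use PyTorch tensors and `torch.nn.BCELoss` or equivalent.

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-8):
    """Binary-cross-entropy GAN losses from discriminator outputs.

    d_real: discriminator probabilities on a batch of real images
    d_fake: discriminator probabilities on a batch of generated images
    """
    # Discriminator wants d_real -> 1 and d_fake -> 0.
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    # Non-saturating generator loss: generator wants d_fake -> 1.
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss
```

Logging these two quantities every few iterations gives you the loss curves requested above.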
---
Task 2: Diffusion Model
Requirements:
- Start by defining a noise schedule where noise is progressively added to real
images during training.
- Implement the reverse process where the model denoises noisy images to
generate new images from random noise.
- Train the diffusion model on the dataset to generate new images from scratch.
Expected Outputs:
- Show the progression of generated images over time as the model gradually
denoises them.
- Visualize the noisy images during the forward process and compare them with
the denoised, generated images from the reverse process.
- Report and visualize any evaluation metrics used, such as Mean Squared Error
(MSE) or Inception Score (if applicable).
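As a starting point for the noise schedule and forward process above, here is a minimal numpy sketch of a linear beta schedule and the closed-form forward (noising) step; translate it to PyTorch for your submission. The function names are illustrative, not prescribed.

```python
import numpy as np

def linear_beta_schedule(T, beta_start=1e-4, beta_end=0.02):
    """Linearly increasing noise variances beta_1 .. beta_T."""
    return np.linspace(beta_start, beta_end, T)

def q_sample(x0, t, alphas_cumprod, noise):
    """Forward (noising) process in closed form:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * noise,
    where abar_t is the cumulative product of (1 - beta)."""
    a_bar = alphas_cumprod[t]
    return np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * noise
```

During training you sample `noise` from a standard normal and train the denoising network to predict it; by the final timestep the image is almost pure noise, which is what makes generation from random noise possible in the reverse process.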
---
Implementation Guidelines:
1. Dataset:
- Use the MNIST or Fashion-MNIST dataset for both tasks. These datasets are
readily available through popular libraries like TensorFlow/Keras or PyTorch.
2. Frameworks:
- Use PyTorch for the implementation of both models.
3. Model Architecture:
- Ensure your models are well-structured and clearly explain the architecture of
the Generator and Discriminator (for GAN), and the noise schedule and denoising
process (for Diffusion Model).
- The implementation should be modular, with separate functions or classes for
different components of the models.
4. Colab Notebook:
- Your notebook should be well-commented and include sections that describe
the model architecture, training process, and results.
- Include visualizations like loss curves, sample outputs, and any evaluation
metrics used.
---
Submission Instructions:
- Submit your Colab file through the provided Google Drive link.
- Ensure that the code in the notebook runs without errors, and all required outputs
(such as visualizations) are properly generated.
- Name your Colab notebook file as `USN_A3_Assignment.ipynb`.
---
GenAI Assignment 4
---
Objective:
This assignment aims to assess your understanding of various optimization
methods, evaluation metrics, and the challenges encountered in generative AI
models. You will explore the theory and practical implications of these techniques,
focusing on how they are used in training and evaluating models like GANs, VAEs,
and other generative models.
---
Assignment Tasks:
1. Gradient Descent:
- Explain the basic principle of gradient descent.
- Discuss its variants: Batch Gradient Descent and Mini-batch Gradient Descent.
- Discuss its role in training machine learning models.
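To ground your discussion, the following numpy sketch implements mini-batch gradient descent on an ordinary least-squares objective; setting the batch size equal to the dataset size recovers batch gradient descent. The function name and hyperparameters are illustrative assumptions.

```python
import numpy as np

def minibatch_gd(X, y, lr=0.1, epochs=50, batch_size=4, seed=0):
    """Mini-batch gradient descent on 0.5 * ||Xw - y||^2 averaged per batch.
    batch_size == len(X) recovers (full) batch gradient descent."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    n = len(X)
    for _ in range(epochs):
        order = rng.permutation(n)            # reshuffle each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            grad = Xb.T @ (Xb @ w - yb) / len(idx)  # batch gradient
            w -= lr * grad                    # descent step
    return w
```

The trade-off to discuss: full-batch gradients are exact but expensive per step, while mini-batches are cheap and noisy, which often helps escape shallow regions of the loss surface.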
2. AdaDelta:
- Explain the AdaDelta optimizer and how it modifies Adagrad to overcome its
limitations.
- Compare AdaDelta with other optimization algorithms, especially for deep
learning applications.
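A single AdaDelta update (following Zeiler's 2012 formulation) can be sketched as below; this is a minimal numpy reference to check your explanation against, with illustrative variable names. The key contrast with Adagrad is that both accumulators are exponential moving averages, so the effective step size does not shrink monotonically to zero.

```python
import numpy as np

def adadelta_step(w, grad, Eg2, Edw2, rho=0.95, eps=1e-6):
    """One AdaDelta update. Eg2 / Edw2 are running averages of the
    squared gradients and squared parameter updates, respectively."""
    Eg2 = rho * Eg2 + (1 - rho) * grad**2
    # Update is scaled by the ratio of RMS[delta w] to RMS[g];
    # note no global learning rate is required.
    dw = -np.sqrt(Edw2 + eps) / np.sqrt(Eg2 + eps) * grad
    Edw2 = rho * Edw2 + (1 - rho) * dw**2
    return w + dw, Eg2, Edw2
```

Iterating this step on a simple quadratic such as f(w) = (w - 3)^2 shows the characteristic slow start (updates begin near sqrt(eps)) followed by self-accelerating step sizes.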
---
Evaluation Metrics:
3. Perplexity:
- Explain the Perplexity metric, commonly used in language models.
- Discuss its use in evaluating generative text models.
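Concretely, perplexity is the exponential of the average negative log-likelihood per token; a uniform model over V tokens has perplexity exactly V, which is a useful sanity check. A minimal numpy sketch (function name is illustrative):

```python
import numpy as np

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-likelihood per token.

    token_probs: the probability the model assigned to each observed token.
    Lower perplexity means the model is less 'surprised' by the text."""
    token_probs = np.asarray(token_probs, dtype=float)
    return float(np.exp(-np.mean(np.log(token_probs))))
```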
4. Reconstruction Error:
- Define Reconstruction Error and its importance in evaluating autoencoders
and VAEs.
- Discuss the relationship between reconstruction error and model performance.
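The most common form of reconstruction error is the per-element mean squared error between an input and its reconstruction, sketched below in numpy (binary-cross-entropy reconstruction loss is an equally valid choice for binarized image data):

```python
import numpy as np

def reconstruction_error(x, x_hat):
    """Mean squared error between inputs and their reconstructions.
    Zero iff the autoencoder reproduces the input exactly."""
    x, x_hat = np.asarray(x, dtype=float), np.asarray(x_hat, dtype=float)
    return float(np.mean((x - x_hat) ** 2))
```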
5. Mode Score:
- Describe the Mode Score and how it evaluates the quality and diversity of
generated samples.
- Compare the Mode Score with the Inception Score.
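Since the Mode Score is a refinement of the Inception Score, it helps to have the latter concrete. The sketch below computes IS = exp(E_x[KL(p(y|x) || p(y))]) from a matrix of classifier probabilities; it assumes strictly positive probabilities (in practice a small epsilon is added), and the function name is illustrative.

```python
import numpy as np

def inception_score(pyx):
    """Inception Score from a (num_samples x num_classes) matrix of
    classifier probabilities p(y|x) for generated images.

    High when each sample is classified confidently (low entropy of
    p(y|x)) AND samples cover many classes (high entropy of p(y))."""
    pyx = np.asarray(pyx, dtype=float)
    py = pyx.mean(axis=0)                               # marginal p(y)
    kl = np.sum(pyx * (np.log(pyx) - np.log(py)), axis=1)
    return float(np.exp(np.mean(kl)))
```

The Mode Score additionally compares p(y) against the class distribution of the *real* data, penalizing generators whose class balance drifts from the training set.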
6. Diversity Metrics:
- Explain the importance of Diversity Metrics in evaluating generative models.
- Discuss specific metrics used to assess diversity in generated data.
---
Challenges in Generative Models:
1. Mode Collapse:
- Define Mode Collapse and explain how it occurs in generative models like
GANs.
- Discuss possible techniques for mitigating mode collapse.
2. Stability:
- Discuss the issue of stability in training GANs and other generative models.
- Explain why maintaining stability is difficult in adversarial training and suggest
strategies to improve stability.
3. Convergence:
- Explain the concept of convergence in the context of generative models.
- Discuss why achieving convergence is challenging in GAN training and how
alternative loss functions (like Wasserstein loss) can help improve convergence.
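For the Wasserstein discussion, the loss change itself is small: the critic outputs unbounded scores instead of probabilities, and the losses become simple means of those scores. A minimal numpy sketch (illustrative names; the required Lipschitz constraint on the critic, via weight clipping or a gradient penalty, is not shown):

```python
import numpy as np

def wgan_losses(critic_real, critic_fake):
    """WGAN losses from unbounded critic scores (no sigmoid).

    The critic maximizes E[D(real)] - E[D(fake)], i.e. minimizes the
    negation below; the generator maximizes E[D(fake)]."""
    critic_loss = np.mean(critic_fake) - np.mean(critic_real)
    gen_loss = -np.mean(critic_fake)
    return critic_loss, gen_loss
```

Because this objective gives non-vanishing gradients even when real and generated distributions are easily separated, the critic loss also serves as a rough, monotone indicator of convergence, which the standard GAN losses do not provide.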
---