Unit2_PracticeQuestions
1. Outline the primary objectives of Explainable AI (XAI) within the context of machine
learning models.
Answer: Explainable AI (XAI) focuses on making machine learning (ML) models
transparent, interpretable, and understandable to both technical and non-technical users. The
primary objectives of XAI are:
• Transparency: Provide clear insights into how models make decisions, including the
role of input features in predictions.
• Accountability: Ensure that the model can be audited and debugged for errors or
biases.
• Trust and Adoption: Increase user confidence by enabling stakeholders to
understand and trust the decisions made by AI systems.
• Compliance with Regulations: Help AI systems meet legal and ethical requirements
(e.g., GDPR) that mandate explainability in automated decision-making.
• Bias and Fairness Detection: Identify and mitigate biases in models to ensure
fairness in applications like hiring, lending, and criminal justice.
Examples: XAI is used in medical diagnosis to explain why an AI predicts a certain
condition, or in autonomous vehicles to clarify why specific navigation decisions are made.
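As a small illustration of the transparency objective, the sketch below uses permutation feature importance to show how much each input feature contributes to a model's predictions. It assumes scikit-learn is available, and the dataset and model are synthetic placeholders; this is one simple model-agnostic explanation technique, not the only approach.
# Minimal sketch: model-agnostic feature importance as a simple XAI technique.
# Assumes scikit-learn is installed; the dataset and model are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for, e.g., a loan-approval dataset.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
# Larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {imp:.3f}")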
3. Contrast the characteristics of problems that are effectively solvable using Generative
Adversarial Networks (GANs) with those that are not well-suited for GANs.
Answer:
Problems Solvable by GANs:
• Image and Video Generation: GANs excel at generating realistic images (e.g., faces,
landscapes) and videos.
• Data Augmentation: GANs generate synthetic data to improve machine learning model
performance.
• Domain Adaptation: Used to transform data from one domain to another (e.g., day-to-night
photo translation).
• Super-Resolution: Enhancing the resolution of images for applications like satellite imagery
and medical imaging.
Problems Not Well-Suited for GANs:
• Structured Data Generation: GANs struggle with structured, tabular data such as relational
databases.
• Sequential Data: GANs are not ideal for generating sequential data (e.g., time series, text)
compared to models like RNNs or Transformers.
• Interpretability Needs: GANs lack transparency, making them unsuitable for applications
requiring explainability.
• Fine-Grained Control: Controlling specific attributes of generated data is challenging without
extensions like Conditional GANs.
4. Explain the role of the Generator and Discriminator in GANs. How do they interact to
improve performance over time?
Answer:
• Generator: Generates synthetic data from random noise, attempting to mimic the real data
distribution.
• Discriminator: Evaluates whether a given sample is real (from the dataset) or fake (generated
by the Generator).
• Interaction:
o The Discriminator provides feedback to the Generator, highlighting weaknesses in the
generated data.
o The Generator uses this feedback to improve and produce more realistic data.
o Over time, the Discriminator becomes better at identifying fake data, and the Generator
becomes better at fooling the Discriminator, creating a feedback loop that enhances
performance.
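A minimal sketch of this feedback loop, assuming PyTorch and deliberately tiny illustrative networks and hyperparameters, is shown below: the Discriminator is first updated to separate real from generated samples, then the Generator is updated using the Discriminator's feedback.
# Minimal sketch of one GAN training step (PyTorch assumed; shapes and
# hyperparameters are illustrative, not prescriptive).
import torch
import torch.nn as nn

latent_dim, data_dim, batch_size = 16, 32, 64

# Illustrative networks; real applications use deeper, task-specific architectures.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.randn(batch_size, data_dim)          # stands in for a batch of real data
noise = torch.randn(batch_size, latent_dim)

# 1) Discriminator step: learn to label real samples 1 and generated samples 0.
fake = G(noise).detach()                          # detach so only D is updated here
d_loss = bce(D(real), torch.ones(batch_size, 1)) + bce(D(fake), torch.zeros(batch_size, 1))
opt_D.zero_grad(); d_loss.backward(); opt_D.step()

# 2) Generator step: produce samples the Discriminator scores as real (label 1).
fake = G(noise)
g_loss = bce(D(fake), torch.ones(batch_size, 1))  # D's feedback drives G's improvement
opt_G.zero_grad(); g_loss.backward(); opt_G.step()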
5. Discuss the main challenges in training GANs, such as mode collapse and vanishing gradients,
and suggest possible solutions.
Answer:
• Mode Collapse: The Generator produces only a narrow set of outputs, ignoring much of the
diversity present in the real data distribution.
o Solution: Use techniques like minibatch discrimination or feature matching.
• Vanishing Gradients: When the Discriminator becomes too confident, the Generator receives
near-zero gradients and its improvement stalls.
o Solution: Use Wasserstein GANs (WGANs), whose loss yields smoother, more informative
gradients (see the sketch after this list).
• Instability in Training: Balancing the Generator and Discriminator can be difficult.
o Solution: Adjust learning rates dynamically or use techniques like spectral
normalization.
• Overfitting in the Discriminator: The Discriminator becomes too strong, leaving little room for
the Generator to improve.
o Solution: Regularize the Discriminator using dropout or early stopping.
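As a sketch of one of these mitigations, the snippet below shows a WGAN-style critic update with weight clipping. PyTorch is assumed, and the critic architecture, optimizer, and clip value are illustrative; in practice, gradient penalties (WGAN-GP) or spectral normalization are often preferred to plain weight clipping.
# Minimal sketch of the WGAN critic update with weight clipping (PyTorch assumed;
# the critic architecture, optimizer, and clip value are illustrative).
import torch
import torch.nn as nn

data_dim, batch_size, clip_value = 32, 64, 0.01

# WGAN critic: no sigmoid output; it scores samples rather than classifying them.
critic = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt_C = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

real = torch.randn(batch_size, data_dim)   # stands in for a batch of real data
fake = torch.randn(batch_size, data_dim)   # stands in for detached Generator output

# Maximize the score gap between real and fake (minimize the negative gap).
c_loss = -(critic(real).mean() - critic(fake).mean())
opt_C.zero_grad(); c_loss.backward(); opt_C.step()

# Weight clipping enforces the Lipschitz constraint that keeps gradients informative.
for p in critic.parameters():
    p.data.clamp_(-clip_value, clip_value)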
7. Analyze how Conditional GANs (cGANs) extend the standard GAN framework. Provide
examples of their applications.
Answer: Conditional GANs (cGANs) extend standard GANs by conditioning both the Generator
and the Discriminator on additional information (e.g., class labels or attributes). This enables
cGANs to generate data with specific, controllable characteristics (see the sketch after the
examples below).
Examples of Applications:
• Image Synthesis: Generate images conditioned on class labels (e.g., generating specific types
of animals).
• Text-to-Image Translation: Create images based on textual descriptions.
• Super-Resolution: Generate high-resolution images based on low-resolution inputs.
• Domain-Specific Data Generation: Generate medical images (e.g., MRI scans) for specific
conditions.
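A minimal sketch of the conditioning idea, assuming PyTorch and illustrative layer sizes, is given below: the class label is embedded and concatenated to the Generator's noise vector and to the Discriminator's input, so both networks see the condition they must respect.
# Minimal sketch of conditioning in a cGAN (PyTorch assumed; sizes are illustrative).
import torch
import torch.nn as nn

latent_dim, data_dim, n_classes, batch_size = 16, 32, 10, 64

label_embed = nn.Embedding(n_classes, n_classes)   # learnable label representation

# Generator and Discriminator both receive the label alongside their usual input.
G = nn.Sequential(nn.Linear(latent_dim + n_classes, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim + n_classes, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

noise = torch.randn(batch_size, latent_dim)
labels = torch.randint(0, n_classes, (batch_size,))   # e.g., the class to generate
y = label_embed(labels)

fake = G(torch.cat([noise, y], dim=1))                # generate data for the requested class
score = D(torch.cat([fake, y], dim=1))                # judge realism given the same condition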
8. Analyze how the collaborative interaction between a GAN's Generator and Discriminator
improves the model's performance.
Answer:
• Roles:
o The Generator creates data to mimic the real data distribution.
o The Discriminator evaluates and classifies samples as real or fake.
• Interaction:
o The Generator learns to produce more realistic data based on feedback from the
Discriminator.
o The Discriminator improves its ability to identify subtle differences between real and
fake data.
• Collaboration: This adversarial training forces both components to improve iteratively,
resulting in a Generator that produces high-quality data and a Discriminator that can evaluate
data more accurately.