
Practice Set - Unit 2

1. Outline the primary objectives of Explainable AI (XAI) within the context of machine
learning models.
Answer: Explainable AI (XAI) focuses on making machine learning (ML) models
transparent, interpretable, and understandable to both technical and non-technical users. The
primary objectives of XAI are:
• Transparency: Provide clear insights into how models make decisions, including the
role of input features in predictions.
• Accountability: Ensure that the model can be audited and debugged for errors or
biases.
• Trust and Adoption: Increase user confidence by enabling stakeholders to
understand and trust the decisions made by AI systems.
• Compliance with Regulations: Help models comply with legal and ethical standards
(e.g., GDPR) requiring explainability in automated decision-making systems.
• Bias and Fairness Detection: Identify and mitigate biases in models to ensure
fairness in applications like hiring, lending, and criminal justice.
Examples: XAI is used in medical diagnosis to explain why an AI predicts a certain
condition, or in autonomous vehicles to clarify why specific navigation decisions are made.
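Dedicated toolkits such as SHAP and LIME compute per-prediction feature attributions; the snippet below is a simpler minimal sketch of the same transparency objective using scikit-learn's permutation importance (the breast-cancer dataset and random-forest model are illustrative assumptions, not part of the question).

```python
# Minimal sketch: a model-agnostic check of which input features drive a model's
# predictions (permutation importance); dataset and model are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

# Shuffle each feature in turn and measure how much the accuracy drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, data.data, data.target, n_repeats=5, random_state=0)

for name, score in sorted(zip(data.feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True)[:5]:
    print(f"{name}: {score:.4f}")
```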

2. Define Generative AI and discuss its applications across various industries.


Answer: Generative AI refers to a category of artificial intelligence models that can create
new, original data resembling the training data. These models learn patterns and distributions
from datasets to generate text, images, audio, and more.
Applications Across Industries:
• Healthcare: Generating synthetic medical data for research, drug discovery, or
creating enhanced medical imaging (e.g., GANs for MRI scans).
• Entertainment: Producing art, music, and scripts using models like DALL-E and
GPT.
• E-commerce: Personalizing customer experiences through AI-generated product
descriptions and reviews.
• Finance: Simulating market scenarios and generating synthetic data for algorithmic
trading.
• Education: Generating personalized learning content and assisting in automatic
content creation.

3. Contrast the characteristics of problems that are effectively solvable using Generative
Adversarial Networks (GANs) with those that are not well-suited for GANs.
Answer:
Problems Solvable by GANs:
• Image and Video Generation: GANs excel at generating realistic images (e.g., faces,
landscapes) and videos.
• Data Augmentation: GANs generate synthetic data to improve machine learning model
performance.
• Domain Adaptation: Used to transform data from one domain to another (e.g., day-to-night
photo translation).
• Super-Resolution: Enhancing the resolution of images for applications like satellite imagery
and medical imaging.
Problems Not Well-Suited for GANs:
• Structured Data Generation: GANs struggle with structured, tabular data such as relational
databases.
• Sequential Data: GANs are not ideal for generating sequential data (e.g., time series, text)
compared to models like RNNs or Transformers.
• Interpretability Needs: GANs lack transparency, making them unsuitable for applications
requiring explainability.
• Fine-Grained Control: Controlling specific attributes of generated data is challenging without
extensions like Conditional GANs.
4. Explain the role of the Generator and Discriminator in GANs. How do they interact to
improve performance over time?
Answer:
• Generator: Generates synthetic data from random noise, attempting to mimic the real data
distribution.
• Discriminator: Evaluates whether a given sample is real (from the dataset) or fake (generated
by the Generator).
• Interaction:
o The Discriminator provides feedback to the Generator, highlighting weaknesses in the
generated data.
o The Generator uses this feedback to improve and produce more realistic data.
o Over time, the Discriminator becomes better at identifying fake data, and the Generator
becomes better at fooling the Discriminator, creating a feedback loop that enhances
performance.
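A minimal sketch of this alternating update in PyTorch is shown below (the network sizes, optimiser settings, and the random `real_batch` placeholder are illustrative assumptions, not part of the question).

```python
# Minimal sketch of one GAN training step in PyTorch: the Discriminator is updated
# to separate real from fake data, then the Generator is updated to fool it.
import torch
import torch.nn as nn

latent_dim, data_dim, batch_size = 64, 784, 128

generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, data_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.rand(batch_size, data_dim)   # placeholder for real training data

# --- Discriminator step: distinguish real samples from generated ones ---
noise = torch.randn(batch_size, latent_dim)
fake_batch = generator(noise).detach()          # do not backprop into the Generator here
d_loss = bce(discriminator(real_batch), torch.ones(batch_size, 1)) + \
         bce(discriminator(fake_batch), torch.zeros(batch_size, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# --- Generator step: fool the Discriminator into labelling fakes as real ---
noise = torch.randn(batch_size, latent_dim)
g_loss = bce(discriminator(generator(noise)), torch.ones(batch_size, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Repeating these two steps over many batches is the feedback loop described above: each network's improvement raises the bar for the other.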

5. Discuss the main challenges in training GANs, such as mode collapse and vanishing gradients,
and suggest possible solutions.
Answer:
• Mode Collapse: The Generator collapses to a few output patterns, producing limited variety and ignoring much of the diversity in the dataset.
o Solution: Use techniques like minibatch discrimination or feature matching.
• Vanishing Gradients: When the Discriminator becomes too confident, the Generator receives near-zero gradients, stalling its improvement.
o Solution: Use Wasserstein GANs (WGANs), whose unbounded critic score keeps gradients informative (see the sketch after this list).
• Instability in Training: Balancing the Generator and Discriminator can be difficult.
o Solution: Adjust learning rates dynamically or use techniques like spectral
normalization.
• Overfitting in the Discriminator: The Discriminator becomes too strong, leaving little room for
the Generator to improve.
o Solution: Regularize the Discriminator using dropout or early stopping.
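The sketch below illustrates the WGAN remedy referenced above: the critic outputs an unbounded score, and the Wasserstein-style loss with weight clipping keeps gradients informative. It is a self-contained sketch with illustrative sizes; the RMSprop settings and clip value follow common WGAN practice but are assumptions here, not part of the question.

```python
# Sketch of WGAN-style training with weight clipping (illustrative dimensions).
import torch
import torch.nn as nn

latent_dim, data_dim, batch_size = 64, 784, 128
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, data_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                       nn.Linear(256, 1))          # critic: unbounded score, no sigmoid
opt_g = torch.optim.RMSprop(generator.parameters(), lr=5e-5)
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
real_batch = torch.rand(batch_size, data_dim)      # placeholder for real data
clip_value = 0.01                                  # weight-clipping range from the WGAN paper

# Critic step: maximise E[critic(real)] - E[critic(fake)], i.e. minimise its negative.
fake_batch = generator(torch.randn(batch_size, latent_dim)).detach()
c_loss = -(critic(real_batch).mean() - critic(fake_batch).mean())
opt_c.zero_grad(); c_loss.backward(); opt_c.step()
for p in critic.parameters():                      # enforce the Lipschitz constraint
    p.data.clamp_(-clip_value, clip_value)

# Generator step: the critic's unbounded score gives useful gradients even when
# fakes are easy to spot, which mitigates the vanishing-gradient problem.
g_loss = -critic(generator(torch.randn(batch_size, latent_dim))).mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```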

6. Analyze how Conditional GANs (cGANs) extend the standard GAN framework. Provide
examples of their applications.
Answer: Conditional GANs (cGANs) extend standard GANs by conditioning both the Generator
and Discriminator on additional information (e.g., labels, attributes). This enables cGANs to
generate data with specific characteristics.
Examples of Applications:
• Image Synthesis: Generate images conditioned on class labels (e.g., generating specific types
of animals).
• Text-to-Image Translation: Create images based on textual descriptions.
• Super-Resolution: Generate high-resolution images based on low-resolution inputs.
• Domain-Specific Data Generation: Generate medical images (e.g., MRI scans) for specific
conditions.
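A minimal PyTorch sketch of the conditioning idea follows: both networks take the class label as an extra input. The embedding-based conditioning, class count, and layer sizes are illustrative assumptions; cGANs can also inject the condition through other mechanisms.

```python
# Sketch of cGAN-style conditioning: Generator and Discriminator both receive the label.
import torch
import torch.nn as nn

latent_dim, n_classes, data_dim = 64, 10, 784

class ConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, n_classes)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_classes, 256), nn.ReLU(),
            nn.Linear(256, data_dim), nn.Tanh(),
        )

    def forward(self, noise, labels):
        # Concatenate the label embedding with the noise vector, so the Generator
        # learns to produce samples of the requested class.
        return self.net(torch.cat([noise, self.label_emb(labels)], dim=1))

class ConditionalDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, n_classes)
        self.net = nn.Sequential(
            nn.Linear(data_dim + n_classes, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, samples, labels):
        # The Discriminator judges "real or fake *for this label*", which forces
        # the Generator to respect the conditioning signal.
        return self.net(torch.cat([samples, self.label_emb(labels)], dim=1))

# Usage: ask the Generator for 16 samples of class 3.
gen = ConditionalGenerator()
labels = torch.full((16,), 3, dtype=torch.long)
samples = gen(torch.randn(16, latent_dim), labels)
```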

7. Analyze how the collaborative interaction between a GAN's Generator and Discriminator
improves the model's performance.
Answer:
• Roles:
o The Generator creates data to mimic the real data distribution.
o The Discriminator evaluates and classifies samples as real or fake.
• Interaction:
o The Generator learns to produce more realistic data based on feedback from the
Discriminator.
o The Discriminator improves its ability to identify subtle differences between real and
fake data.
• Collaboration: Although the two networks have opposing objectives, this adversarial training forces both components to improve iteratively, resulting in a Generator that produces high-quality data and a Discriminator that evaluates data more accurately.

CNN (Convolutional Neural Networks):


1. Explain the concept of receptive fields in CNNs and their importance in feature extraction.
2. Tabulate the differences between zero padding, same padding, and valid padding in CNNs.
3. Given an input image of size 128x128x3, apply three convolution layers with a filter size of 3x3,
stride 1, and padding 1. The number of filters in each layer is 16, 32, and 64, respectively.
Calculate the output dimensions (a helper sketch of the output-size formula follows this list).
4. Critically evaluate how pooling layers (max pooling vs. average pooling) affect feature maps
in CNNs.
5. Design a CNN for classifying handwritten digits from the MNIST dataset. Specify the number
of layers, filter sizes, and activation functions.
6. Discuss how the number of filters in a CNN affects both the model’s accuracy and
computational efficiency.
7. Given an input image of size 64x64x3, analyze the impact of increasing the kernel size from
3x3 to 7x7 on the receptive field and output dimensions.
8. Evaluate the advantages and disadvantages of using pre-trained CNN architectures like ResNet
and VGG for transfer learning.
9. Describe how dropout is used in CNNs to reduce overfitting. Include an example of its
implementation in a CNN architecture.
10. You are tasked with classifying X-ray images as normal or abnormal. Design a CNN model,
including preprocessing steps and the rationale for selecting the architecture.
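As a working aid for questions 3 and 7 above, the sketch below applies the standard convolution output-size formula, output = floor((input - kernel + 2*padding) / stride) + 1. The helper function name is illustrative, and this is only an aid, not a full answer to either question.

```python
# Helper sketch: standard convolution output-size formula applied to question 3's setup.
def conv_output_size(input_size, kernel_size, stride=1, padding=0):
    return (input_size - kernel_size + 2 * padding) // stride + 1

size, channels = 128, 3
for n_filters in (16, 32, 64):          # three 3x3 conv layers, stride 1, padding 1
    size = conv_output_size(size, kernel_size=3, stride=1, padding=1)
    channels = n_filters
    print(f"{size}x{size}x{channels}")  # padding 1 preserves the 128x128 spatial size
```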
