Advanced Machine Learning

The document discusses potential applications of GANs and GNNs, including image generation, video generation, drug discovery, and graph classification. It also provides details on the GAN architecture, including the generator, discriminator, training process, and use of the log-loss function. Common graph prediction tasks for GNNs are described.

Assignment – 4

Short type Questions

1. List out potential real-world applications of Generative Adversarial Networks (GANs).
- Some potential real-world applications of GANs include image generation, data augmentation, image-to-image translation, video generation, text-to-image synthesis, drug discovery, anomaly detection, style transfer and artistic rendering, super-resolution imaging, and voice generation/speech synthesis.
2. Differentiate between discriminative and generative models.
- Discriminative models learn the conditional distribution P(y | x), i.e., a decision boundary that maps inputs directly to labels. Examples include logistic regression and standard CNN classifiers.
- Generative models learn the data distribution P(x) (or the joint distribution P(x, y)) and can therefore synthesize new samples. Examples include GANs, variational autoencoders, and naive Bayes.
- Discriminative models are typically simpler to train and more accurate for pure classification, while generative models can generate data, support data augmentation, and model the structure of the input itself.
- The following GAN applications illustrate what generative models can do that discriminative models cannot:
o Image Generation: GANs can generate realistic images that resemble photographs of objects, landscapes, or people. These generated images can be used for art generation, image editing, and data augmentation for training computer vision models.
o Video Generation and Editing: GANs can generate realistic video sequences by learning the temporal dependencies between frames, and can be used for editing tasks such as scene completion, object removal, and video synthesis.
o Drug Discovery: GANs can generate molecular structures with desired properties such as potency, selectivity, and bioavailability, which is valuable in drug discovery and pharmaceutical research.
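A minimal numerical contrast between the two views (all data and names here are illustrative, not from the assignment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: two classes drawn from Gaussians with different means.
x0 = rng.normal(-2.0, 1.0, 500)   # class 0
x1 = rng.normal(+2.0, 1.0, 500)   # class 1

# Generative view: model P(x | y) per class, which also lets us sample new data.
mu0, mu1 = x0.mean(), x1.mean()
new_sample = rng.normal(mu1, 1.0)  # a generative model can synthesize x

# Discriminative view: only learn the decision rule for P(y | x).
def predict(x):
    # Nearest class mean acts as a simple learned decision boundary.
    return int(abs(x - mu1) < abs(x - mu0))
```

The discriminative rule can only label inputs; the generative side can additionally produce `new_sample`, which is the capability GANs exploit.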
3. Given a graph neural network where each node is represented by a 32-dimensional node vector, explain how to perform graph-level classification.
- GNNs are a class of neural networks designed to operate on graph-
structured data.
- They learn vectorial representations (embeddings) for nodes by aggregating information from neighbouring nodes.
- Given a graph with node features (32-dimensional node vectors), we
want to predict a label or property for the entire graph.
- Since GNNs compute node-level representations, we need a pooling layer
to learn graph-level representations.
- Common pooling techniques include:
o Graph Pooling: Aggregating node features using mean, max, or
sum operations.
o Graph Attention Pooling: Weighted aggregation based on
attention mechanisms.
o Graph Readout: Learning a fixed-size representation from node
embeddings.
- GNNs learn feature extraction and downstream tasks (e.g., classification)
in an end-to-end fashion.
- The pooling layer captures the graph structure and summarizes it into a
graph-level representation.
- This representation is then fed into a classifier (e.g., fully connected
layers) for prediction.
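The pipeline above (node embeddings → pooling → classifier) can be sketched in plain numpy; the sizes, weights, and the identity adjacency are illustrative placeholders, not a real trained GNN:

```python
import numpy as np

rng = np.random.default_rng(0)

N, D, C = 6, 32, 3              # 6 nodes, 32-dim node vectors, 3 graph classes
H = rng.normal(size=(N, D))     # node features entering the GNN
A = np.eye(N)                   # adjacency with self-loops (identity for brevity)

# One message-passing step: aggregate neighbour features, then transform.
W = rng.normal(size=(D, D)) * 0.1
H = np.maximum(A @ H @ W, 0.0)  # ReLU(A H W) -> updated node embeddings

# Readout / pooling: collapse node embeddings into one graph embedding.
g = H.mean(axis=0)              # mean pooling -> shape (32,)

# Graph-level classifier: a linear layer followed by softmax.
W_out = rng.normal(size=(D, C)) * 0.1
logits = g @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()
pred = int(np.argmax(probs))    # predicted class for the whole graph
```

Swapping `H.mean(axis=0)` for `H.max(axis=0)` or `H.sum(axis=0)` gives the other pooling variants listed above.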
4. Explain why CNNs can’t be employed to classify graphs.
- CNNs (Convolutional Neural Networks) are primarily designed to
process grid-like data such as images, where the spatial relationships
between neighboring pixels are crucial for understanding the data's
structure. However, graphs are non-grid structured data, and the
relationships between nodes are not as straightforward as the
relationships between neighboring pixels in an image. Here's why
CNNs can't be directly employed to classify graphs:

1. Topology Variability: Graphs can have varying numbers of nodes and edges, and their structures can be highly irregular. CNNs rely on fixed-sized receptive fields and local connectivity patterns, making them unsuitable for handling the variable topology of graphs.
2. Node Permutation Invariance: CNNs are not invariant to permutations of nodes. In a graph, the order of nodes can change without altering the graph's structure, but CNNs operate on fixed-sized windows of data and are sensitive to the order of elements within those windows.
3. Parameter Sharing: CNNs typically employ parameter sharing, applying the same set of weights across different spatial locations in the input data. Graphs have no natural notion of spatial locality, making it challenging to apply parameter sharing effectively.
4. Graph Isomorphism: Two graphs can represent the same underlying structure with different node labels or orderings. CNNs lack the ability to recognize and exploit such structural similarities between graphs.
5. Message Passing and Graph Structure: Graphs often require message passing between nodes to capture structural information effectively. CNNs do not inherently support message-passing mechanisms, which are crucial for capturing graph structure.
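The permutation-invariance point can be demonstrated concretely: relabeling the nodes of a graph changes a CNN-style flattened input but leaves a permutation-invariant readout unchanged. A small illustrative sketch:

```python
import numpy as np

# Path graph 1-2-3 and the same graph with its nodes relabeled.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
perm = [2, 0, 1]                       # a relabeling of the same graph
A_perm = A[np.ix_(perm, perm)]         # permute rows and columns together

# A CNN sees a fixed grid: the flattened inputs now differ...
flat_changed = not np.array_equal(A.ravel(), A_perm.ravel())

# ...while a permutation-invariant statistic (total edge count) is unchanged.
same_degree_sum = A.sum() == A_perm.sum()
```

This is why GNNs build on order-independent aggregations (sum/mean/max over neighbours) rather than fixed-grid convolutions.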

Long Type Questions

1. Describe the GAN architecture in detail. Give the intuition behind using the log-loss function with the GAN.
- The Generative Adversarial Network (GAN) architecture consists of
two neural networks: the generator and the discriminator. The
GAN framework was proposed by Ian Goodfellow and his
colleagues in 2014. Here's a detailed description of the GAN
architecture and the intuition behind using the log-loss function:

1. Generator: The generator takes random noise (often drawn from a simple distribution such as a Gaussian) as input and produces synthetic data samples.
- It typically consists of a series of layers (e.g., fully connected or convolutional layers) followed by activation functions (e.g., ReLU or Sigmoid) that transform the input noise into meaningful data samples.
- The generator aims to generate data samples that are indistinguishable from real data.

2. Discriminator: The discriminator acts as a binary classifier that distinguishes between real data samples (drawn from the true data distribution) and fake data samples produced by the generator.
- It takes both real and synthetic data samples as input and outputs a probability score indicating the likelihood that each sample is real.
- The discriminator aims to correctly classify real and fake samples.

3. Training Process: During training, the generator and discriminator are trained simultaneously in a competitive manner.
- The generator tries to improve its ability to generate realistic data samples by fooling the discriminator, while the discriminator tries to improve its ability to distinguish between real and fake samples.
- The training process can be conceptualized as a min-max game: the generator aims to minimize the discriminator's ability to differentiate between real and fake samples, while the discriminator aims to maximize that ability.
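The min-max game above is commonly written as a single value function (as in the original 2014 GAN formulation cited earlier):

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Here D(x) is the discriminator's probability that x is real, and G(z) is the generator's output for input noise z; D maximizes V while G minimizes it.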

4. Loss Function: The loss function used in GANs is typically the binary cross-entropy (log-loss) function.
- For the discriminator, the binary cross-entropy loss measures the difference between the predicted probabilities and the true labels (0 for fake samples, 1 for real samples).
- For the generator, the loss measures the difference between the discriminator's predictions on the generated samples and the label indicating they are real (i.e., it maximizes the log-probability that the generated samples are classified as real).

5. Intuition Behind the Log-Loss Function: The log-loss (binary cross-entropy) function is well suited for GANs because it provides a clear, interpretable measure of the discrepancy between the predicted and true labels.
- Using the log-loss function encourages the discriminator to output probabilities close to 1 for real samples and close to 0 for fake samples, effectively maximizing the margin between the two classes.

2. List out the different types of prediction tasks associated with Graph Neural Networks (GNNs).
- Graph Neural Networks (GNNs) are versatile models capable of
addressing various prediction tasks on graph-structured data. Here
are different types of prediction tasks associated with GNNs:

1. Node Classification: Predicting the class or label of each node in a graph based on its attributes and/or the graph structure.
2. Graph Classification: Predicting the class or label of an entire graph based on its topology and/or node attributes.
3. Link Prediction: Predicting the presence or absence of edges between pairs of nodes in a graph.
4. Graph Generation: Generating new graphs that share similar properties with a given set of input graphs.
5. Graph Regression: Predicting continuous-valued target variables associated with nodes, edges, or entire graphs.
6. Graph Clustering: Partitioning nodes in a graph into clusters or communities based on their structural and attribute similarities.
7. Graph Anomaly Detection: Identifying abnormal or anomalous patterns in graphs that deviate significantly from the norm.
8. Graph Exploration: Exploring and discovering important nodes, edges, or substructures within a graph.
9. Graph Alignment: Aligning nodes or subgraphs across multiple graphs to identify common patterns or relationships.
10. Graph Embedding: Learning low-dimensional representations (embeddings) of nodes, edges, or entire graphs for downstream machine learning tasks.
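Several of these tasks reduce to simple operations on learned node embeddings. A sketch of link prediction and graph embedding, using random placeholder embeddings in place of real GNN outputs:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical node embeddings produced by a GNN (4 nodes, 8 dimensions).
rng = np.random.default_rng(0)
Z = rng.normal(size=(4, 8))

# Link prediction (task 3): score edge (i, j) by the embeddings' dot product.
def edge_score(i, j):
    return sigmoid(Z[i] @ Z[j])

# Graph embedding (task 10) via pooling, usable for graph classification.
graph_emb = Z.mean(axis=0)
```

The dot-product scorer is symmetric, so it suits undirected link prediction; directed variants would use separate source/target projections.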

Name - Aniketa Das


Reg. no. - 2201030030
Branch - IOT & CS
