Answers for the Introduction to Generative AI Quiz

The document provides an overview of generative AI and large language models through a series of quizzes. It covers topics such as what a prompt and foundation model are, examples of generative and discriminative AI models, benefits and challenges of large language models, the attention mechanism, transformer models like BERT, image generation with diffusion models, and the encoder-decoder architecture used in tasks like image captioning and machine translation.


Introduction to Generative AI: Quiz

What is a prompt?
A prompt is a short piece of text given to a large language model as input; it can be used to steer the model's output in many ways.

What is Generative AI?


Generative AI is a type of artificial intelligence (AI) that can create new content, such
as text, images, audio, and video. It does this by learning from existing data and then
using that knowledge to generate new and unique outputs.

What are foundation models in Generative AI?


A foundation model is a large AI model pretrained on a vast quantity of data and designed to be adapted (fine-tuned) to a wide range of downstream tasks, such as sentiment analysis, image captioning, and object recognition.

What is an example of a generative AI model, and of a discriminative AI model?


A generative AI model could be trained on a dataset of images of cats and then used
to generate new images of cats. A discriminative AI model could be trained on a
dataset of images of cats and dogs and then used to classify new images as either
cats or dogs.
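The contrast above can be sketched on a toy one-dimensional dataset. All names and numbers here are illustrative, not from the quiz: each "image" is reduced to a single hypothetical brightness feature.

```python
import random
import statistics

# Toy 1-D "images": one brightness feature per image (values are made up).
random.seed(0)
cats = [random.gauss(0.3, 0.05) for _ in range(100)]
dogs = [random.gauss(0.7, 0.05) for _ in range(100)]

# Generative model: learn the data distribution of one class, then sample
# from it to create a brand-new "cat" data point.
cat_mean, cat_std = statistics.mean(cats), statistics.stdev(cats)
new_cat = random.gauss(cat_mean, cat_std)

# Discriminative model: learn only the boundary between the two classes.
threshold = (statistics.mean(cats) + statistics.mean(dogs)) / 2

def classify(x):
    """Label a new image as 'cat' or 'dog' relative to the learned boundary."""
    return "cat" if x < threshold else "dog"
```

The generative model can produce new samples because it models the class distribution itself; the discriminative model only separates the classes, so it can classify but not generate.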

Introduction to Large Language Models: Quiz


1. What are some of the benefits of using large language models (LLMs)?

They can generate human-quality text.


They can be used for a variety of tasks.

They can be trained on massive datasets of text and code.

They are constantly being improved.

2. What are some of the applications of LLMs?

Writing

Translating

Coding

Answering questions

Summarizing text

Generating creative content

3. What are large language models (LLMs)?

An LLM is a type of artificial intelligence (AI) that can generate human-quality text. LLMs are trained on massive datasets of text and code, and they can be used for many tasks, such as writing, translating, and coding.
4. What are some of the challenges of using LLMs? Select three options.

They can be biased.

They can be used to generate harmful content.

They can be expensive to train.

Attention Mechanism: Quiz

1. What is the advantage of using the attention mechanism over a traditional recurrent neural network (RNN) encoder-decoder? The attention mechanism lets the decoder focus on specific parts of the input sequence, which can improve the accuracy of the translation.
2. What are the two main steps of the attention mechanism? Calculating the attention weights and generating the context vector.
3. What is the advantage of using the attention mechanism over a traditional sequence-to-sequence model? The attention mechanism lets the model focus on specific parts of the input sequence.
4. What is the name of the machine learning technique that allows a neural network to focus on specific parts of an input sequence? Attention mechanism.
5. What is the purpose of the attention weights? To assign weights to different parts of the input sequence, with the most important parts receiving the highest weights.
6. How does an attention model differ from a traditional model? Attention models pass a lot more information to the decoder.
7. What is the name of the machine learning architecture that can be used to translate text from one language to another? Encoder-decoder.
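The two steps from item 2 can be sketched as plain dot-product attention. This is a minimal illustration, not a production implementation; the toy vectors in the usage note are made up.

```python
import math

def softmax(xs):
    """Normalize scores into weights that sum to 1."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    # Step 1: attention weights = softmax of query-key dot products.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    # Step 2: context vector = weighted sum of the value vectors.
    dim = len(values[0])
    context = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return weights, context
```

For example, `attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[10.0, 0.0], [0.0, 10.0]])` gives the first key the highest weight because it aligns with the query, so the context vector leans toward the first value: the decoder "focuses" on that part of the input.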

Transformer Models and BERT Model: Quiz


1. What kind of transformer model is BERT? Encoder-only model
2. What is the name of the language modeling technique that is used in
Bidirectional Encoder Representations from Transformers (BERT)?
Transformer
3. What are the two sublayers of each encoder in a Transformer model?
Self-attention and feedforward
4. What are the three different embeddings that are generated from an input
sentence in a Transformer model? Token, segment, and position
embeddings
5. BERT is a transformer model that was developed by Google in 2018.
What is BERT used for? It is used to solve many natural language
processing tasks, such as question answering, text classification, and
natural language inference.
6. What is the attention mechanism? A way of determining the importance
of each word in a sentence for the translation of another sentence
7. What are the encoder and decoder components of a transformer model?
The encoder ingests an input sequence and produces a sequence of
hidden states. The decoder takes in the hidden states from the encoder
and produces an output sequence.
8. What is a transformer model? A deep learning model that uses self-
attention to learn relationships between different parts of a sequence.
9. What does fine-tuning a BERT model mean? Training the model and
updating the pre-trained weights on a specific task by using labeled data
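Item 4's three embeddings can be sketched in a few lines. The vocabulary, dimensions, and values below are all made up for illustration; real models learn these tables during training.

```python
import random

# Toy BERT-style input embedding: the sum of token, segment, and position
# embeddings (hypothetical vocabulary and a 4-dimensional embedding space).
random.seed(0)
VOCAB = {"[CLS]": 0, "hello": 1, "world": 2}
DIM = 4

def rand_vec():
    return [random.uniform(-1, 1) for _ in range(DIM)]

token_emb = {tid: rand_vec() for tid in VOCAB.values()}
segment_emb = {0: rand_vec(), 1: rand_vec()}        # sentence A vs. sentence B
position_emb = {pos: rand_vec() for pos in range(8)}

def embed(tokens, segment_ids):
    """Sum the three embeddings element-wise for each input position."""
    out = []
    for pos, (tok, seg) in enumerate(zip(tokens, segment_ids)):
        t, s, p = token_emb[VOCAB[tok]], segment_emb[seg], position_emb[pos]
        out.append([a + b + c for a, b, c in zip(t, s, p)])
    return out

vectors = embed(["[CLS]", "hello", "world"], [0, 0, 0])
```

Because the position embedding differs at each index, the same token produces a different input vector at different positions, which is how the encoder's self-attention can distinguish word order.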

Introduction to Image Generation: Quiz

1. What are some challenges of diffusion models? All of the above
2. What is the name of the model family that draws inspiration from
physics and thermodynamics? Diffusion models
3. What is the goal of diffusion models? To learn the latent structure of a
dataset by modeling the way in which data points diffuse through the
latent space
4. What is the process of forward diffusion? Start with a clean image and
add noise iteratively
5. Which process involves a model learning to remove noise from
images? Reverse diffusion
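Item 4's forward diffusion can be sketched on a toy one-dimensional "image". The step count and noise scale here are illustrative, not from any real diffusion model.

```python
import random

random.seed(0)

def forward_diffusion(image, steps=10, noise_scale=0.1):
    """Start with a clean image and add a little Gaussian noise at each step,
    recording the whole trajectory from clean to noisy."""
    trajectory = [list(image)]
    for _ in range(steps):
        image = [px + random.gauss(0.0, noise_scale) for px in image]
        trajectory.append(list(image))
    return trajectory

clean = [0.0, 0.5, 1.0, 0.5]
traj = forward_diffusion(clean)
# Reverse diffusion (item 5) is the learned inverse: a model trained to
# predict and remove the noise, stepping from traj[-1] back toward traj[0].
```

The forward process needs no learning at all; the model only has to learn the reverse, denoising direction.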

Create Image Captioning Models: Quiz


1. What is the purpose of the encoder in an encoder-decoder model? To
extract information from the image.
2. What is the name of the dataset the video uses to train the encoder-
decoder model? COCO dataset
3. What is the goal of the image captioning task? To generate text captions for
images
4. What is the purpose of the decoder in an encoder-decoder model? To
generate output data from the information extracted by the encoder
5. What is the purpose of the attention mechanism in an encoder-decoder
model? To allow the decoder to focus on specific parts of the image when
generating text captions.
6. What is the name of the model that is used to generate text captions for
images? Encoder-decoder model

Encoder-Decoder Architecture: Quiz


1. What is the name of the machine learning architecture that takes a
sequence of words as input and outputs a sequence of words?
Encoder-decoder
2. What is the purpose of the encoder in an encoder-decoder
architecture? To convert the input sequence into a vector
representation
3. What is the purpose of the decoder in an encoder-decoder
architecture? (Choose 2 answers.) 1. To generate the output sequence from
the vector representation. 2. To predict the next word in the output sequence.
4. What are two ways to generate text from a trained encoder-decoder
model at serving time? Greedy search and beam search
5. What is the difference between greedy search and beam search? Greedy
search always selects the word with the highest probability, whereas beam
search considers multiple possible words and selects the one with the
highest combined probability.
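The difference in item 5 shows up even on a toy next-token model, where greedy search commits to a locally best word that beam search avoids. The vocabulary and probabilities below are made up for illustration.

```python
import math

# Toy next-token model: probabilities depend only on the previous token.
PROBS = {
    "<s>": {"the": 0.5, "a": 0.4, "</s>": 0.1},
    "the": {"cat": 0.4, "dog": 0.35, "</s>": 0.25},
    "a":   {"cat": 0.9, "</s>": 0.1},
    "cat": {"</s>": 1.0},
    "dog": {"</s>": 1.0},
}

def greedy_search(max_len=5):
    """Always take the single most probable next word."""
    seq = ["<s>"]
    while seq[-1] != "</s>" and len(seq) < max_len:
        seq.append(max(PROBS[seq[-1]], key=PROBS[seq[-1]].get))
    return seq

def beam_search(beam_width=2, max_len=5):
    """Keep the top `beam_width` partial sequences by combined log-probability."""
    beams = [(["<s>"], 0.0)]
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == "</s>":
                candidates.append((seq, score))  # finished sequences carry over
                continue
            for tok, p in PROBS[seq[-1]].items():
                candidates.append((seq + [tok], score + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]
```

Here greedy search commits to "the" (p = 0.5) and ends with total probability 0.5 × 0.4 = 0.2, while beam search keeps "a" alive and finds "a cat" with total probability 0.4 × 0.9 = 0.36, the higher-probability sequence.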
