# Oracle Cloud Infrastructure 2024 Generative AI Professional 1Z0-1127-24 Practice Exam Questions

primaryobjects / 1-exam.md

How are documents usually evaluated in the simplest form of keyword-based search?

- According to the length of the documents
- Based on the number of images and videos contained in the documents
- By the complexity of language used in the documents
- **Based on the presence and frequency of the user-provided keywords**
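
A minimal sketch of this scoring idea: rank documents by how often the user's keywords occur in them. The documents and query below are made-up examples.

```python
# Simple keyword-based search: score each document by the total count of
# the user-provided keywords it contains, then rank by that score.
def keyword_score(document: str, keywords: list[str]) -> int:
    words = document.lower().split()
    return sum(words.count(kw.lower()) for kw in keywords)

docs = [
    "OCI offers generative AI services",
    "Generative AI uses large language models",
]
query = ["generative", "AI"]
ranked = sorted(docs, key=lambda d: keyword_score(d, query), reverse=True)
print(ranked)
```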

When is fine-tuning an appropriate method for customizing a Large Language Model (LLM)?

- When you want to optimize the model without any instructions
- When the LLM requires access to the latest data for generating outputs
- **When the LLM does not perform well on a task and the data for prompt engineering is too large**
- When the LLM already understands the topics necessary for text generation

In which scenario is soft prompting appropriate compared to other training styles?

- When the model requires continued pretraining on unlabeled data
- When the model needs to be adapted to perform well in a domain on which it was not originally trained
- **When there is a need to add learnable parameters to a Large Language Model (LLM) without task-specific training**
- When there is a significant amount of labeled, task-specific data available

How does the temperature setting in a decoding algorithm influence the probability distribution over the vocabulary?

- Increasing the temperature removes the impact of the most likely word.
- Decreasing the temperature broadens the distribution, making less likely words more probable.
- **Increasing the temperature flattens the distribution, allowing for more varied word choices.**
- Temperature has no effect on probability distribution; it only changes the speed of decoding.
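
A small illustration of the marked answer, assuming the standard temperature-scaled softmax: dividing the logits by the temperature before normalizing sharpens the distribution when T < 1 and flattens it when T > 1. The logits here are arbitrary.

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    # Divide logits by the temperature before applying softmax.
    scaled = np.asarray(logits) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, temperature=0.5))  # sharper: top word dominates
print(softmax_with_temperature(logits, temperature=2.0))  # flatter: more varied choices
```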

Which statement is true about Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT)?

- PEFT requires replacing the entire model architecture with a new one designed specifically for the new task, making it significantly more data-intensive than Fine-tuning.
- **Fine-tuning requires training the entire model on new data, often leading to substantial computational costs, whereas PEFT involves updating only a small subset of parameters, minimizing computational requirements and data needs.**
- Both Fine-tuning and PEFT require the model to be trained from scratch on new data, making them equally data and computationally intensive.
- Fine-tuning and PEFT do not involve model modification; they differ only in the type of data used for training, with Fine-tuning requiring labeled data and PEFT using unlabeled data.

What does accuracy measure in the context of fine-tuning results for a generative model?

- **How many predictions the model made correctly out of all the predictions in an evaluation**
- The number of predictions a model makes, regardless of whether they are correct or incorrect
- The depth of the neural network layers used in the model
- The proportion of incorrect predictions made by the model during an evaluation
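
The marked answer is just the ratio of correct predictions to total predictions; a toy computation with made-up labels:

```python
# Accuracy = correct predictions / total predictions.
predictions = ["cat", "dog", "dog", "cat"]
labels      = ["cat", "dog", "cat", "cat"]
accuracy = sum(p == l for p, l in zip(predictions, labels)) / len(labels)
print(accuracy)  # 0.75 (3 of 4 correct)
```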

In the context of generating text with a Large Language Model (LLM), what does the process of greedy decoding entail?

- Picking a word based on its position in a sentence structure
- **Choosing the word with the highest probability at each step of decoding**
- Selecting a random word from the entire vocabulary at each step
- Using a weighted random selection based on a modulated distribution
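
A toy greedy-decoding loop illustrating the marked answer; `next_token_logits` is a hypothetical stand-in for a real model's forward pass.

```python
import numpy as np

vocab = ["the", "cat", "sat", "<eos>"]

def next_token_logits(tokens):
    # Hypothetical model: shifts the logits so the next vocab word scores highest.
    return np.roll(np.array([3.0, 2.0, 1.0, 0.0]), len(tokens))

tokens = []
while len(tokens) < 10:
    logits = next_token_logits(tokens)
    token = vocab[int(np.argmax(logits))]  # greedy: highest probability wins
    if token == "<eos>":
        break
    tokens.append(token)
print(" ".join(tokens))  # "the cat sat"
```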

In the simplified workflow for managing and querying vector data, what is the role of indexing?

- **To map vectors to a data structure for faster searching, enabling efficient retrieval**
- To categorize vectors based on their originating data type (text, images, audio)
- To convert vectors into a nonindexed format for easier retrieval
- To compress vector data for minimized storage usage

When does a chain typically interact with memory in a run within the LangChain framework?

- Continuously throughout the entire chain execution process
- Only after the output has been generated
- Before user input and after chain execution
- **After user input but before chain execution, and again after core logic but before output**

What do prompt templates use for templating in language model applications?

- Python's lambda functions
- Python's class and object structures
- Python's list comprehension syntax
- **Python's str.format syntax**
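
Plain Python shows the mechanism the marked answer refers to; LangChain's prompt templates use the same `{variable}` placeholder style as `str.format`.

```python
# str.format-style templating: named placeholders filled in at call time.
template = "Translate the following text to {language}:\n\n{text}"
prompt = template.format(language="French", text="Hello, world!")
print(prompt)
```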

What does a cosine distance of 0 indicate about the relationship between two embeddings?

- **They are similar in direction**
- They are unrelated
- They are completely dissimilar
- They have the same magnitude
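
A quick check of the marked answer with NumPy: cosine distance is 1 minus cosine similarity, so two embeddings pointing in the same direction have distance 0 regardless of magnitude. The vectors below are made up.

```python
import numpy as np

def cosine_distance(a, b):
    # Cosine distance = 1 - cosine similarity.
    a, b = np.asarray(a), np.asarray(b)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_distance([1.0, 2.0], [2.0, 4.0]))  # ~0.0: same direction, different magnitude
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0: orthogonal (unrelated)
```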

Which is a characteristic of T-Few fine-tuning for Large Language Models (LLMs)?

- It does not update any weights but restructures the model architecture.
- **It selectively updates only a fraction of the model's weights.**
- It updates all the weights of the model uniformly.
- It increases the training time as compared to Vanilla fine-tuning.

What does the Loss metric indicate about a model's predictions?

- Loss measures the total number of predictions made by a model.
- Loss describes the accuracy of the right predictions rather than the incorrect ones.
- **Loss is a measure that indicates how wrong the model's predictions are.**
- Loss indicates how good a prediction is, and it should increase as the model improves.

Accuracy in vector databases contributes to the effectiveness of Large Language Models (LLMs) by preserving a specific type of relationship. What is the nature of these relationships, and why are they crucial for language models?

- Hierarchical relationships; important for structuring database queries
- **Semantic relationships; crucial for understanding context and generating precise language**
- Linear relationships; they simplify the modeling process
- Temporal relationships; necessary for predicting future linguistic trends

How does a presence penalty function in language model generation?

- It penalizes all tokens equally, regardless of how often they have appeared.
- It penalizes only tokens that have never appeared in the text before.
- **It penalizes a token each time it appears after the first occurrence.**
- It applies a penalty only if the token has appeared more than twice.
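
For intuition, here is one common formulation of presence and frequency penalties applied to logits (an OpenAI-style formula; OCI's exact implementation may differ). The logits and token counts below are made up.

```python
import numpy as np

def apply_penalties(logits, counts, presence_penalty=0.0, frequency_penalty=0.0):
    logits = np.asarray(logits, dtype=float).copy()
    counts = np.asarray(counts)
    # Presence penalty: flat penalty for any token that has appeared at all.
    logits -= presence_penalty * (counts > 0)
    # Frequency penalty: scales with how many times the token has appeared.
    logits -= frequency_penalty * counts
    return logits

logits = [2.0, 2.0, 2.0]
counts = [0, 1, 3]  # how often each token has appeared so far
print(apply_penalties(logits, counts, presence_penalty=0.5))   # [2.  1.5 1.5]
print(apply_penalties(logits, counts, frequency_penalty=0.5))  # [2.  1.5 0.5]
```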

How does the structure of vector databases differ from traditional relational databases?

- A vector database stores data in a linear or tabular format.
- **It is based on distances and similarities in a vector space.**
- It uses simple row-based data storage.
- It is not optimized for high-dimensional spaces.

Why is it challenging to apply diffusion models to text generation?

- **Because text representation is categorical unlike images**
- Because text generation does not require complex models
- Because text is not categorical
- Because diffusion models can only produce images

When is fine-tuning an appropriate method for customizing a Large Language Model (LLM)?

- When the LLM requires access to the latest data for generating outputs
- When you want to optimize the model without any instructions
- **When the LLM does not perform well on a task and the data for prompt engineering is too large**
- When the LLM already understands the topics necessary for text generation

In which scenario is soft prompting appropriate compared to other training styles?

- When there is a significant amount of labeled, task-specific data available
- **When there is a need to add learnable parameters to a Large Language Model (LLM) without task-specific training**
- When the model needs to be adapted to perform well in a domain on which it was not originally trained
- When the model requires continued pretraining on unlabeled data

What is the purpose of Retrievers in LangChain?

- To break down complex tasks into smaller steps
- To combine multiple components into a single pipeline
- **To retrieve relevant information from knowledge bases**
- To train Large Language Models
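
A conceptual sketch of what a retriever does: given a query, return the most relevant documents from a knowledge base. A real LangChain retriever typically wraps a vector store; this keyword-overlap version, with a made-up knowledge base, just illustrates the interface.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Score each document by how many query terms it shares, return top k.
    query_terms = set(query.lower().split())
    def relevance(doc: str) -> int:
        return len(query_terms & set(doc.lower().split()))
    return sorted(documents, key=relevance, reverse=True)[:k]

kb = [
    "Refunds are processed within 5 business days.",
    "Our store opens at 9 AM on weekdays.",
    "Shipping is free on orders over $50.",
]
print(retrieve("when are refunds processed", kb))
```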

What does the RAG Sequence model do in the context of generating a response?

- It modifies the input query before retrieving relevant documents to ensure a diverse response.
- **For each input query, it retrieves a set of relevant documents and considers them together to generate a cohesive response.**
- It retrieves relevant documents only for the initial part of the query and ignores the rest.
- It retrieves a single relevant document for the entire input query and generates a response based on that alone.

What is the purpose of Retrieval Augmented Generation (RAG) in text generation?

- To store text in an external database without using it for generation
- To generate text based only on the model's internal knowledge without external data
- **To generate text using extra information obtained from an external data source**
- To retrieve text from an external source and present it without any modifications

Which statement is true about Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT)?

- Fine-tuning and PEFT do not involve model modification; they differ only in the type of data used for training, with Fine-tuning requiring labeled data and PEFT using unlabeled data.
- PEFT requires replacing the entire model architecture with a new one designed specifically for the new task, making it significantly more data-intensive than Fine-tuning.
- Both Fine-tuning and PEFT require the model to be trained from scratch on new data, making them equally data and computationally intensive.
- **Fine-tuning requires training the entire model on new data, often leading to substantial computational costs, whereas PEFT involves updating only a small subset of parameters, minimizing computational requirements and data needs.**

Which LangChain component is responsible for generating the linguistic output in a chatbot system?

- LangChain Application
- **LLMs**
- Vector Stores
- Document Loaders

How are documents usually evaluated in the simplest form of keyword-based search?

- Based on the number of images and videos contained in the documents
- **Based on the presence and frequency of the user-provided keywords**
- According to the length of the documents
- By the complexity of language used in the documents

Given the following code block:

```python
history = StreamlitChatMessageHistory(key="chat_messages")
memory = ConversationBufferMemory(chat_memory=history)
```

Which statement is NOT true about StreamlitChatMessageHistory?

- A given StreamlitChatMessageHistory will NOT be persisted.
- A given StreamlitChatMessageHistory will not be shared across user sessions.
- StreamlitChatMessageHistory will store messages in Streamlit session state at the specified key.
- **StreamlitChatMessageHistory can be used in any type of LLM application.**

Which statement is true about string prompt templates and their capability regarding variables?

- **They support any number of variables, including the possibility of having none.**
- They can only support a single variable at a time.
- They are unable to use any variables.
- They require a minimum of two variables to function properly.

When does a chain typically interact with memory in a run within the LangChain framework?

- Only after the output has been generated
- Before user input and after chain execution
- Continuously throughout the entire chain execution process
- **After user input but before chain execution, and again after core logic but before output**

How can the concept of "Groundedness" differ from "Answer Relevance" in the context of Retrieval Augmented Generation (RAG)?

- Groundedness focuses on data integrity, whereas Answer Relevance emphasizes lexical diversity.
- Groundedness measures relevance to the user query, whereas Answer Relevance evaluates data integrity.
- Groundedness refers to contextual alignment, whereas Answer Relevance deals with syntactic accuracy.
- **Groundedness pertains to factual correctness, whereas Answer Relevance concerns query relevance.**

What is LangChain?

- A Ruby library for text generation
- **A Python library for building applications with Large Language Models**
- A JavaScript library for natural language processing
- A Java library for text summarization

Which statement accurately reflects the differences between Fine-tuning, Parameter-Efficient Fine-Tuning, Soft prompting, and continuous pretraining in terms of the number of parameters modified and the type of data used?

- Fine-tuning and continuous pretraining both modify all parameters and use labeled, task-specific data.
- Soft prompting and continuous pretraining are both methods that require no modification to the original parameters of the model.
- Parameter-Efficient Fine-Tuning and Soft prompting modify all parameters of the model using unlabeled data.
- **Fine-tuning modifies all parameters using labeled, task-specific data, whereas Parameter-Efficient Fine-Tuning updates a few, new parameters, also with labeled, task-specific data.**

What does in-context learning in Large Language Models involve?

- Pretraining the model on a specific domain
- Training the model using reinforcement learning
- Adding more layers to the model
- **Conditioning the model with task-specific instructions or demonstrations**

What does the term "hallucination" refer to in the context of Large Language Models (LLMs)?

- The process by which the model visualizes and describes images in detail
- The model's ability to generate imaginative and creative content
- A technique used to enhance the model's performance on specific tasks
- **The phenomenon where the model generates factually incorrect information or unrelated content as if it were true**

What is prompt engineering in the context of Large Language Models (LLMs)?

- **Iteratively refining the ask to elicit a desired response**
- Training the model on a large data set
- Adding more layers to the neural network
- Adjusting the hyperparameters of the model

What is the role of temperature in the decoding process of a Large Language Model (LLM)?

- **To adjust the sharpness of the probability distribution over the vocabulary when selecting the next word**
- To increase the accuracy of the most likely word in the vocabulary
- To determine the number of words to generate in a single decoding step
- To decide to which part of speech the next word should belong

What is the main advantage of using few-shot model prompting to customize a Large Language Model (LLM)?

- It allows the LLM to access a larger data set.
- It significantly reduces the latency for each model request.
- It eliminates the need for any training or computational resources.
- **It provides examples in the prompt to guide the LLM to better performance with no training cost.**

Which is a distinctive feature of GPUs in Dedicated AI Clusters used for generative AI tasks?

- GPUs are shared with other customers to maximize resource utilization.
- GPUs are used exclusively for storing large data sets, not for computation.
- Each customer's GPUs are connected via a public Internet network for ease of access.
- **The GPUs allocated for a customer's generative AI tasks are isolated from other GPUs.**

What happens if a period (.) is used as a stop sequence in text generation?

- The model generates additional sentences to complete the paragraph.
- **The model stops generating text after it reaches the end of the first sentence, even if the token limit is much higher.**
- The model ignores periods and continues generating text until it reaches the token limit.
- The model stops generating text after it reaches the end of the current paragraph.
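
A toy illustration of the marked answer: generation halts at the first stop sequence even though the token budget allows more. The token stream below is made up.

```python
def generate_with_stop(tokens, stop=".", max_tokens=50):
    # Emit tokens until one ends with the stop sequence or the budget runs out.
    output = []
    for tok in tokens[:max_tokens]:
        output.append(tok)
        if tok.endswith(stop):
            break  # first sentence ends here; stop regardless of token budget
    return " ".join(output)

stream = ["The", "model", "stops", "here.", "This", "is", "never", "emitted."]
print(generate_with_stop(stream))  # "The model stops here."
```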

What is the purpose of frequency penalties in language model outputs?

- To reward the tokens that have never appeared in the text
- **To penalize tokens that have already appeared, based on the number of times they have been used**
- To randomly penalize some tokens to increase the diversity of the text
- To ensure that tokens that appear frequently are used more often

What is the purpose of embeddings in natural language processing?

- To increase the complexity and size of text data
- To translate text into a different language
- To compress text data into smaller files for storage
- **To create numerical representations of text that capture the meaning and relationships between words or phrases**

What is the function of the Generator in a text generation system?

- To store the generated responses for future use
- To rank the information based on its relevance to the user's query
- **To generate human-like text using the information retrieved and ranked, along with the user's original query**
- To collect user queries and convert them into database search terms

What does the Ranker do in a text generation system?

- It sources information from databases to use in text generation.
- It generates the final text based on the user's query.
- It interacts with the user to understand the query better.
- **It evaluates and prioritizes the information retrieved by the Retriever.**

What differentiates Semantic search from traditional keyword search?

- **It involves understanding the intent and context of the search.**
- It is based on the date and author of the content.
- It relies solely on matching exact keywords in the content.
- It depends on the number of times keywords appear in the content.

Which is a key characteristic of Large Language Models (LLMs) without Retrieval Augmented Generation (RAG)?

- **They rely on internal knowledge learned during pretraining on a large text corpus.**
- They use vector databases exclusively to produce answers.
- They always use an external database for generating responses.
- They cannot generate responses without fine-tuning.

What do embeddings in Large Language Models (LLMs) represent?

- The grammatical structure of sentences in the data
- The color and size of the font in textual data
- **The semantic content of data in high-dimensional vectors**
- The frequency of each word or pixel in the data

How are chains traditionally created in LangChain?

- Declaratively, with no coding required
- **Using Python classes, such as LLMChain and others**
- Exclusively through third-party software integrations
- By using machine learning algorithms

What is the purpose of memory in the LangChain framework?

- **To store various types of data and provide algorithms for summarizing past interactions**
- To retrieve user input and provide real-time output only
- To perform complex calculations unrelated to user interaction
- To act as a static database for storing permanent records

How are prompt templates typically designed for language models?

- **As predefined recipes that guide the generation of language model prompts**
- To work only with numerical data instead of textual content
- As complex algorithms that require manual compilation
- To be used without any modification or customization

What is LCEL in the context of LangChain Chains?

- A programming language used to write documentation for LangChain
- **A declarative way to compose chains together using LangChain Expression Language**
- A legacy method for creating chains in LangChain
- An older Python library for building Large Language Models
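
A minimal LCEL sketch, assuming the `langchain-core` and `langchain-openai` packages are installed and an OpenAI API key is configured; the `|` operator composes prompt, model, and output parser declaratively.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI  # any chat model class would work here

prompt = ChatPromptTemplate.from_template("Summarize in one line: {text}")
chain = prompt | ChatOpenAI() | StrOutputParser()  # declarative composition
print(chain.invoke({"text": "LangChain Expression Language composes chains."}))
```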

What is the function of "Prompts" in the chatbot system?

- They handle the chatbot's memory and recall abilities.
- They are responsible for the underlying mechanics of the chatbot.
- They store the chatbot's linguistic knowledge.
- **They are used to initiate and guide the chatbot's responses.**

An LLM emits intermediate reasoning steps in responses during what type of prompting?

- In-context learning
- Soft prompting
- Least-to-most prompting
- **Chain-of-thought**

What is a characteristic of T-Few fine-tuning?

- It updates model weights uniformly
- It updates a fraction of weights to reduce the number of parameters
- **It selectively updates weights to reduce computational load and prevent overfitting**
- It increases training time

For a fine-tuning dedicated cluster, what is the minimum number of unit-hours required for 10 days?

Hint: fine-tuning requires 2 units to run, so 10 days × 24 hours per day × 2 units = 480 unit-hours. (A hosting cluster would be 744.)

- 200
- 240
- 744
- **480**

When building a chatbot with an online retail company's internal store policies and a memory of conversation which