Oracle 1z0-1127-24 Oracle Cloud Infrastructure 2024 Generative AI Professional Exam Practice Test

Page: 1 / 6
Total 64 questions

Oracle Cloud Infrastructure 2024 Generative AI Professional Questions and Answers

Question 1

When should you use the T-Few fine-tuning method for training a model?

Options:

A.

For complicated semantic understanding improvement

B.

For models that require their own dedicated AI hosting cluster

C.

For data sets with a few thousand samples or less

D.

For data sets with hundreds of thousands to millions of samples

Question 2

Which statement is true about Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT)?

Options:

A.

Both Fine-tuning and PEFT require the model to be trained from scratch on new data, making them equally data and computationally intensive.

B.

Fine-tuning and PEFT do not involve model modification; they differ only in the type of data used for training, with Fine-tuning requiring labeled data and PEFT using unlabeled data.

C.

Fine-tuning requires training the entire model on new data, often leading to substantial computational costs, whereas PEFT involves updating only a small subset of parameters, minimizing computational requirements and data needs.

D.

PEFT requires replacing the entire model architecture with a new one designed specifically for the new task, making it significantly more data-intensive than Fine-tuning.
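
A minimal sketch of the contrast this question describes, using the Hugging Face transformers and peft libraries with LoRA standing in for PEFT generally (the gpt2 base model and LoRA settings are illustrative assumptions, not the OCI service API):

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Full fine-tuning: every parameter of the base model receives updates.
    model = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative base model
    full_params = sum(p.numel() for p in model.parameters())

    # PEFT (LoRA here): base weights are frozen; only small adapter
    # matrices are trained, which is what cuts compute and data needs.
    peft_model = get_peft_model(model, LoraConfig(r=8, target_modules=["c_attn"]))
    peft_params = sum(p.numel() for p in peft_model.parameters() if p.requires_grad)

    print(f"full fine-tuning updates {full_params:,} parameters")
    print(f"LoRA trains only {peft_params:,} parameters")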

Question 3

Which is a characteristic of T-Few fine-tuning for Large Language Models (LLMs)?

Options:

A.

It selectively updates only a fraction of the model’s weights.

B.

It does not update any weights but restructures the model architecture.

C.

It updates all the weights of the model uniformly.

D.

It increases the training time as compared to Vanilla fine-tuning.
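
For intuition on selective updating: T-Few injects small learned (IA)^3 vectors into a subset of transformer layers rather than retraining the whole network. The PyTorch sketch below illustrates only the general idea of updating a fraction of the weights (the model and layer choice are invented), not the T-Few algorithm itself:

    import torch.nn as nn

    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=64, nhead=4), num_layers=6
    )

    # Freeze everything, then re-enable gradients on the last two layers only.
    for param in encoder.parameters():
        param.requires_grad = False
    for layer in encoder.layers[-2:]:
        for param in layer.parameters():
            param.requires_grad = True

    trainable = sum(p.numel() for p in encoder.parameters() if p.requires_grad)
    total = sum(p.numel() for p in encoder.parameters())
    print(f"{trainable / total:.1%} of the weights receive updates")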

Question 4

What distinguishes the Cohere Embed v3 model from its predecessor in the OCI Generative AI service?

Options:

A.

Improved retrievals for Retrieval Augmented Generation (RAG) systems

B.

Capacity to translate text in over 100 languages

C.

Support for tokenizing longer sentences

D.

Emphasis on syntactic clustering of word embeddings

Question 5

What does a dedicated RDMA cluster network do during model fine-tuning and inference?

Options:

A.

It leads to higher latency in model inference.

B.

It enables the deployment of multiple fine-tuned models.

C.

It limits the number of fine-tuned models deployable on the same GPU cluster.

D.

It increases GPU memory requirements for model deployment.

Question 6

How does the structure of vector databases differ from traditional relational databases?

Options:

A.

It is not optimized for high-dimensional spaces.

B.

It is based on distances and similarities in a vector space.

C.

It uses simple row-based data storage.

D.

A vector database stores data in a linear or tabular format.
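
As a toy illustration of the distance-and-similarity structure this question contrasts with row-based storage (the vectors are invented three-dimensional stand-ins for real embeddings):

    import numpy as np

    # Pretend these vectors came from an embedding model.
    query = np.array([0.9, 0.1, 0.0])
    docs = np.array([[0.8, 0.2, 0.1],    # close to the query in vector space
                     [0.0, 1.0, 0.3],
                     [0.1, 0.0, 0.9]])

    # A vector store ranks by similarity, not by exact row matches.
    sims = docs @ query / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))
    print(np.argsort(-sims))  # documents ordered from most to least similar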

Question 7

Which Oracle Accelerated Data Science (ADS) class can be used to deploy a Large Language Model (LLM) application to OCI Data Science model deployment?

Options:

A.

RetrievalQA

B.

TextLoader

C.

ChainDeployment

D.

GenerativeAI

Question 8

How does a presence penalty function in language model generation?

Options:

A.

It penalizes a token each time it appears after the first occurrence.

B.

It applies a penalty only if the token has appeared more than twice.

C.

It penalizes only tokens that have never appeared in the text before.

D.

It penalizes all tokens equally, regardless of how often they have appeared.
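
Provider formulas differ, so take this as a hedged sketch of one common presence-penalty scheme, in which the penalty depends only on whether a token has already appeared, not on how often:

    def apply_presence_penalty(logits, generated_ids, penalty=0.5):
        """Apply a flat penalty to every token id already present in the
        output, independent of how many times it has appeared."""
        adjusted = dict(logits)
        for token_id in set(generated_ids):
            adjusted[token_id] -= penalty
        return adjusted

    logits = {0: 2.0, 1: 1.5, 2: 1.0}
    print(apply_presence_penalty(logits, generated_ids=[1, 1, 1]))
    # -> {0: 2.0, 1: 1.0, 2: 1.0}: token 1 is penalized once despite 3 uses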

Question 9

In the context of generating text with a Large Language Model (LLM), what does the process of greedy decoding entail?

Options:

A.

Selecting a random word from the entire vocabulary at each step

B.

Choosing the word with the highest probability at each step of decoding

C.

Picking a word based on its position in a sentence structure

D.

Using a weighted random selection based on a modulated distribution
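
A minimal sketch of greedy decoding over an invented next-token distribution:

    import numpy as np

    vocab = ["the", "cat", "sat"]
    next_token_probs = np.array([0.2, 0.7, 0.1])  # invented model output

    # Greedy decoding: deterministically take the single most likely token.
    print(vocab[int(np.argmax(next_token_probs))])  # -> "cat", every time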

Question 10

In the simplified workflow for managing and querying vector data, what is the role of indexing?

Options:

A.

To compress vector data for minimized storage usage

B.

To convert vectors into a nonindexed format for easier retrieval

C.

To categorize vectors based on their originating data type (text, images, audio)

D.

To map vectors to a data structure for faster searching, enabling efficient retrieval
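
For intuition, a sketch of the indexing step using the open-source FAISS library as one concrete example (FAISS is an assumption here; any vector database exposes the same idea through its own API):

    import numpy as np
    import faiss  # one concrete indexing library; others follow the same pattern

    dim = 64
    vectors = np.random.rand(10_000, dim).astype("float32")

    # Indexing maps the vectors into a structure built for fast lookup...
    index = faiss.IndexFlatL2(dim)   # exact L2-distance index
    index.add(vectors)

    # ...so queries retrieve nearest neighbors efficiently.
    query = np.random.rand(1, dim).astype("float32")
    distances, ids = index.search(query, 5)
    print(ids[0])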

Question 11

How does the temperature setting in a decoding algorithm influence the probability distribution over the vocabulary?

Options:

A.

Increasing the temperature flattens the distribution, allowing for more varied word choices.

B.

Increasing the temperature removes the impact of the most likely word.

C.

Temperature has no effect on probability distribution; it only changes the speed of decoding.

D.

Decreasing the temperature broadens the distribution, making less likely words more probable.
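
A small numpy sketch of the effect (logit values invented): dividing logits by a higher temperature flattens the softmax distribution, while a lower temperature sharpens it:

    import numpy as np

    def softmax(logits, temperature):
        scaled = np.asarray(logits) / temperature
        exp = np.exp(scaled - scaled.max())   # subtract max for stability
        return exp / exp.sum()

    logits = [4.0, 2.0, 1.0]
    print(softmax(logits, 0.5))  # low T sharpens: the top word dominates
    print(softmax(logits, 2.0))  # high T flattens: more varied word choices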

Question 12

Which is a cost-related benefit of using vector databases with Large Language Models (LLMs)?

Options:

A.

They require frequent manual updates, which increase operational costs.

B.

They offer real-time updated knowledge bases and are cheaper than fine-tuned LLMs.

C.

They increase the cost due to the need for real-time updates.

D.

They are more expensive but provide higher quality data.

Question 13

How does the concept of "Groundedness" differ from "Answer Relevance" in the context of Retrieval Augmented Generation (RAG)?

Options:

A.

Groundedness pertains to factual correctness, whereas Answer Relevance concerns query relevance.

B.

Groundedness measures relevance to the user query, whereas Answer Relevance evaluates data integrity.

C.

Groundedness focuses on data integrity, whereas Answer Relevance emphasizes lexical diversity.

D.

Groundedness refers to contextual alignment, whereas Answer Relevance deals with syntactic accuracy.

Question 14

Which is the main characteristic of greedy decoding in the context of language model word prediction?

Options:

A.

It chooses words randomly from the set of less probable candidates.

B.

It requires a large temperature setting to ensure diverse word selection.

C.

It selects words based on a flattened distribution over the vocabulary.

D.

It picks the most likely word at each step of decoding.

Question 15

What do prompt templates use for templating in language model applications?

Options:

A.

Python’s lambda functions

B.

Python’s str.format syntax

C.

Python’s list comprehension syntax

D.

Python’s class and object structures
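
For reference, the str.format-style placeholder syntax the options refer to, shown in plain Python and, as commonly wrapped, in LangChain's PromptTemplate (the LangChain import is an assumption about the reader's environment):

    # Plain str.format syntax: named placeholders in braces.
    template = "Summarize the following text in {num_sentences} sentences:\n{text}"
    print(template.format(num_sentences=2, text="..."))

    # LangChain's PromptTemplate uses the same placeholder syntax.
    from langchain.prompts import PromptTemplate
    lc_prompt = PromptTemplate.from_template(template)
    print(lc_prompt.format(num_sentences=2, text="..."))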

Question 16

How are documents usually evaluated in the simplest form of keyword-based search?

Options:

A.

By the complexity of language used in the documents

B.

Based on the presence and frequency of the user-provided keywords

C.

Based on the number of images and videos contained in the documents

D.

According to the length of the documents
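
A minimal sketch of this simplest keyword-based scheme, where a document's score is just how often the user's keywords occur in it:

    def keyword_score(document, keywords):
        """Simplest scheme: score = total count of the user-provided
        keywords appearing in the document."""
        words = document.lower().split()
        return sum(words.count(keyword.lower()) for keyword in keywords)

    docs = ["The cat sat on the mat", "Dogs and cats play", "Market report"]
    for doc in sorted(docs, key=lambda d: -keyword_score(d, ["cat", "mat"])):
        print(keyword_score(doc, ["cat", "mat"]), doc)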

Question 17

Which is NOT a typical use case for LangSmith Evaluators?

Options:

A.

Measuring coherence of generated text

B.

Aligning code readability

C.

Evaluating factual accuracy of outputs

D.

Detecting bias or toxicity

Question 18

When is fine-tuning an appropriate method for customizing a Large Language Model (LLM)?

Options:

A.

When the LLM requires access to the latest data for generating outputs

B.

When the LLM already understands the topics necessary for text generation

C.

When the LLM does not perform well on a task and the data for prompt engineering is too large

D.

When you want to optimize the model without any instructions

Question 19

How does the utilization of T-Few transformer layers contribute to the efficiency of the fine-tuning process?

Options:

A.

By incorporating additional layers to the base model

B.

By allowing updates across all layers of the model

C.

By excluding transformer layers from the fine-tuning process entirely

D.

By restricting updates to only a specific group of transformer layers
