1) Which LangChain component is responsible for generating the linguistic output in a chatbot system?
a) Document Loaders
b) Vector Stores
c) LangChain Application
d) LLMs ✅
2) Which statement best describes the role of encoder and decoder models in natural language processing?
a) Encoder models and decoder models both convert sequences of words into vector representations without generating new text.
b) Encoder models take a sequence of words and predict the next word in the sequence, whereas decoder models convert a sequence of words into a numerical representation.
c) Encoder models convert a sequence of words into a vector representation, and decoder models take that vector representation to generate a sequence of words. ✅
d) Encoder models are used only for numerical calculations, whereas decoder models are used to interpret the calculated numerical values back into text
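To make the correct option concrete, here is a toy sketch in plain Python (with a hypothetical four-word vocabulary — real encoders and decoders are neural networks, not lookups): the encoder turns a sequence of words into a numerical vector, and the decoder turns that vector back into a sequence of words.

```python
VOCAB = ["the", "sky", "is", "blue"]  # hypothetical toy vocabulary

def encode(sentence):
    """Encoder role: map a sequence of words to a numerical
    (bag-of-words) vector representation."""
    words = sentence.lower().split()
    return [words.count(w) for w in VOCAB]

def decode(vector):
    """Decoder role: map a vector representation back to a
    sequence of words."""
    return " ".join(w for w, c in zip(VOCAB, vector) for _ in range(c))

vec = encode("the sky is blue")  # [1, 1, 1, 1]
decode(vec)                      # "the sky is blue"
```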
3) What does a higher number assigned to a token signify in the “Show Likelihoods” feature of language model token generation?
a) The token is less likely to follow the current token.
b) The token is more likely to follow the current token. ✅
c) The token is unrelated to the current token and will not be used.
d) The token will be the only one considered in the next generation step.
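A minimal sketch of why higher numbers mean "more likely": language models score every candidate next token, and a softmax turns those scores into probabilities, so a higher score always yields a higher probability of following the current text. The candidate tokens and scores below are made up for illustration.

```python
import math

def softmax(scores):
    """Convert raw token scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for candidate tokens after "The sky is"
candidates = {"blue": 4.0, "clear": 2.5, "angry": 0.5}
probs = dict(zip(candidates, softmax(list(candidates.values()))))
# A higher score means the token is more likely to follow the current text.
```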
4) Which statement is true about the “Top p” parameter of the OCI Generative AI Generation models?
a) "Top p" selects tokens from the “Top k” tokens sorted by probability
b) "Top p" assigns penalties to frequent occurring tokens
c) ✅ "Top p" limits token selection based on the sum of their probabilities.
d) "Top p" determines the maximum number of tokens per response.
5) How does a presence penalty function in language model generation when using OCI Generative AI service?
a) It penalizes all tokens equally, regardless of how often they have appeared
b) It only penalizes tokens that have never appeared in the text before
c) It applies a penalty only if the token has appeared more than twice
d) It penalizes a token each time it appears after the first occurrence ✅
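A toy sketch of the behaviour the correct option describes (logit values and penalty size are made up; this is not the OCI implementation): once a token has appeared, every later generation step lowers its score by a flat amount, regardless of how many times it has occurred — a frequency penalty, by contrast, would scale with the count.

```python
def apply_presence_penalty(logits, generated_tokens, penalty=1.0):
    """Lower the score of every token that has already appeared,
    regardless of how many times, so repeats become less likely."""
    seen = set(generated_tokens)
    return {tok: (score - penalty if tok in seen else score)
            for tok, score in logits.items()}

logits = {"blue": 4.0, "sky": 3.0, "green": 2.0}
apply_presence_penalty(logits, generated_tokens=["sky", "sky"])
# "sky" drops by the flat penalty even though it appeared twice;
# a frequency penalty would subtract it once per occurrence.
```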
6) Which is NOT a typical use case for LangSmith Evaluators?
a) Measuring coherence of generated text
b) Aligning code readability ✅
c) Evaluating factual accuracy of outputs
d) Detecting bias or toxicity
7) What does the term “Hallucination” refer to in the context of Large Language Models (LLMs)?
a) The model’s ability to generate imaginative and creative content
b) A technique used to enhance the model’s performance on specific tasks
c) The process by which the model visualizes and describes images in detail
d) The phenomenon where the model generates factually incorrect information or unrelated content as if it were true ✅
8) Given the following code:
PromptTemplate(input_variables=["human_input", "city"], template=template)
Which statement is true about PromptTemplate in relation to input_variables?
a) PromptTemplate requires a minimum of two variables to function properly.
b) PromptTemplate can support only a single variable at a time.
c) PromptTemplate supports any number of variables, including the possibility of having none. ✅
d) PromptTemplate is unable to use any variables.
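LangChain itself may not be available here, so this minimal stand-in (a hypothetical `SimplePromptTemplate` class built on `str.format`) sketches the behaviour the correct option describes: a template can take two variables, one, or none at all.

```python
class SimplePromptTemplate:
    """Minimal stand-in for LangChain's PromptTemplate: accepts any
    number of input variables, including zero."""
    def __init__(self, input_variables, template):
        self.input_variables = input_variables
        self.template = template

    def format(self, **kwargs):
        return self.template.format(**kwargs)

two = SimplePromptTemplate(["human_input", "city"],
                           "{human_input} Tell me about {city}.")
none = SimplePromptTemplate([], "Tell me a joke.")
two.format(human_input="Hi!", city="Paris")  # "Hi! Tell me about Paris."
none.format()                                # "Tell me a joke."
```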
9) What does the Loss metric indicate about a model’s predictions?
a) Loss measures the total number of predictions made by the model.
b) Loss is a measure that indicates how wrong the model’s predictions are. ✅
c) Loss indicates how good a prediction is, and it should increase as the model improves.
d) Loss describes the accuracy of the right predictions rather than the incorrect ones.
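A small worked example using cross-entropy, one common loss function: the loss is large when the model assigns low probability to the correct answer and shrinks toward zero as predictions improve — i.e., lower loss means less wrong.

```python
import math

def cross_entropy_loss(prob_of_true_class):
    """Cross-entropy loss for a single prediction: high when the
    model gives the correct class low probability, near 0 when high."""
    return -math.log(prob_of_true_class)

bad = cross_entropy_loss(0.1)   # model was confident in the wrong answer
good = cross_entropy_loss(0.9)  # model was confident in the right answer
# bad > good: the more wrong the prediction, the larger the loss
```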
10) What does “k-shot prompting” refer to when using Large Language Models for task-specific applications?
a) Providing the exact k words in the prompt to guide the model’s response
b) Explicitly providing k examples of the intended task in the prompt to guide the model’s output ✅
c) The process of training the model on k different tasks simultaneously to improve its versatility
d) Limiting the model to only k possible outcomes or answers for a given task
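A sketch of building a k-shot prompt as the correct option describes — k worked examples of the task are placed in the prompt before the real query. The `Input:`/`Output:` layout and the sentiment examples are just one hypothetical convention.

```python
def build_k_shot_prompt(examples, query):
    """Prepend k worked examples of the task before the actual query,
    so the model can infer the task from the demonstrations."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

shots = [("great movie", "positive"), ("waste of time", "negative")]
prompt = build_k_shot_prompt(shots, "loved every minute")  # a 2-shot prompt
```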
11) When does a chain typically interact with memory in a run within the LangChain framework?
a) Only after the output has been generated
b) Before user input and after chain execution
c) After user input but before chain execution, and again after core logic but before output ✅
d) Continuously throughout the entire chain execution process
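A toy chain (the `EchoChain` class is hypothetical, not LangChain's API) illustrating the timing in the correct option: memory is read after the user's input arrives but before the core logic runs, and written after the core logic but before the output is returned.

```python
class EchoChain:
    """Sketch of a LangChain-style run loop showing when memory
    is read and written relative to the core logic."""
    def __init__(self):
        self.memory = []   # prior user turns
        self.trace = []    # order of operations, for illustration

    def run(self, user_input):
        self.trace.append("load_memory")   # read memory before core logic
        context = list(self.memory)
        self.trace.append("core_logic")
        output = f"(prior turns: {len(context)}) You said: {user_input}"
        self.trace.append("save_memory")   # write memory before output
        self.memory.append(user_input)
        self.trace.append("return_output")
        return output

chain = EchoChain()
chain.run("hello")
chain.run("again")  # "(prior turns: 1) You said: again"
```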
12) What does the RAG Sequence model do in the context of generating a response?
a) It retrieves a single document for the entire input query and generates a response based on that alone.
b) For each input query, it retrieves a set of relevant documents and considers them together to generate a cohesive response. ✅
c) It retrieves relevant documents only for the initial part of the query and ignores the rest.
d) It modifies the input query before retrieving relevant documents to ensure a diverse response.
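A toy sketch of the RAG Sequence pattern in the correct option (the word-overlap retriever and the tiny corpus are made up for illustration — real systems use vector search and an LLM): a set of relevant documents is retrieved for the whole query, then a single response is generated from all of them together.

```python
def _words(text):
    """Lowercased word set with trailing punctuation stripped."""
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(query, corpus, k=2):
    """Toy retriever: rank documents by word overlap with the query."""
    q = _words(query)
    return sorted(corpus, key=lambda d: len(q & _words(d)), reverse=True)[:k]

def rag_sequence_answer(query, corpus):
    """RAG Sequence style: retrieve a set of relevant documents for the
    whole query, then generate one response conditioned on all of them."""
    docs = retrieve(query, corpus)
    context = " ".join(docs)
    return f"Answer based on: {context}"

corpus = ["Paris is the capital of France.",
          "The Eiffel Tower is in Paris.",
          "Tokyo is the capital of Japan."]
rag_sequence_answer("What is the capital of France?", corpus)
```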
13) What does the Ranker do in a text generation system?
a) It generates the final text based on the user’s query
b) It sources information from databases to use in text generation
c) It evaluates and prioritizes the information retrieved by the Retriever ✅
d) It interacts with the user to understand the query better
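A sketch of the Ranker's job per the correct option (the length-normalised overlap score is a stand-in for a real re-ranking model): it takes passages the Retriever already fetched, scores them against the query, and orders them best-first before they reach the generator.

```python
def rerank(query, candidates):
    """Ranker: re-score passages already fetched by the Retriever
    and return them best-first for the generator."""
    q = {w.strip(".,?!").lower() for w in query.split()}
    def relevance(doc):
        d = {w.strip(".,?!").lower() for w in doc.split()}
        return len(q & d) / len(d)  # overlap normalised by passage length
    return sorted(candidates, key=relevance, reverse=True)

candidates = ["The Eiffel Tower is in Paris.",
              "Paris is the capital of France."]
rerank("What is the capital of France?", candidates)
# the capital sentence outscores the Eiffel Tower one
```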
14) In which scenario is soft prompting especially appropriate compared to other training styles?
a) When there is a significant amount of labeled, task-specific data available.
b) When the model needs to be adapted to perform well in a different domain it was not originally trained on.
c) When there is a need to add learnable parameters to a Large Language Model (LLM) without task-specific training. ✅
d) When the model requires continued pre-training on unlabeled data.