📘 Accenture Proficiency Test on LLMs
Q1. Few-shot Learning with GPT-3
Question
You are developing a large language model using GPT-3 and want to apply few-shot learning techniques. You have a limited dataset for a specific task and want the model to generalize well. Which approach would be most effective?
Options:
a) Train the model on the entire dataset, then fine-tune on a small subset
b) Provide examples of the task in the input and let the model generate
c) LLMs are unable to handle single tasks
✅ Correct Answer
✔ b) Provide examples of the task in the input and let the model generate
💡 Hint
- Few-shot learning = prompt engineering
- No retraining required
- You show the task via examples in the prompt
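The approach in (b) can be sketched as plain prompt assembly: examples go directly into the model input, so no weights change. The sentiment task and examples below are hypothetical, chosen only to illustrate the prompt shape.

```python
# Build a few-shot prompt: task examples go directly into the model input,
# so no fine-tuning or retraining is needed.
def build_few_shot_prompt(examples, query):
    """Format (input, output) example pairs followed by the new query."""
    lines = ["Classify the sentiment as Positive or Negative."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The trailing "Sentiment:" cues the model to complete the pattern.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

examples = [
    ("The product works great.", "Positive"),
    ("It broke after one day.", "Negative"),
]
prompt = build_few_shot_prompt(examples, "Fast shipping and easy setup.")
print(prompt)
```

The resulting string is what you would send as the model input; the model infers the task purely from the in-context examples.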
Q2. Edge AI with LLMs
Question
You are using an LLM for an edge AI application requiring real-time object detection. Which approach is most efficient?
Options:
a) Cloud-based LLM processing
b) Use a complex LLM regardless of compute
c) Use an LLM optimized for edge devices balancing accuracy and efficiency
d) Store data and process later
e) Manual input-based LLM
✅ Correct Answer
✔ c) Use an LLM optimized for edge devices balancing accuracy and efficiency
💡 Hint
- Edge AI prioritizes low latency + low compute
- Cloud = latency bottleneck
- “Optimized” is the keyword
Q3. Improving Fine-tuned LLM Performance (Select Two)
Question
You fine-tuned a pre-trained LLM but performance is poor. What steps improve it?
Options:
a) Gather more annotated Q&A data with better supervision
b) Change architecture to Mixture of Experts / Mixture of Tokens
c) Simplify the task definition
d) Smaller chunks reduce retrieval complexity
e) Smaller chunks improve generation accuracy
✅ Correct Answers
✔ a) Gather more annotated data
✔ c) Simplify the task definition
💡 Hint
- First fix data quality & task framing
- Architecture changes come later
- Accenture favors practical ML hygiene
Q4. Chunking in RAG Systems
Question
Why do smaller chunks improve a RAG pipeline?
Correct Statements:
✔ Smaller chunks reduce retrieval complexity
✔ Smaller chunks improve generation accuracy
💡 Hint
- Retrieval works better with semantic focus
- Overly large chunks dilute meaning
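Chunking itself is simple to implement. Below is a minimal character-based splitter with overlap; the chunk size and overlap values are illustrative, not recommendations, and real pipelines often split on sentence or token boundaries instead.

```python
# Split a document into smaller overlapping chunks for RAG retrieval.
# Overlap preserves context that would otherwise be cut at a boundary.
def chunk_text(text, chunk_size=100, overlap=20):
    """Return character-based chunks of at most chunk_size characters."""
    chunks = []
    step = chunk_size - overlap  # advance less than chunk_size to overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

doc = ("word " * 100).strip()  # 499 characters of dummy text
chunks = chunk_text(doc)
print(len(chunks), "chunks")
```

Each chunk is then embedded and indexed separately, which is why smaller, semantically focused chunks retrieve more precisely.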
Q5. Challenges of Local LLMs in Chatbots
Question
What is a potential challenge local LLMs face in long-term task planning?
Options:
a) Unable to adjust plans when errors occur
b) Unable to handle complex tasks
c) Unable to handle multiple queries
d) Unable to use task decomposition
✅ Correct Answer
✔ a) Unable to adjust plans when errors occur
💡 Hint
- Local models lack persistent memory & feedback loops
- Planning correction is the real limitation
Q6. RAG Pipeline – Poor Semantic Representation
Question
Why might embeddings not represent semantic meaning correctly?
Options:
a) Encoder not trained on similar data
b) Text chunks too large
c) Encoder incompatible with RAG
d) Incorrect chunk splitting
e) Encoder not initialized
✅ Correct Answers
✔ a) Domain mismatch in training data
✔ b) Chunk size too large
💡 Hint
- Embeddings fail due to domain shift or context overflow
- Initialization issues are rare in practice
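The domain-mismatch failure can be made concrete with a toy stand-in for an encoder: a bag-of-words similarity (used here purely for illustration) has no medical training data, so it cannot relate "MI" to "heart attack" even though they are clinical synonyms.

```python
import math
from collections import Counter

# Toy illustration of domain mismatch: a generic encoder (here a crude
# bag-of-words stand-in) scores domain synonyms as unrelated because it
# was never trained on data that connects them.
def bow_cosine(a, b):
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Clinically identical statements, but only the shared generic words match.
print(bow_cosine("patient suffered an MI",
                 "patient suffered a heart attack"))
```

A real encoder fine-tuned on medical text would place these two sentences close together; that gap is exactly what answer (a) describes.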
Q7. Designing Advanced RAG – Chunking Decision
Question
Which is NOT a valid reason for splitting documents into smaller chunks?
Options:
a) Large chunks are harder to search
b) Small chunks won’t fit in context window
c) Smaller chunks improve indexing efficiency
✅ Correct Answer
✔ b) Small chunks won’t fit in context window
💡 Hint
- Smaller chunks fit better, not worse
- This is a classic reverse-logic trap
Q8. Intent in Chatbots
Question
What is the purpose of an intent in a chatbot?
✅ Correct Answer
✔ To determine the user’s goal
💡 Hint
- Intent ≠ entity
- Intent answers “what does the user want?”
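A minimal sketch of intent detection, assuming a hypothetical keyword-based matcher (production chatbots use trained classifiers, but the goal is the same: map an utterance to the user's goal):

```python
# Minimal keyword-based intent matcher: maps a user utterance to the
# goal it expresses. Intent names and keywords are hypothetical.
INTENTS = {
    "check_balance": {"balance", "funds"},
    "transfer_money": {"transfer", "send"},
    "reset_password": {"password", "reset"},
}

def detect_intent(utterance):
    """Return the intent with the largest keyword overlap, or None."""
    words = set(utterance.lower().split())
    best, best_overlap = None, 0
    for intent, keywords in INTENTS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = intent, overlap
    return best

print(detect_intent("I want to transfer money to my friend"))  # transfer_money
```

Entities (amounts, account names, dates) would then be extracted separately from the same utterance, which is why intent ≠ entity.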
Q9. Healthcare LLM Security
Question
Which strategy best ensures privacy and compliance for patient data?
Options:
a) Public API & public cloud
b) Layered security: encryption, access control, audits, private network
c) No security changes
d) Plaintext storage
e) Unverified 3rd party services
✅ Correct Answer
✔ b) Layered security approach
💡 Hint
- Healthcare = defense in depth
- Accenture loves encryption + audits + private infra
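Two of those layers, access control and audit logging, can be sketched in a few lines. This is an illustrative toy only: real encryption, key management, and network isolation are separate layers, and the SHA-256 digest here merely shows the habit of not logging raw patient identifiers.

```python
import hashlib
import datetime

# Layered-security sketch (illustrative only): role-based access control
# plus an audit trail. Roles and permissions are hypothetical.
AUDIT_LOG = []
ROLE_PERMISSIONS = {"doctor": {"read", "write"}, "billing": {"read"}}

def access_record(user, role, action, record_id):
    """Check the role's permission, log the attempt, return allow/deny."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "action": action,
        # Log only a digest of the record id, never the raw identifier.
        "record": hashlib.sha256(record_id.encode()).hexdigest()[:12],
        "allowed": allowed,
    })
    return allowed

print(access_record("alice", "doctor", "write", "patient-42"))   # True
print(access_record("bob", "billing", "write", "patient-42"))    # False
```

Every access attempt, allowed or denied, lands in the audit log, which is what compliance reviews actually inspect.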
Q10. Edge AI Programming Language
Question
Which language is commonly used for developing ML models in Edge AI?
Options:
a) Java
b) Python
c) C++
d) JavaScript
e) Ruby
✅ Correct Answer
✔ Python
💡 Hint
- Python dominates ML tooling
- C++ is used for deployment, not modeling
Q11. Customizing METEOR Scoring
Question
How do you customize METEOR’s scoring function?
✅ Correct Answer
✔ Modify tool configuration or run with command-line flags
💡 Hint
- METEOR supports custom weighting
- Not hardcoded, no paid version needed
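The tunable weights in question are METEOR's alpha, beta, and gamma parameters. The sketch below is a simplified, exact-match-only version of the score (real METEOR also matches stems and WordNet synonyms, and tools such as NLTK's `meteor_score` expose the same three parameters as keyword arguments); it shows what changing the weights actually does.

```python
# Simplified METEOR-style score with exact unigram matching only.
# alpha weights precision vs. recall in the F-mean; beta and gamma
# control the fragmentation penalty. Tuning these is the "customization".
def simple_meteor(reference, hypothesis, alpha=0.9, beta=3.0, gamma=0.5):
    ref, hyp = reference.split(), hypothesis.split()
    m = sum(1 for w in hyp if w in ref)  # matched unigrams
    if m == 0:
        return 0.0
    precision, recall = m / len(hyp), m / len(ref)
    fmean = precision * recall / (alpha * precision + (1 - alpha) * recall)
    # Count contiguous runs of matched words ("chunks") for the penalty.
    chunks, in_chunk = 0, False
    for w in hyp:
        if w in ref:
            if not in_chunk:
                chunks += 1
            in_chunk = True
        else:
            in_chunk = False
    penalty = gamma * (chunks / m) ** beta
    return fmean * (1 - penalty)

print(simple_meteor("the cat sat on the mat", "the cat sat on the mat"))
```

Raising gamma punishes fragmented word order more harshly; lowering alpha shifts weight from recall toward precision. Either change alters rankings without touching the matcher itself.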
Q12. Bias Mitigation in LLMs
Question
First step when an LLM is found biased?
✅ Correct Answer
✔ Identify the source of bias
💡 Hint
- Diagnosis before correction
- Retraining comes later
Q13. DeepEval Tool – Advanced Features
Question
Which statement is correct about DeepEval advanced usage?
✅ Correct Answer
✔ Configure advanced features in Python scripts
💡 Hint
- Tool-level configuration
- No paid version needed