Context about this post
This is my way of helping others by sharing feedback I received from ChatGPT (5.4 Thinking) on an interview. I hope it can be useful to others who are preparing for interviews, especially in data science and AI/ML roles.
Company: GSR Business Services
Position: Lead AI Engineer / AI Architect
Me: Attached is a one-sided call transcript of an interview. It contains only the answers of the candidate (me). Can you please organize and structure it, and infer what questions the interviewer might have asked (putting them back into the organized structure)? AND IMPORTANTLY: critique me and my answers.
Yes. I reconstructed the interview as faithfully as possible from your one-sided transcript. A few of the questions are necessarily approximate because the transcribed audio is short, noisy, and contains some unclear phrases like “DB cluster” and “WILP.” Everything below is inferred from your answers.
Reconstructed interview structure
1) Tell me about yourself
Likely interviewer question:
“Please introduce yourself.”
Your answer, organized:
- You introduced yourself as Ashish.
- You said you have 13 years of experience, with 11 years in the AI/ML space.
- You mentioned experience in traditional ML, deep learning, data science, generative AI, agentic AI, and prompt engineering.
- You highlighted Python as a strength, with some familiarity in SQL, PL/SQL, JavaScript, and Java.
- You described your recent project as an Accenture offering involving agentic capabilities such as text-to-SQL, RAG, a generic knowledge agent, and a visualization agent, orchestrated by a router.
- You also mentioned domain experience in finance, telecom, and edtech.
- You ended with a personal note about reading and coding in your free time.
Critique:
This was your strongest answer in substance, but it was too long and not sharply structured. You had good material, but it came out as a stream rather than a crisp executive summary.
Better version of the same answer:
“I’m Ashish, with 13 years of overall experience and about 11 years focused on AI/ML. My core strengths are Python, machine learning, deep learning, and, more recently, generative AI and agentic AI systems. In my recent work, I contributed to an agentic platform involving text-to-SQL, RAG, knowledge agents, and visualization agents coordinated through routing/orchestration. I’ve worked across finance, telecom, and edtech, and I’m most comfortable in roles that combine hands-on model work with applied business problem-solving.”
2) What AI/ML areas or algorithms are you comfortable with?
Likely interviewer question:
“What algorithms or ML areas are you comfortable with?”
Your answer, organized:
- You said supervised and unsupervised learning are clear to you.
- You said reinforcement learning is weaker for you.
- You listed linear regression, decision trees, SVM, and random forest.
- You said you are hands-on with almost all of them, mainly through scikit-learn APIs.
Critique:
This answer was acceptable, but too generic for an experienced candidate. Saying “I access them via scikit-learn API” undersells your capability. It sounds tool-level rather than concept-level. At 13 years of experience, the interviewer likely wants to hear:
- when you use which model,
- trade-offs,
- evaluation criteria,
- one real project example.
Better version:
“I’m strongest in supervised and unsupervised learning. I’ve worked with regression, tree-based methods, SVMs, and ensemble models such as random forests. My reinforcement learning exposure is limited compared to the others. In practice, I choose models based on data size, interpretability requirements, feature quality, latency constraints, and explainability needs. For tabular enterprise data, tree-based models have often given me a good balance of performance and interpretability.”
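For readers brushing up on the point above, the model-choice idea can be made concrete with a short, hedged scikit-learn sketch: comparing an interpretable linear baseline against a tree-based ensemble on a built-in tabular dataset. The dataset, hyperparameters, and split below are arbitrary example choices, not a prescription.

```python
# Toy comparison: linear baseline vs. tree ensemble on tabular data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Interpretable linear baseline vs. an ensemble of decision trees.
linear = LogisticRegression(max_iter=5000).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=100,
                                random_state=42).fit(X_train, y_train)

for name, model in [("logistic", linear), ("forest", forest)]:
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {acc:.3f}")
```

In an interview, being able to narrate a comparison like this (which baseline, which metric, why this split) is exactly the concept-level framing the critique recommends over "I call the scikit-learn API."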
3) How well do you understand GPT, BERT, Gemini, or LLM internals?
Likely interviewer question:
“How deep is your understanding of GPT/BERT/Gemini?”
or
“Have you worked on model internals, or only via APIs?”
Your answer, organized:
- You honestly said you do not know the internal details of GPT, BERT, or Gemini deeply.
- You said you are familiar with using such models through APIs.
- You added that you studied BERT when it came out.
- You said you are familiar with Transformers, but not with the internals of the latest models.
Critique:
Your honesty was good. The problem was the framing. You repeated what you do not know several times. That creates a “defensive” impression. A better answer would briefly acknowledge the limit, then pivot quickly to what you do know and have done. Right now, the interviewer likely concluded: “He has mostly API-level LLM exposure.”
Better version:
“I haven’t worked on training foundation models from scratch or on proprietary internals of GPT-class systems. My experience is more on the applied side: using LLM APIs, prompt design, orchestration, RAG, text-to-SQL, and agent workflows. Conceptually, I understand the Transformer foundation and how these models are applied in production systems, but my hands-on depth is stronger in implementation and solution design than in pretraining internals.”
That answer is still honest, but sounds confident.
4) What is a Transformer?
Likely interviewer question:
“What is a Transformer?”
possibly followed by
“Which Transformer?” or “Are all Transformers the same?”
Your answer, organized:
-
You referred to the 2017 Google paper.
-
You said not every Transformer is the same and different organizations make modifications.
-
You mentioned Meta, Google, OpenAI, etc. modifying the architecture.
-
You said Transformer is open and modifiable, though you were unsure about patent/licensing details.
Critique:
This part was weak. Not because you lacked all knowledge, but because the answer drifted. The interviewer probably expected a technical explanation, something like self-attention, sequence modeling, parallelization, encoder/decoder, or why it replaced RNN/LSTM in many tasks. Instead, your answer moved toward licensing, patents, and company modifications. That likely made you sound unsure.
The line “I need to check if it is patented by Google or how it’s licensed” hurt the answer. In an interview, this sounds like you lost the thread of the question.
Better version:
“A Transformer is a neural network architecture introduced in the 2017 ‘Attention Is All You Need’ paper. Its key idea is self-attention, which lets the model weigh relationships between tokens in a sequence more effectively than older sequential architectures like RNNs. That made it much better for large-scale language tasks and parallel training. Modern LLMs are Transformer-based, though different organizations adapt the base architecture in different ways.”
That would have been a strong, clean answer.
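If you want the self-attention idea at your fingertips for a follow-up question, the core computation fits in a few lines of NumPy. This is a toy sketch of the mechanism only, with made-up token embeddings, not a real model.

```python
# Minimal scaled dot-product self-attention (illustrative only).
import numpy as np

def self_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V: each output row is a weighted
    mix of all value rows, so every token can attend to every other."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V

# Three "tokens" with 4-dimensional embeddings (toy numbers).
x = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 1., 0., 0.]])
out = self_attention(x, x, x)  # self-attention: Q = K = V = x
print(out.shape)               # one contextual vector per token
```

Real Transformers add learned Q/K/V projections, multiple heads, positional information, and feed-forward layers, but being able to write and explain this core step is what separates a "clean" answer from a drifting one.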
5) Do you use Transformers directly?
Likely interviewer question:
“Have you worked directly with Transformer models?”
or
“Do you use Transformers at model level?”
Your answer, organized:
- You said you use LLMs via APIs such as the OpenAI and Gemini APIs.
- You said you do not directly use Transformers themselves as components; rather, you use LLMs built on them.
Critique:
Reasonable answer, but again it emphasized limitation more than capability. You could have said:
- API-based LLM integration,
- prompt engineering,
- RAG,
- orchestration,
- agent workflows,
- evaluation,
- reliability handling.
That would make your applied expertise sound much more valuable.
6) Explain decision tree vs random forest
Likely interviewer question:
“What is a decision tree?”
or
“What is the difference between a decision tree and random forest?”
Your answer, organized:
- You said a decision tree is a building block of random forest.
- You described a decision tree as data structured via nodes and edges.
- You said random forest is an ensemble of many decision trees.
- You said the final decision is based on the constituent trees’ outputs.
Critique:
The core idea was correct, but the wording was imprecise. Your phrase “hard or soft working” was unclear (possibly a mis-transcription of “hard or soft voting”). Also, “data is structured in the form of nodes and edges” is not the best way to explain a decision tree in an interview. Better to explain splits, impurity reduction, overfitting, and aggregation.
Better version:
“A decision tree makes predictions by recursively splitting the data based on feature values. It is easy to interpret but can overfit. A random forest reduces that risk by combining many decision trees trained on bootstrapped samples and random subsets of features. The final output is aggregated across trees, usually by majority vote in classification or averaging in regression.”
That sounds much more interview-ready.
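The single-tree-versus-forest contrast in that answer is easy to demonstrate, too. Here is a hedged scikit-learn sketch on synthetic data (all parameters are arbitrary example values): an unconstrained tree memorizes the training set, while the bagged ensemble generalizes better by aggregating many bootstrapped trees.

```python
# Single deep decision tree vs. a random forest on the same toy data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200,
                                random_state=0).fit(X_tr, y_tr)

# The unconstrained tree fits training data perfectly; the forest
# aggregates bootstrapped trees by majority vote and generalizes better.
print("tree   train/test:", tree.score(X_tr, y_tr), tree.score(X_te, y_te))
print("forest train/test:", forest.score(X_tr, y_tr), forest.score(X_te, y_te))
```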
7) Unclear technical question: “DB cluster”
Likely interviewer question:
Possibly one of these:
- “What is DBSCAN clustering?”
- “What is density-based clustering?”
- “What is clustering?”
- “What is distributed database clustering?”
This part is not clear from the transcript.
Your answer:
You said you were not sure.
Critique:
Saying “I’m not sure” is fine when needed. But the better approach is:
“I’m not fully sure I heard you correctly — did you mean DBSCAN clustering or database clustering?”
That would have bought you time, reduced misunderstanding, and perhaps turned a weak moment into a competent one.
This is one of the biggest interview habits to improve: clarify the question before conceding uncertainty.
8) Unclear technical question: “Depth decision tree”
Likely interviewer question:
Possibly:
“What is depth in a decision tree?”
or
“What is max depth in a decision tree?”
Your answer:
You said you know decision tree, but are not familiar with “depth decision tree.”
Critique:
This was likely a miss. If the interviewer asked about tree depth, that is a basic concept. Max depth controls how many levels the tree can grow and directly affects overfitting. Missing that probably weakened the perception of your ML fundamentals.
Ideal answer would be:
“Tree depth refers to the number of levels from root to leaf. It is an important hyperparameter because deeper trees can capture more complexity but are also more prone to overfitting.”
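The depth/overfitting trade-off in that answer is also easy to show in a couple of lines. This is an illustrative scikit-learn sketch on synthetic data (sample sizes and depths are arbitrary example choices): sweep `max_depth` and compare train versus test accuracy.

```python
# How max_depth trades training fit against generalization.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=15,
                           n_informative=4, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

for depth in (2, 5, None):  # None = grow until leaves are pure
    clf = DecisionTreeClassifier(max_depth=depth,
                                 random_state=1).fit(X_tr, y_tr)
    print(f"max_depth={depth}: train={clf.score(X_tr, y_tr):.2f} "
          f"test={clf.score(X_te, y_te):.2f}")
# Deeper trees push training accuracy toward 1.0, while test accuracy
# typically stalls or drops: the overfitting trade-off in one loop.
```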
9) Was your M.Tech regular?
Likely interviewer question:
“Was your M.Tech a regular (full-time) program?”
Your answer:
“Yes, it’s WILP,” referring to the Work Integrated Learning Programmes of BITS Pilani.
Critique:
No fair criticism here, because the transcript itself is unclear.
10) Who taught you machine learning?
Likely interviewer question:
“Who taught you machine learning?”
or perhaps a casual follow-up about education.
Your answer:
You said it was more than eight years ago and you do not recall the teacher’s name.
Critique:
This answer is not damaging by itself, but it ended weakly. A better recovery would be:
“I don’t recall the faculty name now, but the stronger learning actually came later through self-study and project work.”
That would turn an awkward personal-memory question into a professional-strength answer.
Overall assessment of your interview performance
What you did well
You came across as:
- honest,
- experienced,
- broad in exposure,
- genuinely hands-on in applied AI/ML work,
- not pretending to know things you do not know.
Those are real positives.
What hurt you most
These were the main issues:
1. Too much hedging and filler language
You said “yeah,” “I mean,” and similar fillers a lot. That makes answers sound less authoritative.
2. You led with your limitations too often
Instead of briefly acknowledging a gap and pivoting to strengths, you stayed too long on what you did not know.
3. Conceptual answers were not crisp enough
Transformer, decision tree, random forest, and possibly tree depth were answered in a way that felt partially correct but not interview-polished.
4. You sounded more like an API user than an architect/expert
For a senior candidate, “I use scikit-learn APIs” and “I use OpenAI API / Gemini API” are not enough by themselves. The interviewer wants design thinking, trade-offs, production considerations, evaluation, and decision rationale.
5. You missed chances to clarify unclear questions
For things like “DB cluster,” you should have asked the interviewer to repeat or clarify.
My blunt verdict
You do not sound weak overall. You sound like someone with real industry exposure.
But in this transcript, you sounded under-packaged.
That means:
- your real experience seems better than how it came across,
- your delivery likely made you appear less technically sharp than you probably are,
- especially for senior AI/ML/GenAI roles, this style can cost you.
The interviewer may have walked away thinking:
“Good practical exposure, but fundamentals are uneven and answers are not crisp enough for a senior-level technical conversation.”
That is fixable.
Biggest improvements to make before your next interview
- Prepare a 45-second self-intro.
- Prepare crisp 2-3 line definitions for:
  - Transformer
  - attention
  - supervised vs unsupervised vs reinforcement learning
  - decision tree
  - random forest
  - overfitting
  - precision/recall/F1
  - RAG
  - agents vs workflows
- Stop saying “I don’t know” too early. First clarify the question.
- Replace “I used APIs” with “I designed and implemented solutions using…”
- For every answer, use this format:
  - definition,
  - practical example,
  - trade-off / limitation.
-
If you want, I’ll turn this into a proper interview preparation sheet with two columns: Likely Question / Ideal Improved Answer.
Me: Go ahead.
Absolutely. I turned your transcript into a cleaner interview-prep sheet. A few questions are still approximate because the source is one-sided and some phrases are unclear, especially “DB cluster.” The likely topics below are inferred from your answers.
Interview preparation sheet

| Likely Question | Ideal Improved Answer |
| --- | --- |
| Tell me about yourself. | “I’m Ashish, with 13 years of overall experience and around 11 years focused on AI/ML. My core strengths are Python, machine learning, deep learning, and, more recently, generative AI and agentic AI systems. In my recent work, I contributed to an agentic offering involving text-to-SQL, RAG, knowledge agents, and visualization agents coordinated through routing/orchestration. I’ve worked across finance, telecom, and edtech, and I’m most effective in roles that combine hands-on AI delivery with business problem-solving.” |
| What AI/ML areas are you strongest in? | “My strongest areas are supervised and unsupervised learning. I’ve worked with regression, tree-based models, SVMs, and ensemble methods. My reinforcement learning exposure is comparatively limited. In practice, I choose models based on data characteristics, explainability requirements, latency needs, and business constraints.” |
| Which algorithms have you worked with? | “I’ve worked hands-on with linear regression, decision trees, random forests, SVMs, and other standard ML methods, mainly in Python using scikit-learn and related tooling. I’m comfortable not just calling the APIs, but also selecting models, tuning them, evaluating them, and interpreting results in business context.” |
| How deep is your understanding of GPT, BERT, Gemini, or LLMs? | “My experience is stronger on the applied side than on foundation-model pretraining internals. I haven’t trained GPT-class models from scratch, but I’ve worked with LLM APIs, prompt design, RAG, text-to-SQL, and agent workflows. Conceptually, I understand the Transformer foundation and how these models are used in production systems.” |
| What is a Transformer? | “A Transformer is a neural network architecture introduced in the 2017 paper ‘Attention Is All You Need.’ Its key idea is self-attention, which helps the model capture relationships between tokens more effectively than older sequential architectures like RNNs. That made Transformers highly effective for large-scale language tasks and became the foundation for modern LLMs.” |
| Are all Transformers the same? | “No. The Transformer is the foundational architecture, but organizations adapt it in different ways depending on objectives such as scale, efficiency, context length, alignment, and inference cost. So while many LLMs are Transformer-based, their exact implementations and optimizations differ.” |
| Have you used Transformers directly, or mainly through APIs? | “Mostly through applied LLM usage rather than building Transformer models from scratch. My hands-on experience is with LLM integration through APIs, prompt engineering, RAG pipelines, agent orchestration, and use-case implementation.” |
| Explain supervised vs unsupervised learning. | “Supervised learning uses labeled data to learn a mapping from inputs to outputs, such as classification or regression. Unsupervised learning works on unlabeled data to identify structure, such as clustering, grouping, or dimensionality reduction. I’m comfortable with both, while reinforcement learning is an area I’d describe as less strong for me.” |
| What is a decision tree? | “A decision tree is a model that makes predictions by recursively splitting data based on feature values. Each split is chosen to improve purity or reduce error. It’s interpretable and useful, but if unconstrained it can overfit.” |
| What is a random forest, and how is it different from a decision tree? | “A random forest is an ensemble of multiple decision trees. Each tree is trained on a bootstrapped sample of the data and usually on random subsets of features. The final prediction is aggregated across the trees, typically by majority vote in classification or averaging in regression. Compared with a single tree, it usually generalizes better and is less prone to overfitting.” |
| What is tree depth or max depth in a decision tree? | “Tree depth refers to the number of levels from the root to the deepest leaf. Max depth is an important hyperparameter because deeper trees can model more complex patterns but also increase the risk of overfitting. Setting an appropriate max depth is one way to control model complexity.” |
| What recent project have you worked on? | “My recent work involved an agentic AI solution with components such as text-to-SQL, RAG, a knowledge agent, and a visualization agent, orchestrated through routing logic. The value was in breaking down user requests, sending them to the right agent, and combining outputs into a more usable enterprise workflow.” |
| Which programming languages are you comfortable with? | “Python is my strongest language and the one I use most in AI/ML work. I also have familiarity with SQL, PL/SQL, JavaScript, and Java, though my core delivery strength is clearly in Python-based data and AI systems.” |
| Which domains have you worked in? | “I’ve worked across finance, telecom, and edtech. That has helped me adapt AI/ML solutions to different types of business problems, data environments, and stakeholder expectations.” |
| Who taught you machine learning? | “I don’t recall the faculty name now, because that was several years ago. But most of my real depth came later through hands-on project experience, self-study, and practical implementation in industry.” |
Where your original answers were hurting you
Your transcript shows real experience, but the delivery weakened the impression. You repeatedly framed answers around what you didn’t know, especially around GPT/BERT/Gemini internals and Transformers, instead of quickly pivoting to what you do know and have done. You also sometimes answered in a tool-centric way, like “I use APIs” or “I access models via scikit-learn,” which sounds more junior than your actual experience level.
The biggest technical miss was around Transformer explanation and likely tree depth / max depth. Those are areas where the interviewer probably expected a cleaner, more fundamental explanation. The random forest answer had the right general direction, but it needed sharper wording around splits, ensembles, voting/averaging, and overfitting control.
Another issue was interview control. When the question sounded unclear, like “DB cluster,” the better move would have been to clarify the term before saying you were unsure. That alone can save weak moments in technical interviews.
Your best improvement pattern for next time
Use this answer structure every time:
Definition → practical experience → trade-off/limitation
Example:
Q: What is a Transformer?
“A Transformer is a neural architecture based on self-attention that became the foundation for modern LLMs. In my work, I use Transformer-based LLMs in applied settings such as RAG, prompt engineering, and agent workflows. I haven’t trained foundation models from scratch, so my strength is more on applied implementation than pretraining internals.”
That format makes you sound senior, honest, and structured.
My blunt coaching summary
You do not sound inexperienced.
You sound experienced but under-packaged.
That means your real capability is probably stronger than how it came across in the interview. The fix is not to learn everything from scratch. The fix is to make your answers:
- shorter,
- clearer,
- more technical at the concept level,
- more confident in framing.
