Showing posts with label Interview Preparation. Show all posts

Wednesday, May 6, 2026

How to survive layoffs?

See All Articles on Layoffs    « Previously


Career · Jobs · AI Economy

The Pink Slip
Survival Guide

Layoffs are no longer one-off crises — they have become the background hum of the modern economy. Here is how to keep your footing when the ground shifts beneath you.

May 2026 · 8 min read

When Job Cuts Become the Norm

It is hard to scroll through a business newsfeed today without stumbling onto another headline about corporate restructuring. Cognizant, the US-based IT giant, is reportedly planning to eliminate around 15,000 positions globally — with the heaviest cuts expected in India. Coinbase, the American crypto exchange, has announced it is slashing 14% of its workforce. PayPal is trimming 20% of its staff. These are not fringe cases. They are part of a gathering wave: more than 90,000 tech employees have already been handed pink slips so far in 2026 alone.

The common thread running through almost every announcement is artificial intelligence. A recent Gartner survey found that 80% of companies deploying AI are actively reducing headcount. This is the uncomfortable truth the industry is dancing around: AI is not just supplementing human work — in many roles, it is replacing it outright.

"Artificial intelligence is not assisting jobs. It is redesigning them — and in many cases, eliminating them entirely."

But here is what nobody tells you in those press releases: a layoff does not merely remove a salary. It dismantles routine. It chips away at confidence. One terse email, and suddenly a person is asking themselves questions they have no business asking — Was I not good enough? The answer, almost always, is no. Structural forces — automation cycles, cost pressures, macro headwinds — do not discriminate between the talented and the mediocre. They sweep wide.

What follows is not a lament about the job market. It is a practical guide for navigating it.

By the Numbers

Company                   Sector             Jobs Cut           Status
Cognizant                 IT Services        ~15,000 globally   Reported
Coinbase                  Crypto / Fintech   14% of workforce   Announced
PayPal                    Payments           20% of staff       Planned
Tech sector (2026 YTD)    Multiple           90,000+            Confirmed

Statistic                                     Finding   Source
Companies using AI that are cutting staff     80%       Gartner survey
Job seekers who apply online                  ~82%      Industry data
Online applicants who land an interview       Only 2%   Industry data

Five Steps to Come Back Stronger

These five principles are not feel-good platitudes. Each one addresses a specific, practical failure mode that makes the difference between a three-month gap on a résumé and a genuine career pivot.

01
Accept It — But Don't Resign to It

Denial is comfortable. It is also dangerous. Grieve the job — that is entirely valid — but do not pitch a tent inside the grief. There is a critical difference between acknowledging what happened and surrendering to it. You cannot fight a storm you refuse to see.

02
Upskill With Urgency

The same companies doing the cuts are investing heavily in AI, automation, and digital transformation. That is your roadmap. Prompt engineering, data analysis, cloud certifications, product thinking — pick a lane and persist. Your degree got you hired once. Your skills keep you employed.

03
Rebuild Your CV for Impact

Most résumés are packed with buzzwords and long paragraphs that say almost nothing. Lead with a single sentence that tells a hiring manager who you are in under ten seconds. Highlight impact, not duties. Recruiters do not read résumés — they scan them for value. Make sure that value is unmissable.

04
Network — For Real

Eighty-two percent of job seekers apply online. Only 2% land an interview that way. Think of online applications as crowded highways and referrals as fast lanes. Reach out to former managers, ex-colleagues, and mentors. Most roles are filled before they are ever posted publicly. Referrals are not just helpful — they are essential.

05
Protect Your Mental Health — Fiercely

A layoff can shake your identity to its core. The morning anxiety. The compulsive LinkedIn scrolling. The toxic comparisons to someone else's highlight reel. None of that helps. Build a daily routine. Move your body. Eat well. Reach out to people you trust. The currency of today's job market is adaptability — and adaptability requires a mind that is not running on empty.

A Note on the Mental Weight of Job Loss

It bears saying plainly: losing a job is a genuinely difficult life event, not merely a logistical inconvenience. Identity, self-worth, and social connection are all tied up in work in ways we rarely acknowledge until it is gone.

Practical habits that measurably help:

  • Maintain a structured daily schedule — even without a 9-to-5 anchoring it
  • Limit passive social media consumption, especially LinkedIn during vulnerable hours
  • Exercise consistently — even short walks count
  • Designate fixed job-search hours rather than letting anxiety stretch the search across the entire day
  • Talk to a friend, therapist, or peer group — isolation amplifies distress

Adaptability Is the New Job Security

Every wave of technological disruption — mechanisation, the internet, mobile — has produced its equivalent of today's headlines. Workers who treated the disruption as a signal to retool fared significantly better than those who waited for normalcy to return. It never quite returned; it evolved.

The AI transition is different in speed and scale, but not in kind. The roles being automated today are, broadly, those defined by repetition and pattern-matching. The roles being created require judgment, creativity, contextual understanding, and the very human capacity to manage AI systems effectively. The gap between those two categories is the upskilling opportunity hiding inside every redundancy notice.

People who use this moment to learn, reconnect, and rebuild do not just survive layoffs. They tend to emerge into roles that are more interesting, better compensated, and more durable than the ones they left.

What to Remember

  • Layoffs are increasingly structural, not personal — AI-driven restructuring affects entire functions, not just individuals.
  • Online job applications have a staggeringly low conversion rate (2%). Invest in human networks instead.
  • Upskilling is not optional — it is the direct counter to automation. Target skills the companies cutting jobs are simultaneously buying.
  • A well-crafted CV leads with impact, not job descriptions. Ten seconds is all a recruiter typically gives it.
  • Mental health is not a soft footnote — it is the precondition for effective job searching. Neglect it and every other step suffers.
  • The instinct to withdraw during a layoff is natural and almost always counterproductive. Visibility and connection are what accelerate re-employment.

Every résumé in circulation today once belonged to someone who was starting over. Success is not a checklist, and a career is not a competition scored against other people's LinkedIn timelines. Make room for fear. Make room for uncertainty. Just do not let either of them take the wheel.

The job market will reset — it always does. The question is only whether you will be ready when it does.


See All Articles on Layoffs    « Previously    Tags: Layoffs, Interview Preparation

Interview at Deloitte for Lead AI Engineer Role (2024 Jul 30)

Index For Interviews Preparation
<<< Previously    Next >>>
Interview Reconstruction & Critique

Lead AI Engineer @ Deloitte

One-sided transcript analysis — candidate answers reconstructed with inferred questions

Candidate: Ashish
Company: Cognizant
Experience: 10 years (8 DS + 2 SE)
Location: Gurugram

Organized Transcript

Domain
Data Science / GenAI
Languages
Python · JavaScript · SQL
Frameworks / Tools
scikit-learn · Keras · TensorFlow · HuggingFace · OpenAI · PySpark · Azure · LangChain / GenAI
Topic 01 · Introduction

Tell me about yourself and your experience.

Answer

10 years total experience at Cognizant — 8 years in data science, 2 years in software engineering. Core expertise covers machine learning, deep learning, and NLP. Currently working on generative AI projects. Languages include Python (self-rated 7/10), JavaScript, and SQL. Python packages: scikit-learn, Keras, TensorFlow, HuggingFace, and OpenAI/GPT models.

Topic 02 · Domain Coverage

Have you worked in computer vision or time series forecasting?

Answer

No experience in computer vision. Studied time series forecasting but hasn't worked on it professionally. Has hands-on experience with RNN and LSTM architectures.

Topic 03 · Deep Learning — LSTM

Can you explain the LSTM architecture and how it differs from RNN?

Answer

LSTM stands for Long Short-Term Memory — a type of neural network cell embedded in a deep neural network. RNNs (Recurrent Neural Networks) were simpler but suffered from the vanishing gradients problem, making it difficult to capture long-range dependencies. LSTMs solved this via a memory state cell and a forget gate that learns, at each time step, whether to retain or discard a piece of information. After LSTMs, GRU (Gated Recurrent Unit) was introduced as a simpler variant, followed by bidirectional LSTM architectures that process sequences in both directions.
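
For readers who want to see the distinction concretely, here is a minimal Keras sketch — the layer sizes and the toy task are illustrative, not taken from the interview — showing how the architectures named above are swapped in as drop-in recurrent layers:

import tensorflow as tf

# Toy binary sequence classifier; only the recurrent layer changes between variants.
def build_model(recurrent_layer):
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=10_000, output_dim=64),  # token ids -> vectors
        recurrent_layer,                                             # sequence encoder
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

rnn    = build_model(tf.keras.layers.SimpleRNN(64))  # vanilla RNN: vanishing gradients on long sequences
lstm   = build_model(tf.keras.layers.LSTM(64))       # gated memory cell captures long-range dependencies
gru    = build_model(tf.keras.layers.GRU(64))        # simpler gated variant of the LSTM
bilstm = build_model(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)))  # processes both directions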

Topic 04 · ML Project — Anomaly Detection (Amex)

Tell me about a traditional ML project. What data volumes have you worked with?

Answer

Led an anomaly detection project for American Express. Data: 500 million records from a mainframe system, ingested into Amex's Cornerstone Cloud using PySpark. Three models were explored during the project lifecycle: (1) Statistical modeling based on log-likelihood to flag low-probability data points. (2) Autoencoder (POC phase) — anomalies yield high reconstruction error since the model reconstructs them toward the trained distribution. (3) Isolation Forest — the final choice: a tree-based approach where anomalous points can be isolated with fewer splits, offering better interpretability, lower latency, and good accuracy. K-medians (distance-based) was also tried but rejected due to high latency.
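
As a concrete reference for the final approach, here is a minimal scikit-learn sketch — the synthetic data and parameters are illustrative, not the Amex configuration:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=5, scale=1, size=(1_000, 2))    # dense cluster of ordinary points
outliers = rng.uniform(low=-10, high=20, size=(10, 2))  # scattered anomalies
X = np.vstack([normal, outliers])

# Outliers need fewer random splits to isolate, so they get shorter average
# path lengths across the trees and are labeled -1 by the model.
model = IsolationForest(contamination=0.01, random_state=42).fit(X)
labels = model.predict(X)  # 1 = normal, -1 = anomaly
print((labels == -1).sum(), "points flagged as anomalous")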

Topic 05 · Stakeholder Communication

How did you explain model selection to a non-technical governance team?

Answer

Used a physical paper-folding analogy to explain Isolation Forest: a point surrounded by other points requires many folds to isolate, whereas an outlier point can be isolated with a single fold. For the autoencoder, explained reconstruction error by example — if a model is trained on values clustering around 5, a value of 10 will be reconstructed back to ~5, producing a high reconstruction error (~5), versus ~1–2 for normal points, which signals an anomaly.

Topic 06 · PySpark

How comfortable are you with PySpark?

Answer

Comfortable with PySpark overall, though slightly out of touch with RDDs. PySpark offers three interfaces: Resilient Distributed Datasets (RDDs), SQL DataFrames, and the Pandas API on PySpark. Deployment in the Amex project leveraged PySpark UDFs (User Defined Functions) integrated with Hive.
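
A hedged sketch of the three interfaces, plus a toy UDF of the kind the deployment reportedly used — the column names and scoring logic are placeholders, not the project's code:

from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

spark = SparkSession.builder.appName("interfaces-demo").getOrCreate()

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])  # SQL DataFrame API
rdd = df.rdd             # the same data exposed as a low-level RDD
pdf = df.pandas_api()    # Pandas API on Spark: pandas-like syntax, distributed execution

# A UDF analogous to the ones deployed alongside Hive in the Amex project.
double_it = udf(lambda x: x * 2, IntegerType())  # placeholder logic, not the real model
df.withColumn("score", double_it(df["id"])).show()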

Topic 07 · Deployment Experience

How have you deployed ML models? Walk me through a deployment architecture.

Answer

Deployment was handled by a dedicated team; personal exposure is limited to architecture understanding. Described a generative AI architecture: an OpenAI GPT-4 model exposed via an Azure endpoint → consumed by Azure Functions (business logic / API layer) → connected to a vector database or SQL database → surfaced via a web, mobile, or bot interface. In the Amex project, deployment involved Hive + PySpark UDFs but the candidate was not directly responsible.

Topic 08 · Current Projects

What are you currently working on?

Answer

Two GenAI projects at Cognizant (non-billable): (1) Jira Epic & Story Writer for Elevens Health (a health insurance company) — uses a requirements document and historical Jira dumps as RAG context to generate epics and user stories via an LLM. Also leverages a knowledge graph and vector database. Future roadmap includes fine-tuning the model on prompt-completion pairs. (2) A second GenAI project, not detailed in the transcript.

Topic 09 · ML Concepts — Feature Scaling

When is feature scaling required, and which algorithms need it?

Answer

Feature scaling is required when features are on different scales. Distance-sensitive algorithms — KNN, SVM, and neural networks — will incorrectly treat large-magnitude features (e.g., salary in the thousands) as more important than small-magnitude features (e.g., age in the tens). Scaling normalizes features to the same range (e.g., 0 to 1 via min-max). Tree-based algorithms like Decision Trees and Random Forests do not require feature scaling.
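
A short scikit-learn illustration of the point — the numbers are made up for the example:

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Two features on very different scales: age (tens) and salary (tens of thousands).
X = np.array([[25, 40_000], [32, 85_000], [47, 120_000], [51, 60_000]], dtype=float)

print(MinMaxScaler().fit_transform(X))    # each column squashed to [0, 1]
print(StandardScaler().fit_transform(X))  # each column to zero mean, unit variance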


Inferred Interviewer Questions

The following questions were reconstructed from the candidate's responses. They are presented in order of likely occurrence.

Q1 Tell me about yourself and your background.
Q2 Have you worked in computer vision? What about time series forecasting?
Q3 Can you explain the LSTM architecture? How does it improve upon vanilla RNN?
Q4 Have you deployed ML models? Walk me through how you would approach deployment.
Q5 How would you rate your Python proficiency?
Q6 Tell me about a traditional ML project you've worked on. What kind of data volumes were involved?
Q7 Are you comfortable with PySpark? Which APIs have you used?
Q8 Why did you choose Isolation Forest over the autoencoder and K-medians? How did you explain the model to the governance team?
Q9 What are you currently working on? Why are you looking for a change?
Q10 When is feature scaling required, and which algorithms are affected?

Critique & Recommended Answers

Scorecard: 3 Critical Gaps · 4 Partial Answers · 3 Solid Answers
⚠ Communication Pattern — Filler Word Overload

The transcript contains an extremely high density of "basically" (used 30+ times), "so" as a sentence opener (25+ times), and "like" as a filler (~15 times). For a Lead AI Engineer role, communication clarity and executive presence are evaluated as seriously as technical depth. Excessive filler words project uncertainty and dilute the impact of otherwise good answers. A rule of thumb: if a word can be removed without changing meaning, remove it before speaking.


Self-Introduction
Partial

The introduction listed skills and years competently, but for a Lead AI Engineer role the evaluator is listening for leadership signals: team sizes led, cross-functional ownership, business outcomes delivered. Listing tools and packages up front reads like a junior profile. There was also no "hook" — something memorable that positions you as a distinct candidate. The self-rating of Python as 7/10 is a common trap; interviewers will probe for the missing 3 points and it seeds doubt unnecessarily.

Stronger Opening
"I'm Ashish, with 10 years at Cognizant — 8 in data science, 2 in software engineering. Over the last 3 years I've focused on production GenAI systems, most recently leading a RAG-based automation pipeline for a Fortune 500 health insurer and an anomaly detection system processing 500 million records for American Express. I work across the full lifecycle — from problem framing and model selection to advising stakeholders on interpretability — and I have hands-on depth in Python, PySpark, and the OpenAI/Azure GenAI stack. I'm currently a senior data scientist and technical lead at Cognizant, looking to move into a role where I can own the end-to-end AI architecture."
LSTM Architecture
Partial

The conceptual explanation was accurate — vanishing gradients, forget gate, GRU, bidirectional extensions. But a Lead-level candidate is expected to also address the practical trade-offs: LSTM vs Transformer for sequence tasks, when to use which today (transformers have largely supplanted LSTMs for NLP), and awareness of attention mechanisms as the key architectural leap. The answer also rambled without a clear structure — no clear beginning, middle, or end. The interviewer likely wanted a concise, structured walkthrough, not a stream of consciousness.

Stronger Answer
"LSTM — Long Short-Term Memory — was designed to fix the vanishing gradient problem in vanilla RNNs, which struggled to learn dependencies across long sequences. The core addition is a cell state, a persistent memory, controlled by three gates: the forget gate (what to discard), the input gate (what to write), and the output gate (what to expose). GRU later simplified this to two gates with comparable performance. Bidirectional variants process sequences in both directions, useful for tasks like NER. Today, for most NLP tasks, transformers with self-attention have replaced LSTMs because they parallelize better and capture longer-range dependencies without sequential bottlenecks — though LSTMs still have a place in real-time, low-latency applications where autoregressive transformers are too slow."
Deployment Experience
Critical Gap

This is the most damaging answer in the interview for a Lead AI Engineer role. Opening with "I am not aware" and "there was a team handling deployments" is a red flag at this seniority level. A Lead AI Engineer is expected to own or deeply understand the deployment pipeline. The Azure architecture described was generic and at a very high level — any engineer who has read one blog post on Azure could say the same. Missing entirely: MLOps tooling (MLflow, BentoML, Kubeflow), containerization (Docker, Kubernetes), model monitoring, CI/CD pipelines, A/B testing models, and model drift detection. These are table-stakes topics for the role.

Stronger Answer
"My deployment involvement has grown over the years. On the Amex project, the model artifacts were packaged as PySpark UDFs deployed on Hive — I worked closely with the data engineering team on that interface even though they owned the infra layer. In my recent GenAI projects I've been more directly involved: we containerize the inference layer with Docker, expose it through Azure Functions as REST endpoints, and use Azure AI Studio for model versioning. I track model behavior post-deployment using a lightweight logging layer that flags input distribution shifts — basic drift detection. For future projects I'm actively upskilling on MLflow for experiment tracking and Kubernetes-based serving for higher-throughput scenarios."
Amex Anomaly Detection Project
Strong

This was your best answer. The data volume (500M records), multi-model comparison (autoencoder → K-medians → Isolation Forest), the decision rationale based on interpretability and latency, and the stakeholder communication (paper-folding analogy) all told a coherent story. However, it lost points because it was delivered in a very unstructured, stream-of-consciousness manner. Using the STAR framework (Situation, Task, Action, Result) would have made this a standout answer. Also missing: what was the final precision/recall or anomaly rate detected? Quantified outcomes make answers memorable.

Enhancement
Add a closing quantified result: "The deployed Isolation Forest model flagged approximately X% of transactions as anomalous, which the business team validated against known fraud cases, achieving a precision of Y%. This replaced a fully manual review process that was covering only a fraction of that volume." Even approximate numbers demonstrate ownership.
Revealing Non-Billable Status
Critical Gap

Voluntarily disclosing that you are on a non-billable project and that it affects your performance rating is a significant strategic error in an interview. It signals to the interviewer that you may be on the bench, which is interpreted as either a supply-demand mismatch in your skills or an availability concern. Even if the context is nuanced (which it is — you are a manager on internal work), the framing you used led with the negative. Never volunteer information that a recruiter will interpret through a worst-case lens.

Reframing
"I'm currently leading two internal GenAI initiatives at Cognizant in a senior data scientist + tech lead capacity. These are strategic internal projects, but I'm looking for an external-facing role where I can directly own AI product outcomes and work with a broader scope of problems — specifically in LLM application engineering and MLOps, which is where I see the most exciting growth right now."
Current Project — Jira Epic & Story Writer
Partial

The project itself is relevant and interesting for a Lead AI Engineer role — RAG with LLMs, knowledge graphs, vector databases, and a fine-tuning roadmap. But the delivery was extremely disorganized. The explanation went back and forth between the architecture, the business problem, future roadmap, and technical limitations without a logical flow. The candidate also never stated a clear outcome or current status metric. For a lead role, you should be able to describe any project in 90 seconds using: Problem → Approach → Architecture → Outcome.

Structured Version
"We're building a Jira Epic and Story Writer for Elevens Health, a health insurance company. The problem: their business analysts were spending significant time writing detailed Jira user stories and epics that were inconsistent in quality. Our approach: a RAG pipeline where a requirements document and historical Jira dumps (vectorized in a vector DB with knowledge graph enrichment) are passed as context to an LLM, which then generates structured epics and stories. We're currently in pilot — the system reduces story-writing time by roughly 60% and we're collecting human feedback to build a fine-tuning dataset for the next phase."
Feature Scaling
Solid

Accurate and clear — correctly identified that distance-based algorithms (KNN, SVM, neural networks) are sensitive to scale, gave a concrete example (age vs salary), and correctly excluded tree-based algorithms. Minor gaps: didn't distinguish between Min-Max scaling and StandardScaler (Z-score normalization) and when each is preferred. Didn't mention that regularization (L1/L2) in linear models also makes them sensitive to feature scale. But for the question asked, this was a satisfactory answer.

Enhancement
Add: "The choice of scaler also matters — Min-Max scaling (0 to 1) is useful when you know the bounds and the distribution isn't heavily skewed. StandardScaler (zero mean, unit variance) is preferred when the feature follows a roughly Gaussian distribution or when using regularized linear models like Ridge/Lasso, since regularization penalizes coefficient magnitude."
What Was Missing — Lead-Level Topics Never Addressed
Critical Gap

For a Lead AI Engineer position, the interview conspicuously lacked any discussion of: (1) System design and scalability — how would you architect an ML system for 10x load? (2) Team leadership — how many engineers have you mentored or managed? (3) LLM evaluation — how do you evaluate RAG pipelines for hallucination, faithfulness, and retrieval quality? (4) Cost optimization — GPT-4 API calls are expensive; how do you optimize prompts or use smaller models intelligently? (5) Responsible AI — bias detection, fairness considerations, especially important in insurance use cases. None of these were raised, likely because the candidate's answers didn't invite those threads. A Lead-level candidate should steer conversations toward these domains proactively.

Proactive Topics to Introduce
In your Jira project answer, add: "One challenge we're managing is hallucination in the generated stories — we implemented an evaluation layer using RAGAS that scores faithfulness and context relevance before stories are presented to the user." In the deployment answer: "I'm also thinking about cost — GPT-4 is expensive at scale, so we route simpler requests to GPT-3.5 and only invoke GPT-4 for complex multi-requirement prompts." These signals differentiate a lead from a senior IC.
INTERVIEW ANALYSIS REPORT · LEAD AI ENGINEER
Candidate: Ashish · Cognizant

Index For Interviews Preparation
<<< Previously    Next >>>

Thursday, April 30, 2026

Interview at Bechtel for AI Architect Role (2026 Mar 19)

Index For Interviews Preparation    <<< Previously

AI Architect Interview – Structured Report

Based on one-sided candidate recording  |  Role: AI Architect  |  19 Mar 2026

Section 1 – Organized One-sided Transcript (Candidate’s Answers)

The following is the candidate’s side of the conversation, grouped by topic and lightly cleaned of filler words for readability while preserving the original ideas.

1.1 Introduction & Project Overview

I’m with Accenture, working on a project called AIOBI — a digital data analytics and business intelligence platform driven by natural-language queries. It’s an agentic system with sub-agents — a RAG agent, a Text-to-SQL agent, and a visualization agent — all managed by an orchestrator and built using LangGraph. The RAG backend uses Azure AI Search (vector search), and the Text-to-SQL backend is PostgreSQL.

The architecture is straightforward: databases at the back (vector DB for RAG, PostgreSQL for Text‑to‑SQL), an LLM like GPT‑5.1 in the middle, and an API wrapper — we used FastAPI. Frontend in React or Next.js.

1.2 Orchestrator Behaviour

The orchestrator takes a natural language query and classifies whether it should go to the Text‑to‑SQL agent or the RAG agent. We give it a role, task description, input/output descriptions. The output is a routing decision — like an if‑else node in LangGraph. We also pass examples: some indicating the knowledge base (PDFs for RAG) and some showing sample queries that should be routed to each agent.
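
A minimal LangGraph routing sketch consistent with the "if-else node" described above. The keyword stub stands in for the LLM classifier, and every name here (state fields, node names) is illustrative rather than taken from the project:

from typing import TypedDict
from langgraph.graph import StateGraph, END

class NLQState(TypedDict):
    query: str
    route: str
    answer: str

# In the real system this node is an LLM call given a role, task description,
# and routing examples; a keyword heuristic keeps the sketch self-contained.
def router(state: NLQState) -> NLQState:
    is_sql = any(w in state["query"].lower() for w in ("average", "count", "total"))
    return {**state, "route": "text_to_sql" if is_sql else "rag"}

def text_to_sql_agent(state: NLQState) -> NLQState:
    return {**state, "answer": f"SQL path handled: {state['query']}"}

def rag_agent(state: NLQState) -> NLQState:
    return {**state, "answer": f"RAG path handled: {state['query']}"}

graph = StateGraph(NLQState)
graph.add_node("router", router)
graph.add_node("text_to_sql", text_to_sql_agent)
graph.add_node("rag", rag_agent)
graph.set_entry_point("router")
graph.add_conditional_edges("router", lambda s: s["route"],   # the "if-else node"
                            {"text_to_sql": "text_to_sql", "rag": "rag"})
graph.add_edge("text_to_sql", END)
graph.add_edge("rag", END)

app = graph.compile()
print(app.invoke({"query": "What is the average temperature?", "route": "", "answer": ""}))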

1.3 Text‑to‑SQL Agent Flow

The flow in points:

  1. Input node receives the query.
  2. Rewriting node: LLM adds context using tables/columns. If something is unclear, it pushes back to the UI for the user to clarify. If clear, it converts the raw NL into a meta‑prompt.
  3. Meta‑prompt is passed to the Text‑to‑SQL agent, formatted with all needed information to generate the SQL without ambiguity.
  4. SQL is tested in two ways:
    • Static check: run the query with a WHERE 1=0 (or WHERE 1=1 plus a LIMIT) clause to validate syntax and column references without scanning real data — see the sketch after this list.
    • Dynamic test: actually execute with a small LIMIT (1–3 rows) and inspect the results.
  5. Before final execution, we ask the LLM: “Does this query meet all requirements of the original user request?”
  6. If errors occur, we send them back to the LLM in a feedback loop (retry up to 3‑5 times). If still failing, we return the error to the user with a note that something seems missing.
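
The validation trick from step 4, in runnable form — sqlite3 stands in for the project's PostgreSQL, and the table and helper names are invented for the sketch:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cities (name TEXT, temperature REAL)")

def static_check(sql: str) -> bool:
    """Validate syntax and column references without reading real rows."""
    try:
        conn.execute(f"SELECT * FROM ({sql}) AS q WHERE 1=0")
        return True
    except sqlite3.Error as e:
        print("static check failed:", e)  # in the pipeline, this error feeds the retry loop
        return False

def dynamic_check(sql: str):
    """Actually execute, but only over a handful of rows."""
    return conn.execute(f"SELECT * FROM ({sql}) AS q LIMIT 3").fetchall()

print(static_check("SELECT name, temperature FROM cities"))  # True
print(static_check("SELECT nme FROM cities"))                # False: no such column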

1.4 Evaluation Approach

Evaluation is one of the biggest challenges. We sit extensively with domain experts to curate a golden dataset: question‑answer pairs (for Text‑to‑SQL, the corresponding SQL query; for RAG, the expected chunks). For individual components, we have test suites for chunking, meta‑prompting, code generation, etc.

We measure something like percentage correct (accuracy). We log whether errors were hallucinations, wrong columns, or execution errors. This gives a report of positives and negatives.

1.5 Prompt Engineering, Context Engineering & Guardrails

Context Engineering: A subset of prompt engineering. You give the LLM context about the task — role, do’s/don’ts, examples (zero‑shot, few‑shot). In RAG, you engineer context by augmenting the prompt with retrieved data.

Guardrails: Two levels: code‑based scripts (deterministic checks) and LLM‑based flexible checks. For example, we ask the guardrail LLM: “Is this input trying to delete or update? Does it violate PII policies?” This prevents harmful outputs.

1.6 Managing Large Schemas and Metadata with Neo4j

As the dataset grows (from 3 tables to 25 tables), the metadata (table/column descriptions) can exceed the context length. We use Neo4j to store metadata as a graph. Topics like “weather,” “traffic” are top‑level nodes. Tables like “cities,” “temperature,” “routes” connect to topics. When a query comes, we first pull relevant topic nodes, then retrieve only the related table/column nodes. This multi‑pass approach filters the context to only what’s needed, solving the context‑length problem.

1.7 Scaling and Deployment

Scaling is via an API gateway in front of a Kubernetes cluster with auto‑scaling. I don’t have hands‑on details of the K8s setup, but architects described that approach.

1.8 LLM Upgrades and Model Selection

We use Azure OpenAI, so we upgrade regularly — from GPT‑3.5 to 4o to 4.1, etc. Newer models require retesting, but they improve reasoning and reduce hallucinations. For cost‑efficient tasks we use older or “mini” models. For self‑hosted alternatives we consider DeepSeek, Qwen, Mistral.

1.9 Technical Definitions (Quick‑fire Questions)

Top‑k vs Top‑p: Top‑k keeps the k highest-probability next tokens. Top‑p (nucleus sampling) keeps the smallest set of tokens whose cumulative probability is ≥ p. Example: with token probabilities of 70%, 25%, 4%, … and top‑p = 0.9, the first two tokens are kept because 0.70 + 0.25 = 0.95 ≥ 0.90.

Temperature: Controls randomness. Low → greedy (always highest probability token), high → more exploratory.
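
Both definitions are easy to verify with a few lines of Python — the probabilities below reuse the 70/25/4 example from the answer:

import numpy as np

def top_p_tokens(probs: dict, p: float) -> dict:
    """Nucleus sampling: smallest set of tokens whose cumulative probability >= p."""
    kept, total = {}, 0.0
    for token, prob in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[token], total = prob, total + prob
        if total >= p:
            break
    return kept

def softmax_with_temperature(logits, temperature=1.0):
    """Low temperature -> near-greedy; high temperature -> flatter, more exploratory."""
    scaled = np.asarray(logits) / temperature
    e = np.exp(scaled - scaled.max())
    return e / e.sum()

print(top_p_tokens({"the": 0.70, "a": 0.25, "an": 0.04, "of": 0.01}, p=0.9))  # keeps two tokens
print(softmax_with_temperature([2.0, 1.0, 0.1], temperature=0.2))  # sharpens toward the top token
print(softmax_with_temperature([2.0, 1.0, 0.1], temperature=2.0))  # spreads probability out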

1.10 SQL Join Types

Left join: all rows from left table, plus matching rows from right table; non‑matching right side gets NULLs.
Right join: all rows from right table, plus matching rows from left.
Full outer join: all rows from both tables, with NULLs where no match exists.

1.11 Fibonacci Coding Exercise

The candidate wrote pseudocode in a thinking‑aloud style:

“Fibonacci is f(n) = f(n‑1) + f(n‑2). We’ll start from 0 and 1. I think a list would work. For i in range(n): if i==0: append 0; elif i==1: append 1; else: append list[-1] + list[-2]. I tried to run it and it gave output but needed debugging. Reason it didn’t print correctly: range wasn’t set up properly.”

1.12 Wrap‑up: Career Motivation

“I’ve been on this project for 1.5 years. It’s now in maintenance mode — mainly ServiceNow tickets. I want to explore more cutting‑edge agentic stuff, not just maintain what’s built.”

Section 2 – Reconstructed Interviewer Questions

Based on the candidate’s responses, the following questions were likely asked. They are presented in a logical order, paired with the relevant answer summary.

Q1: “Please introduce your current project and role.”
(See 1.1) The candidate described AIOBI, an agentic BI platform using NLQ, with RAG, Text‑to‑SQL, orchestrator, LangGraph, Azure AI Search, PostgreSQL.
Q2: “What is the system architecture?”
(See 1.1‑1.2) Backend DBs, LLM (GPT‑5.1), FastAPI middleware, React frontend; orchestrator classifies and routes queries.
Q3: “Can you walk me through how the Text‑to‑SQL agent works?”
(See 1.3) Detailed flow: rewriting node → meta‑prompt → SQL generation → static/dynamic tests → feedback loop.
Q4: “What challenges have you faced, especially around evaluation?”
(See 1.4) Curating golden datasets with domain experts, multi‑component test suites, accuracy metrics.
Q5: “How do you handle prompt changes without derailing outputs?”
The candidate alluded to iterative tuning and testing but did not give a structured answer (later critique).
Q6: “What is context engineering and how does it differ from prompt engineering?”
(See 1.5) Described context engineering as a subset; providing role, examples, do’s/don’ts, RAG context augmentation.
Q7: “How do you implement guardrails?”
(See 1.5) Two‑level: deterministic code‑based checks (e.g., for PII) and flexible LLM‑based checks (policy violations).
Q8: “What are the metrics you use for evaluating the Text‑to‑SQL and RAG agents?”
(See 1.4 & later parts) Accuracy/percentage correct. Mentioned hallucination, missing columns, wrong results. Did not name specific metrics like BLEU or Execution Accuracy.
Q9: “How do you deal with large database schemas when building prompts?”
(See 1.6) Neo4j metadata graph, topic‑based retrieval of relevant tables/columns to stay within context length.
Q10: “What about scalability and deployment?”
(See 1.7) API gateway + Kubernetes auto‑scaling, though admitted limited personal hands‑on.
Q11: “How do you decide which LLM version to use, and how do you manage upgrades?”
(See 1.8) Azure OpenAI partnership, upgrade to latest after retesting; older/mini models for cost; open‑source fallbacks like DeepSeek.
Q12: “Can you explain top‑k, top‑p and temperature?”
(See 1.9) Provided definitions with numerical example for top‑p.
Q13: “What are the differences between left, right, and outer joins in SQL?”
(See 1.10) Gave a correct, concise explanation.
Q14: (Coding exercise) “Write a Python function to generate the Fibonacci sequence up to n terms, using recursion.”
(See 1.11) Candidate attempted iterative list approach with debug commentary; did not use recursion as apparently requested.
Q15: “What is your motivation for leaving your current role?”
(See 1.12) Wants to move from maintenance to innovative agentic AI work.

Section 3 – Critique and Improved Answers

Below is a constructive evaluation of the candidate’s responses, highlighting weaknesses and offering a more polished, architect‑level answer.

3.1 Overall Delivery NEEDS WORK

  • Excessive fillers & rambling: The transcript contained many “yeah,” “I mean,” “like,” and tangential loops. An AI Architect must communicate with clarity and conciseness.
  • Lack of structure: Answers often wandered. For example, explaining the Text‑to‑SQL flow jumped between validation, rewriting, and guardrails without a clear narrative.
  • Vagueness on depth: When asked about scaling, the candidate said “I lack details” — unacceptable for an architect role. Better to say “While I haven’t provisioned the K8s cluster myself, the standard pattern we follow is…” and then describe the pattern confidently.
Better approach: Use the STAR method (Situation, Task, Action, Result) for complex descriptions. Speak slowly, think, then deliver a well‑formed paragraph without fillers.

3.2 Architecture Walkthrough FAIR

The candidate mentioned LangGraph, FastAPI, React, but left out crucial architectural diagrams and trade‑offs. As an architect, one should discuss why these choices were made.

Improved answer: “We selected a modular agentic architecture with LangGraph for its explicit state‑machine control. The orchestrator is a gating model that pre‑classifies NL inputs into RAG or Text‑to‑SQL branches using few‑shot prompts and a routing function. Each agent is encapsulated behind a FastAPI microservice, deployed on AKS for scale. We use Azure AI Search for vector retrieval (using Ada embeddings) and PostgreSQL for transactional SQL data. The frontend is a Next.js app that calls a unified /nlq endpoint. For observability, we integrate Phoenix/OpenTelemetry to track token usage, latency, and guardrail violations.”

3.3 Evaluation Answer INSUFFICIENT

The candidate only mentioned “accuracy” and “golden dataset”. An architect should know specific metrics: Execution Accuracy (EX), Exact Set Match (ESM), ROUGE‑L or BLEU for SQL, validation‑set coverage, hallucination rate, and for RAG, context precision/recall, faithfulness, answer relevancy. The answer lacked method naming and benchmark references.

Better answer: “For Text‑to‑SQL, we use Execution Accuracy (does the SQL produce the correct result set on a held‑out test DB) and Exact Set Match (comparing the result rows directly). We also compute SQL‑specific BLEU and ROUGE‑L against reference queries. For RAG, we measure context precision, context recall, faithfulness, and answer relevancy using LLM‑as‑a‑judge. We curate a golden dataset of 500+ question‑SQL‑answer triples. Additionally, we do component‑wise evaluations: chunking strategy (Hit Rate on top‑k), meta‑prompt accuracy, and visualization code correctness using unit test suites.”
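
A minimal sketch of Execution Accuracy over a golden dataset — sqlite3 stands in for a held-out test database, and the queries are toy examples, not benchmark data:

import sqlite3

def execution_accuracy(pairs, conn) -> float:
    """Fraction of generated queries whose executed result set matches the gold query's."""
    correct = 0
    for gold_sql, generated_sql in pairs:
        try:
            gold = set(conn.execute(gold_sql).fetchall())
            pred = set(conn.execute(generated_sql).fetchall())
            correct += gold == pred     # set comparison ignores row order
        except sqlite3.Error:
            pass                        # an execution error simply counts as wrong
    return correct / len(pairs)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INT)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

pairs = [("SELECT x FROM t WHERE x > 1", "SELECT x FROM t WHERE x >= 2")]
print(execution_accuracy(pairs, conn))  # 1.0 — different SQL text, same result set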

3.4 Context Engineering vs Prompt Engineering DECENT

The candidate correctly called context engineering a subset, but the distinction was fuzzy. He should have explained that prompt engineering is the overarching practice of designing the entire prompt structure, while context engineering specifically deals with injecting relevant external information (retrieved chunks, metadata, user intent tags).

Better answer: “Prompt engineering covers the system message, instruction templates, output format, and few‑shot examples. Context engineering is the discipline of selecting and formatting the dynamic contextual data that augments the prompt — such as RAG‑retrieved chunks, table schemas for Text‑to‑SQL, or conversation history. It’s about what information you pack and how you serialize it to minimise the gap between the model’s training distribution and the inference need.”

3.5 Guardrails Answer ADVANCED

The answer touched on code‑based vs LLM‑based guardrails, which is good. But an architect should mention concrete libraries (Guardrails AI, NVIDIA NeMo Guardrails) and cite examples like PII scrubbing, SQL injection prevention, and output schema enforcement. Also, the candidate missed the importance of input guardrails (e.g., refusing “DROP TABLE” instructions).

Better answer: “We implement a layered guard strategy. On the input side, a regex‑based filter blocks dangerous keywords (DROP, DELETE) and an LLM classifier detects jailbreak attempts. On the output, we use a PII anonymizer library (like Presidio) and a second LLM call that validates the response against our content policy. We also use structured output (JSON mode or function calling) to enforce that SQL statements don’t contain malicious clauses. For the Text‑to‑SQL agent, before execution we run a static analysis that ensures only SELECT queries pass through.”
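
A sketch of the deterministic, code-based layer only — the LLM-based layer would be a second model call and is omitted here. The patterns and messages are illustrative:

import re

# Input guardrail: block mutating intents before anything reaches the LLM.
BLOCKED = re.compile(r"\b(drop|delete|update|insert|alter|truncate)\b", re.IGNORECASE)

def input_guardrail(user_query: str) -> str:
    if BLOCKED.search(user_query):
        raise ValueError("Only read-only requests are allowed.")
    return user_query

# Output guardrail: enforce the "only SELECT passes through" rule before execution.
def sql_output_guardrail(sql: str) -> str:
    if not sql.strip().lower().startswith("select"):
        raise ValueError("Generated statement is not a SELECT; refusing to execute.")
    return sql

print(input_guardrail("What was the average temperature in March?"))
print(sql_output_guardrail("SELECT AVG(temperature) FROM weather"))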

3.6 Large Schema Handling with Neo4j GOOD CONCEPT, POOR EXPLANATION

The idea of a topic‑driven metadata graph is innovative and architect‑level. However, the candidate struggled to articulate it clearly, using confusing “hierarchy in a graph” metaphors and failing to mention standard techniques like schema‑linking and query‑to‑schema tokenizer alignment. An architect would also mention alternatives like table‑selection via dense retrieval and why Neo4j was chosen (explicit relationship traversal, no need for embedding drift).

Better answer: “We built a semantic metadata graph in Neo4j where nodes represent topics (weather, traffic), tables, and columns, with edges for belongs‑to, references. When a query arrives, we perform a two‑hop traversal: first, we identify topic nodes relevant to the query using keyword matching and vector similarity on topic descriptions; then we traverse the graph to collect only the tables and columns linked to those topics. This prunes the schema context from ~10k tokens for a 25‑table database down to under 2k tokens. It also handles schema evolution gracefully — new tables just get new nodes. Compared to dense retrieval, the graph ensures consistent, deterministic schema linking, which is crucial for SQL accuracy.”
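
A hedged sketch of the two-hop traversal using the official neo4j Python driver. The connection details, node labels, and relationship names (HAS_TABLE, HAS_COLUMN) are assumptions made for the illustration, not the project's actual schema:

from neo4j import GraphDatabase

# Assumed schema: (:Topic {name})-[:HAS_TABLE]->(:Table {name})-[:HAS_COLUMN]->(:Column {name})
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

TWO_HOP = """
MATCH (t:Topic)-[:HAS_TABLE]->(tab:Table)-[:HAS_COLUMN]->(col:Column)
WHERE t.name IN $topics
RETURN tab.name AS table_name, collect(col.name) AS columns
"""

def schema_context(topics):
    """Return only the tables/columns reachable from the matched topic nodes."""
    with driver.session() as session:
        return [record.data() for record in session.run(TWO_HOP, topics=topics)]

# Topic selection (keyword match / vector similarity) happens upstream; here it's given.
print(schema_context(["weather", "traffic"]))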

3.7 Fibonacci Coding Exercise MISMATCHED

The interviewer explicitly said “you have to use recursion.” The candidate wrote an iterative solution with a list and debugged it aloud. This shows a failure to listen and to translate a requirement into code. The correct recursive approach — with memoization, since naive recursion is exponential — would be:

Correct implementation:
from functools import lru_cache

@lru_cache(None)           # memoization: collapses the naive O(2^n) recursion to O(n)
def fib(n):
    if n < 2:              # base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib(n-1) + fib(n-2)

def fib_sequence(n):
    """First n Fibonacci numbers, matching the 'up to n terms' requirement."""
    return [fib(i) for i in range(n)]

print(fib_sequence(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
The candidate should have clarified the requirement (e.g., “first n numbers” vs “up to a maximum number”) and then presented a clean recursive solution, discussing time complexity and the importance of memoization.

3.8 SQL Joins SOLID

The explanation was accurate. However, the candidate hesitated and asked for the question to be repeated. For an architect, the immediate answer should have been crisp: “LEFT JOIN returns all rows from the left table and only the matches from the right; RIGHT JOIN is its mirror; FULL OUTER JOIN returns all rows from both, with NULLs where no match exists.” No need for the extra qualifiers. Still, the content was correct.

3.9 Career Motivation HONEST BUT NEGATIVE

“Maintenance mode… ServiceNow tickets” sounds like complaining. An architect should position the reason positively: “I’m eager to work on more complex, large‑scale agentic systems where I can apply my design skills to solve novel problems, and I see this role as aligned with that growth.”

Better answer: “My current project has moved into a steady‑state phase. I’m grateful for the learning, but I’m now seeking an opportunity where I can design next‑generation agentic architectures from scratch, tackle challenges like multi‑agent orchestration and autonomous tool use, and collaborate with a research‑focused team. Your opening seems perfectly aligned with that progression.”

3.10 Missing Topics GAPS

The candidate did not proactively discuss:

  • Observability tools: Only mentioned Phoenix and LangFuse vaguely. An architect should know OpenTelemetry, tracing, and metrics like faithfulness.
  • Cost optimization: No mention of token‑usage reduction, caching, semantic caching, or prompt compression.
  • Multi‑agent patterns: Although the project is multi‑agent, the candidate didn’t discuss debate, reflection, or plan‑execute patterns — all highly relevant for an agentic architect.
  • Security: Beyond guardrails, no discussion of RBAC, row‑level security in NLQ, or tenant isolation.

3.11 Suggested Talking Points for Future Interviews

  • Use concrete numbers: “Improved SQL accuracy from 82% to 93% by introducing table‑graph schema linking.”
  • Mention standard benchmarks: “We track BIRD, Spider, or WikiSQL metrics internally.”
  • Show impact: “Reduced prompt tokens per query by 60% using Neo4j metadata pruning.”
  • Discuss failure modes: “We handle ambiguous terms by engaging the user in a clarification loop, which improved first‑attempt success by 20%.”
  • Always bring the conversation back to architecture trade‑offs: why agentic vs single‑call, why LangGraph vs semantic kernel, why Azure vs AWS.

End of Report — Prepared by AI Interview Evaluator


Index For Interviews Preparation    <<< Previously

Wednesday, March 18, 2026

Interview at GSR Business Services for AI Architect Role (2026 Mar 11)

Index For Interviews Preparation    <<< Previously    Next >>>

Context about this post

...This is my way of helping others by sharing feedback I received from ChatGPT (5.4 Thinking) on an interview. I hope it can be useful to others who are preparing for interviews, especially in data science and AI/ML roles. Company: GSR Business Services Position: Lead AI Engineer / AI Architect



Me: Attached is a one-sided call transcript of an interview. It contains just the answers of the candidate (me). Can you please organize and structure it, and infer what questions the interviewer likely asked (placing them back into the organized structure)? AND IMPORTANTLY: Critique me and my answers.


Yes. I reconstructed the interview as best as possible from your one-sided transcript. A few of the questions are necessarily approximate because the audio text is short, noisy, and contains some unclear phrases like “DB cluster” and “WILP.” Everything below is inferred from your answers.

Reconstructed interview structure

1) Tell me about yourself

Likely interviewer question:
“Please introduce yourself.”

Your answer, organized:

  • You introduced yourself as Ashish.

  • You said you have 13 years of experience, with 11 years in the AI-ML space.

  • You mentioned experience in traditional ML, deep learning, data science, generative AI, agentic AI, and prompt engineering.

  • You highlighted Python as a strength, with some familiarity in SQL, PL/SQL, JavaScript, and Java.

  • You described your recent project as an Accenture offering involving agentic capabilities such as text-to-SQL, RAG, generic knowledge agent, and visualization agent, orchestrated by a router.

  • You also mentioned domain experience in finance, telecom, and edtech.

  • You ended with a personal note about reading and coding in your free time.

Critique:
This was your strongest answer in substance, but it was too long and not sharply structured. You had good material, but it came out as a stream rather than a crisp executive summary.

Better version of the same answer:
“I’m Ashish, with 13 years of overall experience and about 11 years focused on AI/ML. My core strengths are Python, machine learning, deep learning, and, more recently, generative AI and agentic AI systems. In my recent work, I contributed to an agentic platform involving text-to-SQL, RAG, knowledge agents, and visualization agents coordinated through routing/orchestration. I’ve worked across finance, telecom, and edtech, and I’m most comfortable in roles that combine hands-on model work with applied business problem-solving.”


2) What AI/ML areas or algorithms are you comfortable with?

Likely interviewer question:
“What algorithms or ML areas are you comfortable with?”

Your answer, organized:

  • You said supervised and unsupervised learning are clear to you.

  • You said reinforcement learning is weaker for you.

  • You listed linear regression, decision trees, SVM, and random forest.

  • You said you are hands-on with almost all of them, mainly through scikit-learn APIs.

Critique:
This answer was acceptable, but too generic for an experienced candidate. Saying “I access them via scikit-learn API” undersells your capability. It sounds tool-level rather than concept-level. At 13 years of experience, the interviewer likely wants to hear:

  • when you use which model,

  • trade-offs,

  • evaluation criteria,

  • one real project example.

Better version:
“I’m strongest in supervised and unsupervised learning. I’ve worked with regression, tree-based methods, SVMs, and ensemble models such as random forests. My reinforcement learning exposure is limited compared to the others. In practice, I choose models based on data size, interpretability requirements, feature quality, latency constraints, and explainability needs. For tabular enterprise data, tree-based models have often given me a good balance of performance and interpretability.”


3) How well do you understand GPT, BERT, Gemini, or LLM internals?

Likely interviewer question:
“How deep is your understanding of GPT/BERT/Gemini?”
or
“Have you worked on model internals, or only via APIs?”

Your answer, organized:

  • You honestly said you do not know the internal details of GPT, BERT, or Gemini deeply.

  • You said you are familiar with using such models through APIs.

  • You added that you studied BERT when it came out.

  • You said you are familiar with Transformers, but not with the internals of the latest models.

Critique:
Your honesty was good. The problem was the framing. You repeated what you do not know several times. That creates a “defensive” impression. A better answer would briefly acknowledge the limit, then pivot quickly to what you do know and have done. Right now, the interviewer likely concluded: “He has mostly API-level LLM exposure.”

Better version:
“I haven’t worked on training foundation models from scratch or on proprietary internals of GPT-class systems. My experience is more on the applied side: using LLM APIs, prompt design, orchestration, RAG, text-to-SQL, and agent workflows. Conceptually, I understand the Transformer foundation and how these models are applied in production systems, but my hands-on depth is stronger in implementation and solution design than in pretraining internals.”

That answer is still honest, but sounds confident.


4) What is a Transformer?

Likely interviewer question:
“What is a Transformer?”
possibly followed by
“Which Transformer?” or “Are all Transformers the same?”

Your answer, organized:

  • You referred to the 2017 Google paper.

  • You said not every Transformer is the same and different organizations make modifications.

  • You mentioned Meta, Google, OpenAI, etc. modifying the architecture.

  • You said Transformer is open and modifiable, though you were unsure about patent/licensing details.

Critique:
This part was weak. Not because you lacked all knowledge, but because the answer drifted. The interviewer probably expected a technical explanation, something like self-attention, sequence modeling, parallelization, encoder/decoder, or why it replaced RNN/LSTM in many tasks. Instead, your answer moved toward licensing, patents, and company modifications. That likely made you sound unsure.

The line “I need to check if it is patented by Google or how it’s licensed” hurt the answer. In an interview, this sounds like you lost the thread of the question.

Better version:
“A Transformer is a neural network architecture introduced in the 2017 ‘Attention Is All You Need’ paper. Its key idea is self-attention, which lets the model weigh relationships between tokens in a sequence more effectively than older sequential architectures like RNNs. That made it much better for large-scale language tasks and parallel training. Modern LLMs are Transformer-based, though different organizations adapt the base architecture in different ways.”

That would have been a strong, clean answer.
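
To make “self-attention” more than a buzzword in that answer, here is a toy NumPy computation — no learned projections, so Q = K = V = X, which a real Transformer would replace with learned weight matrices:

import numpy as np

def self_attention(X):
    """Scaled dot-product attention over a toy sequence."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise token affinities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ X                               # each token = weighted mix of all tokens

X = np.array([[1.0, 0.0],   # token 1
              [0.0, 1.0],   # token 2
              [1.0, 1.0]])  # token 3
print(self_attention(X))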


5) Do you use Transformers directly?

Likely interviewer question:
“Have you worked directly with Transformer models?”
or
“Do you use Transformers at model level?”

Your answer, organized:

  • You said you use LLMs via APIs such as OpenAI and Gemini APIs.

  • You said you do not directly use Transformers themselves as components; rather, you use LLMs built on them.

Critique:
Reasonable answer, but again it emphasized limitation more than capability. You could have said:

  • API-based LLM integration,

  • prompt engineering,

  • RAG,

  • orchestration,

  • agent workflows,

  • evaluation,

  • reliability handling.

That would make your applied expertise sound much more valuable.


6) Explain decision tree vs random forest

Likely interviewer question:
“What is a decision tree?”
or
“What is the difference between a decision tree and random forest?”

Your answer, organized:

  • You said decision tree is a building block of random forest.

  • You described decision tree as data structured via nodes and edges.

  • You said random forest is an ensemble of many decision trees.

  • You said the final decision is based on the constituent trees’ outputs.

Critique:
The core idea was correct, but the wording was imprecise. “Hard or soft working” was unclear. Also, “data is structured in the form of nodes and edges” is not the best way to explain a decision tree in an interview. Better to explain splits, impurity reduction, overfitting, and aggregation.

Better version:
“A decision tree makes predictions by recursively splitting the data based on feature values. It is easy to interpret but can overfit. A random forest reduces that risk by combining many decision trees trained on bootstrapped samples and random subsets of features. The final output is aggregated across trees, usually by majority vote in classification or averaging in regression.”

That sounds much more interview-ready.
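
The overfitting-vs-generalization point is easy to demonstrate — a small scikit-learn sketch on synthetic data (exact scores vary by version and seed):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1_000, n_features=20, flip_y=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)  # unconstrained single tree
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)  # bagged ensemble

# The lone tree typically fits the training set perfectly but drops on the test set;
# the forest's aggregated vote generalizes better.
print("tree  :", tree.score(X_tr, y_tr), tree.score(X_te, y_te))
print("forest:", forest.score(X_tr, y_tr), forest.score(X_te, y_te))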


7) Unclear technical question: “DB cluster”

Likely interviewer question:
Possibly one of these:

  • “What is DBSCAN clustering?”

  • “What is density-based clustering?”

  • “What is clustering?”

  • “What is distributed database clustering?”
    This part is not clear from the transcript.

Your answer:
You said you were not sure.

Critique:
Saying “I’m not sure” is fine when needed. But the better approach is:
“I’m not fully sure I heard you correctly — did you mean DBSCAN clustering or database clustering?”
That would have bought you time, reduced misunderstanding, and perhaps turned a weak moment into a competent one.

This is one of the biggest interview habits to improve: clarify the question before conceding uncertainty.


8) Unclear technical question: “Depth decision tree”

Likely interviewer question:
Possibly:
“What is depth in a decision tree?”
or
“What is max depth in a decision tree?”

Your answer:
You said you know decision tree, but are not familiar with “depth decision tree.”

Critique:
This was likely a miss. If the interviewer asked about tree depth, that is a basic concept. Max depth controls how many levels the tree can grow and directly affects overfitting. Missing that probably weakened the perception of your ML fundamentals.

Ideal answer would be:
“Tree depth refers to the number of levels from root to leaf. It is an important hyperparameter because deeper trees can capture more complexity but are also more prone to overfitting.”
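
A quick way to internalize this before the next interview is to watch max_depth trade training fit against test accuracy — a small, self-contained scikit-learn example with noisy synthetic data:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, flip_y=0.2, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

for depth in (2, 5, None):  # None lets the tree grow until the leaves are pure
    model = DecisionTreeClassifier(max_depth=depth, random_state=1).fit(X_tr, y_tr)
    print(f"max_depth={depth}: train={model.score(X_tr, y_tr):.2f}, "
          f"test={model.score(X_te, y_te):.2f}")  # deep trees: high train, weaker test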


9) Was your M.Tech Regular?

Likely interviewer question:
“Was your M.Tech a regular, full-time program?”

Your answer:
“Yes, it’s WILP” — a reference to the Work Integrated Learning Programme of BITS Pilani.

Critique:
No fair criticism here because the transcript itself is unclear.


10) Who taught you machine learning?

Likely interviewer question:
“Who taught you machine learning?”
or perhaps a casual follow-up about education.

Your answer:
You said it was more than eight years ago and you do not recall the teacher’s name.

Critique:
This answer is not damaging by itself, but it ended weakly. A better recovery would be:
“I don’t recall the faculty name now, but the stronger learning actually came later through self-study and project work.”

That would turn an awkward personal-memory question into a professional-strength answer.


Overall assessment of your interview performance

What you did well

You came across as:

  • honest,

  • experienced,

  • broad in exposure,

  • genuinely hands-on in applied AI/ML work,

  • not pretending to know things you do not know.

Those are real positives.

What hurt you most

These were the main issues:

1. Too much hedging and filler language
You said “yeah,” “I mean,” and similar fillers a lot. That makes answers sound less authoritative.

2. You led with your limitations too often
Instead of briefly acknowledging a gap and pivoting to strengths, you stayed too long on what you did not know.

3. Conceptual answers were not crisp enough
Transformer, decision tree, random forest, and possibly tree depth were answered in a way that felt partially correct but not interview-polished.

4. You sounded more like an API user than an architect/expert
For a senior candidate, “I use scikit-learn APIs” and “I use OpenAI API / Gemini API” are not enough by themselves. The interviewer wants design thinking, trade-offs, production considerations, evaluation, and decision rationale.

5. You missed chances to clarify unclear questions
For things like “DB cluster,” you should have asked the interviewer to repeat or clarify.


My blunt verdict

You do not sound weak overall. You sound like someone with real industry exposure.
But in this transcript, you sounded under-packaged.

That means:

  • your real experience seems better than how it came across,

  • your delivery likely made you appear less technically sharp than you probably are,

  • especially for senior AI/ML/GenAI roles, this style can cost you.

The interviewer may have walked away thinking:

“Good practical exposure, but fundamentals are uneven and answers are not crisp enough for a senior-level technical conversation.”

That is fixable.

Biggest improvements to make before your next interview

  1. Prepare a 45-second self-intro.

  2. Prepare crisp 2-3 line definitions for:

    • Transformer

    • attention

    • supervised vs unsupervised vs reinforcement learning

    • decision tree

    • random forest

    • overfitting

    • precision/recall/F1

    • RAG

    • agents vs workflows

  3. Stop saying “I don’t know” too early. First clarify the question.

  4. Replace “I used APIs” with “I designed and implemented solutions using…”

  5. For every answer, use this format:

    • definition,

    • practical example,

    • trade-off / limitation.

If you want, I’ll turn this into a proper interview preparation sheet with two columns: Likely Question / Ideal Improved Answer.




Go ahead.

Absolutely. I turned your transcript into a cleaner interview-prep sheet. A few questions are still approximate because the source is one-sided and some phrases are unclear, especially “DB cluster.” The likely topics below are inferred from your answers.

Interview preparation sheet

Q: Tell me about yourself.
A: “I’m Ashish, with 13 years of overall experience and around 11 years focused on AI/ML. My core strengths are Python, machine learning, deep learning, and, more recently, generative AI and agentic AI systems. In my recent work, I contributed to an agentic offering involving text-to-SQL, RAG, knowledge agents, and visualization agents coordinated through routing/orchestration. I’ve worked across finance, telecom, and edtech, and I’m most effective in roles that combine hands-on AI delivery with business problem-solving.”

Q: What AI/ML areas are you strongest in?
A: “My strongest areas are supervised and unsupervised learning. I’ve worked with regression, tree-based models, SVMs, and ensemble methods. My reinforcement learning exposure is comparatively limited. In practice, I choose models based on data characteristics, explainability requirements, latency needs, and business constraints.”

Q: Which algorithms have you worked with?
A: “I’ve worked hands-on with linear regression, decision trees, random forests, SVMs, and other standard ML methods, mainly in Python using scikit-learn and related tooling. I’m comfortable not just calling the APIs, but also selecting models, tuning them, evaluating them, and interpreting results in business context.”

Q: How deep is your understanding of GPT, BERT, Gemini, or LLMs?
A: “My experience is stronger on the applied side than on foundation-model pretraining internals. I haven’t trained GPT-class models from scratch, but I’ve worked with LLM APIs, prompt design, RAG, text-to-SQL, and agent workflows. Conceptually, I understand the Transformer foundation and how these models are used in production systems.”

Q: What is a Transformer?
A: “A Transformer is a neural network architecture introduced in the 2017 paper ‘Attention Is All You Need.’ Its key idea is self-attention, which helps the model capture relationships between tokens more effectively than older sequential architectures like RNNs. That made Transformers highly effective for large-scale language tasks and became the foundation for modern LLMs.”

Q: Are all Transformers the same?
A: “No. The Transformer is the foundational architecture, but organizations adapt it in different ways depending on objectives such as scale, efficiency, context length, alignment, and inference cost. So while many LLMs are Transformer-based, their exact implementations and optimizations differ.”

Q: Have you used Transformers directly, or mainly through APIs?
A: “Mostly through applied LLM usage rather than building Transformer models from scratch. My hands-on experience is with LLM integration through APIs, prompt engineering, RAG pipelines, agent orchestration, and use-case implementation.”

Q: Explain supervised vs unsupervised learning.
A: “Supervised learning uses labeled data to learn a mapping from inputs to outputs, such as classification or regression. Unsupervised learning works on unlabeled data to identify structure, such as clustering, grouping, or dimensionality reduction. I’m comfortable with both, while reinforcement learning is an area I’d describe as less strong for me.”

Q: What is a decision tree?
A: “A decision tree is a model that makes predictions by recursively splitting data based on feature values. Each split is chosen to improve purity or reduce error. It’s interpretable and useful, but if unconstrained it can overfit.”

Q: What is a random forest, and how is it different from a decision tree?
A: “A random forest is an ensemble of multiple decision trees. Each tree is trained on a bootstrapped sample of the data and usually on random subsets of features. The final prediction is aggregated across the trees, typically by majority vote in classification or averaging in regression. Compared with a single tree, it usually generalizes better and is less prone to overfitting.”

Q: What is tree depth or max depth in a decision tree?
A: “Tree depth refers to the number of levels from the root to the deepest leaf. Max depth is an important hyperparameter because deeper trees can model more complex patterns but also increase the risk of overfitting. Setting an appropriate max depth is one way to control model complexity.”

Q: What recent project have you worked on?
A: “My recent work involved an agentic AI solution with components such as text-to-SQL, RAG, a knowledge agent, and a visualization agent, orchestrated through routing logic. The value was in breaking down user requests, sending them to the right agent, and combining outputs into a more usable enterprise workflow.”

Q: Which programming languages are you comfortable with?
A: “Python is my strongest language and the one I use most in AI/ML work. I also have familiarity with SQL, PL/SQL, JavaScript, and Java, though my core delivery strength is clearly in Python-based data and AI systems.”

Q: Which domains have you worked in?
A: “I’ve worked across finance, telecom, and edtech. That has helped me adapt AI/ML solutions to different types of business problems, data environments, and stakeholder expectations.”

Q: Who taught you machine learning?
A: “I don’t recall the faculty name now, because that was several years ago. But most of my real depth came later through hands-on project experience, self-study, and practical implementation in industry.”

Where your original answers were hurting you

Your transcript shows real experience, but the delivery weakened the impression. You repeatedly framed answers around what you didn’t know, especially around GPT/BERT/Gemini internals and Transformers, instead of quickly pivoting to what you do know and have done. You also sometimes answered in a tool-centric way, like “I use APIs” or “I access models via scikit-learn,” which sounds more junior than your actual experience level.

The biggest technical miss was around Transformer explanation and likely tree depth / max depth. Those are areas where the interviewer probably expected a cleaner, more fundamental explanation. The random forest answer had the right general direction, but it needed sharper wording around splits, ensembles, voting/averaging, and overfitting control.

Another issue was interview control. When the question sounded unclear, like “DB cluster,” the better move would have been to clarify the term before saying you were unsure. That alone can save weak moments in technical interviews.

Your best improvement pattern for next time

Use this answer structure every time:

Definition → practical experience → trade-off/limitation

Example:

Q: What is a Transformer?
“A Transformer is a neural architecture based on self-attention that became the foundation for modern LLMs. In my work, I use Transformer-based LLMs in applied settings such as RAG, prompt engineering, and agent workflows. I haven’t trained foundation models from scratch, so my strength is more on applied implementation than pretraining internals.”

That format makes you sound senior, honest, and structured.

My blunt coaching summary

You do not sound inexperienced.
You sound experienced but under-packaged.

That means your real capability is probably stronger than how it came across in the interview. The fix is not to learn everything from scratch. The fix is to make your answers:

  • shorter,

  • clearer,

  • more technical at the concept level,

  • more confident in framing.


Index For Interviews Preparation    <<< Previously    Next >>>