Thursday, May 14, 2026

Inflation at a 42-Month High, and Rupee in Freefall


See All News by Ravish Kumar    « Previously

The Great Indian Economic Illusion: Who Is Buying Gold at Rs 1,62,000?

May 2026 | A Critical Analysis

Namaskar. On the 14th of May, the price of 10 grams of gold touched Rs 1,62,000. The question that begs an answer is – who is buying gold at this price? Certainly not those who had to leave their cities because they couldn't get a cooking gas cylinder. Not those who shut their shops when a commercial cylinder crossed Rs 3,000. And definitely not the ones who break their backs doing home deliveries. The working class cannot even dream of buying gold at these rates. So then, who is buying?

The Prime Minister of India appeals to the public: "Don't buy gold." But he should first tell us – in the last 12 years, which category of people bought gold at prices exceeding Rs 1 lakh? How much gold did they buy? How many such people are there? The real game is elsewhere. Appeals are being made to those who write down every vegetable purchase in their diaries, but no information is being given about those who buy 20, 30, 50 lakhs worth of gold in a single transaction. This is India's money.

The Rupee's Free Fall: A Timeline of Negligence

Let me show you a news report from September 2024: among 48 Asian countries, the Bangladeshi Taka performed the worst, and the Indian Rupee came in second worst. Another report from January 2025 stated that India's currency was the worst performer in South Asia. By February 2025, the Rupee became Asia's worst-performing currency, with the dollar reaching Rs 87.58. Today, the Rupee is the worst-performing currency in the entire world.

If a global war has caused a crisis, why is India's Rupee suffering the most? On 28 February, the situation worsened due to war, but the Rupee had already been in serious decline for two years. By December 2025, reports predicted the Rupee would fall below 92 by March 2026. But it hit 92 in January 2026 itself, and crossed 95 in March. Let's look at the hard data:

Date | USD/INR Rate | Key Event / Report
September 2024 | ~85-86 | Rupee 2nd worst in Asia
February 2025 | 87.58 | Asia's worst performer
December 22, 2025 | ~90-91 | Report predicted below 92 by March 2026
January 2026 | 92.00 | Hit 92 – earlier than forecast
February 14, 2026 | ~93 | Fitch said could reach 93 by year end
March 30, 2026 | 95.00+ | Rupee crosses 95 for first time
April 1, 2026 | ~95-96 | Bloomberg warns could cross 100
May 12, 2026 | 95.63 | Chief Economic Advisor finally concerned
May 14, 2026 | 95.85 | Gold hits Rs 1,62,000 per 10g

Government's Denial: "Just a Number"

On 12 May 2026, when the dollar reached Rs 95.63, India's Chief Economic Advisor Anantha Nageswaran suddenly declared: "We must stop the Rupee from falling further." But just five months earlier, when the dollar was at Rs 90.21, he said, "I don't want to lose sleep over this." So at 90.21 he slept peacefully; at 95.63 an alarm went off. What changed? The truth is the Rupee has been falling since 2024, but the government never bothered.

On 30 March 2026 – the day the Rupee first crossed 95 – Finance Minister Nirmala Sitharaman was asked by Samajwadi Party MP Dharmendra Yadav why the Rupee was weakening. Her response? She claimed all economic fundamentals are strong, the fiscal situation is strong, and the whole world is praising India. She said the Rupee is doing fine compared to other emerging markets, and that critics are just fixated on one "issue" – the exchange rate. One issue? The currency collapse is just an "issue" for her.

But here is the reality: despite the RBI selling dollars from forex reserves, the Rupee crashed through 95 within a week. And why is the government appealing to citizens to save foreign currency? Because the real problem is a shortage of dollars.

The Real Reasons: Import Dependency and Failed 'Make in India'

India imports almost all the gold it consumes, 90% of its oil, natural gas, edible oils, fertilizers, electronics, and industrial inputs. All of these must be paid for in dollars. Over the last 12 years, India's import dependence has only increased. The "Make in India" initiative has failed. We did not become self-reliant; instead, we became more dependent on imports. The quality of exportable goods did not improve, and the world was not interested in buying from India. Meanwhile, the war, the Hormuz crisis, and oil price shocks increased demand for dollars while supply dried up. The Rupee is under pressure because of 12 years of accumulated failures.

Consider this: On 22 March 2026, despite RBI selling dollars in the forex market, the Rupee hit 93.72. A week later, it crossed 95. The RBI's interventions proved futile. And the Prime Minister's appeal not to buy gold is actually a cover for the dollar crisis. But instead of addressing structural issues, the government hides behind global crises.

The Great Hypocrisy: Appeals to the Poor, Silence for the Rich

The Prime Minister asks citizens to save foreign currency – use less oil, don't travel abroad, don't buy gold. But why doesn't he appeal to those who are taking dollars out of the country? Why no appeal to foreign investors who are fleeing? Why no request to the wealthy to bring back money parked abroad? In the last 12 years, those who have flourished under this government are the same ones sending their children to study abroad, sending dollars out, and acquiring foreign citizenship.

And what about the government's own extravagance? Between 2021 and 2025, PM Modi spent over Rs 460 crore on foreign tours. What was the outcome of those trips? No debate on that. Instead, the media focuses on who funds Rahul Gandhi's foreign visits. This is the pattern – first, ban red beacon lights to pretend an end to VIP culture, then travel in a convoy of 100 cars, and later reduce from 5 cars to 2 cars and call it sacrifice for the nation.

War as an Excuse: The Crisis Was Brewing Long Before

The government keeps repeating: global crisis, war, global crisis. But the Rupee started falling in 2024 – two full years before the February 28 attack. In September 2024, reports already warned that India's economy was in trouble. Even without a war, the Rupee was already on a downward slide. The government had 2 to 2.5 years before the war to fix the economy, but it failed because the fundamental problems had grown too severe. Now, to hide this failure, it chants "global crisis" like a mantra.

And if the war is truly the cause, why hasn't the government criticized America's illegal war? Why are Indian ports turning away Russian and Iranian oil tankers due to fear of U.S. pressure? America said stop buying oil from Russia – and India stopped. Where is the bold foreign policy that would secure energy independence? India now begs the U.S. to extend waivers for Russian oil. This is a failure of economic diplomacy.

Indicator | Value (2024-2026) | Change / Status
Forex Reserves (Feb 27, 2026) | $728.5 billion | Dropped by $37.8 billion in ~2 months
Forex Reserves (May 1, 2026) | $690.7 billion |
Hungary (oil/gas importer) | Reserves +1.3% | Increased despite war
Chile (oil/gas importer) | Reserves +1.0% | Increased despite war
Taiwan, Peru | Minor decline | Much smaller drop than India
India's share of global market cap (peak) | 4.73% | Fell to below 3%
India's share of global market cap (May 2026) | Below 3% |
FPI outflows (2025 - May 2026) | Rs 3.6 lakh crore | ~$43 billion

Stock Market and Corporate Distress

Former finance secretary Subhash Chandra Garg wrote in Quint that India's stock market was among the worst in 2025, while other global markets saw rallies. In 2026 as well, Indian markets are lagging. Corporate profits are not exciting investors. The high growth era of IT and startups is over. India has no visible base in AI chips or energy transition. Retail investors who entered through SIPs and gold funds are now seeing negative returns. The risk-taking capacity of the market is evaporating.

And what about unemployment? The highest in 50 years. In 2025, consumer demand started falling, and calls grew to reduce GST rates. The government made a grand announcement on August 15, 2025, that GST rates would be cut after Diwali. People stopped buying in anticipation, companies were stuck with inventory, and then nothing happened. The discussion simply died because the government kept winning elections.

The Modi Government's Obsession with Distraction

When COVID hit, Narendra Modi asked people to clap and ring bells. Now an economic storm is coming, and he asks people not to buy gold. The same government that cannot conduct a medical entrance exam without cheating wants us to believe that all economic fundamentals are strong. When the Rupee is falling, the Finance Minister says "it's just one number." When foreign investors are pulling out billions, she says they don't matter. But then why appeal to the common citizen to save foreign currency?

In February 2022, PM Modi dug up a 60-year-old speech by Jawaharlal Nehru where Nehru said that a war in Korea could affect Indian prices. Modi mocked Nehru for throwing up his hands. But today, Modi himself is throwing up his hands and blaming a global war. What's the difference? At least Nehru was honest about the limits of control. Modi pretends that everything is fine while the Rupee collapses.


Facts

  • Gold price per 10 grams reached Rs 1,62,000 on May 14, 2026.
  • USD/INR crossed 95 on March 30, 2026, and hit 95.85 on May 14, 2026.
  • Between September 2024 and May 2026, the Indian Rupee was consistently among the worst-performing Asian currencies.
  • Forex reserves fell from $728.5 billion (Feb 27, 2026) to $690.7 billion (May 1, 2026) – a drop of $37.8 billion.
  • In the same period, Hungary and Chile (also oil importers) saw their reserves grow by 1.3% and 1% respectively.
  • Foreign Portfolio Investors (FPIs) pulled out Rs 3.6 lakh crore (approx. $43 billion) from 2025 to May 2026.
  • India's share of global stock market capitalization fell from 4.73% to below 3%.
  • PM Modi's foreign travel expenses between 2021 and 2025 exceeded Rs 460 crore.
  • Wholesale price inflation (WPI) jumped from 3.88% in March 2026 to 8.30% in April 2026 – a 3.5-year high. Fuel and power inflation stood at 24.71%.
  • The government imposed a ban on export of raw and refined sugar from September 30, 2026 – at a time when it needs dollars the most.

Criticisms

  • The Modi government deliberately ignored the rupee's decline for over two years, then blamed a war that happened much later.
  • Finance Minister Nirmala Sitharaman dismissed the crashing rupee as "just an issue" and claimed strong fundamentals while the currency bled.
  • Chief Economic Advisor Anantha Nageswaran slept when the dollar was at Rs 90, but panicked at Rs 95 – exposing his lack of foresight.
  • The "Godi media" (compliant media) shielded the government by not reporting the rupee's true condition, allowing the crisis to deepen unnoticed.
  • Make in India failed. Import dependence increased, and export competitiveness did not improve, leading to a perpetual dollar shortage.
  • The government imposes export bans (e.g., sugar) when dollars are needed most, showing contradictory economic planning.
  • PM Modi's foreign tours costing Rs 460 crore delivered no tangible economic benefit to the common citizen.
  • The appeal to the poor and middle class to "save foreign currency" while the wealthy freely send dollars abroad for education, travel, and assets is hypocritical.
  • The government cowardly bowed to U.S. pressure by stopping Russian oil imports, compromising India's energy security.
  • The obsession with winning elections through religious politics has masked the worst unemployment crisis in 50 years and a consumption collapse.

This is the state of India's economy. And while the government hides behind war, global crises, and distractions, the common man pays the price – through inflation, unemployment, and a currency that has lost all respect. The question is not who is buying gold at Rs 1,62,000. The question is: when will the people stop buying the government's lies?

Written by DeepSeek.



Interview at Turing for Lead/Principal AI Engineer Role (2026 Feb 2)

Interview Reconstruction & Critique

Lead AI Engineer
@ Turing

Candidate: Ashish Jain | Recruiter: Ronak | Company: Turing.com | Mode: Screening Call
01

Organized Call Transcript

The call was a recruiter-led telephonic screening. It followed five natural phases: mutual introductions, company pitch, candidate background walk-through, technical Q&A, and logistics / next steps.

Phase A

Introductions & Role Context

Ronak introduces himself as a recruiter at Turing. He describes Turing's evolution from a staff-augmentation bridge connecting US companies with Indian developers (since 2018) to an AI-services company over the past three to four years. The open role is a Principal Engineer (IC-6) — one level below Engineering Manager — requiring 10+ years of experience, Python-heavy background, and 2–3 years of hands-on work in Generative AI, agentic systems, LangChain/LangGraph, RAG, and LLMs. The role is hybrid (Mumbai / Hyderabad / Bangalore / Gurgaon) with partial US-hours overlap (1 PM – 10 PM IST).

Phase B

Candidate Self-Introduction

Ashish introduces himself as currently working at Accenture as a Lead AI Engineer. His current project is a multi-agent Text-to-SQL platform for a telecom client, built with a Text-to-SQL agent, a RAG agent, and a Generic Knowledge agent. He declares 13 years of total experience, 5–6 years in GenAI, and 2–3 years in RAG and agentic systems. Education: M.Tech from BITS Pilani.

Phase C

Career History Walk-Through

Magic Software, Noida — Software Trainee — ~1 year
WebPlant Pvt. Ltd. — Developer (e-commerce, US clients) — ~6–7 months
Mobilium — Software Eng. / Data Analytics, Roaming Analytics — ~3.5–4 years
Infosys — Data Scientist → Tech Lead — ~5.5–6 years
Accenture — Lead AI Engineer — ~1.2–1.3 years (current)
Phase D

Technical Q&A

The recruiter asked five technical questions covering: multi-agent system design & challenges; LLM cloud deployment & scalability; requirements gathering involvement; agent vs. LLM-call distinction in LangGraph; and failure/dead-end handling strategies. Answers are reconstructed in Section 2.

Phase E

Motivation, Compensation & Logistics

Ashish cites project fit as the reason for leaving Accenture. Current CTC: ₹29.5 LPA. Expectation: ₹35 LPA. Competing offer: ₹32 LPA from an MNC. Notice period: serving notice, last day 21st April. Next steps agreed: coding challenge (Wednesday, 6 PM) — 45-minute RAG pipeline in Python using OOP, followed by a 5–6 question AI video screen, then a panel, then a technical + culture-fit round.

02

Reconstructed Q&A

Five technical questions were asked. Below is the probable exact phrasing, followed by the candidate's answer as delivered.

Q1

Can you walk me through a project where you built or extended a multi-agent workflow using LangChain, LangGraph, or an equivalent framework — and what challenges did you face during the orchestration, reasoning, or evaluation phases?

A

Ashish described the Data Analytics Platform at Accenture — a LangGraph-based multi-agent system for a telecom client. The orchestrator agent routes incoming queries to one of three branches: (1) a Text-to-SQL agent that converts natural language to SQL and fetches data, feeding a narrative/visualization agent; (2) a RAG agent for KPI/platform documentation queries; (3) a generic catch-all agent. He mentioned GPT-4.1 on Azure OpenAI as the LLM. The challenge cited was the model selection decision (GPT-4.1 vs. smaller models) and designing the agent graph topology.

Q2

Can you give me an example where you deployed an LLM-based system on a cloud platform and had to optimize it for scalability and performance?

A

Ashish was candid: the LLM was pre-hosted on Azure OpenAI and consumed as an API, so he did not personally fine-tune, host, or scale an LLM on cloud infrastructure. He acknowledged this as a gap in his experience.

Q3

Do you get involved in the requirements-gathering phase? If yes, what technical and non-technical aspects do you look out for from a client to better understand requirements?

A

Ashish said requirements gathering is handled by the associate director level and business analysts at Accenture. He mentioned BAs define acceptance criteria and KPIs to measure solution performance, but acknowledged he is not personally involved in that phase.

Q4

Can you explain what an agent is in LangGraph and how it is different from a simple LLM call?

A

Ashish explained that an agent is a task-performing entity — it can be an LLM call, a Python function, or a pre-written script. A simple LLM call is just hitting an API endpoint with a prompt and capturing the output. An agent need not be an LLM call; it has additional characteristics like defined inputs, outputs, and a specific purpose within the graph.

Q5

What do you do when an agent fails or hits a dead end in your workflow?

A

Ashish described the Text-to-SQL agent looping in a generate → critique → regenerate cycle. His solution was to add a recursion/loop limit on the graph to break out of infinite loops. He then mentioned a graceful fallback — returning a user-facing message such as "I wasn't able to find an answer" when the limit is hit.

03

Critique & Better Answers

Below is an honest, direct assessment of each answer — where it fell short and what a stronger response would have looked like. The goal is to help you walk into the next round sounding like the Principal Engineer the role demands.

01

Multi-Agent System Design & Challenges

Partial

What went wrong

The architecture description was accurate but surface-level. You described what the system does but not why the design decisions were made. More critically, you listed "model selection" as a challenge — that is a pre-build decision, not an orchestration challenge. The recruiter specifically asked about challenges in orchestration, reasoning, or evaluation. You missed all three of those dimensions.

Better Answer

"The hardest orchestration challenge was deterministic routing. The orchestrator had to decide — at inference time — whether a query belonged to Text-to-SQL, RAG, or the generic agent. Early on, the LLM-based router was misclassifying ambiguous queries, so we added a structured output schema with confidence scores and a fallback re-classification step. On the reasoning side, the Text-to-SQL agent was hallucinating table names that didn't exist in the schema, so we injected a schema-grounding step that dynamically fetches the relevant table DDLs and passes them in the prompt. For evaluation, we had no ground truth SQL, so I built a semantic equivalence evaluator that runs both the generated SQL and a gold-standard query on a small test dataset and compares result sets. That gave us a measurable accuracy metric instead of relying on vibe-checks."
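The confidence-gated routing described in that answer can be sketched in plain Python. This is a hypothetical stand-in, not the actual Accenture implementation: the `llm_router` here is a toy keyword heuristic standing in for an LLM emitting a structured `{branch, confidence}` output, and the threshold and branch names are assumptions.

```python
# Sketch of confidence-gated query routing with a fallback re-classification
# step. All names and thresholds are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.7

def llm_router(query: str) -> tuple[str, float]:
    """Stand-in for an LLM call that returns a structured output of
    {branch, confidence}. Here: a toy keyword heuristic."""
    q = query.lower()
    if "revenue" in q or "count" in q:
        return "text_to_sql", 0.9
    if "documentation" in q or "kpi definition" in q:
        return "rag", 0.85
    return "generic", 0.4  # ambiguous query -> low confidence

def fallback_reclassify(query: str) -> str:
    """Cheap deterministic second pass for low-confidence routes."""
    return "rag" if "how" in query.lower() else "generic"

def route(query: str) -> str:
    branch, confidence = llm_router(query)
    if confidence < CONFIDENCE_THRESHOLD:
        branch = fallback_reclassify(query)
    return branch

print(route("monthly revenue per circle"))  # -> text_to_sql
print(route("how is ARPU defined?"))        # -> rag
```

The design point is that the fallback path is deterministic and cheap, so ambiguous queries degrade gracefully instead of being misrouted by a single low-confidence LLM call.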

02

LLM Cloud Deployment & Scalability

Weak

What went wrong

Admitting a gap is honest — and I respect that — but you stopped there. You had adjacent experience you could have leveraged: you consumed Azure OpenAI APIs at scale, which means you dealt with rate limits, token budgeting, latency, and cost. That is real, deployable knowledge. By saying only "I lack here," you left value on the table and gave the recruiter nothing to anchor on.

Better Answer

"I haven't personally fine-tuned and self-hosted an LLM, but I've worked extensively with Azure OpenAI at scale and hit the scalability problems from the consumer side — which is arguably where the real engineering lives in enterprise GenAI. We managed TPM and RPM rate limits by building a token-budget middleware layer, implemented exponential-backoff retry logic for 429s, and used async LangGraph execution to parallelize agent calls wherever the graph allowed it. On the evaluation side, I tracked P95 latency per agent node and used that to identify bottlenecks. If the role requires fine-tuning and hosting custom models, I'm confident I can close that gap quickly — I've been reading up on vLLM and GGUF quantization deployments on AKS."
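The retry logic mentioned there is a standard pattern worth being able to whiteboard. A minimal stdlib-only sketch, assuming a rate-limited API surfaced as an exception (the tiny delays are only so the example runs fast; real 429 handling would use delays on the order of seconds and honour any Retry-After header):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 from a hosted LLM API."""

def call_with_backoff(fn, max_retries=5, base_delay=0.01):
    """Retry fn() on rate-limit errors with exponential backoff plus
    jitter; re-raise once the retry budget is exhausted."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)

# Simulated endpoint that rate-limits the first two calls.
calls = {"n": 0}
def flaky_llm_call():
    calls["n"] += 1
    if calls["n"] <= 2:
        raise RateLimitError()
    return "completion"

print(call_with_backoff(flaky_llm_call))  # -> completion (after 2 retries)
```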

03

Requirements Gathering Involvement

Weak

What went wrong

This question was a test of your Principal Engineer mindset — not just your current job description. A Principal IC is expected to be the technical voice in discovery conversations. By saying "that's handled above me," you signalled that you operate in execution mode only. That is a Lead Engineer answer, not a Principal Engineer answer.

Better Answer

"Formally, our BAs and associate directors own requirements. But in practice, for an agentic AI system, there are questions only an AI engineer can ask — and I make it a point to be present for at least the technical discovery sessions. The things I specifically probe for: What are the data sources and their query SLAs? Are there PII or data residency constraints that affect model choice? What does the client define as a 'wrong answer' — because that shapes the evaluation harness from day one. How tolerant is the user persona for latency vs. accuracy tradeoffs? And crucially — what does failure look like in production? Those questions save you weeks of rework."

04

Agent vs. LLM Call in LangGraph

Adequate

What went wrong

Your core definition was correct. But for a Principal Engineer role, "correct" is baseline — the recruiter wanted to hear you talk in LangGraph's actual mental model: nodes, state, conditional edges, and the ReAct loop. The answer lacked any LangGraph-specific vocabulary and felt generic.

Better Answer

"In LangGraph, an agent is a node in the state graph — it receives the shared state, performs some transformation (LLM call, tool invocation, business logic, or even a deterministic function), and writes back to the state. A simple LLM call is stateless by definition — it has no awareness of what happened before or what comes after. What makes something an agent in the LangGraph sense is the combination of state awareness, decision-making capability (conditional edges), and the ability to invoke tools or call sub-graphs. The ReAct pattern — Reason, Act, Observe — is the canonical loop that distinguishes a reasoning agent from a simple completion call. In our platform, even our 'generic' agent was technically an LLM node, but it had tool-binding to a document retriever and could loop until it had enough context, which is what made it an agent rather than a single-shot call."
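The mental model in that answer — agents as nodes that read and write shared state, with conditional edges deciding the next node — can be made concrete with a toy dispatcher. To be clear, this is NOT the LangGraph API; it is a dependency-free stand-in for the concepts (nodes, shared state, conditional edges, a recursion limit), with all node names invented for illustration.

```python
# Toy state-graph dispatcher: each "agent" is a node that transforms a
# shared state dict; a conditional-edge function picks the next node.

def retrieve(state):
    # A real node would call a retriever; we fake the context.
    state["context"] = f"docs about {state['query']}"
    return state

def answer(state):
    # A real node would call an LLM with state["context"]; we fake it.
    state["answer"] = f"answer using {state['context']}"
    return state

NODES = {"retrieve": retrieve, "answer": answer}

def next_node(current, state):
    """Conditional edge: loop on retrieval until context exists."""
    if current == "retrieve":
        return "answer" if state.get("context") else "retrieve"
    return None  # terminal node

def run_graph(state, entry="retrieve", recursion_limit=10):
    node = entry
    for _ in range(recursion_limit):
        state = NODES[node](state)
        node = next_node(node, state)
        if node is None:
            return state
    raise RuntimeError("recursion limit hit")

result = run_graph({"query": "ARPU definition"})
print(result["answer"])  # -> answer using docs about ARPU definition
```

What makes a node an "agent" in this framing is exactly what the answer says: it sees the accumulated state and its outgoing edge can branch or loop, which a single stateless completion call cannot do.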

05

Agent Failure & Dead-End Handling

Adequate

What went wrong

Mentioning the recursion limit and graceful fallback is good — that's a real production pattern. But you presented it as the only solution to one specific failure mode (infinite loop). A Principal Engineer would enumerate a taxonomy of failure modes and a layered resilience strategy. You only covered one layer.

Better Answer

"We categorise agent failures into three buckets. First, infinite loops — the generate-critique-regenerate cycle you've seen; we handle this with LangGraph's recursion_limit and a human-in-the-loop node that fires when the limit is hit. Second, tool errors — the SQL agent trying to query a table that doesn't exist; we wrap every tool call in a structured error schema so the agent receives a machine-readable error rather than an exception traceback, which lets it self-correct in the next iteration. Third, semantic dead ends — the agent produces a technically valid answer that doesn't address the user's intent; we detect this with a lightweight LLM-as-judge evaluator node that scores relevance before the response is surfaced. If the score is below threshold, the query is routed to a human escalation channel. The key principle is: every failure mode needs a different recovery strategy, so you build resilience in layers, not as a single catch-all."
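The second bucket — structured tool errors — is easy to demonstrate. A minimal sketch, assuming a hypothetical SQL tool and table names; the point is that the wrapper converts exceptions into a machine-readable dict the agent can act on, instead of a traceback that ends the run:

```python
# Wrap every tool call so the agent receives {"ok": False, "error", "hint"}
# rather than a raw exception, enabling self-correction on the next loop.
# The tool and table names here are hypothetical.

KNOWN_TABLES = {"subscribers", "transactions"}

def run_sql(table: str) -> list:
    if table not in KNOWN_TABLES:
        raise KeyError(f"unknown table: {table}")
    return [("row1",), ("row2",)]

def safe_tool_call(fn, *args):
    try:
        return {"ok": True, "result": fn(*args)}
    except Exception as exc:
        return {
            "ok": False,
            "error": type(exc).__name__,
            "hint": str(exc),  # fed back into the agent's next prompt
        }

good = safe_tool_call(run_sql, "subscribers")
bad = safe_tool_call(run_sql, "subscriber")  # typo the agent can now fix
print(good["ok"], bad["error"])  # -> True KeyError
```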

06

Reason for Change & Motivation

Weak

What went wrong

You said "the project wouldn't suit me too much." That is vague to the point of raising flags. To a recruiter, an unexplained dissatisfaction with a current project can read as a conflict with the team, a performance issue, or a lack of commitment. It also makes you sound passive — as if work happens to you rather than you driving it. This was a low-risk question that deserved a sharper, more intentional answer.

Better Answer

"The work at Accenture has been technically solid, but the scope of the Principal Engineer role I'm stepping into here is different in kind. I want to be the person who owns the full technical lifecycle — from shaping what we build in requirements to the evaluation harness in production — not just the senior person who executes well within a defined scope. The Turing role, as you've described it, is explicitly that: an IC-6 close to the EM layer, building agentic systems from the ground up. That's the trajectory I'm optimising for."

Overall Assessment

Technical Depth
58 / 100
Architecture Articulation
65 / 100
Principal Engineer Framing
40 / 100
Confidence & Delivery
50 / 100

You know your material. The multi-agent platform you built is genuinely impressive — but the interview didn't reflect that. The gap was not knowledge but framing. You consistently answered at Lead Engineer altitude when the role demands Principal Engineer altitude. The recruiter even flagged this: "you're giving right answers but getting confused in between."

Before the next round, internalise one rule: every answer should include (a) what you built, (b) why you made the key decisions, and (c) what you would do differently. That is Principal-level thinking. It's the difference between narrating a project and demonstrating architectural judgment.

Report generated from call recording transcript — Turing Recruitment Screening — Lead AI Engineer

Interview at Cognizant for Lead AI Engineer Role (2024 May 22)


INTERVIEW INTELLIGENCE REPORT

Lead AI Engineer
Call Reconstruction & Critique

Position: Lead AI Engineer | Format: Telephonic Screening | Analyst: Claude Sonnet

Organised & Structured Transcript

The raw call recording was fragmented and conversational. Below is the cleaned, logically sequenced account of what the interviewee communicated, grouped by topic.

Topic A

The Anomaly Detection Project — Amex Loyalty Platform

  • Project involved detecting anomalies in credit card transaction data on the American Express loyalty platform.
  • Anomaly categories targeted: unusually large-amount transactions, unusually small-amount transactions, and anomalies by merchant type.
  • Business outcome: the client used these flagged anomalies to generate alerts in their platform and decide whether to block suspicious transactions.
  • New data was provided on a quarterly basis for ongoing inference.
Topic B

Data Engineering & Infrastructure

  • Historical training data spanned approximately one to two years of credit card transactions.
  • Data originated from Amex's mainframe systems.
  • A dedicated data engineering team was responsible for extracting and loading data from mainframes into Hive-based databases.
  • The data science team consumed this data via PySpark, running on a Jupyter-like notebook environment within a platform called Cornerstone (a mixed/managed compute platform).
  • Data was entirely structured (tabular credit card transaction records).
Topic C

Feature Engineering & Modelling Approach

  • Although the raw data had many columns, the team narrowed focus to four to five key features: transaction amount, merchant type, and time (used primarily for visualisation).
  • The problem was framed as unsupervised learning — no ground-truth labels existed.
  • Three model architectures were evaluated:
    1. Isolation Forest
    2. Autoencoder (neural network based)
    3. K-Medians clustering
Topic D

Contamination Factor & Model Validation

  • Because there were no labels, a Gaussian Mixture Model (GMM) was used to estimate the contamination factor — the expected proportion of anomalies in the dataset.
  • Anomaly scores from Isolation Forest and the Autoencoder were plotted in a scatter plot. Density analysis revealed two regions: a high-density core (normal) and a sparse periphery (anomalous).
  • The sparse cluster's percentage of total points became the contamination factor fed into the final models.
  • A human-in-the-loop existed: the loyalty/transaction team monitored alerts raised by the system, each alert triggering a ticket for review.
  • Precision was reported as above 75–80%, with acknowledged volatility during trend shifts (e.g., Christmas peak spend causing a temporary spike in false positives).
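A back-of-the-envelope version of the contamination estimate above can be sketched without any ML libraries. The real project fitted a Gaussian Mixture Model over anomaly scores; here a robust z-score on made-up scores approximates the "dense core vs sparse periphery" split, so the sketch stays dependency-free:

```python
# Estimate the contamination factor as the fraction of points lying far
# outside the dense core of anomaly scores. Scores are synthetic; the
# real pipeline used a GMM for the density split.
import statistics

scores = [0.1, 0.12, 0.11, 0.13, 0.09, 0.1, 0.12, 0.95, 0.88]  # two outliers

median = statistics.median(scores)
mad = statistics.median(abs(s - median) for s in scores)  # robust spread

def is_sparse(s, k=5):
    """Flag points whose robust z-score exceeds k (the sparse periphery)."""
    return abs(s - median) / mad > k

contamination = sum(is_sparse(s) for s in scores) / len(scores)
print(round(contamination, 3))  # -> 0.222, the fraction fed to the final models
```

In the actual workflow this fraction would then be passed as the contamination parameter when fitting the Isolation Forest and thresholding the Autoencoder's reconstruction error.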
Topic E

Generative AI Experience — Semantic Search POC

  • While on the bench at Cognizant (first two months), developed an internal semantic search POC.
  • Source corpus: issues and Q&A threads scraped from GitHub and Stack Overflow.
  • Questions and answers were converted to vector embeddings and stored in a vector database.
  • At query time, the input was embedded and compared against the stored embeddings using similarity search to retrieve closest matches.
  • Self-characterised as a "POC-level" GenAI engagement — not a production deployment.
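The retrieval step of such a POC reduces to: embed stored entries, embed the query, return the nearest neighbour by cosine similarity. A self-contained sketch, with a toy bag-of-words vector standing in for real model embeddings and a list standing in for the vector database:

```python
# Minimal semantic-search flow: embed corpus, embed query, rank by cosine.
# Bag-of-words counts are a stand-in for learned embeddings.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

corpus = [
    "how to fix a merge conflict in git",
    "why does pyspark job run out of memory",
    "how to revert a commit in git",
]
index = [(doc, embed(doc)) for doc in corpus]  # the "vector database"

def search(query: str) -> str:
    qv = embed(query)
    return max(index, key=lambda pair: cosine(qv, pair[1]))[0]

print(search("git merge conflict"))  # -> how to fix a merge conflict in git
```

Swapping `embed` for a real embedding model and `index` for a vector store is what separates this POC shape from a production deployment.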
Topic F

Location Preferences & HR Discussion

  • Based in Delhi (Inderlok area), commuting to Tikri Sector 48 office; also stays at Sector 79.
  • Family constraints (mother) tie him to Delhi NCR.
  • First preference: Delhi. Acceptable: Gurgaon, Noida. Difficult: Pune, Indore. Not preferred: Bangalore, Chennai.
  • Asked about current HCM (Shivam Shrivastav, from AML CoE) — was told HCM may change on project allocation.

Reconstructed Q&A — Full Dialogue

The interviewer's questions have been inferred from context and the interviewee's responses. Each exchange is presented as a coherent dialogue unit.

Q: Could you walk me through your most recent project?
A: My last project was anomaly detection for the Amex loyalty platform — detecting anomalies in credit card transactions. We looked at large-amount transactions, small-amount transactions, and merchant type patterns. The client used the flagged anomalies to raise alerts in their system and decide whether to block those transactions. We got new data to run inference on every quarter.

Q: Can you elaborate on the training data — how did you start model training? What historical data did you use?
A: We had one to two years of historical transaction data — I believe it was closer to two years. The data came from Amex's mainframe systems. Their data engineering team was responsible for pulling it from the mainframe into Hive-based databases. From there, we connected using PySpark in a Jupyter-like environment on a managed platform called Cornerstone.

Q: Was the data structured or unstructured? And what were the data challenges — how did you handle data cleaning?
A: It was structured data. The main challenge was feature selection — there were many available columns, so we narrowed down to about four or five features: merchant type, transaction amount, and time. Time was used more for visualisation and trend plotting. For anomaly detection itself, we treated the data as a batch.

Q: Since there were no labels, how did you validate that your model was correctly detecting anomalies? What was the validation mechanism?
A: We used a contamination factor approach. We ran both the Autoencoder and the Isolation Forest models, obtained anomaly scores from each, and plotted them in a scatter plot. Using density analysis — essentially a Gaussian Mixture Model — we identified the dense normal cluster and the sparse anomalous periphery. The proportion of points in the sparse cluster gave us the contamination factor, which we used to tune how aggressive our anomaly threshold should be.

Q: Was there any human validation loop involved?
A: Yes — the loyalty and transaction monitoring team at Amex reviewed every alert raised. Each flag generated a ticket, so there was a human-in-the-loop reviewing the outputs.

Q: What was the accuracy — or in this case, the precision of the model?
A: Since it's unsupervised, accuracy isn't the right metric — precision is more appropriate. We were above 75–80%. The number did fluctuate, especially during trend shifts like Christmas, where genuine transaction spikes initially generated more false positives before the model adjusted.

Q: Was there any attempt to improve that 75% number?
A: It's hard to pin down a single improvement target because precision fluctuates naturally with evolving spending behaviour. The contamination factor approach helps it self-adjust over time, but trend breaks do cause temporary dips.

Q: Can you walk me through a Generative AI project you've worked on?
A: While on bench at Cognizant, I built a semantic search POC. We collected questions and answers from GitHub issues and Stack Overflow, converted them into embeddings, and stored them in a vector database. When a new query came in, we embedded it and retrieved the closest matching entries from the database using similarity search.

Q: Are you open to relocation, or do you have a location preference?
A: My first preference is Delhi NCR — I have family here. Gurgaon and Noida are also acceptable. Pune and Indore are more difficult due to distance. Bangalore and Chennai would be quite challenging for me at this point.

Critique & Better Answers

A frank, point-by-point evaluation of the responses — identifying weaknesses in communication, technical depth, and strategic framing, with the sharper answer each question deserved.

On introducing the Amex anomaly detection project
Needs Work
What Went Wrong

The introduction was rambling and repetitive — "large transactions or very small number of transactions, large amount transaction or small amount transactions" was said almost verbatim twice. The business impact was buried and vague ("should we block them"). There was no STAR-style framing: no clear statement of scale, no team context, no timeline, and no outcome lead. An interviewer for a Lead role expects structured, confident narration — not a stream-of-consciousness recall.

The Better Answer
"At Amex's loyalty platform, I led the data science effort on an unsupervised anomaly detection system for credit card transactions. The problem had three anomaly signals: outlier transaction amounts — both unusually large and unusually small — and abnormal merchant-type patterns that deviated from a cardholder's historical behaviour. The business use case was operational risk: alerts fed directly into the platform's transaction-blocking pipeline. We processed roughly two years of historical mainframe data, built our pipeline in PySpark on Hive, and ran quarterly inference cycles. I drove the model selection, contamination factor calibration, and the human-review integration with the loyalty operations team."
On data challenges and feature engineering
Superficial
What Went Wrong

You reduced an inherently rich challenge to "we zeroed down on four or five features." For a Lead role, the interviewer wants to understand how you chose those features — what was your methodology? Was there domain knowledge involved? Did you run correlation analysis, VIF, or feature importance from a supervised proxy? You also glossed over data quality issues entirely — mainframe-sourced transaction data is notoriously messy (encoding issues, missing fields, schema drift). Saying "it was structured data" and moving on was a missed opportunity.

The Better Answer
"The raw data had 30-plus columns from the mainframe. Feature selection was a deliberate process — we started by eliminating PII and low-variance fields, then used domain knowledge from the Amex loyalty team to shortlist candidates. We settled on transaction amount, merchant category code (MCC), transaction frequency per time window, time-of-day, and days since last transaction. One non-trivial challenge was schema drift — the mainframe schemas had evolved and we had to handle column remapping across data batches. We also had to normalise amounts for currency and seasonal effects before any modelling."
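The pruning steps described in that answer (drop PII, drop low-variance fields, keep a domain-shortlisted set) could look like this as a first pass. This is a hypothetical sketch with made-up column names, not the actual Amex schema or pipeline:

```python
import pandas as pd

def prune_columns(df, pii_cols, variance_floor=1e-6):
    """Drop PII columns outright, then drop numeric columns whose
    variance is effectively zero (constant fields carry no signal)."""
    df = df.drop(columns=[c for c in pii_cols if c in df.columns])
    numeric = df.select_dtypes(include="number")
    dead = numeric.columns[numeric.var() < variance_floor]
    return df.drop(columns=dead)

# Hypothetical batch: card_number is PII, batch_flag is constant.
df = pd.DataFrame({
    "card_number": ["4111", "4222", "4333"],
    "amount": [12.5, 900.0, 3.2],
    "mcc": [5411, 5732, 5411],
    "batch_flag": [1, 1, 1],
})
kept = list(prune_columns(df, pii_cols=["card_number"]).columns)
print(kept)  # ['amount', 'mcc']
```

Domain shortlisting and the schema-drift remapping mentioned in the answer would sit on top of a mechanical filter like this, not replace it.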
On the model architecture and contamination factor
Strong, But Unclear
What Went Wrong

The technical substance here was genuinely solid — the GMM-based contamination estimation, ensemble of Isolation Forest and Autoencoder anomaly scores, and density-based threshold-setting is a legitimate and thoughtful methodology. However, the explanation was confused and hard to follow. The phrase "we create two clusters… not two clusters, basically one cluster and outside" is almost incoherent when spoken. For a Lead AI Engineer, clarity of technical communication is as important as the technical knowledge itself. You also never explained why you chose Isolation Forest + Autoencoder specifically, or why you dropped K-Medians.

The Better Answer
"We chose Isolation Forest for its efficiency on high-dimensional tabular data — it's interpretable and handles sparse anomalies well. The Autoencoder complemented it by capturing non-linear feature interactions through reconstruction error. K-Medians was explored but dropped because it was sensitive to our choice of K and the clusters weren't semantically meaningful. To set the contamination threshold — the proportion of anomalies to expect — we used a Gaussian Mixture Model on the joint anomaly score distribution from both models. The GMM naturally separated the dense inlier mass from the diffuse anomalous tail, giving us an empirically grounded contamination estimate rather than a hand-tuned guess."
On precision of ~75% and improvement efforts
Defensive & Incomplete
What Went Wrong

When pushed on why 75% precision wasn't higher, the response became defensive and wandered into an explanation of seasonal false positives — which, while valid, sounded like excuse-making rather than engineering problem-solving. A Lead Engineer should respond to a precision ceiling by describing active remediation strategies: retraining cadence, concept drift detection, ensemble re-weighting, or feedback loop design. You also never mentioned whether you measured recall or framed a precision-recall tradeoff, which is critical in fraud/anomaly contexts where false negatives (missed frauds) are often costlier than false positives.

The Better Answer
"75–80% precision was our baseline. We tracked precision alongside alert-review conversion rates from the operations team to estimate recall indirectly. To address precision degradation during trend shifts, we implemented a quarterly model retraining pipeline where the contamination factor was recalibrated using the previous quarter's confirmed anomaly tickets as soft labels. We also explored a sliding-window retraining scheme for faster adaptation to spend pattern shifts — though that was still in progress when the engagement ended. The goal was to reach and sustain above 85% precision without increasing the operations team's alert review load."
On GenAI experience — semantic search POC
Significantly Undersold
What Went Wrong

This is the most damaging part of the interview. You were interviewing for a Lead AI Engineer role in 2024–25 — a role that almost certainly has significant GenAI expectations. You self-described your GenAI background as "not much experience" and spent fewer than five sentences on the only GenAI project you mentioned. You did not name the embedding model used, the vector database, the chunking strategy, the retrieval method (cosine similarity? FAISS? ANN?), or any evaluation approach. You also did not mention any current reading, self-directed learning, or projects in LLMs, RAG pipelines, or LangChain/LangGraph — all of which you've actually explored. This is a credibility-damaging gap for a Lead role.

The Better Answer
"The semantic search POC used sentence-transformers — specifically the all-MiniLM-L6-v2 model — to embed GitHub and Stack Overflow Q&A pairs. We stored embeddings in FAISS with an IVF index for efficient approximate nearest-neighbour retrieval. Beyond this POC, I've been deepening my GenAI stack — I've worked with RAG pipeline architectures, explored LangGraph for agentic workflows, and studied LLM evaluation frameworks including RAGAS for retrieval quality measurement. My core ML background in anomaly detection gives me strong fundamentals in embedding spaces and distance-based reasoning, which translates well into modern GenAI retrieval problems."
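The retrieval step in that answer reduces to nearest-neighbour search over normalised embeddings. A minimal sketch of that core, using plain NumPy with toy vectors standing in for real sentence-transformer embeddings; the FAISS IVF index named in the answer replaces this brute-force scan at scale:

```python
import numpy as np

def build_index(embeddings):
    """L2-normalise rows so dot product equals cosine similarity."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    return embeddings / norms

def search(index, query_vec, k=2):
    """Return indices of the k most similar stored entries."""
    q = query_vec / np.linalg.norm(query_vec)
    sims = index @ q
    return np.argsort(-sims)[:k]

# Toy 4-dim "embeddings" for three stored Q&A entries.
corpus = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.9, 0.1, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0]])
index = build_index(corpus)
hits = search(index, np.array([1.0, 0.05, 0.0, 0.0]))
print(hits.tolist())  # entries 0 and 1 are closest to the query
```

Swapping the brute-force `index @ q` for an approximate index is purely a latency optimisation; the ranking logic is the same.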
On location preferences — HR discussion
Manageable, But Risky
What Went Wrong

Raising the location constraint repeatedly and with evident anxiety signals to the interviewer that you may be inflexible. Asking whether it would "impact your availability to the client" reveals a self-awareness about the disadvantage — which, when voiced aloud, reinforces it. In a bench situation, flexibility is a competitive advantage. The better approach is to state a preference clearly and confidently once, without revisiting it or asking the interviewer to manage it for you.

The Better Answer
"My primary preference is Delhi NCR — Delhi, Gurgaon, or Noida. I can make that work immediately. For the right opportunity, I'm open to discussing other locations on a case-by-case basis — especially if there's flexibility in terms of hybrid or project-phase travel. I'd appreciate it if that preference is noted, but I don't want it to be a limiting factor in the evaluation."
Overall Assessment
Technical Depth: 70%
Communication Clarity: 45%
GenAI Readiness (as presented): 30%
Leadership Signalling: 35%
Problem-Solving Framing: 60%
HR / Positioning: 55%
Verdict

The underlying expertise is real — the contamination factor methodology, the ensemble approach, and the PySpark/Hive stack show genuine ML engineering experience. But the presentation of that expertise was significantly below what a Lead AI Engineer role demands. Two changes would have materially improved the outcome: (1) preparing structured, confident narration for each project using a STAR framework, and (2) leading with GenAI competence rather than apologising for its limits. The knowledge is there — the packaging needs work.