Thursday, April 30, 2026

Interview at Bechtel for AI Architect Role (2026 Mar 19)

Index For Interviews Preparation    <<< Previously

AI Architect Interview – Structured Report

Based on one-sided candidate recording  |  Role: AI Architect  |  19 Mar 2026

Section 1 – Organized One-sided Transcript (Candidate’s Answers)

The following is the candidate’s side of the conversation, grouped by topic and lightly cleaned of filler words for readability while preserving the original ideas.

1.1 Introduction & Project Overview

I’m with Accenture, working on a project called AIOBI — a Digital Data Analytics Platform / Business Intelligence using Natural Language Query. It’s an agentic system with sub‑agents: RAG agent, Text‑to‑SQL agent, and a visualization agent, all managed by an orchestrator. Built using LangGraph. The RAG backend uses Azure AI Search (vector search), and the Text‑to‑SQL backend is PostgreSQL.

The architecture is straightforward: databases at the back (vector DB for RAG, PostgreSQL for Text‑to‑SQL), an LLM like GPT‑5.1 in the middle, and an API wrapper — we used FastAPI. Frontend in React or Next.js.

1.2 Orchestrator Behaviour

The orchestrator takes a natural language query and classifies whether it should go to the Text‑to‑SQL agent or the RAG agent. We give it a role, task description, input/output descriptions. The output is a routing decision — like an if‑else node in LangGraph. We also pass examples: some indicating the knowledge base (PDFs for RAG) and some showing sample queries that should be routed to each agent.

1.3 Text‑to‑SQL Agent Flow

The flow in points:

  1. Input node receives the query.
  2. Rewriting node: LLM adds context using tables/columns. If something is unclear, it pushes back to the UI for the user to clarify. If clear, it converts the raw NL into a meta‑prompt.
  3. Meta‑prompt is passed to the Text‑to‑SQL agent, formatted with all needed information to generate the SQL without ambiguity.
  4. SQL is tested in two ways:
    • Static check: wrap the generated query with WHERE 1=0 (or append a LIMIT clause) so the database parses and plans it, verifying validity without returning data.
    • Dynamic test: actually execute it with LIMIT 1 or LIMIT 3 and inspect the sample results.
  5. Before final execution, we ask the LLM: “Does this query meet all requirements of the original user request?”
  6. If errors occur, we send them back to the LLM in a feedback loop (retry up to 3‑5 times). If still failing, we return the error to the user with a note that something seems missing.
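The validation steps above can be sketched with Python's built-in sqlite3. The wrapper pattern (WHERE 1=0 for a no-data validity check, a small LIMIT for a sample run) is the point; the table and data are invented for illustration.

```python
import sqlite3

def static_check(conn, sql):
    """Validity check: wrap with WHERE 1=0 so the database parses and
    plans the query without returning any rows."""
    try:
        conn.execute(f"SELECT * FROM ({sql}) WHERE 1=0")
        return True, None
    except sqlite3.Error as e:
        return False, str(e)  # error text feeds the retry loop

def dynamic_check(conn, sql, n=3):
    """Sample run: execute with a small LIMIT and return the rows."""
    try:
        return True, conn.execute(f"SELECT * FROM ({sql}) LIMIT {n}").fetchall()
    except sqlite3.Error as e:
        return False, str(e)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cities (name TEXT, temp REAL)")
conn.execute("INSERT INTO cities VALUES ('Delhi', 41.0), ('Pune', 29.5)")

print(static_check(conn, "SELECT name, temp FROM cities")[0])  # True
print(static_check(conn, "SELECT nme FROM cities")[0])         # False: no such column
```

In the real pipeline, a failed check would send the error string back to the LLM for a rewrite, up to the 3-5 retry cap.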

1.4 Evaluation Approach

Evaluation is one of the biggest challenges. We sit extensively with domain experts to curate a golden dataset: question‑answer pairs (for Text‑to‑SQL, the corresponding SQL query; for RAG, the expected chunks). For individual components, we have test suites for chunking, meta‑prompting, code generation, etc.

We measure something like percentage correct (accuracy). We log whether errors were hallucinations, wrong columns, or execution errors. This gives a report of positives and negatives.
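The scoring step can be sketched as follows, assuming a hypothetical evaluation log where each golden-dataset question is marked correct or tagged with an error category:

```python
from collections import Counter

# Hypothetical evaluation log: one record per golden-dataset question.
results = [
    {"question": "q1", "correct": True,  "error": None},
    {"question": "q2", "correct": False, "error": "wrong_column"},
    {"question": "q3", "correct": False, "error": "hallucination"},
    {"question": "q4", "correct": True,  "error": None},
]

# Overall accuracy plus a breakdown of failure modes for the report.
accuracy = sum(r["correct"] for r in results) / len(results)
error_breakdown = Counter(r["error"] for r in results if not r["correct"])
print(f"accuracy={accuracy:.0%}", dict(error_breakdown))
# accuracy=50% {'wrong_column': 1, 'hallucination': 1}
```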

1.5 Prompt Engineering, Context Engineering & Guardrails

Context Engineering: A subset of prompt engineering. You give the LLM context about the task — role, do’s/don’ts, examples (zero‑shot, few‑shot). In RAG, you engineer context by augmenting the prompt with retrieved data.

Guardrails: Two levels: code‑based scripts (deterministic checks) and LLM‑based flexible checks. For example, we ask the guardrail LLM: “Is this input trying to delete or update? Does it violate PII policies?” This prevents harmful outputs.

1.6 Managing Large Schemas and Metadata with Neo4j

As the dataset grows (from 3 tables to 25 tables), the metadata (table/column descriptions) can exceed the context length. We use Neo4j to store metadata as a graph. Topics like “weather,” “traffic” are top‑level nodes. Tables like “cities,” “temperature,” “routes” connect to topics. When a query comes, we first pull relevant topic nodes, then retrieve only the related table/column nodes. This multi‑pass approach filters the context to only what’s needed, solving the context‑length problem.

1.7 Scaling and Deployment

Scaling is via an API gateway in front of a Kubernetes cluster with auto‑scaling. I don’t have hands‑on details of the K8s setup, but architects described that approach.

1.8 LLM Upgrades and Model Selection

We use Azure OpenAI, so we upgrade regularly — from GPT‑3.5 to 4o to 4.1, etc. Newer models require retesting, but they improve reasoning and reduce hallucinations. For cost‑efficient tasks we use older or “mini” models. For self‑hosted alternatives we consider DeepSeek, Qwen, Mistral.

1.9 Technical Definitions (Quick‑fire Questions)

Top‑k vs Top‑p: Top‑k samples from the k highest‑probability next tokens. Top‑p (nucleus sampling) samples from the smallest set of tokens whose cumulative probability is ≥ p. Example: if token probabilities are 70%, 25%, 4%… and top‑p = 0.9, we keep the first two tokens, because 70% + 25% = 95%, which meets the 90% threshold.
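The nucleus-sampling cutoff can be shown in a few lines; the probability table mirrors the 70% / 25% / 4% example above.

```python
def top_p_filter(probs: dict, p: float = 0.9) -> list:
    """Keep the smallest set of tokens (highest probability first)
    whose cumulative probability reaches p."""
    kept, total = [], 0.0
    for token, prob in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept.append(token)
        total += prob
        if total >= p:
            break
    return kept

# 0.70 alone is below 0.90; adding 0.25 reaches 0.95, so two tokens survive.
print(top_p_filter({"a": 0.70, "b": 0.25, "c": 0.04, "d": 0.01}))  # ['a', 'b']
```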

Temperature: Controls randomness. Low → greedy (always highest probability token), high → more exploratory.

1.10 SQL Join Types

Left join: all rows from left table, plus matching rows from right table; non‑matching right side gets NULLs.
Right join: all rows from right table, plus matching rows from left.
Full outer join: all rows from both tables, with NULLs where no match exists.
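The LEFT JOIN behaviour can be demonstrated with Python's built-in sqlite3 (the tables are invented; note that RIGHT and FULL OUTER JOIN need SQLite ≥ 3.39):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (name TEXT, dept_id INT);
CREATE TABLE dept (id INT, dept TEXT);
INSERT INTO emp VALUES ('Asha', 1), ('Ravi', 3);
INSERT INTO dept VALUES (1, 'AI'), (2, 'Data');
""")

# LEFT JOIN: every employee appears; Ravi's dept_id (3) has no match
# in dept, so his dept column comes back NULL (None in Python).
rows = conn.execute("""
    SELECT e.name, d.dept
    FROM emp e LEFT JOIN dept d ON e.dept_id = d.id
""").fetchall()
print(sorted(rows))  # [('Asha', 'AI'), ('Ravi', None)]
```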

1.11 Fibonacci Coding Exercise

The candidate wrote pseudocode in a thinking‑aloud style:

“Fibonacci is f(n) = f(n‑1) + f(n‑2). We’ll start from 0 and 1. I think a list would work. For i in range(n): if i==0: append 0; elif i==1: append 1; else: append list[-1] + list[-2]. I tried to run it and it gave output but needed debugging. Reason it didn’t print correctly: range wasn’t set up properly.”

1.12 Wrap‑up: Career Motivation

“I’ve been on this project for 1.5 years. It’s now in maintenance mode — mainly ServiceNow tickets. I want to explore more cutting‑edge agentic stuff, not just maintain what’s built.”

Section 2 – Reconstructed Interviewer Questions

Based on the candidate’s responses, the following questions were likely asked. They are presented in a logical order, paired with the relevant answer summary.

Q1: “Please introduce your current project and role.”
(See 1.1) The candidate described AIOBI, an agentic BI platform using NLQ, with RAG, Text‑to‑SQL, orchestrator, LangGraph, Azure AI Search, PostgreSQL.
Q2: “What is the system architecture?”
(See 1.1‑1.2) Backend DBs, LLM (GPT‑5.1), FastAPI middleware, React frontend; orchestrator classifies and routes queries.
Q3: “Can you walk me through how the Text‑to‑SQL agent works?”
(See 1.3) Detailed flow: rewriting node → meta‑prompt → SQL generation → static/dynamic tests → feedback loop.
Q4: “What challenges have you faced, especially around evaluation?”
(See 1.4) Curating golden datasets with domain experts, multi‑component test suites, accuracy metrics.
Q5: “How do you handle prompt changes without derailing outputs?”
The candidate alluded to iterative tuning and testing but did not give a structured answer (later critique).
Q6: “What is context engineering and how does it differ from prompt engineering?”
(See 1.5) Described context engineering as a subset; providing role, examples, do’s/don’ts, RAG context augmentation.
Q7: “How do you implement guardrails?”
(See 1.5) Two‑level: deterministic code‑based checks (e.g., for PII) and flexible LLM‑based checks (policy violations).
Q8: “What are the metrics you use for evaluating the Text‑to‑SQL and RAG agents?”
(See 1.4 & later parts) Accuracy/percentage correct. Mentioned hallucination, missing columns, wrong results. Did not name specific metrics like BLEU or Execution Accuracy.
Q9: “How do you deal with large database schemas when building prompts?”
(See 1.6) Neo4j metadata graph, topic‑based retrieval of relevant tables/columns to stay within context length.
Q10: “What about scalability and deployment?”
(See 1.7) API gateway + Kubernetes auto‑scaling, though admitted limited personal hands‑on.
Q11: “How do you decide which LLM version to use, and how do you manage upgrades?”
(See 1.8) Azure OpenAI partnership, upgrade to latest after retesting; older/mini models for cost; open‑source fallbacks like DeepSeek.
Q12: “Can you explain top‑k, top‑p and temperature?”
(See 1.9) Provided definitions with numerical example for top‑p.
Q13: “What are the differences between left, right, and outer joins in SQL?”
(See 1.10) Gave a correct, concise explanation.
Q14: (Coding exercise) “Write a Python function to generate the Fibonacci sequence up to n terms, using recursion.”
(See 1.11) Candidate attempted iterative list approach with debug commentary; did not use recursion as apparently requested.
Q15: “What is your motivation for leaving your current role?”
(See 1.12) Wants to move from maintenance to innovative agentic AI work.

Section 3 – Critique and Improved Answers

Below is a constructive evaluation of the candidate’s responses, highlighting weaknesses and offering a more polished, architect‑level answer.

3.1 Overall Delivery NEEDS WORK

  • Excessive fillers & rambling: The transcript contained many “yeah,” “I mean,” “like,” and tangential loops. An AI Architect must communicate with clarity and conciseness.
  • Lack of structure: Answers often wandered. For example, explaining the Text‑to‑SQL flow jumped between validation, rewriting, and guardrails without a clear narrative.
  • Vagueness on depth: When asked about scaling, the candidate said “I lack details” — unacceptable for an architect role. Better to say “While I haven’t provisioned the K8s cluster myself, the standard pattern we follow is…” and then describe the pattern confidently.
Better approach: Use the STAR method (Situation, Task, Action, Result) for complex descriptions. Speak slowly, think, then deliver a well‑formed paragraph without fillers.

3.2 Architecture Walkthrough FAIR

The candidate mentioned LangGraph, FastAPI, React, but left out crucial architectural diagrams and trade‑offs. As an architect, one should discuss why these choices were made.

Improved answer: “We selected a modular agentic architecture with LangGraph for its explicit state‑machine control. The orchestrator is a gating model that pre‑classifies NL inputs into RAG or Text‑to‑SQL branches using few‑shot prompts and a routing function. Each agent is encapsulated behind a FastAPI microservice, deployed on AKS for scale. We use Azure AI Search for vector retrieval (using Ada embeddings) and PostgreSQL for transactional SQL data. The frontend is a Next.js app that calls a unified /nlq endpoint. For observability, we integrate Phoenix/OpenTelemetry to track token usage, latency, and guardrail violations.”

3.3 Evaluation Answer INSUFFICIENT

The candidate only mentioned “accuracy” and “golden dataset”. An architect should know specific metrics: Execution Accuracy (EX), Exact Set Match (ESM), ROUGE‑L or BLEU for SQL, validation‑set coverage, hallucination rate, and for RAG, context precision/recall, faithfulness, answer relevancy. The answer lacked method naming and benchmark references.

Better answer: “For Text‑to‑SQL, we use Execution Accuracy (does the SQL produce the correct result set on a held‑out test DB) and Exact Set Match (comparing the result rows directly). We also compute SQL‑specific BLEU and ROUGE‑L against reference queries. For RAG, we measure context precision, context recall, faithfulness, and answer relevancy using LLM‑as‑a‑judge. We curate a golden dataset of 500+ question‑SQL‑answer triples. Additionally, we do component‑wise evaluations: chunking strategy (Hit Rate on top‑k), meta‑prompt accuracy, and visualization code correctness using unit test suites.”
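Execution Accuracy as described can be sketched with sqlite3 and an invented sales table: a predicted query counts as correct iff its result set matches the reference query's.

```python
import sqlite3

def execution_accuracy(conn, predicted_sql, reference_sql):
    """Execution Accuracy: the predicted SQL is correct iff it
    returns the same result set as the reference SQL."""
    try:
        pred = conn.execute(predicted_sql).fetchall()
    except sqlite3.Error:
        return False  # SQL that fails to execute counts as a miss
    ref = conn.execute(reference_sql).fetchall()
    return sorted(pred) == sorted(ref)  # order-insensitive comparison

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amt REAL);
INSERT INTO sales VALUES ('North', 100), ('North', 50), ('South', 70);
""")
ref = "SELECT region, SUM(amt) FROM sales GROUP BY region"
good = "SELECT region, SUM(amt) FROM sales GROUP BY region ORDER BY region"
bad = "SELECT region, COUNT(*) FROM sales GROUP BY region"
print(execution_accuracy(conn, good, ref))  # True
print(execution_accuracy(conn, bad, ref))   # False
```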

3.4 Context Engineering vs Prompt Engineering DECENT

The candidate correctly called context engineering a subset, but the distinction was fuzzy. He should have explained that prompt engineering is the overarching practice of designing the entire prompt structure, while context engineering specifically deals with injecting relevant external information (retrieved chunks, metadata, user intent tags).

Better answer: “Prompt engineering covers the system message, instruction templates, output format, and few‑shot examples. Context engineering is the discipline of selecting and formatting the dynamic contextual data that augments the prompt — such as RAG‑retrieved chunks, table schemas for Text‑to‑SQL, or conversation history. It’s about what information you pack and how you serialize it to minimise the gap between the model’s training distribution and the inference need.”

3.5 Guardrails Answer ADVANCED

The answer touched on code‑based vs LLM‑based guardrails, which is good. But an architect should mention concrete libraries (Guardrails AI, NVIDIA NeMo Guardrails) and cite examples like PII scrubbing, SQL injection prevention, and output schema enforcement. Also, the candidate missed the importance of input guardrails (e.g., refusing “DROP TABLE” instructions).

Better answer: “We implement a layered guard strategy. On the input side, a regex‑based filter blocks dangerous keywords (DROP, DELETE) and an LLM classifier detects jailbreak attempts. On the output, we use a PII anonymizer library (like Presidio) and a second LLM call that validates the response against our content policy. We also use structured output (JSON mode or function calling) to enforce that SQL statements don’t contain malicious clauses. For the Text‑to‑SQL agent, before execution we run a static analysis that ensures only SELECT queries pass through.”
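The two deterministic layers can be sketched in a few lines; the patterns below are illustrative, not the project's actual rules.

```python
import re

# Input-side blocklist for destructive intent (illustrative keywords).
BLOCKED = re.compile(r"\b(drop|delete|update|insert|alter|truncate)\b",
                     re.IGNORECASE)

def input_guard(user_query: str) -> bool:
    """Input guard: reject queries that request destructive operations."""
    return not BLOCKED.search(user_query)

def sql_guard(sql: str) -> bool:
    """Output guard: allow only a single SELECT statement through."""
    stripped = sql.strip().rstrip(";")
    return stripped.lower().startswith("select") and ";" not in stripped

print(input_guard("Show total sales by region"))   # True
print(input_guard("Please drop the users table"))  # False
print(sql_guard("SELECT * FROM cities"))           # True
print(sql_guard("SELECT 1; DROP TABLE cities"))    # False
```

An LLM-based classifier would sit alongside these checks to catch phrasings a fixed pattern misses.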

3.6 Large Schema Handling with Neo4j GOOD CONCEPT, POOR EXPLANATION

The idea of a topic‑driven metadata graph is innovative and architect‑level. However, the candidate struggled to articulate it clearly, using confusing “hierarchy in a graph” metaphors and failing to mention standard techniques like schema‑linking and query‑to‑schema tokenizer alignment. An architect would also mention alternatives like table‑selection via dense retrieval and why Neo4j was chosen (explicit relationship traversal, no need for embedding drift).

Better answer: “We built a semantic metadata graph in Neo4j where nodes represent topics (weather, traffic), tables, and columns, with edges for belongs‑to, references. When a query arrives, we perform a two‑hop traversal: first, we identify topic nodes relevant to the query using keyword matching and vector similarity on topic descriptions; then we traverse the graph to collect only the tables and columns linked to those topics. This prunes the schema context from ~10k tokens for a 25‑table database down to under 2k tokens. It also handles schema evolution gracefully — new tables just get new nodes. Compared to dense retrieval, the graph ensures consistent, deterministic schema linking, which is crucial for SQL accuracy.”
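The two-hop pruning can be illustrated with an in-memory dict standing in for the Neo4j graph; the real system would run an equivalent Cypher traversal, but the pruning logic is the same. Topic and table names are invented.

```python
# In-memory stand-in for the Neo4j metadata graph (illustrative only).
TOPIC_TO_TABLES = {
    "weather": ["cities", "temperature"],
    "traffic": ["routes", "incidents"],
}
TABLE_COLUMNS = {
    "cities": ["name", "state"],
    "temperature": ["city", "celsius", "recorded_at"],
    "routes": ["origin", "destination"],
    "incidents": ["route_id", "severity"],
}

def prune_schema(query: str) -> dict:
    """Two-hop pruning: match topics in the query, then keep only the
    tables and columns linked to those topics."""
    q = query.lower()
    topics = [t for t in TOPIC_TO_TABLES if t in q]
    return {tbl: TABLE_COLUMNS[tbl]
            for t in topics for tbl in TOPIC_TO_TABLES[t]}

# Only weather-related tables survive; the traffic branch is pruned away.
print(prune_schema("average weather temperature by city"))
```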

3.7 Fibonacci Coding Exercise MISMATCHED

The interviewer explicitly said “you have to use recursion.” The candidate wrote an iterative solution with a list and debugged it aloud. This shows a failure to listen and to translate a requirement into code. The correct recursive approach (with memoization due to exponential complexity) would be:

Correct implementation:
from functools import lru_cache

@lru_cache(None)
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)

def fib_sequence(n):
    return [fib(i) for i in range(n)]

print(fib_sequence(10))  # [0,1,1,2,3,5,8,13,21,34]
The candidate should have clarified the requirement (e.g., “first n numbers” vs “up to a maximum number”) and then presented a clean recursive solution, discussing time complexity and the importance of memoization.

3.8 SQL Joins SOLID

The explanation was accurate. However, the candidate hesitated and asked for the question to be repeated. For an architect, the immediate answer should have been crisp: “LEFT JOIN returns all rows from the left table and only the matches from the right; RIGHT JOIN is its mirror; FULL OUTER JOIN returns all rows from both, with NULLs where no match exists.” No need for the extra qualifiers. Still, the content was correct.

3.9 Career Motivation HONEST BUT NEGATIVE

“Maintenance mode… ServiceNow tickets” sounds like complaining. An architect should position the reason positively: “I’m eager to work on more complex, large‑scale agentic systems where I can apply my design skills to solve novel problems, and I see this role as aligned with that growth.”

Better answer: “My current project has moved into a steady‑state phase. I’m grateful for the learning, but I’m now seeking an opportunity where I can design next‑generation agentic architectures from scratch, tackle challenges like multi‑agent orchestration and autonomous tool use, and collaborate with a research‑focused team. Your opening seems perfectly aligned with that progression.”

3.10 Missing Topics GAPS

The candidate did not proactively discuss:

  • Observability tools: Only mentioned Phoenix and LangFuse vaguely. An architect should know OpenTelemetry, tracing, and metrics like faithfulness.
  • Cost optimization: No mention of token‑usage reduction, caching, semantic caching, or prompt compression.
  • Multi‑agent patterns: Although the project is multi‑agent, the candidate didn’t discuss debate, reflection, or plan‑execute patterns — all highly relevant for an agentic architect.
  • Security: Beyond guardrails, no discussion of RBAC, row‑level security in NLQ, or tenant isolation.

3.11 Suggested Talking Points for Future Interviews

  • Use concrete numbers: “Improved SQL accuracy from 82% to 93% by introducing table‑graph schema linking.”
  • Mention standard benchmarks: “We track BIRD, Spider, or WikiSQL metrics internally.”
  • Show impact: “Reduced prompt tokens per query by 60% using Neo4j metadata pruning.”
  • Discuss failure modes: “We handle ambiguous terms by engaging the user in a clarification loop, which improved first‑attempt success by 20%.”
  • Always bring the conversation back to architecture trade‑offs: why agentic vs single‑call, why LangGraph vs semantic kernel, why Azure vs AWS.

End of Report — Prepared by AI Interview Evaluator



Direct Plan vs Regular Plan: A Layman's Guide to Two Types of Investment Funds


Lessons in Investing    <<< Previously


Understanding the difference can save you lakhs over time — explained with ICICI Prudential BSE Sensex Index Fund as an example.

If you've ever looked at a mutual fund scheme, you've probably seen two versions: ICICI Prudential BSE Sensex Index Fund – Direct Plan – Growth and ICICI Prudential BSE Sensex Index Fund – Regular Plan – Growth. Both sound similar, and they invest in exactly the same stocks. So what's the difference, and why should you care? Let's break it down in simple terms.

1. The Backstory: Why Two Plans Exist

In 2013, India's market regulator SEBI made it mandatory for all mutual fund houses to offer a separate Direct Plan for investors who want to invest on their own, without going through a middleman.[12] Before this, everyone invested through distributors (agents, banks, brokers) who earned a commission. The Direct Plan was created to give investors a lower-cost option if they didn't need that middleman.

So today, every mutual fund scheme comes in two flavors:

  • Regular Plan – bought through a distributor/advisor.
  • Direct Plan – bought directly from the fund house (or through platforms that offer direct plans).

2. What's the Same? (The Similarities)

The Direct Plan and the Regular Plan of the same scheme are identical in almost every way:

  • Same portfolio: Both invest in the exact same set of stocks or bonds. In the case of the ICICI Prudential BSE Sensex Index Fund, both plans replicate the S&P BSE Sensex index, holding the same 30 stocks in the same proportions.
  • Same fund manager: The same person or team manages both plans.
  • Same investment objective: Both aim to track the BSE Sensex Total Return Index.
  • Same risk level: Both carry the same "Very High" risk rating.

3. The Key Difference: Cost (Expense Ratio)

The single most important difference comes down to cost. Every mutual fund charges an annual fee called the Total Expense Ratio (TER) to cover management, administration, and other expenses. This fee is deducted from your returns daily.

Here's how it works for the two plans based on publicly available expense ratio data[13][14][15]:

Plan Type    | Expense Ratio (approx.) | Why?
Direct Plan  | 0.20%                   | No distributor commission; you pay only the fund management fee
Regular Plan | 0.28% – 0.30%           | Includes a distributor commission (trail fee) embedded in the expense ratio

That 0.08%–0.10% difference may look tiny, but over many years, it adds up significantly — thanks to the magic of compounding. Think of it as a small leak in a bucket: you don't notice it day to day, but over time, a lot of water escapes.

In plain language: When you invest in a Regular Plan, a part of your returns is quietly being paid to the distributor who sold you the fund. In a Direct Plan, that money stays in your account and keeps compounding.

4. NAV Difference: A Result of Cost, Not Performance

You'll often notice that the Direct Plan has a slightly higher Net Asset Value (NAV) than the Regular Plan. This doesn't mean the Direct Plan performed better; it's simply because fewer expenses are deducted from it each day. The table below shows approximate NAVs for our example fund as per industry trackers[13][14][15].

Plan                                           | NAV (approx., Apr 2026) | Expense Ratio
ICICI Pru BSE Sensex Index Fund – Direct Plan  | ₹25.77                  | 0.20%
ICICI Pru BSE Sensex Index Fund – Regular Plan | ₹25.43                  | 0.30%

The ₹0.34 gap arises purely from the difference in expenses, not from any difference in the underlying investments.

5. The Real-World Impact: How Much Can You Lose or Gain?

Let's bring the numbers to life. Suppose you invest ₹10,000 per month via SIP for 20 years, and the fund delivers a pre-expense return of 12% per year. The table below shows the approximate final corpus under different expense scenarios, as illustrated in various cost-impact studies[16][17].

Scenario                        | Expense Ratio | Final Corpus (approx.) | Difference
Direct Plan                     | 0.20%         | ₹80.6 Lakh             | +₹5 Lakh vs Regular
Regular Plan (0.5% higher cost) | 0.70%         | ₹75.5 Lakh             |

Even a 0.5% annual cost difference can snowball into a ₹5 lakh gap over two decades. Over 10 years with a ₹15,000 monthly SIP, the gap can reach around ₹2 lakhs.

To put it succinctly: the expense ratio difference between Direct and Regular equity mutual fund plans can range from 0.4% to as high as 2.0%, with an industry average of about 1.2%. It's a significant drag that erodes wealth silently.[16][17]
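The compounding mechanics behind these figures can be sketched with the standard SIP future-value formula. The exact corpus amounts depend on the compounding convention the cited studies used, so this illustrates the mechanism rather than reproducing the table; the 12% return is the assumed pre-expense rate from the example above.

```python
def sip_future_value(monthly, annual_return, years):
    """Future value of a monthly SIP, compounded monthly,
    with contributions at the start of each month."""
    r = annual_return / 12
    n = years * 12
    return monthly * (((1 + r) ** n - 1) / r) * (1 + r)

gross = 0.12  # assumed pre-expense annual return
direct = sip_future_value(10_000, gross - 0.0020, 20)   # 0.20% expense drag
regular = sip_future_value(10_000, gross - 0.0070, 20)  # 0.70% expense drag
print(f"Direct-plan advantage over 20 years: ₹{direct - regular:,.0f}")
```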

6. Hands-On vs. Hand-Holding: The Service Difference

The choice isn't just about cost — it's also about who does the work. The table below consolidates features commonly highlighted by fund houses and financial portals[18][19][20].

Aspect       | Direct Plan                                                                            | Regular Plan
How you buy  | Through the AMC website, app, or direct platforms (e.g., Zerodha Coin, Groww, Kuvera)  | Through a distributor, bank RM, or broker
Advice       | You research and choose funds yourself                                                 | Advisor helps select funds, allocate assets, and handle paperwork
Support      | Limited; you manage transactions independently                                         | Distributor provides hand-holding, reminders, and behavioral coaching during market volatility
Cost         | Lower (no embedded commission)                                                         | Higher (commission built into TER)
Suitable for | DIY investors comfortable with online platforms and basic fund research                | Those who value professional guidance, especially beginners or busy professionals

Think of it this way: a Direct Plan is like buying medicines directly from a pharmacy after Googling your symptoms. A Regular Plan is like visiting a doctor — you pay a consultation fee, but you get expert advice and reassurance.

7. Switching and Tax Implications

If you already hold a Regular Plan and want to move to a Direct Plan, you can do so by submitting a switch request. However, this is treated as a "sale" for tax purposes, and you may have to pay capital gains tax on any profits.[21] So before switching, it's wise to consult a tax advisor and weigh whether the tax hit is worth the long-term savings.

8. Quick Comparison: ICICI Prudential BSE Sensex Index Fund at a Glance

Feature                 | Direct Plan – Growth       | Regular Plan – Growth
Expense Ratio (approx.) | 0.20%                      | 0.28% – 0.30%
NAV (Apr 2026)          | ₹25.77                     | ₹25.43
Minimum Lumpsum         | ₹100                       | ₹100
Minimum SIP             | ₹100                       | ₹100
Exit Load               | 0%                         | 0%
Fund Manager            | Same (Nishit Patel & team) | Same
Portfolio               | Replicates BSE Sensex      | Replicates BSE Sensex
Risk                    | Very High                  | Very High

9. Which One Should You Choose?

There's no universal "right" answer — it depends on your comfort level and goals.

  • Go for Direct Plan if: You are comfortable researching funds online, handling KYC and transactions yourself, and you don't need a distributor to guide you. You'll save on costs and keep more of your returns.
  • Choose Regular Plan if: You're new to investing, prefer someone to explain the options, help with paperwork, and provide emotional support during market ups and downs. The extra cost is the price of that service.

Some investors even use a mix: hold their core, long-term investments in Direct Plans, and use Regular Plans for more complex or advice-heavy situations.

10. The Bottom Line

In the battle of Direct vs Regular, there's no mystery. The two plans are like two doors to the same room — you'll end up in the same place (the same portfolio), but one door has a lower ticket price. The question is whether you want to pay for a guide to walk you through that door.

For the ICICI Prudential BSE Sensex Index Fund — and indeed for any mutual fund — the choice ultimately rests on how confident you feel managing your own investments. If you're a hands-on investor who values every percentage point of return, the Direct Plan is a powerful tool. If you'd rather have professional hand-holding, the Regular Plan's slightly higher cost is a fair trade.

Remember: the best plan is the one you'll stick with for the long haul.

References

  12. SEBI circular on introduction of Direct Plans — The Hindu BusinessLine.
  13. Expense ratio and NAV tracking for ICICI Pru BSE Sensex Index Fund — Economic Times.
  14. Fund expense ratio comparison data — ET Money.
  15. Mutual fund NAV and performance data — Value Research.
  16. Cost difference impact illustrations for Direct vs Regular plans — Value Research.
  17. Long-term SIP return differences — PersonalFN research.
  18. Features and benefits of Direct vs Regular plans — Kotak Mutual Fund.
  19. Direct Plan advantages explained — Bajaj Finserv AMC.
  20. Comparison guides on mutual fund plans — Moneycontrol.
  21. Tax implications of switching from Regular to Direct Plan — The Hindu BusinessLine.

Determine Your True Goals


See Other Book Summaries on "Goal Setting"
Download Book    Download Chapter    <<< Previously

Dated: 2026-Apr-30

What do I really want to do with my life?

I could write a long post to answer this question. I don't know for sure... Also, that's a very broad question. I want to do something for the betterment of my country India, for the betterment of my state Haryana, for the betterment of my city Gurgaon. I want to do something for the cause of education. I want to do something for the poor, towards alleviating poverty.

Decide What You Really Want

You start with your general goals and then move to more specific goals:
  1. What are your three most important goals in your business and career, right now?
  2. What are your three most important financial goals, right now?
  3. What are your three most important family or relationship goals, right now?
  4. What are your three most important health and fitness goals, right now?

My Answers

Three most important goals in business and career:
  • Get a project
  • Learn Agentic AI
  • Improve my soft skills
Three most important financial goals:
  • Be debt free
  • Build a financial cushion
  • Build a retirement corpus
Three most important relationship goals:
  • Improve my relations with my mother
  • Improve my relations with my sisters (Anu and Srishti)
  • Be a better friend to my friends
Three most important health and fitness goals:
  • Reduce weight to 65 kg
  • Do 40+ pushups daily
  • Do 10K+ steps daily

What are my three biggest worries or concerns in life, right now?

Where does it hurt?
  1. It feels like I have been running since 2008. That's 18 years. I don't know how and when it would end. Does it even have an end?
  2. Would I become a better person? Would I be able to work on my mistakes and shortcomings? Or would I be a spoilt child for the rest of my life?
  3. Would I ever have enough money?
Now think on the following points:
  1. What are the ideal solutions to each of these problems?
  2. How could I eliminate these problems or worries immediately?
  3. What is the fastest and most direct way to solve each problem?

Six Months To Live

Here is another goal setting question that reflects your true values. Imagine that you went to a doctor for a full medical check-up. Your doctor calls you back a few days later and says, “I have good news for you and I have bad news for you. The good news is that, for the next six months, you are going to live the healthiest and most energetic life you could possibly imagine. The bad news is that, at the end of 180 days, because of an incurable illness, you will drop stone dead.” If you learned today that you only had six months left to live, how would you spend your last six months on earth? Who would you spend the time with? Where would you go? What would you strive to complete? What would you do more of, or less of? When you ask yourself this question, what comes to the top of your mind will be a reflection of your true values. Your answer would almost always include the most important people in your life. Very few people in this situation would say, “Well, I’d like to get back to the office and return a few phone calls.”

The Instant Millionaire

Here is another goal setting question: “If you won a million dollars tomorrow, cash, tax free, how would you change your life?” What would you do differently? What would you get into or out of? What would you do more of or less of? What would be the first thing you would do if you learned today that you had just received one million dollars cash? This is a way of asking the question, “How would you change your life if you were completely free to choose? The primary reason that we stay in situations that are not the best for us is because we fear change. But when you imagine that you have all the money that you will ever need, to do or be whatever you want, your true goals often emerge. For example, if you were currently in the wrong job for you, the idea of winning a large amount of money would cause you to think about quitting that job immediately. If you were in the right job for you however, winning a lot of money would not affect your career choice at all. So ask yourself: “What would I do if I won a million dollars cash, tax free, tomorrow?”

Recap

Determine Your True Goals 1. Write down your three most important goals in life right now. 2. What are your three most pressing problems or worries right now? 3. If you won a million dollars cash, tax free, tomorrow, what changes in your life would you make immediately? 4. What do you really love to do? What gives you the greatest feelings of value, importance and satisfaction? 5. If you could wave a magic wand over your life and have anything you wanted, what would you wish for? 6. What would you do, how would you spend your time, if you only had six months left to live? 7. What would you really want to do with your life, especially if you had no limitations?


Wednesday, April 29, 2026

The Two Travelers Arriving At a New Village


All Buddhist Stories


This is a well-known Zen or Buddhist parable often titled "The Two Travelers" or "Moving to a New City." It illustrates that our perception of the world is a reflection of our own mindset, rather than an objective reality.

The Story
A traveler was moving to a new, unfamiliar village. Wishing to know if he would like living there, he approached an old man sitting by the side of the road at the entrance of the village.
"What kind of people live in this village?" the traveler asked.
The old man, who was a wise teacher, replied with another question: "What kind of people live in the village you have just come from?"
The traveler frowned, his face filled with resentment. "They were mean, cruel, rude, and dishonest. They were terrible people, and I'm glad to be leaving them behind."
The wise old man shook his head sadly. "I am afraid you will find the exact same kind of people in this village, too." Disappointed, the traveler walked away, intending to look elsewhere.
Later that same day, a second traveler passed by, heading toward the same village. He also approached the old man with the same question: "What kind of people live here? I'm thinking of moving here."
Again, the old man asked, "What kind of people live in the village you are leaving?"
The second traveler’s eyes softened. "They were wonderful, generous, kind, and helpful people," he said with a smile. "I'll miss them terribly, but I must move on."
The old man smiled back warmly. "You will find the exact same kind of people in this village, too."
The Moral Lesson
A bystander who heard both conversations was confused and asked the old man, "Why did you tell the first man the people here are awful, and the second man that they are wonderful?"
The old man replied, "Because people don't see the world as it is—they see it as they are."
  • Mindset is Reality: The first traveler carried his anger and negativity with him, and therefore, he only perceived negativity in others.
  • The World is a Mirror: If you approach the world with kindness and optimism, you will find kindness and optimism in others.
The lesson is that our experiences and relationships are often shaped by our own inner attitudes rather than the environment itself.
Tags: Buddhism,Video,