Thursday, April 16, 2026

Honoring My Managers: Deepika Saxena


My Meditations

Good morning!

In my 13 years of experience, I have worked under many managers. To keep this post manageable to write and to read, I am going to stick to a few of those managers (and sometimes to their qualities, rather than to them as people).

Let me start with my first manager, Deepika Saxena.

I was at NetEdge Computing in Noida for an internship, and I wasn’t doing particularly well with the Android app I was building. So one bright sunny day, around 10 or 10:30, Deepika called me into the conference room. She sat on one side of the oval table and I sat on the other, not facing her but a chair or two away.

She said something along these lines: “...We at NetEdge look for a few qualities in a person when we hire them, and so does every other company. And let me tell you that you do have those qualities in you...” She went on with the conversation, asking questions about my work, my college, education, interests, family, health and other things.

The qualities that she mentioned were:
- Attitude: maintain a positive attitude towards the work expected of you and the people you work with
- Competency: the most visible and most reported quality in the corporate world
- General Intelligence: not everyone can know everything all the time, but you can cultivate the basic ingredient that makes you knowledgeable, and that ingredient is general intelligence
- Commitment: companies and managers like to hire people they see a long-term future with
- Humility: Deepika used to tell people, “There is always going to be some person better than you...”

~~~

One work-related observation from my time at NetEdge Computing (under Deepika) and at Magic Software (under Vikas Kumar Gupta):

“In POCs and projects with research-like goals, you have to come back fast with a response to a task, ask, or query. Be even a little late, and your manager, your client, or your company may start to lose interest and begin questioning the requirement itself.”

Thanks for reading!
Stay tuned!

Dated: 2026-Apr-15, 9AM

‘Challenging, unrealistic’: Women gig workers in Noida stage protest; demand fixed working hours and basic facilities


See All on Minimum Wages And Cost of Living Adjustment (COLA)



The women described a system in which their earnings could fluctuate sharply based on customer ratings and strict punctuality metrics.

Sonakshi Thapa (27), who joined Urban Company to support her family, now dreads the commute — twenty-minute walks under the scorching sun or in heavy rain.

She is among the thousands of female gig workers who work as partners for platforms that provide at-home services.

Noida has been witnessing a series of protests by workers over wages in the last few days. A smaller group of women, all gig workers, gathered on Wednesday morning, but with a different demand: not more pay, but more predictable hours and basic dignity at work.

About 40 women who work with Urban Company, including Sonakshi, assembled outside a training centre in Noida Sector 60. They demanded an eight-hour shift, weekly time-off and access to essential facilities like drinking water and toilets.

The women on Wednesday said their concerns were rooted less in how much they earned than in how they were made to work. They said the timing was right, as demands put forward by several other workers were being heard by the UP government.

Sonakshi, who started working eight months ago, said workers are given 15 minutes to travel between appointments — a target she termed “unrealistic”. “It takes at least 20 minutes because we have to walk… it is challenging,” she said.

Thapa also pointed to challenges faced specifically by female workers. “We need to change sanitary pads. Every woman faces this issue,” she said. “We cannot do that in customers’ homes. We need proper facilities.”

She said that after deductions linked to ratings and attendance, her monthly earnings had dropped to about Rs 18,000 in recent months.

Another gig worker associated with the company for five months, Neha Devi (25), who earns about Rs 25,000 a month, echoed the same concern. “We are not asking them to increase our salaries. We are asking for fixed working hours and basic facilities.”

Devi said that although government norms prescribe an eight-hour workday, she and her colleagues are often required to work up to 11 hours. Absences on weekends, she said, can lead to disproportionately high deductions. “If my daily wage is Rs 833, why is Rs 1,000 cut?” she said.

The women described a system in which their earnings fluctuate sharply based on customer ratings and strict punctuality metrics. “Even if we are late by a minute, our daily earnings are slashed by half,” Devi added.

The protesting women also said that supervisors were often unreachable and, at times, allegedly threatened them with account deactivation.

The nature of their work — travelling from one customer’s home to another — also means a lack of access to basic amenities, they said. “We are told to use customers’ washrooms,” Devi said. “But many times, we are shooed away.”

Pinky Kumari (30) quickly unlocked her phone and opened WhatsApp. A series of texts to her supervisor read: “Sir please remove the cancellation”, “Only you could do it. Rs 1,000 would be cut.”

Showing the messages requesting a cancellation reversal, she said they went unanswered. “We were told during training that if we don’t cancel, our money won’t be deducted,” she said. “But no one listens.”

She added that while complaints raised by workers about customers rarely lead to action, even a minor complaint from a customer can result in immediate suspension of a worker’s account.

Wednesday’s protest was cut short later in the morning.

Police escorted the women onto buses and removed them from the site. A senior officer present at the spot said the gathering had allegedly been prompted by a “misleading” message circulating among workers and described it as part of a broader pattern of mobilisation seen in recent days.

Queries sent to Urban Company remained unanswered.

Ref
Tags: Indian Politics, Management

CITU seeks Rs 23,196 minimum wage for entire NCR


See All on Minimum Wages And Cost of Living Adjustment (COLA)

CITU seeks Rs 23,196 minimum wage for entire NCR; Faridabad on alert

CITU Haryana General Secretary Jay Bhagwan said the demand for Rs 23,196 as the base is not arbitrary.

The Center of Indian Trade Unions (CITU), affiliated to the CPI(M), on Tuesday intensified its demand for a uniform minimum wage across the National Capital Region (NCR), proposing a floor of Rs 23,196 per month. The demand comes amid escalating industrial unrest in Delhi-NCR’s manufacturing hubs, with the union calling for a mass mobilisation at all district collector offices on April 16.

CITU Haryana General Secretary Jay Bhagwan said the demand for Rs 23,196 as the base is not arbitrary and that a committee – comprising representatives from factory owners’ associations, trade unions, the state government, and the labour department – had arrived at the figure in a meeting on December 29, 2025.

“Whether it is Gurgaon, Panipat, Faridabad or Bahadurgarh, industrial associations are issuing statements claiming they cannot implement the new rates,” vice-president Vinod Kumar said, referring to the Haryana government’s April 9 revision of minimum wages to Rs 15,220, with effect from April 1, 2026.

Meanwhile, in response to the growing unrest, the Faridabad Police has issued a public advisory warning against any disruption of law and order. A police spokesperson stated that for the last two days, employees of Motherson Sumi Wiring India Limited in Sarai Khwaja have been protesting for a wage hike. To manage the situation, more than 1,500 police personnel have been put on standby.

In Gurgaon, too, police intervened at the ShadowFax company in Pathredi-Bilaspur after workers gathered to demand a salary hike on Tuesday. Ref
Tags: Indian Politics, Management

Wednesday, April 15, 2026

In Gurgaon, workers voice opposition over wages


See All on Minimum Wages And Cost of Living Adjustment (COLA)

‘New labour codes not in our interest’: In Gurgaon, workers voice opposition

Workers argued that this increase fails to keep pace with soaring inflation in consumer goods and housing.

Amid protests by factory workers in Noida’s industrial belt demanding fair wages, representatives of the Municipal Corporation Employees Union in Gurgaon pointed out that the new wages announced by the Haryana government are inadequate. On April 9, the state government notified a 35 per cent hike in minimum wages across categories — raising the monthly pay for unskilled workers from Rs 11,274 to Rs 15,220, and for skilled workers from Rs 13,704 to Rs 18,500. Workers, however, argued that this increase fails to keep pace with soaring inflation in consumer goods and housing.

Municipal union leader Vasant Kumar said, “How can one live on such wages in a city like Gurgaon? The new labour codes, LPG crisis and lack of proper work conditions are not in the interest of workers and we will continue our protests against them.”

As per the protesting workers, allied municipal and state employees have announced a three-hour work boycott on April 16 to protest against the government’s handling of workers’ issues across the state since the Manesar protests.

Workers are also opposed to the new labour codes introduced by the Centre. Explaining why, Centre of Indian Trade Unions (CITU) district president Suresh Nouhra said they allow for 12-hour shifts without overtime compensation and restrict unions. “A member will not be able to express themselves properly and the benefit will only be for corporates. They should be abolished. Factories are trying to start 12-hour shifts but thanks to the protest in Panipat, they could not for now.”

On February 23, at the Indian Oil Corporation Ltd’s Panipat refinery, at least 30,000 contractual workers staged protests demanding better wages and working conditions. Unions contend that the successful agitation in Panipat, located in Haryana’s crucial industrial corridor, has temporarily halted similar attempts across other manufacturing units in the state.

The municipal union members have been supporting a stir by fire department workers, who have been demanding regularisation and better pay while protesting against “untrained” drivers being deployed to man fire engines. The sit-in protest in front of the Sector 29 fire station in Gurgaon entered its seventh day on Tuesday. Around 200 municipal union members had joined the protest around noon.

Sahun Khan, president of the Gurgaon Fire Department Union, claimed the government’s temporary measure of deploying untrained Haryana Roadways drivers and inexperienced youth to operate fire engines poses a severe public safety hazard. “Roadways drivers and youths from training centres have no prior training in operating firefighting equipment,” said Joginder Karotha, State Secretary of the Sarv Karamchari Sangh Haryana. He warned that in the event of a major fire, the lack of trained personnel could lead to a substantial loss of life and property, for which the state government would be solely responsible.

Addressing the media at the protest site, union representatives reiterated their long-standing demands, which include free medical treatment for severely injured personnel, treating their recovery period as active duty, a monthly risk allowance of Rs 5,000 at par with police personnel, timely disbursement of medical, uniform and washing allowances, and regularisation of their employment.

Fire Safety Officer Jai Narayan acknowledged the manpower shortage, but said they have drivers and firemen on duty as of now. Ref

Wage hike protests in Noida (UP)


See All on Minimum Wages And Cost of Living Adjustment (COLA)

Wage hike protests: Noida's 10-year minimum pay rise half of Delhi, Gurugram; not enough to offset inflation


NEW DELHI: While minimum wages for unskilled workers have risen by just 42% in Noida/Ghaziabad over the last decade, the increases in neighbouring Delhi, Gurgaon and Faridabad have been close to 90%, data shows.

This also means that while wage increases have outpaced inflation in Delhi and its neighbouring Haryana towns, the hike has not even been enough to offset the price rise in Noida/Ghaziabad.

Analysis of past data on minimum wages shows that the minimum wage for unskilled workers employed in shops and establishments in Uttar Pradesh has increased from Rs 7,936 per month in October 2016 to Rs 11,314 per month, an increase of 42.6%.

Since 2016, the base year of the consumer price index for industrial workers (CPI-IW), prices in Ghaziabad and Gautam Buddha Nagar have risen 51.3%, more than the increase in nominal minimum wages. In effect, the real minimum wage is lower now than it was a decade ago. The situation was similar in Haryana before the increase in minimum wages announced on April 9, following protests in the state’s industrial belts around Manesar.

The minimum wage in the state had increased from Rs 8,070 per month in July 2016 to Rs 11,274 per month before April 9. This 40% increase was lower than the rise in prices affecting industrial workers during this period: 52.7% in Gurgaon and 48.1% in Faridabad.

Following the April 9 hike, the minimum wage is now 88.6% higher than in 2016. Delhi has the highest minimum wage in the NCR region. At Rs 18,456 per month, the minimum wage in the national capital has increased by nearly 90% as compared to Rs 9,724 per month in Oct 2016. Interestingly, the national capital also saw somewhat lower inflation during this period than its satellite towns in Haryana and UP, as consumer prices for industrial workers increased by 43.7%.
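The real-versus-nominal comparison driving this article can be reproduced in a few lines. This is a sketch using only the wage and inflation figures quoted above; the deflation formula (dividing nominal growth by CPI growth) is the standard one, and the function names are invented for illustration:

```python
def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

def real_wage_change(nominal_pct, inflation_pct):
    """Real change after deflating nominal wage growth by CPI-IW inflation."""
    return ((1 + nominal_pct / 100) / (1 + inflation_pct / 100) - 1) * 100

regions = {
    # region: (monthly wage 2016, monthly wage now, CPI-IW inflation % since 2016)
    "Noida/Ghaziabad (UP)": (7936, 11314, 51.3),
    "Delhi":                (9724, 18456, 43.7),
}

for region, (old, new, inflation) in regions.items():
    nominal = pct_change(old, new)
    real = real_wage_change(nominal, inflation)
    print(f"{region}: nominal {nominal:+.1f}%, real {real:+.1f}%")
```

Run as-is, this shows Noida/Ghaziabad's real minimum wage slightly negative over the decade while Delhi's real wage is up by roughly a third, which is the article's core point.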
Ref

Tuesday, April 14, 2026

The Battle For Voice In Digital India


See All News by Ravish Kumar


Press Freedom · Digital Rights · India

When the Government Bans the Joke, It Confesses Its Own Fear

India's ruling dispensation is no longer satisfied with silencing journalists. It has moved on to comedians, cartoonists, animators — and now, you.

There is something uniquely revealing about a government that is afraid of a joke about a cooking-gas cylinder. A comedian makes a reel. It goes viral. And within days, his Facebook page — built over years, his livelihood — disappears from India. No explanation. No notice. No due process. Just: gone.

That is where we are. That is what India's digital landscape looks like in 2026. Pages are being pulled down. YouTube channels suspended. News portals blocked. Cartoonists' work removed from the internet and, in quiet defiance, pinned to the walls of Delhi's Press Club. An animation studio's three videos banned, not for incitement, not for sedition — but for existing at a frequency the government finds uncomfortable.

Ask yourself one question: if the government were confident, why would it be afraid of a cartoon?

"If you cannot handle a question, banning the questioner is not governance. It is cowardice dressed up in the language of national security."

The Takedown Machine

The cases are no longer isolated. Over the last several weeks, a pattern has crystallised into something systemic. Comedian Rajeev Nigam's Facebook page was blocked in India. He told The Quint that he was not even informed which post triggered the action — his best guess is a satirical reel about LPG prices.[1] "My page will not be visible to people in India," he tweeted. "And this has not happened only to me."

He is right. The satirical outlet Molitix had its Facebook page restricted under Section 79(3) of the IT Act — again, without being told which content violated which rule, and without being given an opportunity to respond.[2] Its cartoons — which Indian audiences had freely viewed for years — were pulled from the internet and displayed physically at Delhi's Press Club, because that was the only screen the government couldn't reach.

News channel 4PM has been blocked and has approached the Delhi High Court. National Dastak was targeted. Dhruv Rathi's three animation videos were banned in India.[3] The Kerala-based MediaOne TV was shuttered for over a year on "national security" grounds — a claim the Supreme Court eventually tore apart, fined the government for, and reversed. But by then, the channel had lost journalists, revenue, and months of its institutional life.[4]

This is not a crackdown on disinformation. This is a crackdown on discomfort.

12 Years in power — zero press conferences held by PM Modi
1 hr New proposed deadline for platforms to remove flagged content (down from 2–3 hours)
79(3) IT Act provision cited to block pages — no reasons given, no right of reply

The American Mirror

Sometimes it takes a foreign government's bureaucratic paperwork to state plainly what domestic silence refuses to say. The Office of the United States Trade Representative submitted a report to the US Congress and President on March 31, 2025. Its finding on India was blunt: tech companies — YouTube, Twitter, Facebook, Instagram — are receiving content removal orders from Indian authorities at such volume and at such speed that they are unable to comply in time.[5]

The report further observed that the manner in which these orders are being issued appears to be politically motivated — not a response to genuine threats, but a routine mechanism of suppression.[6]

Read that carefully. The United States government — hardly a crusading civil liberties organisation — has put on record that India's content-removal regime looks like politics, not policy.

And yet, India's IT Minister Ashwini Vaishnaw has pointed to deepfakes and AI-generated misinformation as justification for these crackdowns. That argument might carry weight if the targets were deepfake factories. But Molitix is not a deepfake studio. 4PM is not an AI bot. Rajeev Nigam is a human being who made a joke about a gas cylinder. The AI defence is a red herring, and a transparent one.

"The US government's trade report said what Indian mainstream media would not: India's content-removal orders appear politically motivated."

The Law They Are Building

What is happening today through executive orders is about to be institutionalised through law. The government has proposed sweeping amendments to India's digital media regulations, with public consultations open until April 14. What the new draft would establish, if passed, is a surveillance architecture of remarkable scope.[7]

Under the proposed rules: social media platforms would be required to conduct pre-upload content checks; user data would have to be retained and handed over on government demand; and — most significantly — the Digital Media Ethics Code, previously applicable only to registered news publishers, would now extend to any individual who posts news or current affairs content on social media.[8]

That means you. The person who makes a reel about a politician's speech. The student who shares a video of a protest. The homemaker who reposts a news clip. All of you would fall under a government-supervised content-review mechanism. You could be reported, reviewed — and silenced — even without a formal complaint.

The Internet Freedom Foundation's Apar Gupta has warned that the draft's implications go far beyond what the government is advertising. This is not about cleaning up misinformation. This is about building the infrastructure for total digital control — and doing it while the public is still being told it is about deepfakes.

The Double Standard That Tells the Whole Story

Here is a simple question. Which channels were found guilty of spreading hate speech and communal content by broadcast regulators? The answer is the same channels that have been receiving thousands of crores of rupees in government advertising contracts. The News Broadcasters' Standards Authority levied fines. Anchors were censured. And yet — not one of these channels was taken off air for a single day.[9]

Meanwhile: independent journalists whose channels receive no government advertising find their Facebook pages blocked, their YouTube handles suspended, their income streams severed.

The principle being applied is not legality. It is loyalty.

Channels that ask no questions get crores. Channels that ask questions get shut down. This equation is not a conspiracy theory — it is the observable, documented reality of Indian media in 2025. The public has understood it. Viewership of so-called "godi media" has collapsed, not because of regulation, but because audiences stopped trusting them. But the government's response to losing the information war is not to earn trust — it is to delete the competition.

What Kind of Democracy Remains?

Narendra Modi has been Prime Minister for over a decade. He has not held a single press conference in twelve years.[10] Not one. He speaks in monologues — to a camera, on his terms, with no questioner, no follow-up, no accountability. That is his relationship with the free press: it does not exist.

And now, having converted mainstream television into a stage-managed applause machine, the government has turned its attention to the only spaces where inconvenient questions were still being asked — social media, independent YouTube channels, satirical pages, comedy reels.

Compare this to the United States, where Tucker Carlson — a deeply controversial commentator — openly accused the American government of being controlled by Israeli interests. No takedown. No criminal case. No page restriction. Trump's government dislikes him. But it has not deleted him.

In India, the threshold for deletion is a joke about a cooking-gas cylinder.

And when you silence that joke, when you pull down that cartoon, when you block that satirical page — you are not protecting national security. You are announcing, to your own people and to the world, that you cannot handle the truth. That you have run out of answers. That the only tool left in your hands is fear.

One hundred and forty crore people deserve a Prime Minister who can face their questions. What they have is a government that deletes the questions instead.

Facts

  • Comedian Rajeev Nigam's Facebook page was blocked in India without prior notice or stated reason; he told The Quint he suspects it was due to a satirical post about LPG cylinder prices.
  • Satirical outlet Molitix had its Facebook page restricted in India under IT Act Section 79(3); it was not informed which content violated any rule, nor given opportunity to respond. Its removed cartoons were subsequently displayed at Delhi Press Club.
  • The US Office of the Trade Representative submitted a report to Congress on March 31, 2025, stating that India's content-removal orders are issued at such frequency and speed that tech platforms cannot comply in time, and that the orders "appear politically motivated."
  • The Indian government proposed reducing the mandatory content-removal window from 2–3 hours to 1 hour, as reported by The Indian Express citing government sources.
  • The proposed amendments to India's digital media rules — open for public consultation until April 14, 2025 — would extend the Digital Media Ethics Code to any individual posting news or current affairs content on social media, not just registered publishers.
  • Dhruv Rathi's three animation videos were banned in India.
  • Kerala's MediaOne TV was banned for over a year by the central government citing national security. The Supreme Court overturned the ban and issued a strong rebuke to the government.
  • 4PM News channel has been restricted and has filed a petition before the Delhi High Court.
  • Prime Minister Narendra Modi has not held a press conference in over 12 years in power.
  • The News Broadcasters' Standards Authority has fined and censured pro-government TV channels for spreading hate speech and communal content — yet none were taken off air, and these channels continue to receive substantial government advertising.

Criticisms

  • The Modi government has weaponised IT Act provisions — particularly Section 79(3) — as a tool of political suppression, blocking independent journalists and satirists without due process, notice, or right of reply.
  • Twelve years in power without a single press conference is not humility — it is contempt for democratic accountability. A head of government who refuses to be questioned is not governing; he is ruling.
  • The government's use of "national security" as a blanket justification for banning channels like MediaOne — a claim the Supreme Court dismantled — reveals a pattern of using legal weaponry not to protect the nation but to protect the ruling party from scrutiny.
  • The proposed digital media rules, which would subject ordinary citizens' social media posts to government review and potential deletion, represent an authoritarian expansion of state power over public speech dressed up as a regulatory reform.
  • The double standard is indefensible: pro-government channels found guilty of hate speech by independent broadcast bodies face zero action and continue to receive thousands of crores in government advertisements, while independent platforms are blocked for asking factual questions.
  • Framing the crackdown on independent media under the banner of fighting deepfakes and AI misinformation is dishonest. The targeted accounts — Molitix, 4PM, Rajeev Nigam, Dhruv Rathi — are identifiable human journalists, satirists, and animators, not AI bots.
  • The government's IT Cell has industrialised disinformation and communal propaganda on social media for years. The selective enforcement of content rules against critics, while leaving this ecosystem untouched, is not neutrality — it is complicity.
  • By attacking the livelihoods of content creators — not just their speech — the government is deploying economic violence as a tool of censorship, targeting people's incomes and livelihoods to enforce silence.
  • BJP's silence — from party workers to MPs to Mohan Bhagwat — in the face of this press freedom assault is a form of institutional endorsement. If they genuinely believe in democracy, they must say so publicly and loudly.
  • A government so fearful of a cartoon, a comedy reel, and an animation video has already answered the question of whether it has the confidence to face its own people.

Sources & Citations

  1. Rajeev Nigam statement to The Quint regarding Facebook page restriction, 2025.
  2. Molitix statement on Facebook page ban, citing IT Act Section 79(3), reported by multiple outlets, 2025.
  3. Reports on Dhruv Rathi animation video bans in India, 2025.
  4. Supreme Court of India ruling overturning the central government's ban on MediaOne TV; Court reprimand on record.
  5. Office of the United States Trade Representative (USTR), 2025 National Trade Estimate Report on Foreign Trade Barriers, submitted to US Congress and President, March 31, 2025.
  6. Ibid., USTR Report on India section: characterisation of content-removal orders as appearing "politically motivated."
  7. Draft amendments to India's Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021; public consultation period ending April 14, 2025.
  8. Internet Freedom Foundation (IFF); Apar Gupta's public commentary on proposed draft rules, 2025.
  9. News Broadcasters Standards Authority (NBSA) orders against pro-government television channels for content violations, 2022–2024.
  10. Multiple documented instances confirming PM Modi's 12-year record of no formal press conferences; cited by domestic and international press freedom organisations.
Tags: Hindi, Ravish Kumar, Indian Politics, Video

Monday, April 13, 2026

Can AI Be Your Financial Advisor?


Other Articles on Retirement


Personal Finance · Artificial Intelligence

Can ChatGPT Plan Your Retirement?

AI already reads every financial news article ever published, works 24/7 and never charges a commission — and yet it can also give you advice that might get a human advisor arrested. Here's the full picture.

12 min read · April 2026

The $114 Trillion Question

There are roughly 15,000 financial advisors in the United States, collectively managing around $114 trillion on behalf of about 62 million clients. That sounds like a lot of coverage — until you start counting the tens of millions of people who need financial guidance but can't access it, because professional advisors are simply not interested in clients without large portfolios.

For the wealthy, the proposition is comfortable: your advisor reads the markets every morning, fields your calls by the afternoon, and is legally bound to act in your interest. For everyone else, the advice ecosystem barely exists. A single bad decision — panic-selling during a downturn, or misallocating retirement savings — can permanently alter the trajectory of a family's financial life.

This gap has prompted a serious question from researchers and economists: can large language models — AI systems like ChatGPT — actually step in and serve as trusted financial advisors for the people who need it most? The answer, as it turns out, is complicated, fascinating, and not quite what you'd expect.

Bad advice can do a lot of harm. And the people who need advice most are exactly those whom professional advisors are least interested in having as clients.

Loss Aversion: Why We Panic When We Shouldn't

Before evaluating whether AI can guide financial decisions, it helps to understand the central psychological weakness that any good financial system — human or artificial — must account for: our deeply irrational relationship with losses.

In behavioural economics, this is called loss aversion. We feel the pain of a financial loss far more acutely than we feel the pleasure of an equivalent gain. Losing ₹10,000 stings roughly twice as hard as gaining ₹10,000 feels good. This asymmetry isn't logical, but it's deeply human — and it drives some of the worst financial decisions people make.
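That "twice as hard" asymmetry has a standard formalisation: the prospect-theory value function of Tversky and Kahneman. The exponent and loss-aversion coefficient below are their commonly cited 1992 estimates, not figures from this article; the sketch just shows how the ₹10,000 example falls out:

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function (Tversky & Kahneman, 1992 estimates).
    Gains are dampened by diminishing sensitivity (alpha);
    losses are additionally scaled by the loss-aversion factor lam."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

gain = value(10_000)    # felt value of gaining Rs 10,000
loss = value(-10_000)   # felt value of losing Rs 10,000
print(f"gain feels like {gain:.0f}, loss feels like {loss:.0f}")
print(f"loss/gain intensity ratio: {abs(loss) / gain:.2f}")
```

With these parameters the loss is felt 2.25 times as intensely as the equal-sized gain, which is the "roughly twice as hard" the text describes.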

The clearest real-world illustration happened during the 2008 financial crisis. Between the fourth quarter of 2008 and the first quarter of 2009, the S&P 500 dropped roughly 50% from peak to trough. Retirement accounts that had been built up over decades were nearly halved overnight. Investors panicked, and they did what panicking investors do — they sold everything and moved to cash.

📉
The 2008 Panic in Numbers
A $100,000 retirement account invested in US equities shrank to roughly $50,000 by early 2009. Investors who sold and moved to cash locked in that 50% loss — and many did not re-enter the market for years, missing the recovery entirely.

Here is what makes this particularly painful: one money manager, five years after the crisis, was still sitting on the sidelines, asking whether it was "time to put the money back in the market." He had successfully avoided some of the final wave of losses in 2009 — but in doing so, he also missed one of the greatest bull markets in modern history. His clients paid dearly for his caution.
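A back-of-the-envelope comparison makes the cost of locking in the loss concrete. The $100,000 account and the 50% drawdown are from the text; the trough-to-recovery multiple is an assumption of this sketch (the S&P 500 roughly tripled from its March 2009 low over the following five years):

```python
# Illustrative only: the 50% drawdown is from the article; the
# recovery multiple is an assumed round number for the sketch.
start = 100_000
trough = start * 0.50               # the 2008-09 peak-to-trough drawdown

recovery_multiple = 3.0             # assumed trough-to-2014 recovery
stayed = trough * recovery_multiple # rode the drawdown and the recovery
sold = trough                       # moved to cash, locked in the loss

print(f"stayed invested: ${stayed:,.0f}")
print(f"sold at the bottom: ${sold:,.0f}")
```

Under these assumptions the investor who stayed ends well above the original $100,000, while the one who sold is frozen at half of it: the freak-out factor priced in dollars.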

This is the freak-out factor in action. Fear of further loss overrides rational calculation. And it is precisely the kind of irrational pattern that a well-designed AI financial advisor, one that is not subject to emotional panic, could theoretically help prevent.

The Psychological and Financial Traps Set for Ordinary Investors

Loss aversion is not the only force working against ordinary investors. There is an entire architecture of psychological and structural tricks — some accidental, some deliberate — that can drain money from those who are least equipped to defend against them.

The Arbitrage Problem in Shared Portfolios

Consider a scenario where different offices within the same financial institution are independently managing different pieces of a client's portfolio. One office, evaluating a binary choice between two assets, might rationally favour option A. Another office, looking at its own slice of the same portfolio, might independently favour option D. Locally, each decision seems defensible. But when you consolidate the books globally, the combined position creates a structural imbalance — an arbitrage opportunity that sophisticated actors can exploit to extract value from the portfolio. No single bad actor is needed. The system bleeds money simply because the left hand doesn't know what the right hand is doing.
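The fix is mechanical: consolidate the books before judging the risk. The sketch below uses entirely hypothetical office names, assets, and quantities to show how two locally defensible books can net out to an unhedged exposure:

```python
# Each office's book looks balanced in isolation, but netting them
# reveals a one-sided position. Offices, assets, and quantities are
# hypothetical.

def consolidate(*books):
    """Net positions across independently managed books."""
    total = {}
    for book in books:
        for asset, qty in book.items():
            total[asset] = total.get(asset, 0) + qty
    return total

office_a = {"asset_A": +1_000, "asset_D": -1_000}   # long A, short D
office_b = {"asset_A": -1_000, "asset_D": +2_000}   # short A, very long D

net = consolidate(office_a, office_b)
print(net)  # asset_A nets to zero, but asset_D carries +1,000 of exposure
```

Neither office can see the imbalance from its own slice; only the firm-wide view exposes the position a sophisticated counterparty could trade against.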

The Ultimatum Game and How Humans Are Exploited

Behavioural economists use a tool called the ultimatum game to expose how people respond to perceived unfairness in financial transactions. The setup is simple: one person proposes how to divide a sum of money, and the other either accepts the split or rejects it entirely — in which case neither party receives anything.

Rational economic theory would predict that the receiving party should accept any non-zero offer, since something is always better than nothing. But that is not what happens in practice. People routinely reject low offers, even at personal cost, to punish what feels like an unfair proposal. Research consistently shows that offers below 40% of the total are rejected most of the time.
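The game is simple enough to simulate. The responder's rejection rule below (reject anything under 40% of the pot) mirrors the behavioural finding described above; the specific proposals are illustrative:

```python
# Minimal ultimatum-game simulation with a human-like responder who
# punishes unfair splits, even at personal cost.

def responder_accepts(offer, pot, threshold=0.40):
    """Human-like rule: spite beats free money below the fairness threshold."""
    return offer / pot >= threshold

def play(proposer_share, pot=100):
    """Proposer keeps proposer_share; responder gets the rest or vetoes."""
    offer = pot - proposer_share
    if responder_accepts(offer, pot):
        return proposer_share, offer
    return 0, 0   # rejected: both parties walk away with nothing

print(play(50))   # fair split       → (50, 50)
print(play(90))   # greedy proposal  → (0, 0): punished despite the free 10
```

A purely "rational" responder would accept the 90/10 split; the simulated human rejects it, which is exactly the behaviour real subjects exhibit.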

Financial products exploit this psychology constantly. A product that offers you a 25% chance of losing ₹2,40,000 and a 75% chance of gaining ₹7,60,000 can sound appealing in isolation — but the framing, the presentation, and the sequence of information can be manipulated to make the same deal seem terrifying or irresistible depending on how it is packaged. Most investors have no reliable way to detect this manipulation.
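The arithmetic underneath the product above is fixed regardless of how it is framed. A quick expected-value calculation shows why the deal is attractive in isolation, even though the right packaging could make it feel terrifying:

```python
# Expected value of the product described above: a 25% chance of
# losing ₹2,40,000 against a 75% chance of gaining ₹7,60,000.

p_loss, loss = 0.25, -240_000
p_gain, gain = 0.75,  760_000

expected_value = p_loss * loss + p_gain * gain
print(expected_value)  # → 510000.0 (₹5,10,000 expected gain per play)
```

The expected value is strongly positive, yet the same numbers, presented loss-first or anchored against a worse alternative, can trigger a rejection — or an over-eager acceptance by someone who cannot absorb the 25% downside.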

Complexity as a Weapon

Complicated financial engineering is not always a sign of sophistication. It can also be a deliberate strategy to obscure what is actually happening inside a product. When the mechanics are opaque, it becomes nearly impossible for an ordinary investor to identify whether the risks they're taking on are proportionate to the returns they're being promised. Complexity, in these cases, is not a feature. It is a fog.

When You Can Trust AI With Your Money

With those risks in mind, let's examine what AI systems — specifically large language models — actually do well in a financial context.

1. Always Available

A large language model doesn't sleep, doesn't take holidays, and is never on hold. It is available at 3am when you're lying awake worried about your portfolio — precisely the moment when panic-driven decisions are most likely to happen.

2. Comprehensively Informed

The best human financial analyst can read dozens of research reports a week. An AI system can, in principle, have ingested every piece of financial news, every earnings report, and every academic paper on investing ever published.

3. No Hidden Incentives

A human advisor who earns commissions on the products they recommend has an incentive that may conflict with your best interests, even unconsciously. An AI has no commission structure. It can, in principle, be designed purely to optimise for you.

4. Better General Advice Than Some Professionals

When researchers tested GPT-4 against the same financial scenario used to evaluate human advisors, the AI's response was, in multiple instances, more comprehensive and more sensibly structured than advice received from licensed professionals.

The implication is clear: for the large majority of people who have never had access to any financial advisor at all, a well-aligned AI system could represent a massive improvement over the current reality of nothing.

When You Should Not Trust AI With Your Money

The capabilities above are real — but they come with meaningful caveats that are easy to overlook when the technology feels impressive.

AI Does Not Know Your Life

Good financial advice is deeply personal. Your age, risk tolerance, employment stability, health costs, family obligations, short-term liquidity needs, and a dozen other factors all bear on what the right advice looks like for you specifically. A generalised recommendation — even a smart-sounding one — that ignores your individual circumstances is not just unhelpful. In some regulatory contexts, dispensing such advice to all clients indiscriminately could constitute a legal violation of the duty to account for personal needs.

AI Can Hallucinate Confidence

Large language models are, at their core, pattern-completion systems. They generate responses that sound plausible and authoritative — but that surface-level confidence has no relationship to actual accuracy. An AI can cite a non-existent regulation, misquote a fund's historical returns, or describe a market mechanism incorrectly with exactly the same tone it uses when it is right. In medicine or law, this is a serious problem. In finance, it can be catastrophic.

AI Has No Fiduciary Duty — Yet

In financial regulation, a fiduciary is someone legally required to act in your interests ahead of their own. Your portfolio manager, if licensed as a fiduciary, can be held accountable — fined, sued, or de-licensed — for bad advice. An AI system, as of now, carries no such legal accountability. If it gives you terrible advice and you lose money, there is no straightforward avenue for recourse. The technology has outpaced the legal framework designed to protect people from it.

The Mistakes AI Makes — and Why They Matter

When researchers tested an early version of ChatGPT by asking what a person should do after losing 25% of their savings, the AI produced a list. Some of it was sensible — advice to stay calm, avoid impulsive decisions. But buried in that list were two recommendations that illustrated the danger clearly.

Mistake #1
Rebalance your portfolio — This advice might be appropriate in a stable, liquid market. But in the middle of a sharp, ongoing drawdown with thin liquidity, forced rebalancing can crystallise losses and create additional transactional costs at the worst possible moment.
Mistake #2
Consider dollar-cost averaging — In theory, buying more at lower prices can reduce your average cost basis and improve long-term outcomes. But recommending this as blanket advice to every investor who has lost money is dangerous. Some of those investors cannot afford further exposure. Some need liquidity. Applying this suggestion uniformly, without individual context, is not just inadvisable — it is the kind of recommendation that, if made by a licensed human advisor to all clients simultaneously, could trigger regulatory action.

The upgraded GPT-4 performed significantly better in the same test, producing a response that was thoughtful, nuanced, and — according to researchers — better than advice that some real people had received from licensed professionals. But "better than a bad human" is not the same as "good enough to trust with your retirement." The margin for error in financial planning is narrow, and the stakes are high.

The Alignment Problem: Making AI Truly Trustworthy

Even if an AI system has the domain knowledge and the data, the deeper challenge is whether it can be made to reliably act in your interest — and only in your interest. In computer science, this is known as the alignment problem: the challenge of ensuring that an AI system's behaviour is aligned with the values and goals of the humans it serves.

Researchers are beginning to use behavioural economics tools to test how well AI systems are actually aligned with human intuitions. The ultimatum game is one such tool. When you run a large language model through thousands of iterations of this game, you can map its negotiating behaviour and compare it against established norms of human fairness. Does the AI make offers that most humans would consider fair? Does it reject exploitative proposals? Does it behave consistently, or can its responses be gamed?

Some current models perform reasonably well on these tests. Others do not. The point is that measurable alignment testing is becoming possible — which means that, over time, it may become possible to certify that an AI financial system is genuinely trustworthy in a rigorous, verifiable way, much the way that we certify human advisors through licensing exams and codes of conduct.

We teach children the golden rule on the playground. The challenge now is teaching the same principle to software — and verifying that it actually learned it.

The Real Opportunity: Serving Those Who Are Left Out

Wealthy investors already have everything AI promises. They have advisors who know their portfolios in real time. They have analysts reading the news. They have on-call access and institutional-grade research. For them, an AI advisor is a marginal improvement at best.

The transformative potential is elsewhere. It is in the middle-class family saving for their child's education without knowing whether they're taking too much risk. It is in the young professional who just started a SIP and doesn't understand what happens to it during a market crash. It is in the retiree who doesn't know whether their corpus will outlast them.

These are the people for whom professional financial advice is economically inaccessible — not because it doesn't exist, but because the economics of financial advisory make them unprofitable clients. An AI system that can provide personalised, contextually appropriate, ethically sound financial guidance to millions of such individuals simultaneously would represent one of the most consequential welfare improvements technology has delivered in recent memory.

That is the ambition. The gap between ambition and reality is still real, but it is narrowing.

Where Does This Leave Us?

The question of whether AI can plan your retirement does not have a clean yes-or-no answer today — and that is actually the most honest thing that can be said about it.

What we know is this: AI systems already demonstrate domain-level financial competence that in some cases matches or exceeds that of human professionals. They are available when human advisors are not. They can be designed without the conflicts of interest that complicate human advice. And they are improving rapidly.

What we also know is that current AI systems can give blanket advice that ignores your individual circumstances. They can be wrong with complete confidence. They carry no legal accountability. And the alignment challenge — ensuring that these systems act in your interest and not in the interest of whoever built or deployed them — has not been fully solved.

The most responsible position is to treat AI financial tools the way you would treat a well-read, always-available, but unlicensed friend who happens to know a lot about investing. Their perspective is valuable. Their general knowledge is useful. But before you make a major financial decision, you still want a human professional who knows your circumstances, who is legally bound to protect your interests, and who can be held accountable if they get it wrong.

For now, the best use of AI in personal finance is not as a replacement for good advice — it is as a tool that helps you ask better questions, understand your options more clearly, and resist the panic-driven impulses that behavioural economics has shown, time and again, to be the single greatest threat to your financial wellbeing.

The retirement planning AI is coming. Whether it will be trustworthy enough to fully replace human judgment is the question researchers are still working to answer.

Tags: Investment, Artificial Intelligence