Wednesday, May 6, 2026

How to survive layoffs?



Career · Jobs · AI Economy

The Pink Slip
Survival Guide

Layoffs are no longer one-off crises — they have become the background hum of the modern economy. Here is how to keep your footing when the ground shifts beneath you.

May 2026 · 8 min read

When Job Cuts Become the Norm

It is hard to scroll through a business newsfeed today without stumbling onto another headline about corporate restructuring. Cognizant, the US-based IT giant, is reportedly planning to eliminate around 15,000 positions globally — with the heaviest cuts expected in India. Coinbase, the American crypto exchange, has announced it is slashing 14% of its workforce. PayPal is trimming 20% of its staff. These are not fringe cases. They are part of a gathering wave: more than 90,000 tech employees have already been handed pink slips so far in 2026 alone.

The common thread running through almost every announcement is artificial intelligence. A recent Gartner survey found that 80% of companies deploying AI are actively reducing headcount. This is the uncomfortable truth the industry is dancing around: AI is not just supplementing human work — in many roles, it is replacing it outright.

"Artificial intelligence is not assisting jobs. It is redesigning them — and in many cases, eliminating them entirely."

But here is what nobody tells you in those press releases: a layoff does not merely remove a salary. It dismantles routine. It chips away at confidence. One terse email, and suddenly a person is asking themselves questions they have no business asking — Was I not good enough? Almost always, competence had nothing to do with it. Structural forces — automation cycles, cost pressures, macro headwinds — do not discriminate between the talented and the mediocre. They sweep wide.

What follows is not a lament about the job market. It is a practical guide for navigating it.

By the Numbers

Company                 | Sector           | Jobs Cut         | Status
Cognizant               | IT Services      | ~15,000 globally | Reported
Coinbase                | Crypto / Fintech | 14% of workforce | Announced
PayPal                  | Payments         | 20% of staff     | Planned
Tech sector (2026 YTD)  | Multiple         | 90,000+          | Confirmed

Statistic                                 | Finding | Source
Companies using AI that are cutting staff | 80%     | Gartner survey
Job seekers who apply online              | ~82%    | Industry data
Online applicants who land an interview   | Only 2% | Industry data

Five Steps to Come Back Stronger

These five principles are not feel-good platitudes. Each one addresses a specific, practical failure mode that makes the difference between a three-month gap on a résumé and a genuine career pivot.

01
Accept It — But Don't Resign to It

Denial is comfortable. It is also dangerous. Grieve the job — that is entirely valid — but do not pitch a tent inside the grief. There is a critical difference between acknowledging what happened and surrendering to it. You cannot fight a storm you refuse to see.

02
Upskill With Urgency

The same companies doing the cuts are investing heavily in AI, automation, and digital transformation. That is your roadmap. Prompt engineering, data analysis, cloud certifications, product thinking — pick a lane and persist. Your degree got you hired once. Your skills keep you employed.

03
Rebuild Your CV for Impact

Most résumés are packed with buzzwords and long paragraphs that say almost nothing. Lead with a single sentence that tells a hiring manager who you are in under ten seconds. Highlight impact, not duties. Recruiters do not read résumés — they scan them for value. Make sure that value is unmissable.

04
Network — For Real

Eighty-two percent of job seekers apply online. Only 2% land an interview that way. Think of online applications as crowded highways and referrals as fast lanes. Reach out to former managers, ex-colleagues, and mentors. Most roles are filled before they are ever posted publicly. Referrals are not just helpful — they are essential.

05
Protect Your Mental Health — Fiercely

A layoff can shake your identity to its core. The morning anxiety. The compulsive LinkedIn scrolling. The toxic comparisons to someone else's highlight reel. None of that helps. Build a daily routine. Move your body. Eat well. Reach out to people you trust. The currency of today's job market is adaptability — and adaptability requires a mind that is not running on empty.

A Note on the Mental Weight of Job Loss

It bears saying plainly: losing a job is a genuinely difficult life event, not merely a logistical inconvenience. Identity, self-worth, and social connection are all tied up in work in ways we rarely acknowledge until it is gone.

Practical habits that measurably help:

  • Maintain a structured daily schedule — even without a 9-to-5 anchoring it
  • Limit passive social media consumption, especially LinkedIn during vulnerable hours
  • Exercise consistently — even short walks count
  • Designate fixed job-search hours rather than letting anxiety turn the search into an all-day activity
  • Talk to a friend, therapist, or peer group — isolation amplifies distress

Adaptability Is the New Job Security

Every wave of technological disruption — mechanization, the internet, mobile — has produced its equivalent of today's headlines. Workers who treated the disruption as a signal to retool fared significantly better than those who waited for normalcy to return. It never quite returned; it evolved.

The AI transition is different in speed and scale, but not in kind. The roles being automated today are, broadly, those defined by repetition and pattern-matching. The roles being created require judgment, creativity, contextual understanding, and the very human capacity to manage AI systems effectively. The gap between those two categories is the upskilling opportunity hiding inside every redundancy notice.

People who use this moment to learn, reconnect, and rebuild do not just survive layoffs. They tend to emerge into roles that are more interesting, better compensated, and more durable than the ones they left.

What to Remember

  • Layoffs are increasingly structural, not personal — AI-driven restructuring affects entire functions, not just individuals.
  • Online job applications have a staggeringly low conversion rate (2%). Invest in human networks instead.
  • Upskilling is not optional — it is the direct counter to automation. Target skills the companies cutting jobs are simultaneously buying.
  • A well-crafted CV leads with impact, not job descriptions. Ten seconds is all a recruiter typically gives it.
  • Mental health is not a soft footnote — it is the precondition for effective job searching. Neglect it and every other step suffers.
  • The instinct to withdraw during a layoff is natural and almost always counterproductive. Visibility and connection are what accelerate re-employment.

Every résumé in circulation today once belonged to someone who was starting over. Success is not a checklist, and a career is not a competition scored against other people's LinkedIn timelines. Make room for fear. Make room for uncertainty. Just do not let either of them take the wheel.

The job market will reset — it always does. The question is only whether you will be ready when it does.


Tags: Layoffs, Interview Preparation

Interview at Deloitte for Lead AI Engineer Role (2024 Jul 30)

Interview Reconstruction & Critique

Lead AI Engineer @ Deloitte

One-sided transcript analysis — candidate answers reconstructed with inferred questions

Candidate: Ashish
Company: Cognizant
Experience: 10 years (8 DS + 2 SE)
Location: Gurugram

Organized Transcript

Domain: Data Science / GenAI
Languages: Python · JavaScript · SQL
Frameworks / Tools: scikit-learn · Keras · TensorFlow · HuggingFace · OpenAI · PySpark · Azure · LangChain / GenAI
Topic 01 · Introduction

Tell me about yourself and your experience.

Answer

10 years total experience at Cognizant — 8 years in data science, 2 years in software engineering. Core expertise covers machine learning, deep learning, and NLP. Currently working on generative AI projects. Languages include Python (self-rated 7/10), JavaScript, and SQL. Python packages: scikit-learn, Keras, TensorFlow, HuggingFace, and OpenAI/GPT models.

Topic 02 · Domain Coverage

Have you worked in computer vision or time series forecasting?

Answer

No experience in computer vision. Studied time series forecasting but hasn't worked on it professionally. Has hands-on experience with RNN and LSTM architectures.

Topic 03 · Deep Learning — LSTM

Can you explain the LSTM architecture and how it differs from RNN?

Answer

LSTM stands for Long Short-Term Memory — a type of neural network cell embedded in a deep neural network. RNNs (Recurrent Neural Networks) were simpler but suffered from the vanishing gradients problem, making it difficult to capture long-range dependencies. LSTMs solved this via a memory state cell and a forget gate that learns, at each time step, whether to retain or discard a piece of information. After LSTMs, GRU (Gated Recurrent Unit) was introduced as a simpler variant, followed by bidirectional LSTM architectures that process sequences in both directions.
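The gate mechanics described above can be sketched as a single forward step in plain NumPy. This is an illustrative toy, not code from the interview; the hidden size, weight initialization, and sequence length are arbitrary assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step. W maps [h_prev, x] to the four gate pre-activations."""
    z = W @ np.concatenate([h_prev, x]) + b
    H = h_prev.size
    f = sigmoid(z[0:H])        # forget gate: what to discard from the cell state
    i = sigmoid(z[H:2*H])      # input gate: what new information to write
    g = np.tanh(z[2*H:3*H])    # candidate cell values
    o = sigmoid(z[3*H:4*H])    # output gate: what to expose as hidden state
    c = f * c_prev + i * g     # updated cell state (the "long-term memory")
    h = o * np.tanh(c)         # updated hidden state
    return h, c

rng = np.random.default_rng(0)
H, X = 4, 3                    # hidden size, input size (arbitrary)
W = rng.normal(0, 0.1, (4 * H, H + X))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):             # run a short random sequence
    h, c = lstm_step(rng.normal(size=X), h, c, W, b)
print(h.shape, c.shape)        # (4,) (4,)
```

The cell state `c` is what lets gradients flow across long sequences: the forget gate multiplies it element-wise instead of pushing it through a squashing nonlinearity at every step, which is exactly the vanishing-gradient fix the answer refers to.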

Topic 04 · ML Project — Anomaly Detection (Amex)

Tell me about a traditional ML project. What data volumes have you worked with?

Answer

Led an anomaly detection project for American Express. Data: 500 million records from a mainframe system, ingested into Amex's Cornerstone Cloud using PySpark. Three models were explored during the project lifecycle: (1) Statistical modeling based on log-likelihood to flag low-probability data points. (2) Autoencoder (POC phase) — anomalies yield high reconstruction error because the model reconstructs them toward the trained distribution. (3) Isolation Forest — a tree-based approach where anomalous points can be isolated with fewer splits. K-medians (distance-based) was also tried but rejected due to high latency. Isolation Forest was the final choice, offering better interpretability, lower latency, and good accuracy.
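The Isolation Forest decision described here can be reproduced in miniature with scikit-learn. The data below is synthetic (a dense cluster plus a few injected outliers), not the Amex dataset, and the parameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=5.0, scale=1.0, size=(1000, 2))  # dense cluster
outliers = rng.uniform(low=15, high=20, size=(10, 2))    # far-away points
X = np.vstack([normal, outliers])

# contamination is the expected anomaly fraction; here we know it's ~1%
model = IsolationForest(n_estimators=100, contamination=0.01, random_state=0)
labels = model.fit_predict(X)  # -1 = anomaly, 1 = normal

# how many of the 10 injected outliers were flagged
print((labels[-10:] == -1).sum())
```

The intuition matches the paper-folding analogy from the next answer: points far from the cluster fall into a leaf after very few random splits, so their average path length across the trees is short and their anomaly score is high.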

Topic 05 · Stakeholder Communication

How did you explain model selection to a non-technical governance team?

Answer

Used a physical paper-folding analogy to explain Isolation Forest: a point surrounded by other points requires many folds to isolate, whereas an outlier point can be isolated with a single fold. For the autoencoder, explained reconstruction error by example — if a model is trained on values clustering around 5, a value of 10 will be reconstructed back to ~5, producing a high reconstruction error (~5), versus ~1–2 for normal points, which signals an anomaly.
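The reconstruction-error arithmetic in that explanation can be made concrete with a toy sketch. The "model" below is just a pull toward a learned center, standing in for a trained autoencoder; the numbers are illustrative only.

```python
# Toy illustration of the reconstruction-error intuition: a model trained on
# values clustered around 5 reconstructs every input back toward that cluster.
def reconstruct(x, learned_center=5.0, pull=0.9):
    # Stand-in for an autoencoder: drag the input 90% of the way to the center.
    return x + pull * (learned_center - x)

for value in [5.3, 4.8, 10.0]:
    error = abs(value - reconstruct(value))
    print(value, round(error, 2))
```

Normal points near 5 come back almost unchanged (tiny error), while the outlier at 10 is dragged toward 5 and shows a large error, which is the anomaly signal.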

Topic 06 · PySpark

How comfortable are you with PySpark?

Answer

Comfortable with PySpark overall, though slightly out of touch with RDDs. PySpark offers three interfaces: Resilient Distributed Datasets (RDDs), SQL DataFrames, and the pandas API on Spark. Deployment in the Amex project leveraged PySpark UDFs (User Defined Functions) integrated with Hive.

Topic 07 · Deployment Experience

How have you deployed ML models? Walk me through a deployment architecture.

Answer

Deployment was handled by a dedicated team; personal exposure is limited to architecture understanding. Described a generative AI architecture: an OpenAI GPT-4 model exposed via an Azure endpoint → consumed by Azure Functions (business logic / API layer) → connected to a vector database or SQL database → surfaced via a web, mobile, or bot interface. In the Amex project, deployment involved Hive + PySpark UDFs but the candidate was not directly responsible.

Topic 08 · Current Projects

What are you currently working on?

Answer

Two GenAI projects at Cognizant (non-billable): (1) Jira Epic & Story Writer for Elevens Health (a health insurance company) — uses a requirements document and historical Jira dumps as RAG context to generate epics and user stories via an LLM, and also leverages a knowledge graph and vector database. The future roadmap includes fine-tuning the model on prompt-completion pairs. (2) A second GenAI project, not detailed in the interview.

Topic 09 · ML Concepts — Feature Scaling

When is feature scaling required, and which algorithms need it?

Answer

Feature scaling is required when features are on different scales. Distance-sensitive algorithms — KNN, SVM, and neural networks — will incorrectly treat large-magnitude features (e.g., salary in the tens of thousands) as more important than small-magnitude features (e.g., age in the tens). Scaling normalizes features to the same range (e.g., 0 to 1 via min-max). Tree-based algorithms like Decision Trees and Random Forests do not require feature scaling.
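A quick sketch of the two common scalers on a synthetic age/salary example, assuming scikit-learn is available:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Two features on very different scales: age (tens) and salary (tens of thousands)
X = np.array([[25,  40_000.0],
              [35,  60_000.0],
              [45,  90_000.0],
              [55, 150_000.0]])

mm = MinMaxScaler().fit_transform(X)     # each column squeezed into [0, 1]
std = StandardScaler().fit_transform(X)  # each column: zero mean, unit variance

print(mm.min(axis=0), mm.max(axis=0))    # [0. 0.] [1. 1.]
print(np.abs(std.mean(axis=0)).round(6)) # [0. 0.]
```

Without scaling, a Euclidean distance between two rows is dominated almost entirely by the salary column; after either transform, age and salary contribute on comparable terms.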


Inferred Interviewer Questions

The following questions were reconstructed from the candidate's responses. They are presented in order of likely occurrence.

Q1 Tell me about yourself and your background.
Q2 Have you worked in computer vision? What about time series forecasting?
Q3 Can you explain the LSTM architecture? How does it improve upon vanilla RNN?
Q4 Have you deployed ML models? Walk me through how you would approach deployment.
Q5 How would you rate your Python proficiency?
Q6 Tell me about a traditional ML project you've worked on. What kind of data volumes were involved?
Q7 Are you comfortable with PySpark? Which APIs have you used?
Q8 Why did you choose Isolation Forest over the autoencoder and K-medians? How did you explain the model to the governance team?
Q9 What are you currently working on? Why are you looking for a change?
Q10 When is feature scaling required, and which algorithms are affected?

Critique & Recommended Answers

Scorecard: 3 Critical Gaps · 4 Partial Answers · 3 Solid Answers
⚠ Communication Pattern — Filler Word Overload

The transcript contains an extremely high density of "basically" (used 30+ times), "so" as a sentence opener (25+ times), and "like" as a filler (~15 times). For a Lead AI Engineer role, communication clarity and executive presence are evaluated as seriously as technical depth. Excessive filler words project uncertainty and dilute the impact of otherwise good answers. A rule of thumb: if a word can be removed without changing meaning, remove it before speaking.


Self-Introduction
Partial

The introduction listed skills and years competently, but for a Lead AI Engineer role the evaluator is listening for leadership signals: team sizes led, cross-functional ownership, business outcomes delivered. Listing tools and packages up front reads like a junior profile. There was also no "hook" — something memorable that positions you as a distinct candidate. The self-rating of Python as 7/10 is a common trap; interviewers will probe for the missing 3 points and it seeds doubt unnecessarily.

Stronger Opening
"I'm Ashish, with 10 years at Cognizant — 8 in data science, 2 in software engineering. Over the last 3 years I've focused on production GenAI systems, most recently leading a RAG-based automation pipeline for a Fortune 500 health insurer and an anomaly detection system processing 500 million records for American Express. I work across the full lifecycle — from problem framing and model selection to advising stakeholders on interpretability — and I have hands-on depth in Python, PySpark, and the OpenAI/Azure GenAI stack. I'm currently a senior data scientist and technical lead at Cognizant, looking to move into a role where I can own the end-to-end AI architecture."
LSTM Architecture
Partial

The conceptual explanation was accurate — vanishing gradients, forget gate, GRU, bidirectional extensions. But a Lead-level candidate is expected to also address the practical trade-offs: LSTM vs Transformer for sequence tasks, when to use what today (transformers have largely supplanted LSTMs for NLP), and awareness of attention mechanisms as the key architectural leap. The answer also rambled without a clear structure — no clear beginning, middle, end. The interviewer likely wanted a concise, structured walkthrough, not a stream of consciousness.

Stronger Answer
"LSTM — Long Short-Term Memory — was designed to fix the vanishing gradient problem in vanilla RNNs, which struggled to learn dependencies across long sequences. The core addition is a cell state, a persistent memory, controlled by three gates: the forget gate (what to discard), the input gate (what to write), and the output gate (what to expose). GRU later simplified this to two gates with comparable performance. Bidirectional variants process sequences in both directions, useful for tasks like NER. Today, for most NLP tasks, transformers with self-attention have replaced LSTMs because they parallelize better and capture longer-range dependencies without sequential bottlenecks — though LSTMs still have a place in real-time, low-latency applications where autoregressive transformers are too slow."
Deployment Experience
Critical Gap

This is the most damaging answer in the interview for a Lead AI Engineer role. Opening with "I am not aware" and "there was a team handling deployments" is a red flag at this seniority level. A Lead AI Engineer is expected to own or deeply understand the deployment pipeline. The Azure architecture described was generic and at a very high level — any engineer who has read one blog post on Azure could say the same. Missing entirely: MLOps tooling (MLflow, BentoML, Kubeflow), containerization (Docker, Kubernetes), model monitoring, CI/CD pipelines, A/B testing models, and model drift detection. These are table-stakes topics for the role.

Stronger Answer
"My deployment involvement has grown over the years. On the Amex project, the model artifacts were packaged as PySpark UDFs deployed on Hive — I worked closely with the data engineering team on that interface even though they owned the infra layer. In my recent GenAI projects I've been more directly involved: we containerize the inference layer with Docker, expose it through Azure Functions as REST endpoints, and use Azure AI Studio for model versioning. I track model behavior post-deployment using a lightweight logging layer that flags input distribution shifts — basic drift detection. For future projects I'm actively upskilling on MLflow for experiment tracking and Kubernetes-based serving for higher-throughput scenarios."
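The "basic drift detection" idea in this suggested answer can be sketched as a simple statistical check. Everything here is hypothetical: the z-score heuristic, the threshold, and the class name are illustrative choices, not a description of any real pipeline.

```python
import numpy as np

class DriftMonitor:
    """Flag input batches whose mean drifts away from the training baseline."""

    def __init__(self, baseline_mean, baseline_std, z_threshold=3.0):
        self.mu = baseline_mean        # feature mean observed during training
        self.sigma = baseline_std      # feature std observed during training
        self.z = z_threshold           # how many standard errors count as drift

    def check(self, batch):
        # z-score of the batch mean under the training distribution
        batch = np.asarray(batch, dtype=float)
        se = self.sigma / np.sqrt(batch.size)
        score = abs(batch.mean() - self.mu) / se
        return score > self.z          # True = distribution shift suspected

monitor = DriftMonitor(baseline_mean=5.0, baseline_std=1.0)
print(monitor.check(np.full(500, 5.02)))  # close to baseline -> False
print(monitor.check(np.full(500, 6.0)))   # shifted mean -> True
```

A real system would track per-feature statistics, use a proper two-sample test (e.g., Kolmogorov–Smirnov), and emit alerts rather than booleans, but the shape of the check is the same.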
Amex Anomaly Detection Project
Strong

This was your best answer. The data volume (500M records), multi-model comparison (autoencoder → K-medians → Isolation Forest), the decision rationale based on interpretability and latency, and the stakeholder communication (paper-folding analogy) all told a coherent story. However, it lost points because it was delivered in a very unstructured, stream-of-consciousness manner. Using the STAR framework (Situation, Task, Action, Result) would have made this a standout answer. Also missing: what was the final precision/recall or anomaly rate detected? Quantified outcomes make answers memorable.

Enhancement
Add a closing quantified result: "The deployed Isolation Forest model flagged approximately X% of transactions as anomalous, which the business team validated against known fraud cases, achieving a precision of Y%. This replaced a fully manual review process that was covering only a fraction of that volume." Even approximate numbers demonstrate ownership.
Revealing Non-Billable Status
Critical Gap

Voluntarily disclosing that you are on a non-billable project and that it affects your performance rating is a significant strategic error in an interview. It signals to the interviewer that you may be on the bench, which is interpreted as either a supply-demand mismatch in your skills or an availability concern. Even if the context is nuanced (which it is — you are a manager on internal work), the framing you used led with the negative. Never volunteer information that a recruiter will interpret through a worst-case lens.

Reframing
"I'm currently leading two internal GenAI initiatives at Cognizant in a senior data scientist + tech lead capacity. These are strategic internal projects, but I'm looking for an external-facing role where I can directly own AI product outcomes and work with a broader scope of problems — specifically in LLM application engineering and MLOps, which is where I see the most exciting growth right now."
Current Project — Jira Epic & Story Writer
Partial

The project itself is relevant and interesting for a Lead AI Engineer role — RAG with LLMs, knowledge graphs, vector databases, and a fine-tuning roadmap. But the delivery was extremely disorganized. The explanation went back and forth between the architecture, the business problem, future roadmap, and technical limitations without a logical flow. The candidate also never stated a clear outcome or current status metric. For a lead role, you should be able to describe any project in 90 seconds using: Problem → Approach → Architecture → Outcome.

Structured Version
"We're building a Jira Epic and Story Writer for Elevens Health, a health insurance company. The problem: their business analysts were spending significant time writing detailed Jira user stories and epics that were inconsistent in quality. Our approach: a RAG pipeline where a requirements document and historical Jira dumps (vectorized in a vector DB with knowledge graph enrichment) are passed as context to an LLM, which then generates structured epics and stories. We're currently in pilot — the system reduces story-writing time by roughly 60% and we're collecting human feedback to build a fine-tuning dataset for the next phase."
Feature Scaling
Solid

Accurate and clear — correctly identified that distance-based algorithms (KNN, SVM, neural networks) are sensitive to scale, gave a concrete example (age vs salary), and correctly excluded tree-based algorithms. Minor gaps: didn't distinguish between Min-Max scaling and StandardScaler (Z-score normalization) and when each is preferred. Didn't mention that regularization (L1/L2) in linear models also makes them sensitive to feature scale. But for the question asked, this was a satisfactory answer.

Enhancement
Add: "The choice of scaler also matters — Min-Max scaling (0 to 1) is useful when you know the bounds and the distribution isn't heavily skewed. StandardScaler (zero mean, unit variance) is preferred when the feature follows a roughly Gaussian distribution or when using regularized linear models like Ridge/Lasso, since regularization penalizes coefficient magnitude."
What Was Missing — Lead-Level Topics Never Addressed
Critical Gap

For a Lead AI Engineer position, the interview conspicuously lacked any discussion of: (1) System design and scalability — how would you architect an ML system for 10x load? (2) Team leadership — how many engineers have you mentored or managed? (3) LLM evaluation — how do you evaluate RAG pipelines for hallucination, faithfulness, and retrieval quality? (4) Cost optimization — GPT-4 API calls are expensive; how do you optimize prompts or use smaller models intelligently? (5) Responsible AI — bias detection, fairness considerations, especially important in insurance use cases. None of these were raised, likely because the candidate's answers didn't invite those threads. A Lead-level candidate should steer conversations toward these domains proactively.

Proactive Topics to Introduce
In your Jira project answer, add: "One challenge we're managing is hallucination in the generated stories — we implemented an evaluation layer using RAGAS that scores faithfulness and context relevance before stories are presented to the user." In the deployment answer: "I'm also thinking about cost — GPT-4 is expensive at scale, so we route simpler requests to GPT-3.5 and only invoke GPT-4 for complex multi-requirement prompts." These signals differentiate a lead from a senior IC.
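The cost-routing suggestion above can be sketched as a trivial dispatcher. The model names and the length/bullet-count heuristic are placeholders; a real router might use token counts or a small classifier instead.

```python
# Hypothetical cost-aware router: send cheap requests to a small model and
# reserve the expensive model for complex, multi-requirement prompts.
CHEAP_MODEL = "gpt-3.5-turbo"   # placeholder names, not a prescribed setup
EXPENSIVE_MODEL = "gpt-4"

def route(prompt: str) -> str:
    # Illustrative heuristic: long prompts or several requirement bullets
    # suggest a complex task worth the stronger (pricier) model.
    requirement_count = prompt.count("\n-") + prompt.count("\n*")
    if len(prompt) > 2000 or requirement_count >= 3:
        return EXPENSIVE_MODEL
    return CHEAP_MODEL

print(route("Summarize this ticket in one line."))                         # gpt-3.5-turbo
print(route("Generate epics for:\n- auth\n- billing\n- claims\n- reporting"))  # gpt-4
```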


Managing Git PAT and Credentials on Ubuntu Laptop


My git push command always prompts for username and password.
Can't I put it in environment variables or something so I don't have to enter it again and again?
I am using Ubuntu and the repo is on GitHub.

Yes—you shouldn’t need to type your username/password every time. In fact, GitHub no longer supports password auth for Git pushes, so what you’re seeing is usually because you're using HTTPS without proper credential storage.

Here are the best ways to fix it (ranked) 👇


✅ Option 1 (Recommended): Use SSH (No prompts ever)

This is the cleanest, most developer-friendly setup.

Step 1: Generate SSH key

Bash
ssh-keygen -t ed25519 -C "your_email@example.com"

Press Enter through prompts.


Step 2: Add SSH key to ssh-agent

Bash
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

Step 3: Copy public key

Bash
cat ~/.ssh/id_ed25519.pub

Step 4: Add it to GitHub

Go to:
👉 GitHub → Settings → SSH and GPG Keys → New SSH Key

Paste the key.


Step 5: Switch your repo to SSH

Check current remote:

Bash
git remote -v

Change it:

Bash
git remote set-url origin git@github.com:USERNAME/REPO.git

Done 🎉

Now:

Bash
git push

👉 No username/password ever again.


✅ Option 2: Use Git Credential Helper (HTTPS but cached)

If you want to stay on HTTPS:

Store credentials permanently

Bash
git config --global credential.helper store

Then do one push:

Bash
git push

Enter username + Personal Access Token (NOT password) once.

After that, Git saves it in:

~/.git-credentials

More secure option (recommended over store)

Bash
git config --global credential.helper cache
  • Keeps credentials in memory (default 15 min)

  • You can extend:

Bash
git config --global credential.helper 'cache --timeout=3600'

⚠️ Important (GitHub authentication change)

Since 2021, GitHub requires:

  • ❌ Password → NOT allowed

  • ✅ Personal Access Token (PAT)

Create one here:
👉 GitHub → Settings → Developer Settings → Personal Access Tokens


🚫 About Environment Variables

You can technically do this:

Bash
export GIT_USERNAME=...
export GIT_PASSWORD=...

But:

  • ❌ Git doesn’t use these directly for auth

  • ❌ Not secure (visible in process/env)

  • ❌ Not standard practice

👉 So avoid this approach.


💡 My Recommendation for You

Since you're a developer (and doing frequent pushes):

👉 Switch to SSH — one-time setup, zero friction afterward.


If you want, I can:

  • Check your current repo setup (git remote -v)

  • Or give you a 1-command script to fully switch to SSH 🚀




I prefer this Option 2: Use Git Credential Helper (HTTPS but cached) 
What happens when my PAT changes after 90 days?


Good question—this is exactly where people get tripped up with HTTPS + PAT 👍


🔄 What happens when your PAT expires?

When your Personal Access Token (PAT) expires (e.g., after 90 days on GitHub):

👉 Next git push:

  • Git tries using the old cached token

  • GitHub rejects it ❌

  • You’ll see something like:

    remote: Invalid username or password
    fatal: Authentication failed

🧠 What Git does internally

If you're using:

1. credential.helper store

  • Old PAT is saved in:

    ~/.git-credentials
  • Git keeps supplying the old PAT → pushes fail; Git usually re-prompts after GitHub rejects it, but if it fails silently, remove the stored entry manually (see below)


2. credential.helper cache

  • Stored in memory only

  • Once expired → you’ll be prompted again automatically

👉 This is why cache is smoother for PAT rotation.


🛠️ How to fix when PAT expires

Option A: Just retry push (simplest)

Bash
git push
  • It will prompt again

  • Enter:

    • Username: your GitHub username

    • Password: new PAT

👉 Git updates stored credentials automatically (for both store and cache)


Option B: Manually remove old credentials (if stuck)

If Git keeps failing without prompting:

For store:

Bash
nano ~/.git-credentials

Delete the line with GitHub credentials.

Or:

Bash
rm ~/.git-credentials

Then:

Bash
git push

Option C: Force Git to forget credentials

Bash
printf "protocol=https\nhost=github.com\n" | git credential reject

(`git credential reject` reads the credential description from stdin — run with no input, it will just sit waiting.)

Or:

Bash
git config --global --unset credential.helper
git config --global credential.helper store

🔐 Best Practice for You

Since you're on Ubuntu and want low friction:

👉 Recommended setup:

Bash
git config --global credential.helper 'cache --timeout=86400'
  • 24-hour memory cache (86,400 seconds) — one command is enough; a second credential.helper setting would just override it

  • No file storage

  • Smooth re-auth when PAT changes


⚡ Pro Tip (Optional but powerful)

Instead of expiring tokens every 90 days:

👉 When creating PAT on GitHub:

  • Choose longer expiry (or no expiry if allowed in your org)

  • Scope: repo (usually enough)


🧭 Summary

Scenario     | What happens                | What you do
PAT expires  | Push fails                  | Re-enter new PAT
Using store  | Keeps failing silently      | Delete ~/.git-credentials
Using cache  | Prompts again automatically | Just re-enter

If you want, I can show you a one-time setup using Git Credential Manager (GCM) which is even cleaner (and handles PAT refresh better).

