Thursday, December 25, 2025

Manager - Data science, Machine Learning, Gen AI - People Staffing - 11 to 14 yoe


See All: Miscellaneous Interviews @ Naukri.com

Q. How do you stay updated with the latest trends in AI and machine learning?

A.
Staying updated with the latest trends in AI and machine learning is crucial for my professional development. I regularly engage with online courses and certifications, such as those from DeepLearning.AI and Google Cloud Skills Boost, to enhance my knowledge.

Additionally, I follow industry leaders and research publications to keep abreast of new developments. Participating in webinars and conferences also provides valuable insights into emerging technologies and methodologies.

Networking with other professionals in the field allows me to exchange ideas and learn from their experiences. This collaborative approach helps me stay informed about best practices and innovative solutions in AI.

By actively pursuing continuous learning, I ensure that I remain at the forefront of advancements in AI and machine learning, which ultimately benefits my projects and teams.


Q. What strategies do you use for fine-tuning large language models for specific applications?

A.
Fine-tuning large language models (LLMs) is a critical aspect of my work. I typically start by understanding the specific requirements of the application, such as the domain and the type of data involved. For instance, in my project 'AI Over BI', we fine-tuned models to handle domain-specific queries effectively.

My approach involves collecting a diverse dataset that reflects the types of queries users might pose. I then preprocess this data to ensure it is clean and relevant. Using frameworks like LangChain and HuggingFace Transformers, I implement fine-tuning techniques that adjust the model's weights based on this dataset.

Moreover, I continuously evaluate the model's performance through metrics such as accuracy and response time, making iterative adjustments as necessary. This process ensures that the model not only understands the context but also provides accurate and timely responses.
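The dataset-preparation step described above can be sketched in plain Python. This is an illustrative example, not code from the project: the helper names, field names, and prompt wording are all assumptions about how NLQ-to-SQL pairs might be shaped into the prompt/completion records commonly used for supervised fine-tuning.

```python
# Illustrative sketch (assumptions, not project code): shape domain Q&A pairs
# into prompt/completion records for supervised fine-tuning.

def clean(text: str) -> str:
    """Collapse runs of whitespace so training examples are uniform."""
    return " ".join(text.split())

def to_training_example(question: str, sql: str) -> dict:
    """Format one natural-language-query -> SQL pair as a training record."""
    return {
        "prompt": f"Translate to SQL: {clean(question)}\nSQL:",
        "completion": " " + clean(sql),
    }

examples = [
    to_training_example("show  all churned  customers",
                        "SELECT * FROM customers WHERE churned = 1;")
]
```

Records in this shape can then be fed to a fine-tuning framework of choice; the evaluation loop mentioned above would score the model's completions against held-out pairs.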
Q. Can you discuss your experience with cloud technologies in deploying AI solutions? A: My experience with cloud technologies is extensive, particularly in deploying AI solutions. I have worked with platforms such as Azure Cloud and Google Collaboratory to implement scalable AI applications. In my current role at Accenture, I have utilized Azure ML Studio to deploy models developed for various projects, including the 'AI Over BI' initiative. This platform allows for seamless integration of machine learning workflows and provides tools for monitoring and managing deployed models. Additionally, I have experience with MLOps practices, which are essential for maintaining the lifecycle of AI models. This includes version control using Git and implementing CI/CD pipelines to automate the deployment process. By leveraging cloud technologies, I ensure that the AI solutions I develop are not only efficient but also scalable, allowing them to handle large datasets and user requests effectively.
Q. How do you approach prompt engineering for LLMs in your projects?

A.
Prompt engineering is a crucial skill in my toolkit, especially when working with large language models. In my role at Accenture, I have developed a systematic approach to crafting effective prompts that yield the best results from LLMs. Initially, I focus on understanding the specific task at hand and the expected output. For example, in the 'AI Over BI' project, I needed to ensure that the prompts accurately reflected the user's intent when querying the database.

I employ techniques such as meta-prompting, where I refine the initial prompt based on feedback from the model's responses. This iterative process allows me to adjust the wording and structure of the prompts to improve clarity and relevance. Additionally, I leverage my knowledge of embeddings and attention mechanisms to enhance the prompts further. By incorporating context and ensuring that the prompts are concise yet informative, I can significantly improve the model's performance in generating accurate and relevant outputs.
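The meta-prompting loop described here can be sketched as a small helper that folds feedback about a model's previous answer back into the next prompt. A minimal, hypothetical example; the function name and template wording are assumptions, not the project's actual prompts:

```python
# Hypothetical meta-prompting sketch: refine a base prompt using feedback
# gathered from the model's earlier responses.
def refine_prompt(base: str, feedback: list[str]) -> str:
    if not feedback:
        return base
    notes = "\n".join(f"- {item}" for item in feedback)
    return f"{base}\n\nWhen answering, also respect these corrections:\n{notes}"

p = refine_prompt(
    "Answer the user's query about the sales table.",
    ["Use the fiscal year, not the calendar year."],
)
```

Each round of model output can yield new feedback strings, so the prompt tightens iteratively, which is the essence of the process described in the answer.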
Q. Can you describe your experience with Generative AI and how you've applied it in projects? A: In my current role as a Senior Data Scientist at Accenture Solutions Pvt Ltd, I have been deeply involved in projects leveraging Generative AI. One notable project is the 'AI Over BI' initiative, where we developed a solution that utilizes Generative AI to enhance business intelligence across various domains, including Telco and Healthcare. This project involved preprocessing data to generate metadata about tables and columns, which was then stored in a vector database. We implemented a Natural Language Query (NLQ) system that allowed users to interact with the data using natural language, which was re-engineered into SQL queries for execution. This not only streamlined data access but also improved user engagement with the analytics platform. Additionally, I worked on an English Language Learning App as part of Accenture's CSR initiative, which utilized Generative AI for features like sentence generation and story creation. This project aimed to enhance learning experiences for students in low-income schools, showcasing the versatility of Generative AI in educational contexts.
Q. How do you ensure that your AI solutions are scalable and maintainable?

A.
Ensuring scalability and maintainability in AI solutions is a priority in my work. I adopt a modular approach when designing AI systems, which allows for easier updates and scalability. For example, in my projects at Accenture, I utilize cloud-native architectures that support scaling based on user demand. This includes leveraging services from Azure Cloud to handle increased workloads without compromising performance.

Additionally, I implement best practices in coding and documentation, which are essential for maintainability. Clear documentation helps other team members understand the system, making it easier to onboard new developers and maintain the codebase. Regular code reviews and testing are also part of my strategy to ensure that the solutions remain robust and scalable. By continuously evaluating the architecture and performance, I can make necessary adjustments to accommodate growth.
Q. Can you explain your experience with Retrieval-Augmented Generation (RAG) architectures?

A.
My experience with Retrieval-Augmented Generation (RAG) architectures is extensive, particularly in my current role at Accenture. In the 'AI Over BI' project, we developed a RAG pipeline that integrated structured and unstructured data to enhance user interactions with our analytics platform. The RAG architecture allowed us to retrieve relevant information from a vector database based on user queries. This involved preprocessing data to create metadata and using it to inform the model about the context of the queries.

By re-engineering user prompts through meta-prompting techniques, we ensured that the LLM could generate accurate SQL queries. This architecture not only improved the accuracy of the responses but also enhanced the overall user experience by providing contextually relevant information quickly. The success of this project demonstrated the power of RAG in bridging the gap between data retrieval and natural language generation.
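The retrieval step of a RAG pipeline can be illustrated with a toy example. This sketch uses hand-made two-dimensional vectors and brute-force cosine similarity; a real system would use learned embeddings and a vector database, so everything here is an assumption for illustration:

```python
import math

# Toy RAG retrieval: rank documents by cosine similarity to a query vector.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, docs, k=1):
    """Return the k docs whose vectors are most similar to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return ranked[:k]

docs = [
    {"text": "customers table metadata", "vec": [1.0, 0.0]},
    {"text": "orders table metadata",    "vec": [0.0, 1.0]},
]
top = retrieve([0.9, 0.1], docs)  # query vector is closest to "customers"
```

The retrieved metadata would then be placed into the LLM's prompt as context, which is the "augmented generation" half of the architecture.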
Q. Can you describe a project where you had to mentor junior data scientists? A: Mentoring junior data scientists has been a rewarding aspect of my career, particularly during my time at Accenture. One notable project involved the development of the 'Wiki Spinner' knowledge automation platform. In this project, I took on the responsibility of guiding a team of junior data scientists through the complexities of implementing Generative AI solutions. I provided them with insights into best practices for data preprocessing, model selection, and evaluation techniques. We held regular meetings to discuss progress and challenges, fostering an environment of open communication. I encouraged them to share their ideas and approaches, which not only boosted their confidence but also led to innovative solutions. By the end of the project, the junior team members had significantly improved their skills and contributed meaningfully to the project's success. This experience reinforced my belief in the importance of mentorship in fostering talent within the field of data science.
Q. What role do you believe team collaboration plays in successful AI projects? A: Team collaboration is vital in the success of AI projects. In my experience, particularly at Accenture, I have seen firsthand how effective collaboration can lead to innovative solutions and improved project outcomes. Working in cross-functional teams allows for diverse perspectives, which is crucial when developing complex AI solutions. For instance, in the 'AI Over BI' project, collaboration with product managers and engineers helped us align our AI capabilities with business needs. I also believe in fostering an environment where team members feel comfortable sharing ideas and feedback. This open communication leads to better problem-solving and enhances the overall quality of the project. Moreover, mentoring junior data scientists is another aspect of collaboration that I value. By sharing knowledge and best practices, I contribute to the team's growth and ensure that we collectively advance our skills in AI and machine learning.
Q. What are your long-term goals in the field of data science? A: My long-term goals in the field of data science revolve around advancing my expertise in Generative AI and its applications across various industries. I aspire to lead innovative projects that leverage AI to solve complex problems and drive business value. Additionally, I aim to contribute to the development of best practices in AI ethics and governance, ensuring that the solutions we create are responsible and beneficial to society. Mentoring the next generation of data scientists is also a key goal for me. I believe that sharing knowledge and experiences can help cultivate a strong community of professionals who are equipped to tackle the challenges of the future. Ultimately, I envision myself in a leadership role where I can influence the strategic direction of AI initiatives within an organization, driving innovation and fostering a culture of continuous learning and improvement.
Q. What is your experience with model evaluation and optimization techniques?

A.
Model evaluation and optimization are integral parts of my workflow as a Senior Data Scientist. I utilize various metrics to assess model performance, including accuracy, precision, recall, and F1 score, depending on the specific application. For instance, in my recent projects at Accenture, I implemented a rigorous evaluation framework for the models used in the 'AI Over BI' initiative. This involved testing the models against a validation dataset to ensure they could handle real-world queries effectively.

Once the models were evaluated, I focused on optimization techniques such as hyperparameter tuning and feature selection. By adjusting parameters and selecting the most relevant features, I was able to enhance the model's performance significantly. Moreover, I continuously monitor the models post-deployment to ensure they maintain their performance over time. This proactive approach allows me to identify any degradation in model accuracy and make necessary adjustments promptly.
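The metrics mentioned above can be made concrete with a short worked example computing precision, recall, and F1 from raw confusion counts (standard definitions, not project-specific code):

```python
# Precision, recall, and F1 from true-positive / false-positive /
# false-negative counts, with guards against division by zero.
def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=2)  # 0.8, 0.8, 0.8
```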

Gen AI Manager - Mount Talent Consulting - 11 to 14 yoe


See All: Miscellaneous Interviews @ Naukri.com

Q. How do you stay updated with the latest trends in AI and Generative AI?
A.
Staying updated with the latest trends in AI and Generative AI is essential for my role. I actively engage in continuous learning through various channels, including online courses, webinars, and industry conferences.

For instance, I have completed several certifications from DeepLearning.AI, including ‘Generative AI and Large Language Models’ and ‘Agentic AI’. These courses have provided me with valuable insights into the latest advancements in the field.

I also follow leading AI research publications and blogs to keep abreast of emerging technologies and methodologies. This helps me identify innovative solutions that can be applied to our projects.

Additionally, I participate in professional networks and forums where AI practitioners share their experiences and insights. This collaborative approach not only enhances my knowledge but also allows me to contribute to the broader AI community.


Q. What role does communication play in your AI project management? A. Communication is a cornerstone of effective AI project management. In my experience at Accenture, I have found that clear and consistent communication is essential for aligning team members and stakeholders. I prioritize regular updates and check-ins with my team to ensure everyone is on the same page regarding project goals and progress. This fosters a collaborative environment where team members feel empowered to share their ideas and concerns. Moreover, I tailor my communication style to suit different audiences. For instance, when presenting to C-level executives, I focus on high-level insights and strategic implications, while for technical teams, I delve into the specifics of the AI models and algorithms. Additionally, I encourage open dialogue with stakeholders throughout the project lifecycle. This not only builds trust but also allows us to address any issues proactively, ensuring that the project stays on track and meets client expectations.
Q. Can you explain your experience with AI strategy development for clients? A. My experience in developing AI strategy is extensive, particularly in my current role at Accenture Solutions Pvt Ltd. I have been responsible for guiding the development of enterprise-wide AI strategies for various clients, focusing on aligning AI initiatives with their business objectives. For example, in the AI Over BI project, I led the team in designing a strategy that integrated Generative AI into business intelligence processes. This involved identifying key areas where AI could enhance decision-making and operational efficiency. I also emphasize the importance of benchmarking against global research and industry peers to ensure that our strategies are competitive and innovative. This approach has allowed me to recommend cutting-edge AI solutions that meet the evolving needs of clients. Moreover, I collaborate closely with cross-functional teams, including business experts and technology engineers, to ensure that the AI strategies we develop are practical and executable. This collaborative approach has been key to successfully implementing AI solutions that deliver tangible results for clients.
Q. Can you discuss your experience with cloud platforms for AI solutions? A: My experience with cloud platforms is extensive, particularly with Azure Cloud and Google Cloud. In my current role at Accenture Solutions Pvt Ltd, I have utilized Azure ML Studio for developing and deploying machine learning models. For instance, in the AI Over BI project, we leveraged Azure Functions to automate data processing tasks, which significantly improved the efficiency of our workflows. This experience has equipped me with the skills to design scalable AI solutions that can handle large volumes of data. Additionally, I have worked with Google Collaboratory for prototyping and testing AI models, which has allowed me to experiment with different algorithms and frameworks in a collaborative environment. My familiarity with cloud platforms also extends to implementing security measures and ensuring compliance with data protection regulations. This holistic understanding of cloud technologies enables me to guide clients in selecting the right platforms for their AI initiatives.
Q. What techniques do you use for effective stakeholder management? A: Effective stakeholder management is crucial for the success of any AI project. In my role at Accenture, I prioritize building strong relationships with stakeholders through regular communication and transparency. One technique I employ is to establish clear expectations from the outset. This involves defining project goals, timelines, and deliverables in collaboration with stakeholders to ensure alignment. I also utilize feedback loops to keep stakeholders informed about project progress and gather their input at key milestones. This iterative approach not only fosters trust but also allows us to make necessary adjustments based on stakeholder feedback. Additionally, I focus on understanding the unique perspectives and concerns of each stakeholder. By actively listening and addressing their needs, I can tailor our AI solutions to better meet their expectations, ultimately leading to higher satisfaction and project success.
Q. Can you share your experience with data preprocessing and metadata generation? A. Data preprocessing and metadata generation are critical steps in any AI project. In my role at Accenture, I have led efforts in preprocessing data for various projects, including the AI Over BI initiative. This involved cleaning and transforming raw data into a structured format suitable for analysis. I utilized tools like Pandas and PySpark to efficiently handle large datasets and generate metadata that provided insights into the data's structure and quality. For instance, we created metadata about tables, columns, and sample queries, which facilitated the development of our AI models. This metadata was crucial for ensuring that our models were trained on high-quality data, ultimately improving their performance. Moreover, I emphasize the importance of documenting the preprocessing steps and metadata generation processes. This not only aids in reproducibility but also helps stakeholders understand the data's context and relevance to the AI solutions we develop.
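Metadata generation of the kind described (tables, columns, sample values) can be sketched in plain Python. The real work used Pandas/PySpark; the record fields and helper name below are assumptions for illustration:

```python
# Illustrative metadata generation over a toy in-memory table
# (a list of row dicts). Not project code; field choices are assumptions.
def table_metadata(rows: list[dict], table_name: str) -> list[dict]:
    columns = rows[0].keys() if rows else []
    meta = []
    for col in columns:
        values = [r[col] for r in rows]
        non_null = [v for v in values if v is not None]
        meta.append({
            "table": table_name,
            "column": col,
            "null_count": len(values) - len(non_null),
            "sample_values": non_null[:3],  # a few examples, as for LLM context
        })
    return meta

meta = table_metadata([{"city": "Pune"}, {"city": None}], "customers")
```

Records like these are exactly the kind of artifact that can be embedded and stored in a vector database for later retrieval.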
Q. How do you ensure your AI solutions are aligned with responsible AI principles? A: Ensuring that AI solutions align with responsible AI principles is a priority in my work. At Accenture, I actively engage in discussions around ethical AI practices and the implications of AI technologies on society. One of the key aspects of my approach is to incorporate fairness, accountability, and transparency into the AI solutions we develop. For instance, during the development of the Wiki Spinner project, I implemented guidelines to ensure that the generated content was unbiased and accessible to diverse audiences. I also stay informed about the latest developments in responsible AI frameworks and tools, which allows me to guide my team in making informed decisions that adhere to ethical standards. This includes conducting regular audits of our AI systems to identify and mitigate any potential biases. Moreover, I believe in fostering a culture of responsibility within my team by encouraging open discussions about the ethical implications of our work. This collaborative approach not only enhances our understanding of responsible AI but also ensures that we are collectively committed to ethical practices.
Q. How do you approach collaboration with business experts and technology teams? A: Collaboration is essential in my work, especially when it comes to integrating AI solutions into business processes. At Accenture, I regularly collaborate with business experts to gain insights into their operational challenges and identify opportunities for AI implementation. For example, during the development of the English Language Learning App, I worked closely with educators to understand their needs and ensure that the AI features we developed were aligned with educational objectives. I also engage with technology teams to ensure that our AI solutions are technically feasible and can be seamlessly integrated into existing systems. This involves regular meetings and brainstorming sessions to align our goals and address any potential challenges. Furthermore, I believe in fostering a culture of open communication and feedback within the team. This collaborative approach not only enhances the quality of our solutions but also builds strong relationships with stakeholders, ultimately leading to successful project outcomes.
Q. What methodologies do you use to validate AI solutions during development?

A.
Validation of AI solutions is a critical step in my development process. At Accenture, I employ a combination of testing methodologies to ensure that our AI solutions are robust and reliable. For instance, during the development of the AI Over BI project, we implemented rigorous testing protocols to validate the generated SQL queries and the overall functionality of the system. One key methodology I use is cross-validation, which helps assess the performance of our models on different datasets. This approach ensures that our AI solutions generalize well and perform effectively in real-world scenarios.

Additionally, I focus on user feedback during the testing phase. Engaging with end-users allows us to gather insights on the usability and effectiveness of the AI solutions, which is invaluable for making necessary adjustments before deployment. Furthermore, I emphasize the importance of continuous monitoring post-deployment to ensure that the AI solutions remain effective and relevant. This iterative approach to validation not only enhances the quality of our solutions but also builds trust with clients.
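The cross-validation idea can be sketched as a minimal k-fold index splitter. Libraries such as scikit-learn provide this out of the box (e.g. `KFold`); this toy version only shows the mechanics:

```python
# Minimal k-fold split: yield (train, test) index lists covering all n
# samples, with each sample appearing in exactly one test fold.
def kfold_indices(n: int, k: int):
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    idx, start = list(range(n)), 0
    for size in fold_sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size

splits = list(kfold_indices(10, 5))  # 5 folds, 2 test samples each
```

A model is then trained on each `train` set and scored on the matching `test` set, and the scores are averaged to estimate how well it generalizes.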
Q. How do you measure the success of AI implementations? A: Measuring the success of AI implementations is critical for demonstrating value to clients. In my role at Accenture, I utilize a combination of quantitative and qualitative metrics to assess the effectiveness of our AI solutions. For instance, I track key performance indicators (KPIs) such as accuracy, response time, and user satisfaction. These metrics provide valuable insights into how well the AI solution is performing and whether it meets the defined objectives. Additionally, I conduct post-implementation reviews to gather feedback from stakeholders and end-users. This qualitative data helps us understand the user experience and identify areas for improvement. Furthermore, I emphasize the importance of aligning success metrics with the client's business goals. By demonstrating how our AI solutions contribute to their overall objectives, we can showcase the tangible benefits of our work and build long-term relationships with clients.
Q. How do you approach defining AI problems and prioritizing use cases for clients? A: Defining AI problems begins with understanding the client's specific needs and pain points. In my role at Accenture, I regularly interact with stakeholders to gather insights into their challenges. This involves conducting discovery workshops to elicit AI opportunities and client pain areas. Once I have a clear understanding, I prioritize use cases based on factors such as potential impact, feasibility, and alignment with the client's strategic goals. For instance, in the AI Over BI project, we prioritized use cases that could deliver immediate value, such as automating data retrieval and visualization. Furthermore, I leverage my knowledge of technology trends across Data and AI to recommend solutions that not only address current problems but also position clients for long-term success. This strategic approach ensures that the AI initiatives we undertake are both impactful and sustainable. Ultimately, my experience in defining AI problems and prioritizing use cases is rooted in a collaborative process that involves continuous communication with clients and a deep understanding of their business objectives.

Sunday, December 21, 2025

Differences in connecting to MySQL from Bash Console and from MySQL Console (At PythonAnywhere)

To See All Articles About Technology: Index of Lessons in Technology
#1 MySQL Console

#2 Bash Console (With Error Message)

Can you tell me what's the issue with my bash

Yes, this is a very common PythonAnywhere point of confusion, and your setup is actually fine.
The issue is not your bash console. It's where MySQL lives on PythonAnywhere.


What you’re seeing (and why)

✅ Image 1 (MySQL console works)

You opened PythonAnywhere’s MySQL console:

MySQL: ashishjain1545$default
Server version: 8.0.40 Source distribution

✔ This is PythonAnywhere’s managed MySQL server
✔ It lives on PythonAnywhere infrastructure, not inside your bash VM
✔ PythonAnywhere connects you using TCP, not a local socket

So this works ✅


❌ Image 2 (Bash → mysql fails)

In the Bash console you ran:

$ mysql

And got:

ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'

This happens because:

  • There is no local MySQL server running inside your bash container.

  • /var/run/mysqld/mysqld.sock does not exist.

  • PythonAnywhere does not allow you to run a MySQL server locally.

So Bash is doing exactly what it should — it’s just looking in the wrong place.


🔑 Key Concept (Important)

On PythonAnywhere:

Thing                            | Exists?
Local MySQL server in bash       | ❌ No
/var/run/mysqld/mysqld.sock      | ❌ No
PythonAnywhere MySQL service     | ✅ Yes
Access via TCP host              | ✅ Yes

✅ Correct way to use MySQL from Bash on PythonAnywhere

You must specify host, user, and database:

$ mysql \
    -u ashishjain1545 \
    -h ashishjain1545.mysql.pythonanywhere-services.com \
    -p

OR (short form):

$ mysql -u ashishjain1545 -h ashishjain1545.mysql.pythonanywhere-services.com -p

Then enter your MySQL password (from the Databases tab).
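The same connection details work from Python code as well. A minimal sketch, assuming PythonAnywhere's usual naming scheme (the host is `<username>.mysql.pythonanywhere-services.com` and database names are prefixed with `username$`); the helper name and password handling are illustrative, not a PythonAnywhere API:

```python
# Hypothetical helper: assemble PythonAnywhere MySQL connection parameters.
# Host/database naming follows PythonAnywhere's conventions; the function
# itself is an assumption for this sketch.
def pa_mysql_params(username: str, database: str = "default",
                    password: str = "") -> dict:
    return {
        "host": f"{username}.mysql.pythonanywhere-services.com",
        "user": username,
        "password": password,                  # from the Databases tab
        "database": f"{username}${database}",  # e.g. ashishjain1545$default
    }

params = pa_mysql_params("ashishjain1545")
# e.g. pymysql.connect(**params)  # requires the pymysql package
```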


❌ What NOT to do on PythonAnywhere

$ mysql
$ mysql -u root
$ mysql -u ashish
$ mysql --socket=/var/run/mysqld/mysqld.sock

These only work on machines where MySQL is installed locally, which PythonAnywhere does not allow.


✅ Correct mental model

Think of PythonAnywhere like this:

[ Your Bash Console ]
          |
          |  TCP
          v
[ PythonAnywhere MySQL Server ]

Not:

[ Bash ] --> [ Local MySQL ]

✅ Summary (TL;DR)

  • ✔ Your MySQL is working

  • ✔ Your bash is working

  • ❌ There is no local MySQL daemon

  • ✅ Always connect using -h <username>.mysql.pythonanywhere-services.com

  • ✅ Or use the MySQL console tab (which you already did)


Query: $ mysql -u ashishjain1545 -h ashishjain1545.mysql.pythonanywhere-services.com -p
Tags: Technology,Database,Cloud,

Saturday, December 20, 2025

Project Setup For "Reading App Using React Native" (Dec 2025)

To See All React Native Articles: Index of Lessons in Technology

1. Clone the repository.

https://github.com/ashishjain1547/pvt_reading_mobile_app_using_react_native

2. Create a local working copy.

/home/jain/Desktop/ws/react_native_apps/Reading_App_For_Bright_Orange

3. Connect test device (phone/tablet) with the laptop

How I enabled "Wireless Debugging" on my Samsung phone

4. Trial launch of the app on the test device (phone/tablet)

5. Check whether the backend API is operational.

https://ashishjain1545.pythonanywhere.com/

6. Set up the local database (MySQL).

DB details (local and remote) can be found here:
/home/jain/Desktop/ws/react_native_apps/Reading_App_For_Bright_Orange/design_documents/flask_app.py

db_details = {
    "host": "localhost",
    "user": "ashish",
    "password": "pass",
    "database": "reading_app_v2"
}

Testing in MySQL Workbench (local)
Checking if remote DB also has this one table: reading_artifacts
Query: show tables;

Note: the mobile app on the device won't be able to connect to the local DB, so we need to work with the hosted (remote) DB.
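Since the app must use the hosted DB while local tools can keep using the local one, the choice can be sketched as a small config helper. This is illustrative only: the local values come from flask_app.py above, while the remote database name and the flag are assumptions, and real credentials should never be hard-coded like this.

```python
# Illustrative config chooser (assumptions, not project code): local details
# from flask_app.py, remote details following PythonAnywhere's host pattern.
def db_details(use_remote: bool) -> dict:
    if use_remote:
        return {
            "host": "ashishjain1545.mysql.pythonanywhere-services.com",
            "user": "ashishjain1545",
            "database": "ashishjain1545$default",  # assumed remote DB name
        }
    return {
        "host": "localhost",
        "user": "ashish",
        "database": "reading_app_v2",
    }

cfg = db_details(use_remote=True)
```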

7. Getting the 'create' statement for a table

Query: SHOW CREATE TABLE reading_artifacts;

CREATE TABLE `reading_artifacts` (
  `id` int NOT NULL AUTO_INCREMENT,
  `book_title` varchar(255) DEFAULT NULL,
  `sequence_number_as_per_book` int DEFAULT NULL,
  `reading_artifact_type` varchar(150) NOT NULL,
  `reading_artifact_title` varchar(4000) DEFAULT NULL,
  `reading_artifact_text` varchar(4000) DEFAULT NULL,
  `reading_artifact_metadata` longtext,
  `image_base64` mediumtext,
  `description_hindi` text CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci,
  `soft_delete_flag` tinyint(1) NOT NULL DEFAULT '0',
  `date_created` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=31 DEFAULT CHARSET=utf8mb3;

8. Basic report from 'reading_artifacts'

select id, book_title, sequence_number_as_per_book, reading_artifact_type, reading_artifact_title
from reading_artifacts
order by book_title, reading_artifact_type, sequence_number_as_per_book;

9. How to launch the app

$ pwd
/home/jain/Desktop/ws/private/pvt_reading_mobile_app_using_react_native

To start the Metro server:
$ npx react-native start

To check if your device is connected:
$ adb devices
List of devices attached
adb-R9ZY3098GYX-8MEOzx._adb-tls-connect._tcp	device

To run the app on the device:
$ npx react-native run-android

Addendum: repository for design documents / supporting code / meeting notes and presentations related to this app:
https://github.com/ashishjain1547/reading_app_design_docs_for_bright_orange_school
Tags: Technology,React Native,

How I enabled "Wireless Debugging" on my Samsung phone

To See All React Native Articles: Index of Lessons in Technology

1 - Settings

2 - About Phone

3 - Software Information

Tap 'Build Number' 7 times to enable 'Developer Options'

4 - Settings with Developer Options Enabled

5 - Wireless Debugging Settings in Developer Options

6 - Wireless Debugging Settings

7 - Pairing Development Laptop and Test Device (Phone)

$ adb version
Android Debug Bridge version 1.0.41
Version 36.0.0-13206524
Installed as /home/jain/Android/Sdk/platform-tools/adb
Running on Linux 6.14.0-37-generic (x86_64)

'Pair Device With Pairing Code' screen on my phone
$ adb pair 192.168.1.3:38407
Enter pairing code: 698705
Successfully paired to 192.168.1.3:38407 [guid=adb-R9ZY3098GYX-8MEOzx]

One Cause For Connection Refused Error With "adb connect" Command...

  • adb pair 192.168.1.3:39931 → SUCCESS

  • adb connect 192.168.1.3:46419 → Connection refused

This happens because pairing port ≠ connection port.

Get the correct IP:PORT

Re-open the Wireless Debugging info on the phone: Settings → Developer options → Wireless debugging.

You will see two things:

  • IP address & Port (this is the port you must use with adb connect)

  • Pair with pairing code (you already did this)

👉 Note the IP:PORT shown under "Wireless debugging", NOT the one in the pairing dialog.

$ adb connect 192.168.1.3:46419
$ adb devices
List of devices attached
adb-R9ZY3098GYX-8MEOzx._adb-tls-connect._tcp	device
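Before running the app, it can help to check programmatically that an online device is attached. A small illustrative Python sketch that parses `adb devices` output; the helper is an assumption for this post, not project code:

```python
# Parse `adb devices` output and return serials whose state is "device"
# (i.e. online); "unauthorized" or "offline" entries are skipped.
def online_devices(adb_devices_output: str) -> list[str]:
    devices = []
    for line in adb_devices_output.splitlines()[1:]:  # skip the header line
        parts = line.split()
        if len(parts) == 2 and parts[1] == "device":
            devices.append(parts[0])
    return devices

out = ("List of devices attached\n"
       "adb-R9ZY3098GYX-8MEOzx._adb-tls-connect._tcp\tdevice")
```

In practice the string would come from `subprocess.run(["adb", "devices"], capture_output=True, text=True).stdout`.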
Tags: Technology, React Native

When 5 + 5 + 5 = 550


See All: Motivation For Interview Preparation


A small riddle about big thinking

At first glance, it sounds like a bad joke or a trick meant to waste time.

5 + 5 + 5 = 550
Using just one line, validate this statement.

Anyone with basic arithmetic will immediately object.
It’s wrong. Plainly, obviously, mathematically wrong.

And that’s exactly where most people stop.

But this question isn’t about arithmetic. It never was.


The candidate’s first reaction: resistance

The candidate in the interview reacts the way most of us would:

  • “This isn’t a mathematical statement.”

  • “It’s not a computer-generated expression either.”

  • “It’s literally impossible to solve.”

All of those statements are correct.
And yet, they miss the point.

The interviewer isn’t testing math.
They’re testing how you think when the rules aren’t clear.


A subtle shift: from solving to validating

Pressed to try anyway, the candidate does something interesting.

Instead of trying to force a solution, they reframe the problem.

They add a single line — not to make the equation true, but to make the logic valid:

5 + 5 + 5 ≠ 550

With one small stroke, the statement is now correct.

Is this what the panel originally had in mind?
Probably not.

Is it still a valid solution?
Absolutely.

This is the moment where reasoning matters more than correctness.


Why this answer is actually brilliant

From an interviewer's perspective, the candidate demonstrated something crucial:

  • They questioned the assumptions of the problem

  • They didn’t panic under ambiguity

  • They reframed the objective instead of rejecting it outright

That’s lateral thinking in action.

In real-world work—engineering, data science, product design—the hardest problems rarely come with clean constraints. The ability to say “maybe we’re interpreting this wrong” is often more valuable than knowing the formula.


The “expected” solution (and the hidden trick)

After the discussion, the interviewer reveals another way to solve it.

Look closely at the symbols.

If you draw a slanted line through the first plus sign, it turns into a 4:

545 + 5 = 550

Suddenly, the equation works.

Most people are trained to see:

  • standing lines

  • sleeping lines

But slanted lines often escape perception.

That’s the trick.


The deeper lesson

This question isn’t about cleverness for its own sake. It highlights a fundamental idea:

Constraints are often softer than they appear.

Some people try to solve problems strictly inside the box.
Others step back and ask whether the box itself can move.

Competitive interviews, research problems, and real-life decision-making all reward the second group.


Final thought

When you encounter a problem that feels impossible, pause before rejecting it.

Ask yourself:

  • Am I solving the problem, or just reacting to its presentation?

  • What assumptions am I treating as fixed?

  • Is there another way to interpret “one line”?

Sometimes, the smartest move isn’t to calculate faster —
it’s to see differently.

As always, think laterally.
And good luck.

Is the AI Bubble About to Burst?


See All Articles on AI

Bitcoin, Bubbles, and the Madness of Crowds

There’s a dangerous idea quietly circulating in modern culture: don’t think you’re special; don’t think you know better than the crowd. It sounds humble, even virtuous. But history tells us something very different.

When independent thinking is dismissed, when skepticism is framed as ignorance, societies don’t become wiser—they become fragile. And nowhere is this more visible than in financial bubbles.

Today, I want to talk about Bitcoin, crypto, and speculative manias—not with hype, and not with hatred, but with history.


When the Greatest Investors Call It “Rat Poison”

“Bitcoin is rat poison squared.”

Those are not my words. They belong to Charlie Munger, echoed repeatedly by Warren Buffett—two of the most successful investors in human history. Together, their investment track record runs into millions of percent over decades.

When people like that speak, it’s worth listening.

To be clear: Bitcoin’s price performance has been extraordinary. Anyone who bought early and held deserves recognition for the returns they achieved. I’ve felt that pull myself.

In 2016, I was close to buying roughly $150,000 worth of Bitcoin. At today’s prices, that stake would have been worth somewhere between $20–25 million. I didn’t do it—not because I lacked conviction, but because transferring money into crypto at the time was operationally difficult.

A friend of mine did take the plunge. He bought Ethereum at around 50 cents and sold near $1,400—a roughly 2,800× return. Stories like his aren’t rare. We’ve all heard about people who jumped into something absurd-sounding and walked away rich.

And that’s exactly the problem.


FOMO: An Ancient Survival Instinct in Modern Markets

FOMO—fear of missing out—is not a personality flaw. It’s evolutionary.

For most of human history, survival depended on belonging to the group. Being excluded meant danger: no food, no protection, no mating opportunities. That wiring never left us.

So when everyone around us seems to be getting rich, when dissent is mocked and skepticism labeled ignorance, our instincts scream: get in or be left behind.

Even the greatest minds are not immune.

In the 1700s, Isaac Newton invested in the South Sea Company. He made money, exited wisely—then watched others grow richer. He re-entered at the peak. The bubble collapsed, and Newton lost a fortune.

Afterward, he famously remarked:

“I can calculate the movement of heavenly bodies, but not the madness of men.”


How Crowds Rewrite Reality

Psychology reinforces this lesson.

In the famous Smoke-Filled Room experiment (1968), participants sat in a room completing paperwork when smoke began pouring in. When alone, most people immediately reported the danger. But when surrounded by others who calmly ignored it, nearly 90% did nothing.

The presence of a crowd didn’t make people safer—it made them blind.

Financial bubbles work the same way.


A Short History of Financial Madness

Every major bubble follows a familiar script:

  1. A real innovation appears

  2. Speculation overtakes utility

  3. Prices rise because prices are rising

  4. Dissent is ridiculed

  5. Collapse follows

Tulip bulbs in the 1630s traded for the price of houses.
Railways in the 1840s promised a new world—and delivered bankruptcies.
The Roaring Twenties had cars, electricity, and radios—followed by the Great Depression.
The dot-com era had the internet—Wall Street, media hype, and “new economy” logic—then an 85% Nasdaq crash.

The technology was always real.
The pricing never was.


The Emperor’s New Clothes, Revisited

In Denmark, we grow up with Hans Christian Andersen’s The Emperor’s New Clothes. Two fraudsters convince an emperor they’ve made magical garments invisible only to the unworthy. No one admits they see nothing—until a child speaks the obvious truth.

Financial bubbles depend on the same social pressure.

Nobody wants to be the “stupid one” who doesn’t get it.


Bitcoin and the Largest Bubble Yet

Let’s look at scale.

Buffett and Munger often reference market capitalization to GDP as a broad measure of financial excess:

  • 1929: ~89%

  • 2000 (dot-com peak): ~136%

  • 2007 (pre-financial crisis): ~107%

  • Today: ~226%

This is not normal.

Bitcoin alone is up over 1.2 million percent since 2012. The Nasdaq and crypto markets now move in near-lockstep. In the dot-com crash, the Nasdaq fell 85%—and that bubble was far smaller than today’s AI-and-crypto-fueled exuberance.

Bitcoin historically falls more than the Nasdaq in downturns.

If we enter a recession—and rising unemployment has always preceded recessions—what happens when the broader market cracks?

A 95% drawdown in Bitcoin is not unthinkable. It is historically consistent.


Technology ≠ Guaranteed Returns

None of this denies the reality of blockchain technology—just as railways, electricity, and the internet were real. But innovation does not protect investors from overpaying.

Markets don’t crash because technology fails.
They crash because expectations detach from reality.


The Hardest Truth

Bitcoin may survive. Crypto may survive. Some companies will emerge stronger.

But bubbles do not deflate gently. They burst.

And when they do, the crowd that once mocked skepticism will insist no one could have seen it coming.

History says otherwise.

Sometimes, it takes the courage to be the child in the crowd—the one willing to say: the emperor isn’t wearing any clothes.

Warren Buffett and Charlie Munger did.
So did Isaac Newton—after it was too late.

The rest of us still have time to listen.