Friday, December 26, 2025

Installing Draw.IO on Linux Ubuntu Laptop

All Linux Lessons
How do I install this software on my Linux machine? It is available in the following formats: Linux - deb, AppImage or rpm

You can install draw.io (diagrams.net desktop) on Linux in three common ways. Pick the one that best fits your distro and preference.


✅ Option 1: DEB package (Best for Ubuntu / Linux Mint / Debian)

Recommended if you want system integration and straightforward upgrades.

  1. Download the .deb file from the release page.

  2. Open a terminal in the download folder and run:

    bash
    sudo apt update
    sudo apt install ./drawio-amd64-29.0.3.deb

    Replace the filename if the version differs.

  3. Launch from the app menu or run:

    bash
    drawio

✅ Option 2: AppImage (Works on almost all distros)

No installation needed; great for portability.

  1. Download the .AppImage file.

  2. Make it executable:

    bash
    chmod +x drawio-29.0.3-x86_64.AppImage
  3. Run it:

    bash
    ./drawio-29.0.3-x86_64.AppImage

Optional: Move it to /opt and create a desktop shortcut if you want.
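If you want to do that, here is a minimal sketch (the /opt location and the .desktop filename are conventions I'm assuming; the AppImage filename should match your download):

bash
# Move the AppImage somewhere system-wide
sudo mkdir -p /opt/drawio
sudo mv drawio-29.0.3-x86_64.AppImage /opt/drawio/
# Create a per-user launcher entry
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/drawio.desktop <<'EOF'
[Desktop Entry]
Name=draw.io
Exec=/opt/drawio/drawio-29.0.3-x86_64.AppImage
Type=Application
Categories=Graphics;
EOF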


✅ Option 3: RPM package (Fedora / RHEL / openSUSE)

Use this if you’re on an RPM-based distro.

bash
sudo dnf install drawio-29.0.3.x86_64.rpm

or on openSUSE:

bash
sudo zypper install drawio-29.0.3.x86_64.rpm

🔍 Which should you choose?

  • Ubuntu / Linux Mint / Debian → DEB

  • Any distro / portable use → AppImage

  • Fedora / RHEL / openSUSE → RPM

If you tell me your exact distro (e.g., Ubuntu 22.04, Mint 21, Lubuntu, etc.), I’ll give you the exact filename and best command.

...
(base) jain@jain-ThinkPad-T440s:~/Downloads$ chmod +x drawio-x86_64-29.0.3.AppImage
(base) jain@jain-ThinkPad-T440s:~/Downloads$ ./drawio-x86_64-29.0.3.AppImage
dlopen(): error loading libfuse.so.2
AppImages require FUSE to run.
You might still be able to extract the contents of this AppImage if you run it with the --appimage-extract option.
See https://github.com/AppImage/AppImageKit/wiki/FUSE for more information

This is a very common AppImage issue 👍
Your system is missing FUSE v2 (libfuse.so.2), which AppImages require.

Let’s fix it cleanly.


✅ Fix 1 (Recommended): Install FUSE (libfuse2)

On Ubuntu / Linux Mint / Lubuntu:

bash
sudo apt update
sudo apt install libfuse2

Then run again:

bash
./drawio-x86_64-29.0.3.AppImage

This should start draw.io immediately.


🔎 Why this happens

  • AppImages use FUSE (Filesystem in Userspace) to mount themselves.

  • Newer distros ship fuse3, but AppImages still expect fuse2.

  • Hence the missing libfuse.so.2.
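Not sure which FUSE you have? A quick check (plain ldconfig, nothing draw.io-specific):

bash
# Prints libfuse.so.2 (FUSE 2) and/or libfuse3.so.3 (FUSE 3) if present
ldconfig -p | grep libfuse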


🛠 Fix 2 (If you cannot install FUSE): Extract AppImage

If you’re on a restricted system:

bash
./drawio-x86_64-29.0.3.AppImage --appimage-extract

Then run it manually:

bash
cd squashfs-root
./drawio

⚠️ This works, but with no desktop integration.


🔁 Alternative (Most Stable): Install DEB instead

If you’re on Ubuntu/Mint/Lubuntu, the DEB version is simpler:

bash
sudo apt install ./drawio-amd64-29.0.3.deb

✅ Quick Recommendation

  • Ubuntu / Mint / Lubuntu → DEB package
  • Portable / no install → AppImage + libfuse2
  • Locked-down system → AppImage extract

If you want, I can:

  • Create a desktop launcher for the AppImage

  • Help you switch to the DEB version

  • Check if your system already has fuse3 vs fuse2

Just tell me 👍

... ... ...
(base) jain@jain-ThinkPad-T440s:~/Downloads$ ./drawio-x86_64-29.0.3.AppImage
[6616:1227/063742.336623:FATAL:sandbox/linux/suid/client/setuid_sandbox_host.cc:166] The SUID sandbox helper binary was found, but is not configured correctly. Rather than run without sandboxing I'm aborting now. You need to make sure that /tmp/.mount_drawiodOPId7/chrome-sandbox is owned by root and has mode 4755.
Trace/breakpoint trap (core dumped)

This is another very common Electron + AppImage issue 👍
Nothing is “broken” — it’s a sandbox permission problem.

The error means:

Electron found the chrome-sandbox helper inside the AppImage, but it cannot be setuid root, which is required for sandboxing.

Because AppImages run from /tmp/.mount_*, Linux does not allow setuid binaries there.
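If you want to keep the sandbox, one workaround (it follows the error message's own instruction, applied to an extracted copy because the /tmp mount cannot hold setuid binaries) is:

bash
# Extract the AppImage, then give the sandbox helper the required setuid-root bits
./drawio-x86_64-29.0.3.AppImage --appimage-extract
sudo chown root:root squashfs-root/chrome-sandbox
sudo chmod 4755 squashfs-root/chrome-sandbox
./squashfs-root/drawio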


Fastest & Safest Fix (Recommended)

Run draw.io without the sandbox.

👉 Just run:

bash
./drawio-x86_64-29.0.3.AppImage --no-sandbox

This is safe for desktop apps like draw.io and is the official workaround.


🔁 Make it permanent (so you don’t type it every time)

Create a small wrapper:

bash
mkdir -p ~/bin
nano ~/bin/drawio

Paste:

bash
#!/bin/bash
~/Downloads/drawio-x86_64-29.0.3.AppImage --no-sandbox

Save (Ctrl+O, Enter, Ctrl+X) and make executable:

bash
chmod +x ~/bin/drawio

Now, as long as ~/bin is on your PATH (Ubuntu adds it at login if the directory exists), you can just run:

bash
drawio

🧠 Why this happens (short explanation)

  • draw.io is an Electron app (Chromium-based)

  • Chromium wants a setuid sandbox

  • AppImages mount under /tmp/.mount_*

  • Linux disallows setuid binaries there

  • Result → crash unless sandbox is disabled


🟢 Best Long-Term Solution (If you want zero issues)

Since you’re on Ubuntu/Mint/Lubuntu, the DEB package is better than AppImage:

bash
sudo apt install ./drawio-amd64-29.0.3.deb

✔ Proper sandbox
✔ Menu integration
✔ No flags needed
✔ Auto updates possible


🏁 Recommendation

  • AppImage + --no-sandbox → ✅ Works
  • AppImage without flag → ❌ Crashes
  • DEB package → ⭐⭐⭐⭐⭐ Best

If you want, I can:

  • Create a desktop icon for the AppImage

  • Help you switch cleanly to DEB

  • Explain security implications (briefly)

Just tell me 🙂

Tags: Technology,Linux,FOSS,

Accenture Skill Proficiency Test - Large Language Models - Dec 2025


See All: Miscellaneous Interviews @ Accenture

📘 Accenture Proficiency Test on LLMs


Q1. Few-shot Learning with GPT-3

Question (Cleaned)

You are developing a large language model using GPT-3 and want to apply few-shot learning techniques. You have a limited dataset for a specific task and want the model to generalize well. Which approach would be most effective?

Options:
a) Train the model on the entire dataset, then fine-tune on a small subset
b) Provide examples of the task in the input and let the model generate
c) LLMs are unable to handle single tasks

Correct Answer

b) Provide examples of the task in the input and let the model generate

💡 Hint

  • Few-shot learning = prompt engineering

  • No retraining required

  • You show the task via examples in the prompt
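For instance, a minimal few-shot prompt might look like this (toy task, made-up examples):

    Translate English to French.
    English: cheese → French: fromage
    English: sea → French: mer
    English: bread → French:

The model infers the task from the in-context examples and completes the last line; no weights are updated.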


Q2. Edge AI with LLMs

Question

You are using an LLM for an edge AI application requiring real-time object detection. Which approach is most efficient?

Options:
a) Cloud-based LLM processing
b) Use a complex LLM regardless of compute
c) Use an LLM optimized for edge devices balancing accuracy and efficiency
d) Store data and process later
e) Manual input-based LLM

Correct Answer

c) Use an LLM optimized for edge devices

💡 Hint

  • Edge AI prioritizes low latency + low compute

  • Cloud = latency bottleneck

  • “Optimized” is the keyword


Q3. Improving Fine-tuned LLM Performance (Select Two)

Question

You fine-tuned a pre-trained LLM but performance is poor. What steps improve it?

Options:
a) Gather more annotated Q&A data with better supervision
b) Change architecture to Mixture of Experts / Mixture of Tokens
c) Simplify the task definition
d) Smaller chunks reduce retrieval complexity
e) Smaller chunks improve generation accuracy

Correct Answers

a) Gather more annotated data
c) Simplify the task definition

💡 Hint

  • First fix data quality & task framing

  • Architecture changes come later

  • Accenture favors practical ML hygiene


Q4. Chunking in RAG Systems

Question

Why do smaller chunks improve a RAG pipeline?

Correct Statements:
✔ Smaller chunks reduce retrieval complexity
✔ Smaller chunks improve generation accuracy

💡 Hint

  • Retrieval works better with semantic focus

  • Too large chunks dilute meaning


Q5. Challenges of Local LLMs in Chatbots

Question

What is a potential challenge local LLMs face in long-term task planning?

Options:
a) Unable to adjust plans when errors occur
b) Unable to handle complex tasks
c) Unable to handle multiple queries
d) Unable to use task decomposition

Correct Answer

a) Unable to adjust plans when errors occur

💡 Hint

  • Local models lack persistent memory & feedback loops

  • Planning correction is the real limitation


Q6. RAG Pipeline – Poor Semantic Representation

Question

Why might embeddings not represent semantic meaning correctly?

Options:
a) Encoder not trained on similar data
b) Text chunks too large
c) Encoder incompatible with RAG
d) Incorrect chunk splitting
e) Encoder not initialized

Correct Answers

a) Domain mismatch in training data
b) Chunk size too large

💡 Hint

  • Embeddings fail due to domain shift or context overflow

  • Initialization issues are rare in practice


Q7. Designing Advanced RAG – Chunking Decision

Question

Which is NOT a valid reason for splitting documents into smaller chunks?

Options:
a) Large chunks are harder to search
b) Small chunks won’t fit in context window
c) Smaller chunks improve indexing efficiency

Correct Answer

b) Small chunks won’t fit in context window

💡 Hint

  • Smaller chunks fit better, not worse

  • This is a classic reverse-logic trap


Q8. Intent in Chatbots

Question

What is the purpose of an intent in a chatbot?

Correct Answer

To determine the user’s goal

💡 Hint

  • Intent ≠ entity

  • Intent answers “what does the user want?”


Q9. Healthcare LLM Security

Question

Which strategy best ensures privacy and compliance for patient data?

Options:
a) Public API & public cloud
b) Layered security: encryption, access control, audits, private network
c) No security changes
d) Plaintext storage
e) Unverified 3rd party services

Correct Answer

b) Layered security approach

💡 Hint

  • Healthcare = defense in depth

  • Accenture loves encryption + audits + private infra


Q10. Edge AI Programming Language

Question

Which language is commonly used for developing ML models in Edge AI?

Options:

  • Java

  • Python

  • C++

  • JavaScript

  • Ruby

Correct Answer

Python

💡 Hint

  • Python dominates ML tooling

  • C++ is used for deployment, not modeling


Q11. Customizing METEOR Scoring

Question

How do you customize METEOR’s scoring function?

Correct Answer

Modify tool configuration or run with command-line flags

💡 Hint

  • METEOR supports custom weighting

  • Not hardcoded, no paid version needed


Q12. Bias Mitigation in LLMs

Question

First step when an LLM is found biased?

Correct Answer

Identify the source of bias

💡 Hint

  • Diagnosis before correction

  • Retraining comes later


Q13. DeepEval Tool – Advanced Features

Question

Which statement is correct about DeepEval advanced usage?

Correct Answer

Configure advanced features in Python scripts

💡 Hint

  • Tool-level configuration

  • No paid version needed


Tags: Interview Preparation,Generative AI,Large Language Models,

Accenture Skill Proficiency Test - Natural Language Processing (NLP) - Dec 2025


See All: Miscellaneous Interviews @ Accenture

📘 Accenture Skill Proficiency Test - NLP - Question Report

🧠 SECTION 1: NLP & WORD EMBEDDINGS

Q1. What is GloVe?

Extracted options (interpreted):

  • Matrix factorization of (raw) PMI values with respect to squared loss

  • Matrix factorization of log-counts with respect to weighted squared loss

  • Neural network that predicts words in context and learns from that

  • Neural network that predicts words based on similarity and embedding

Correct Answer

Matrix factorization of log-counts with respect to weighted squared loss

💡 Hint

  • GloVe = Global Vectors

  • Combines count-based methods (co-occurrence matrix) with predictive embedding ideas

  • Objective minimizes weighted squared error between word vectors and log(co-occurrence counts)
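Written out, the objective from the original GloVe paper is

$$J = \sum_{i,j=1}^{V} f(X_{ij})\left(w_i^{\top}\tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij}\right)^2$$

where $X_{ij}$ is the co-occurrence count, $w_i$ and $\tilde{w}_j$ are word and context vectors, and $f$ down-weights very rare and very frequent pairs.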


🧠 SECTION 2: TRANSFORMERS & CONTEXTUAL MODELS

Q2. In which architecture are relationships between all words in a sentence modeled irrespective of their position?

Extracted options:

  • OpenAI GPT

  • BERT

  • ULMFiT

  • ELMo

Correct Answer

BERT

💡 Hint

  • BERT uses bidirectional self-attention

  • Every token attends to all other tokens simultaneously

  • GPT = causal (left-to-right), not fully bidirectional
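The "all words, irrespective of position" behavior comes from scaled dot-product self-attention (position enters only through added positional embeddings):

$$\mathrm{Attention}(Q,K,V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$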


📊 SECTION 3: EVALUATION METRICS

Q3. Log loss evaluation metric can have negative values

Options:

  • True

  • False

Correct Answer

False

💡 Hint

  • Log loss = negative log likelihood

  • Probabilities ∈ (0,1) → log(probability) ≤ 0 → negative sign makes loss ≥ 0

  • Log loss is always ≥ 0
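For binary classification,

$$\mathrm{LogLoss} = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i\log p_i + (1-y_i)\log(1-p_i)\right] \geq 0$$

since each $p_i \in (0,1)$ makes both log terms non-positive and the leading minus sign flips them.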


Q4. Which metric is used to evaluate STT (Speech-to-Text) transcription?

Extracted options:

  • ROUGE

  • BLEU

  • WER

  • METEOR

Correct Answer

WER (Word Error Rate)

💡 Hint

  • WER = (Insertions + Deletions + Substitutions) / Number of words in the reference

  • Standard metric for speech recognition
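Worked example: if the reference has 10 words and the transcript contains 1 substitution and 1 deletion (no insertions), WER = (0 + 1 + 1) / 10 = 0.2, i.e. 20%.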


🧩 SECTION 4: NLP ALGORITHMS & TAGGING

Q5. Best Cut algorithm works for:

Extracted options:

  • Text classification

  • Coreference resolution

  • POS tagging

Correct Answer

Text classification

💡 Hint

  • Best Cut / Min Cut → graph-based partitioning

  • Often used for document clustering / classification


Q6. BIO tagging is applicable for:

Extracted options:

  • Text classification

  • Coreference resolution

  • NER

  • N-grams

Correct Answer

NER (Named Entity Recognition)

💡 Hint

  • BIO = Begin – Inside – Outside

  • Used in sequence labeling tasks

  • Especially common in NER and chunking
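A quick illustration on a made-up sentence:

    Barack/B-PER Obama/I-PER visited/O Paris/B-LOC yesterday/O

B- opens an entity span, I- continues it, and O marks tokens outside any entity.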


🎯 SECTION 5: ATTENTION MECHANISMS

Q7. Which among the following are attention mechanisms generally used in neural network models?

Extracted options:

  • Bahdanau attention

  • Karpathy attention

  • Luong attention

  • ReLU attention

  • Sigmoid attention

Correct Answers

Bahdanau attention
Luong attention

Incorrect

  • Karpathy (not an attention mechanism)

  • ReLU / Sigmoid (activation functions, not attention)

💡 Hint

  • Bahdanau = additive attention

  • Luong = multiplicative (dot-product) attention

  • Activations ≠ attention
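In score form (standard formulations; $s_t$ is the decoder state, $h_i$ an encoder hidden state):

$$e_{ti}^{\text{Bahdanau}} = v_a^{\top}\tanh(W_a s_t + U_a h_i), \qquad e_{ti}^{\text{Luong}} = s_t^{\top} W_a h_i$$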


📚 SECTION 6: TEXT PROCESSING & TOPIC MODELING

Q8. Porter Stemmer is used for:

Correct Answer

Stemming words to their root form

💡 Hint

  • Example: running → run

  • Reduces vocabulary size

  • Rule-based, not dictionary-based


Q9. LDA stands for:

Correct Answer

Latent Dirichlet Allocation

💡 Hint

  • Probabilistic topic modeling

  • Documents = mixture of topics

  • Topics = distribution over words


Q10. Which of the following are true to choose optimal number of topics in LDA for topic modeling?

Extracted options:

  • Min coherence value

  • Max model perplexity

  • Max coherence value

  • Min model perplexity

Correct Answers

Max coherence value
Min model perplexity

💡 Hint

  • Coherence → interpretability (higher is better)

  • Perplexity → generalization (lower is better)

  • Best practice: balance both


Tags: Interview Preparation,Natural Language Processing,

Thursday, December 25, 2025

Manager - Data science, Machine Learning, Gen AI - People Staffing - 11 to 14 yoe


See All: Miscellaneous Interviews @ Naukri.com

Q. How do you stay updated with the latest trends in AI and machine learning?

A. Staying updated with the latest trends in AI and machine learning is crucial for my professional development. I regularly engage with online courses and certifications, such as those from DeepLearning.AI and Google Cloud Skills Boost, to enhance my knowledge.

Additionally, I follow industry leaders and research publications to keep abreast of new developments. Participating in webinars and conferences also provides valuable insights into emerging technologies and methodologies.

Networking with other professionals in the field allows me to exchange ideas and learn from their experiences. This collaborative approach helps me stay informed about best practices and innovative solutions in AI.

By actively pursuing continuous learning, I ensure that I remain at the forefront of advancements in AI and machine learning, which ultimately benefits my projects and teams.


Q. What strategies do you use for fine-tuning large language models for specific applications?

A. Fine-tuning large language models (LLMs) is a critical aspect of my work. I typically start by understanding the specific requirements of the application, such as the domain and the type of data involved. For instance, in my project 'AI Over BI', we fine-tuned models to handle domain-specific queries effectively.

My approach involves collecting a diverse dataset that reflects the types of queries users might pose. I then preprocess this data to ensure it is clean and relevant. Using frameworks like LangChain and HuggingFace Transformers, I implement fine-tuning techniques that adjust the model's weights based on this dataset.

Moreover, I continuously evaluate the model's performance through metrics such as accuracy and response time, making iterative adjustments as necessary. This process ensures that the model not only understands the context but also provides accurate and timely responses.

Q. Can you discuss your experience with cloud technologies in deploying AI solutions?

A. My experience with cloud technologies is extensive, particularly in deploying AI solutions. I have worked with platforms such as Azure Cloud and Google Colaboratory to implement scalable AI applications. In my current role at Accenture, I have utilized Azure ML Studio to deploy models developed for various projects, including the 'AI Over BI' initiative. This platform allows for seamless integration of machine learning workflows and provides tools for monitoring and managing deployed models.

Additionally, I have experience with MLOps practices, which are essential for maintaining the lifecycle of AI models. This includes version control using Git and implementing CI/CD pipelines to automate the deployment process. By leveraging cloud technologies, I ensure that the AI solutions I develop are not only efficient but also scalable, allowing them to handle large datasets and user requests effectively.

Q. How do you approach prompt engineering for LLMs in your projects?

A. Prompt engineering is a crucial skill in my toolkit, especially when working with large language models. In my role at Accenture, I have developed a systematic approach to crafting effective prompts that yield the best results from LLMs. Initially, I focus on understanding the specific task at hand and the expected output. For example, in the 'AI Over BI' project, I needed to ensure that the prompts accurately reflected the user's intent when querying the database.

I employ techniques such as meta-prompting, where I refine the initial prompt based on feedback from the model's responses. This iterative process allows me to adjust the wording and structure of the prompts to improve clarity and relevance. Additionally, I leverage my knowledge of embeddings and attention mechanisms to enhance the prompts further. By incorporating context and ensuring that the prompts are concise yet informative, I can significantly improve the model's performance in generating accurate and relevant outputs.

Q. Can you describe your experience with Generative AI and how you've applied it in projects?

A. In my current role as a Senior Data Scientist at Accenture Solutions Pvt Ltd, I have been deeply involved in projects leveraging Generative AI. One notable project is the 'AI Over BI' initiative, where we developed a solution that utilizes Generative AI to enhance business intelligence across various domains, including Telco and Healthcare. This project involved preprocessing data to generate metadata about tables and columns, which was then stored in a vector database.

We implemented a Natural Language Query (NLQ) system that allowed users to interact with the data using natural language, which was re-engineered into SQL queries for execution. This not only streamlined data access but also improved user engagement with the analytics platform.

Additionally, I worked on an English Language Learning App as part of Accenture's CSR initiative, which utilized Generative AI for features like sentence generation and story creation. This project aimed to enhance learning experiences for students in low-income schools, showcasing the versatility of Generative AI in educational contexts.

Q. How do you ensure that your AI solutions are scalable and maintainable?

A. Ensuring scalability and maintainability in AI solutions is a priority in my work. I adopt a modular approach when designing AI systems, which allows for easier updates and scalability. For example, in my projects at Accenture, I utilize cloud-native architectures that support scaling based on user demand. This includes leveraging services from Azure Cloud to handle increased workloads without compromising performance.

Additionally, I implement best practices in coding and documentation, which are essential for maintainability. Clear documentation helps other team members understand the system, making it easier to onboard new developers and maintain the codebase. Regular code reviews and testing are also part of my strategy to ensure that the solutions remain robust and scalable. By continuously evaluating the architecture and performance, I can make necessary adjustments to accommodate growth.

Q. Can you explain your experience with Retrieval-Augmented Generation (RAG) architectures?

A. My experience with Retrieval-Augmented Generation (RAG) architectures is extensive, particularly in my current role at Accenture. In the 'AI Over BI' project, we developed a RAG pipeline that integrated structured and unstructured data to enhance user interactions with our analytics platform.

The RAG architecture allowed us to retrieve relevant information from a vector database based on user queries. This involved preprocessing data to create metadata and using it to inform the model about the context of the queries. By re-engineering user prompts through meta-prompting techniques, we ensured that the LLM could generate accurate SQL queries. This architecture not only improved the accuracy of the responses but also enhanced the overall user experience by providing contextually relevant information quickly. The success of this project demonstrated the power of RAG in bridging the gap between data retrieval and natural language generation.

Q. Can you describe a project where you had to mentor junior data scientists?

A. Mentoring junior data scientists has been a rewarding aspect of my career, particularly during my time at Accenture. One notable project involved the development of the 'Wiki Spinner' knowledge automation platform. In this project, I took on the responsibility of guiding a team of junior data scientists through the complexities of implementing Generative AI solutions. I provided them with insights into best practices for data preprocessing, model selection, and evaluation techniques.

We held regular meetings to discuss progress and challenges, fostering an environment of open communication. I encouraged them to share their ideas and approaches, which not only boosted their confidence but also led to innovative solutions. By the end of the project, the junior team members had significantly improved their skills and contributed meaningfully to the project's success. This experience reinforced my belief in the importance of mentorship in fostering talent within the field of data science.

Q. What role do you believe team collaboration plays in successful AI projects?

A. Team collaboration is vital to the success of AI projects. In my experience, particularly at Accenture, I have seen firsthand how effective collaboration can lead to innovative solutions and improved project outcomes. Working in cross-functional teams allows for diverse perspectives, which is crucial when developing complex AI solutions. For instance, in the 'AI Over BI' project, collaboration with product managers and engineers helped us align our AI capabilities with business needs.

I also believe in fostering an environment where team members feel comfortable sharing ideas and feedback. This open communication leads to better problem-solving and enhances the overall quality of the project. Moreover, mentoring junior data scientists is another aspect of collaboration that I value. By sharing knowledge and best practices, I contribute to the team's growth and ensure that we collectively advance our skills in AI and machine learning.

Q. What are your long-term goals in the field of data science?

A. My long-term goals in the field of data science revolve around advancing my expertise in Generative AI and its applications across various industries. I aspire to lead innovative projects that leverage AI to solve complex problems and drive business value. Additionally, I aim to contribute to the development of best practices in AI ethics and governance, ensuring that the solutions we create are responsible and beneficial to society.

Mentoring the next generation of data scientists is also a key goal for me. I believe that sharing knowledge and experiences can help cultivate a strong community of professionals who are equipped to tackle the challenges of the future. Ultimately, I envision myself in a leadership role where I can influence the strategic direction of AI initiatives within an organization, driving innovation and fostering a culture of continuous learning and improvement.

Q. What is your experience with model evaluation and optimization techniques?

A. Model evaluation and optimization are integral parts of my workflow as a Senior Data Scientist. I utilize various metrics to assess model performance, including accuracy, precision, recall, and F1 score, depending on the specific application. For instance, in my recent projects at Accenture, I implemented a rigorous evaluation framework for the models used in the 'AI Over BI' initiative. This involved testing the models against a validation dataset to ensure they could handle real-world queries effectively.

Once the models were evaluated, I focused on optimization techniques such as hyperparameter tuning and feature selection. By adjusting parameters and selecting the most relevant features, I was able to enhance the model's performance significantly. Moreover, I continuously monitor the models post-deployment to ensure they maintain their performance over time. This proactive approach allows me to identify any degradation in model accuracy and make necessary adjustments promptly.