Saturday, March 8, 2025

Using OpenAI-Whisper-Tiny via HuggingFace for Automatic Speech Recognition app (Research)

To See All Articles About Technology: Index of Lessons in Technology

Can you show how to create a Flask-based endpoint that transcribes audio recorded from the browser using the above model? Also give the associated frontend code.

Below is a complete example that shows how to build a Flask endpoint using the Hugging Face model "openai/whisper-tiny" to transcribe audio files uploaded from a browser. The Flask app will accept an audio file (recorded via the browser), process it with the Hugging Face Transformers pipeline, and return the transcription as JSON. A simple HTML/JavaScript front end is provided that records audio using the MediaRecorder API and sends it to the endpoint.


Flask Endpoint (app.py)

python
from flask import Flask, request, jsonify
import os
import uuid

from transformers import WhisperProcessor, WhisperForConditionalGeneration
import torch
import librosa

app = Flask(__name__)

# Load the processor and model from Hugging Face
model_name = "openai/whisper-tiny"
processor = WhisperProcessor.from_pretrained(model_name)
model = WhisperForConditionalGeneration.from_pretrained(model_name)


@app.route("/transcribe", methods=["POST"])
def transcribe():
    if "audio" not in request.files:
        return jsonify({"error": "No audio file provided."}), 400

    audio_file = request.files["audio"]

    # Save the incoming audio file temporarily
    temp_filename = f"temp_{uuid.uuid4().hex}.wav"
    audio_file.save(temp_filename)

    try:
        # Load the audio using librosa and resample to 16kHz
        audio, sr = librosa.load(temp_filename, sr=16000)

        # Process audio into input features expected by the model
        input_features = processor(audio, sampling_rate=sr, return_tensors="pt").input_features

        # Generate transcription (default generation parameters; adjust if needed)
        predicted_ids = model.generate(input_features)
        transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
    except Exception as e:
        transcription = f"Error processing audio: {str(e)}"
    finally:
        os.remove(temp_filename)

    return jsonify({"transcription": transcription})


if __name__ == "__main__":
    # Run the server on port 5000 and listen on all interfaces.
    app.run(host="0.0.0.0", port=5000, debug=True)

Notes:

  • Make sure you have installed the required packages:
    bash
    pip install flask transformers torch librosa
  • The audio file is saved temporarily then removed after transcription.

Frontend (index.html)

This HTML page uses the MediaRecorder API to capture audio from the user's microphone, creates a Blob from the recorded data, and sends it via a POST request to the /transcribe endpoint.

html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Speech Transcription Demo</title>
  <style>
    body { font-family: Arial, sans-serif; margin: 2em; }
    #recordButton { padding: 1em; font-size: 1.2em; }
    #status { margin-top: 1em; }
    #transcription { margin-top: 1em; font-size: 1.1em; color: #333; }
  </style>
</head>
<body>
  <h1>Speech Transcription Demo</h1>
  <button id="recordButton">Start Recording</button>
  <p id="status"></p>
  <h2>Transcription:</h2>
  <p id="transcription"></p>

  <script>
    let mediaRecorder;
    let audioChunks = [];
    const recordButton = document.getElementById('recordButton');
    const statusDisplay = document.getElementById('status');
    const transcriptionDisplay = document.getElementById('transcription');
    let isRecording = false;

    recordButton.addEventListener('click', async () => {
      if (!isRecording) {
        try {
          // Request access to the microphone
          const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
          mediaRecorder = new MediaRecorder(stream);
          audioChunks = [];

          mediaRecorder.ondataavailable = event => {
            if (event.data && event.data.size > 0) {
              audioChunks.push(event.data);
            }
          };

          mediaRecorder.onstop = async () => {
            // Create a Blob from the recorded audio chunks
            const audioBlob = new Blob(audioChunks, { type: 'audio/wav' });
            const formData = new FormData();
            formData.append('audio', audioBlob, 'recording.wav');

            statusDisplay.textContent = 'Sending audio for transcription...';
            try {
              const response = await fetch('/transcribe', {
                method: 'POST',
                body: formData
              });
              const data = await response.json();
              transcriptionDisplay.textContent = data.transcription || 'No transcription available.';
            } catch (error) {
              transcriptionDisplay.textContent = 'Error: ' + error;
            }
            statusDisplay.textContent = '';
          };

          mediaRecorder.start();
          recordButton.textContent = 'Stop Recording';
          isRecording = true;
          statusDisplay.textContent = 'Recording...';
        } catch (error) {
          statusDisplay.textContent = 'Error accessing microphone: ' + error;
        }
      } else {
        // Stop recording and trigger transcription
        mediaRecorder.stop();
        recordButton.textContent = 'Start Recording';
        isRecording = false;
        statusDisplay.textContent = 'Processing audio...';
      }
    });
  </script>
</body>
</html>

Usage Instructions:

  • Place this index.html file in a directory served by your Flask app (or use a static file server); a minimal sketch of serving it straight from the Flask app is shown after this list.
  • When you load the page, click Start Recording to record your voice. Click the button again to stop recording and send the audio to the server.
  • The transcription returned from the server will be displayed on the page.
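
If you want the Flask app above to serve the page itself, the sketch below is one way to do it. It assumes index.html sits in the same directory as app.py (an illustrative layout, not something the original code requires) and would be added to app.py:

python
# Sketch: let the Flask app above serve the recorder page at "/".
# Assumes index.html lives next to app.py (illustrative layout).
import os
from flask import send_from_directory

@app.route("/")
def index():
    # Serving the page from the same app keeps it on the same origin as
    # /transcribe, so no CORS configuration is needed.
    return send_from_directory(os.path.dirname(os.path.abspath(__file__)), "index.html")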

This setup gives you an end-to-end solution where your Flask endpoint leverages the Hugging Face "openai/whisper-tiny" model to transcribe audio recorded from the browser. Feel free to adjust generation parameters, error handling, and UI details as needed for your application.
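
For instance, to pin the transcription language and task instead of relying on Whisper's automatic detection, the standalone sketch below uses the processor's get_decoder_prompt_ids helper (the local file path is illustrative):

python
# Sketch: force English transcription with whisper-tiny instead of auto-detecting the language.
from transformers import WhisperProcessor, WhisperForConditionalGeneration
import librosa

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Load and resample a local recording to 16 kHz (the path is illustrative).
audio, sr = librosa.load("sample.wav", sr=16000)
input_features = processor(audio, sampling_rate=sr, return_tensors="pt").input_features

# Build decoder prompt ids that fix the language and task, then generate.
forced_decoder_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe")
predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])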




Can you rewrite the Python code to use TensorFlow as the backend for Transformers?

Below is an example of a Flask endpoint that uses the Hugging Face Transformers library with TensorFlow as the backend to load the "openai/whisper-tiny" model for transcription. In this example, we use the TensorFlow‑compatible model loader (via TFAutoModelForConditionalGeneration) and tell the processor to return TensorFlow tensors. The endpoint accepts an uploaded audio file, processes it with the model, and returns the transcription as JSON. The accompanying HTML page uses the MediaRecorder API to record audio in the browser and send it to the endpoint.


Flask Endpoint (app_tf.py)

python
from flask import Flask, request, jsonify
import os
import uuid

import librosa
from transformers import WhisperProcessor, TFAutoModelForConditionalGeneration

app = Flask(__name__)

# Use the TensorFlow version of the model
model_name = "openai/whisper-tiny"
processor = WhisperProcessor.from_pretrained(model_name)
model = TFAutoModelForConditionalGeneration.from_pretrained(model_name)


@app.route("/transcribe", methods=["POST"])
def transcribe():
    if "audio" not in request.files:
        return jsonify({"error": "No audio file provided."}), 400

    audio_file = request.files["audio"]

    # Save the audio file temporarily
    temp_filename = f"temp_{uuid.uuid4().hex}.wav"
    audio_file.save(temp_filename)

    try:
        # Load the audio with librosa and ensure a 16kHz sampling rate
        audio, sr = librosa.load(temp_filename, sr=16000)

        # Process the audio and get input features as a TensorFlow tensor
        inputs = processor(audio, sampling_rate=sr, return_tensors="tf")
        input_features = inputs.input_features

        # Generate transcription using the TF model (generate returns a TF tensor)
        predicted_ids = model.generate(input_features)

        # Convert predicted_ids tensor to numpy before decoding
        transcription = processor.batch_decode(predicted_ids.numpy(), skip_special_tokens=True)[0]
    except Exception as e:
        transcription = f"Error processing audio: {str(e)}"
    finally:
        os.remove(temp_filename)

    return jsonify({"transcription": transcription})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)

Dependencies:
Make sure you install the necessary packages:

bash
pip install flask transformers tensorflow librosa

Frontend (index.html)

This HTML page uses the browser’s MediaRecorder API to record the user’s voice and send the resulting audio file to the /transcribe endpoint.

html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Speech Transcription Demo</title>
  <style>
    body { font-family: Arial, sans-serif; margin: 2em; }
    #recordButton { padding: 1em; font-size: 1.2em; }
    #status { margin-top: 1em; }
    #transcription { margin-top: 1em; font-size: 1.1em; color: #333; }
  </style>
</head>
<body>
  <h1>Speech Transcription Demo</h1>
  <button id="recordButton">Start Recording</button>
  <p id="status"></p>
  <h2>Transcription:</h2>
  <p id="transcription"></p>

  <script>
    let mediaRecorder;
    let audioChunks = [];
    const recordButton = document.getElementById('recordButton');
    const statusDisplay = document.getElementById('status');
    const transcriptionDisplay = document.getElementById('transcription');
    let isRecording = false;

    recordButton.addEventListener('click', async () => {
      if (!isRecording) {
        try {
          // Request microphone access
          const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
          mediaRecorder = new MediaRecorder(stream);
          audioChunks = [];

          mediaRecorder.ondataavailable = event => {
            if (event.data && event.data.size > 0) {
              audioChunks.push(event.data);
            }
          };

          mediaRecorder.onstop = async () => {
            // Combine audio chunks into a Blob
            const audioBlob = new Blob(audioChunks, { type: 'audio/wav' });
            const formData = new FormData();
            formData.append('audio', audioBlob, 'recording.wav');

            statusDisplay.textContent = 'Sending audio for transcription...';
            try {
              const response = await fetch('/transcribe', {
                method: 'POST',
                body: formData
              });
              const data = await response.json();
              transcriptionDisplay.textContent = data.transcription || 'No transcription available.';
            } catch (error) {
              transcriptionDisplay.textContent = 'Error: ' + error;
            }
            statusDisplay.textContent = '';
          };

          mediaRecorder.start();
          recordButton.textContent = 'Stop Recording';
          isRecording = true;
          statusDisplay.textContent = 'Recording...';
        } catch (error) {
          statusDisplay.textContent = 'Error accessing microphone: ' + error;
        }
      } else {
        // Stop recording and trigger transcription
        mediaRecorder.stop();
        recordButton.textContent = 'Start Recording';
        isRecording = false;
        statusDisplay.textContent = 'Processing audio...';
      }
    });
  </script>
</body>
</html>

Summary

  • The Flask endpoint (app_tf.py) uses TensorFlow as the backend by loading the model with TFAutoModelForConditionalGeneration and returning TensorFlow tensors (with return_tensors="tf").
  • The Frontend (index.html) records audio using the browser’s MediaRecorder API, sends the recorded audio as a file to the Flask endpoint, and displays the transcription result.

This setup provides an end-to-end solution where a browser-based client records audio and sends it to a TensorFlow‑backed transcription endpoint using the Hugging Face "openai/whisper-tiny" model.




from transformers import WhisperProcessor,
TFAutoModelForConditionalGeneration
ImportError: cannot import name 'TFAutoModelForConditionalGeneration' from 'transformers' (/home/ashish/anaconda3/envs/hf_202412/lib/python3.12/site-packages/transformers/__init__.py)

This error indicates that the class you're trying to import—TFAutoModelForConditionalGeneration—is not available in your current Transformers installation. Here are a few points to consider and possible solutions:


1. Version or Availability Issue

  • Transformers Version:
    Make sure you’re using a recent version of the Transformers library. You can upgrade with:

    bash
    pip install --upgrade transformers

    However, even with the latest version, not all model architectures have TensorFlow implementations.

  • TensorFlow Support for Whisper:
    The Whisper checkpoints (like "openai/whisper-tiny") are primarily maintained as PyTorch models. More importantly, Transformers does not define a class named TFAutoModelForConditionalGeneration at all, so this import fails regardless of the installed version; the TensorFlow auto classes use other names (such as TFAutoModelForSeq2SeqLM), and the TensorFlow port of Whisper, where your release includes one, is exposed as TFWhisperForConditionalGeneration.


2. Alternatives and Workarounds

  • Use the PyTorch Version:
    Since the Whisper model is officially available as a PyTorch model, you might consider using the PyTorch backend with:

    python
    from transformers import WhisperProcessor, WhisperForConditionalGeneration
    model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

    This approach is currently the recommended path for using Whisper.

  • Check for Experimental TF Support:
    If you specifically need a TensorFlow implementation, check the Transformers GitHub repository or the release notes for your installed version: recent releases ship a TensorFlow port of Whisper as TFWhisperForConditionalGeneration, which is loaded through that class directly rather than through an auto class. A hedged sketch is shown after this list.

  • Use TFAutoModelForSeq2SeqLM (If Applicable):
    Some sequence-to-sequence models use the class TFAutoModelForSeq2SeqLM for TensorFlow. However, unless the Whisper model is explicitly supported in TensorFlow, this may not work either.


Summary

  • Error Cause: The error occurs because Transformers has no class named TFAutoModelForConditionalGeneration, so the import fails no matter which version is installed.
  • Recommendation: The most reliable path is the PyTorch implementation (via WhisperForConditionalGeneration, as in the first example) when working with the "openai/whisper-tiny" model.
  • Future Updates: Keep an eye on the Transformers release notes; the TensorFlow port of Whisper (TFWhisperForConditionalGeneration) and its coverage can change between releases.

If you need further guidance or have additional questions, feel free to ask!

Tags: Technology,Large Language Models,

Whisper-based speech recognition functionality as a Flask endpoint (Research)

To See All Articles About Technology: Index of Lessons in Technology

I want to run a speech recognition app as a Flask endpoint. How can I do that?

Below is an example of how to expose your Whisper-based speech recognition functionality as a Flask endpoint. In this example, the server accepts an audio file via a POST request, processes it with the Whisper model, and returns the transcription as JSON.


Example: app.py

python
from flask import Flask, request, jsonify
import whisper
import os
import uuid
import soundfile as sf

app = Flask(__name__)

# Load the Whisper model (choose "base", "small", "medium", or "large")
model = whisper.load_model("base")


@app.route("/transcribe", methods=["POST"])
def transcribe():
    # Check if an audio file is included in the request
    if "audio" not in request.files:
        return jsonify({"error": "No audio file provided."}), 400

    audio_file = request.files["audio"]

    # Generate a unique filename to temporarily store the audio file
    temp_filename = f"temp_{uuid.uuid4().hex}.wav"
    audio_file.save(temp_filename)

    try:
        # Optional: Ensure the audio is 16kHz.
        # This reads the file and, if needed, rewrites it with the correct sample rate.
        audio_data, samplerate = sf.read(temp_filename)
        if samplerate != 16000:
            sf.write(temp_filename, audio_data, 16000)
    except Exception as e:
        os.remove(temp_filename)
        return jsonify({"error": "Failed to process audio file", "details": str(e)}), 500

    try:
        # Run transcription using Whisper
        result = model.transcribe(temp_filename)
        transcription = result.get("text", "")
    except Exception as e:
        os.remove(temp_filename)
        return jsonify({"error": "Transcription failed", "details": str(e)}), 500

    # Clean up the temporary file
    os.remove(temp_filename)
    return jsonify({"transcription": transcription})


if __name__ == "__main__":
    # Run the Flask server on 0.0.0.0 so it's accessible on your network
    app.run(debug=True, host="0.0.0.0", port=5000)

How to Run the Flask Endpoint on Linux/Ubuntu

  1. Install Dependencies
    Make sure you have the necessary packages installed. Open your terminal and run:

    sh
    sudo apt update && sudo apt install -y ffmpeg
    pip install flask openai-whisper soundfile

    Note: The ffmpeg package is useful for handling various audio formats.

  2. Save the Code
    Save the code above into a file named app.py.

  3. Run the Flask Server
    In your terminal, navigate to the folder where app.py is saved and execute:

    sh
    python app.py

    The server will start on port 5000; the /transcribe endpoint is then reachable at http://localhost:5000/transcribe (or via the machine's IP address from other devices on your network).

  4. Test the Endpoint
    You can test the endpoint using tools like curl or Postman. For example, using curl:

    sh
    curl -X POST -F "audio=@path/to/your/audio.wav" http://localhost:5000/transcribe

    You should receive a JSON response with the transcription.
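
If you prefer testing from Python rather than curl, a small sketch using the requests library (an extra dependency: pip install requests; the audio path is illustrative) looks like this:

python
# Sketch: call the /transcribe endpoint from Python instead of curl.
# Requires the requests package; the file path below is illustrative.
import requests

with open("path/to/your/audio.wav", "rb") as f:
    response = requests.post(
        "http://localhost:5000/transcribe",
        files={"audio": ("audio.wav", f, "audio/wav")},
    )

print(response.status_code)
print(response.json())  # e.g. {"transcription": "..."}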


Additional Considerations

  • Audio Preprocessing:
    In a production system, you might want to include more robust audio processing (e.g., noise reduction, resampling using a dedicated library) before passing the file to the model.

  • Error Handling:
    The code includes basic error handling for missing files and processing issues. You can extend these handlers to meet your application's needs.

  • Deployment:
    When deploying in production, consider using a production-grade server like Gunicorn behind a reverse proxy (e.g., Nginx).

This setup transforms your offline Whisper-based speech recognition functionality into a RESTful Flask endpoint that you can call from any client.

Can you also provide a webpage that works with this Flask endpoint?
But note: I want to record the user's voice and send that.

Below is an example HTML page that uses the browser’s MediaRecorder API to record the user’s voice, then sends the recorded audio as a file to your Flask endpoint. In this example, the endpoint is assumed to be hosted on the same origin at /transcribe. (If it’s hosted elsewhere, simply update the URL.)

Note:
Most browsers will record audio as WebM/Opus. On the Flask side, if needed, you can use ffmpeg (or similar) to convert the file to WAV or another format that works best with Whisper. Alternatively, you can modify the Flask code to accept WebM input.
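
If you choose to convert on the server, one hedged approach is to shell out to ffmpeg from the Flask handler before transcription. The sketch below assumes the ffmpeg binary is installed and on the PATH (as in the installation step above); the helper name and file paths are illustrative:

python
# Sketch: convert an uploaded WebM/Opus recording to 16 kHz mono WAV using ffmpeg.
# Assumes the ffmpeg binary is available on the PATH.
import subprocess

def webm_to_wav(webm_path: str, wav_path: str) -> None:
    subprocess.run(
        ["ffmpeg", "-y", "-i", webm_path, "-ar", "16000", "-ac", "1", wav_path],
        check=True,           # raise CalledProcessError if the conversion fails
        capture_output=True,  # keep ffmpeg's console output out of the Flask logs
    )

# Example usage inside the /transcribe handler (names follow app.py above):
# webm_to_wav(temp_filename, temp_filename + ".wav")
# result = model.transcribe(temp_filename + ".wav")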


index.html

html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Speech Recognition Demo</title>
  <style>
    body { font-family: Arial, sans-serif; margin: 2em; }
    #recordButton { padding: 1em; font-size: 1.2em; }
    #status { margin-top: 1em; }
    #transcription { margin-top: 1em; font-size: 1.1em; color: #333; }
  </style>
</head>
<body>
  <h1>Speech Recognition Demo</h1>
  <button id="recordButton">Start Recording</button>
  <p id="status"></p>
  <h2>Transcription:</h2>
  <p id="transcription"></p>

  <script>
    let mediaRecorder;
    let audioChunks = [];
    const recordButton = document.getElementById('recordButton');
    const statusDisplay = document.getElementById('status');
    const transcriptionDisplay = document.getElementById('transcription');
    let isRecording = false;

    recordButton.addEventListener('click', async () => {
      if (!isRecording) {
        // Request microphone access and start recording.
        try {
          const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
          mediaRecorder = new MediaRecorder(stream);
          audioChunks = [];

          mediaRecorder.ondataavailable = event => {
            if (event.data && event.data.size > 0) {
              audioChunks.push(event.data);
            }
          };

          mediaRecorder.onstop = async () => {
            const audioBlob = new Blob(audioChunks, { type: 'audio/webm' });

            // Prepare the audio file for upload.
            const formData = new FormData();
            formData.append('audio', audioBlob, 'recording.webm');

            statusDisplay.textContent = 'Sending audio for transcription...';
            try {
              const response = await fetch('/transcribe', {
                method: 'POST',
                body: formData
              });
              const data = await response.json();
              transcriptionDisplay.textContent = data.transcription || 'No transcription available.';
            } catch (error) {
              transcriptionDisplay.textContent = 'Error: ' + error;
            }
            statusDisplay.textContent = '';
          };

          mediaRecorder.start();
          recordButton.textContent = 'Stop Recording';
          isRecording = true;
          statusDisplay.textContent = 'Recording...';
        } catch (error) {
          statusDisplay.textContent = 'Error accessing microphone: ' + error;
        }
      } else {
        // Stop the recording.
        mediaRecorder.stop();
        recordButton.textContent = 'Start Recording';
        isRecording = false;
        statusDisplay.textContent = 'Processing audio...';
      }
    });
  </script>
</body>
</html>

How It Works

  1. User Interaction:

    • The user clicks the Start Recording button.
    • The page requests microphone access via navigator.mediaDevices.getUserMedia({ audio: true }).
  2. Recording:

    • A MediaRecorder instance records the audio stream.
    • Recorded chunks are stored in an array.
  3. Stopping & Sending Audio:

    • When the user clicks the button again (now labeled Stop Recording), the recorder stops.
    • The recorded chunks are assembled into a Blob (with MIME type audio/webm).
    • The Blob is appended to a FormData object and sent via a fetch() POST request to the /transcribe endpoint.
  4. Display Results:

    • The Flask endpoint processes the audio and returns a JSON object with the transcription.
    • The transcription is then displayed on the webpage.

This simple webpage allows you to record audio in the browser and send it to your Flask endpoint where Whisper can process it. Adjust the MIME type or perform client-side conversion if your Flask endpoint requires a specific audio format.

Tags: Technology,Generative AI,Large Language Models,

Apple unveils new Mac Studio that can support LLMs with over 600 billion parameters

To See All Articles About Technology: Index of Lessons in Technology
Apple emphasises AI capabilities of the Mac Studio, with the M3 Ultra model supporting large language models (LLMs) with over 600 billion parameters entirely in memory

Apple on Wednesday (March 5, 2025) announced the latest iteration of its Mac Studio, touting it as the most powerful Mac ever made. Equipped with the new M4 Max and M3 Ultra chips, the compact desktop is designed for professionals who require top-tier performance. The new Mac Studio introduces Thunderbolt 5 for faster connectivity, up to 512GB of unified memory, and 16TB of SSD storage, all within a compact and quiet design meant to sit right on a desk. Apple claims the new Mac Studio provides massive performance gains compared to previous models, making it an ideal choice for users working with AI, video production, and other intensive tasks.

Apple emphasises AI capabilities of the Mac Studio, with the M3 Ultra model supporting large language models (LLMs) with over 600 billion parameters entirely in memory. This makes it a powerhouse for AI workloads, offering high efficiency for machine learning and creative applications. It is also optimised for Apple Intelligence, which enhances productivity and privacy. The new Mac Studio is available for pre-order today, with general availability starting March 12.

Mac Studio with M4 Max

The M4 Max version of Mac Studio is designed for video editors, developers, engineers, and creative professionals. It features a 16-core CPU, a 40-core GPU, and over 500GB/s of memory bandwidth, making it significantly faster than the M1 Max-based Mac Studio. Apple reports that the M4 Max delivers up to 3.5x the performance of the M1 Max model and is 6.1x faster than the most powerful Intel-based 27-inch iMac. With up to 128GB of unified memory, users can handle complex workflows, including large-scale image processing and high-resolution video editing.

Mac Studio with M3 Ultra

For those needing even greater power, the M3 Ultra version of the Mac Studio is the ultimate professional desktop. It boasts up to a 32-core CPU, an 80-core GPU, and 800GB/s of memory bandwidth, delivering nearly twice the performance of the M4 Max in multi-threaded workloads. Apple claims that the Mac Studio with M3 Ultra is 2.6x faster than its M1 Ultra predecessor and 6.4x faster than the Intel Xeon-based Mac Pro. It supports up to 512GB of unified memory, the highest ever in a personal computer, making it a game-changer for AI, video production, and 3D rendering.

Apple has also upgraded the Mac Studio’s connectivity with Thunderbolt 5, offering speeds up to 120 Gb/s, three times faster than the previous generation. This allows for faster external storage, expansion chassis, and multi-display setups. The M3 Ultra model can drive up to eight Pro Display XDRs at full 6K resolution, making it ideal for professionals who require extensive screen real estate. The Mac Studio also includes a 10Gb Ethernet port, HDMI, an SDXC card slot, built-in Wi-Fi, and Bluetooth, providing a well-rounded set of connectivity options.

macOS Sequoia and Apple Intelligence

The new Mac Studio runs on macOS Sequoia, offering features like iPhone Mirroring, enhanced window management, and a redesigned Safari experience. With Apple Intelligence, users can take advantage of AI-powered writing tools, priority notifications, live transcription, and ChatGPT integration in Siri. Apple ensures that privacy remains a top priority, using on-device processing and Private Cloud Compute to handle AI tasks securely.

Mac Studio (2025) Key Specifications

Feature           | Mac Studio with M4 Max                   | Mac Studio with M3 Ultra
CPU               | 16-core                                  | 32-core (24 performance cores)
GPU               | 40-core                                  | 80-core
Neural Engine     | 3x faster than M1 Max                    | 32-core for AI/ML tasks
Memory            | Up to 128GB unified memory               | Up to 512GB unified memory
Storage           | Up to 16TB SSD                           | Up to 16TB SSD
Memory Bandwidth  | 500GB/s+                                 | 800GB/s+
Connectivity      | Thunderbolt 5, 10Gb Ethernet, HDMI, SDXC | Thunderbolt 5, 10Gb Ethernet, HDMI, SDXC
AI Performance    | Runs LLMs efficiently                    | Handles 600B+ parameter models in memory
Display Support   | Multiple 6K displays                     | Up to 8 Pro Display XDRs at 6K resolution

Tags: Technology,Generative AI,Large Language Models,Artificial Intelligence,

Day 1 With MySQL - Setting the DB up on PythonAnywhere cloud and running some tests

To See All Articles About Technology: Index of Lessons in Technology

How do I create a MySQL database in PythonAnywhere?

How to Create a MySQL Database in PythonAnywhere

PythonAnywhere provides a MySQL database for its users, but you need to set it up first. Follow these steps:


🔹 Step 1: Log in to PythonAnywhere

  1. Go to PythonAnywhere and log in.
  2. Navigate to the Databases tab.

🔹 Step 2: Create a New MySQL Database

  1. Click "Add a new MySQL database".
  2. Choose a database name (e.g., mydatabase).
  3. Click Create.
  4. PythonAnywhere will generate:
    • A database name (e.g., your_username$mydatabase).
    • A MySQL username (same as your PythonAnywhere username).
    • A MySQL password (auto-generated, but you can reset it).
    • A MySQL host (usually your_username.mysql.pythonanywhere-services.com).

🔹 Step 3: Connect to MySQL from PythonAnywhere Console

To check if the database is working, open a Bash console in PythonAnywhere and run:

bash
mysql -u your_username -h your_username.mysql.pythonanywhere-services.com -p

Enter your MySQL password when prompted.

To list databases:

sql
SHOW DATABASES;

To switch to your database:

sql
USE your_username$mydatabase;

🔹 Step 4: Connect to MySQL from a Flask App

Install mysql-connector-python (if not already installed):

bash
pip install mysql-connector-python

Then, create a Python script (app.py):

python
import mysql.connector

# Database credentials from PythonAnywhere
DB_CONFIG = {
    "host": "your_username.mysql.pythonanywhere-services.com",
    "user": "your_username",
    "password": "your_mysql_password",
    "database": "your_username$mydatabase"
}

# Connect to MySQL
try:
    db = mysql.connector.connect(**DB_CONFIG)
    cursor = db.cursor()
    print("Connected to MySQL successfully!")

    # Create a table (if needed)
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS users (
            id INT AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(100),
            email VARCHAR(100) UNIQUE
        )
    """)
    print("Table created!")

    # Close connection
    cursor.close()
    db.close()
except mysql.connector.Error as err:
    print(f"Error: {err}")

Run the script:

bash
python app.py

🔹 Step 5: Access MySQL from PythonAnywhere Web App

If you're deploying a Flask app on PythonAnywhere:

  1. Go to "Web" and create a new Flask app.
  2. Make sure app.py uses the correct MySQL hostname (not localhost but your_username.mysql.pythonanywhere-services.com).
  3. Restart the web app after changes.
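
To sanity-check the connection from the web app itself, the sketch below shows a Flask route that reads back the users table created in Step 4 (it reuses the DB_CONFIG values from above; the /users route name is illustrative):

python
# Sketch: a Flask route that queries the users table created in Step 4.
# Uses the PythonAnywhere MySQL hostname, never localhost.
from flask import Flask, jsonify
import mysql.connector

app = Flask(__name__)

DB_CONFIG = {
    "host": "your_username.mysql.pythonanywhere-services.com",
    "user": "your_username",
    "password": "your_mysql_password",
    "database": "your_username$mydatabase",
}

@app.route("/users")
def list_users():
    db = mysql.connector.connect(**DB_CONFIG)
    try:
        cursor = db.cursor()
        cursor.execute("SELECT id, name, email FROM users")
        rows = [{"id": r[0], "name": r[1], "email": r[2]} for r in cursor.fetchall()]
        cursor.close()
    finally:
        db.close()
    return jsonify(rows)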

Final Checklist

✔ Use your full database name (your_username$mydatabase).
✔ Use PythonAnywhere’s MySQL hostname (your_username.mysql.pythonanywhere-services.com).
✔ Avoid using localhost in your connection string.

Let me know if you need further help! 🚀

Testing the MySQL Database

mysql> use ashishjaindrive1$default
Database changed
mysql> create table users (name varchar(100));
Query OK, 0 rows affected (0.02 sec)

mysql> insert into users values('ram');
Query OK, 1 row affected (0.01 sec)

mysql> insert into users values('ravan');
Query OK, 1 row affected (0.00 sec)

mysql> commit;
Query OK, 0 rows affected (0.00 sec)

How to install python-dotenv?

12:23 ~/mysite $ pip install python-dotenv
Defaulting to user installation because normal site-packages is not writeable
Looking in links: /usr/share/pip-wheels
Collecting python-dotenv
  Downloading python_dotenv-1.0.1-py3-none-any.whl (19 kB)
Installing collected packages: python-dotenv
Successfully installed python-dotenv-1.0.1
12:25 ~/mysite $
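
With python-dotenv installed, a natural next step is to keep the MySQL credentials out of app.py. The sketch below assumes a .env file containing the listed keys (the variable names are illustrative, not something PythonAnywhere generates for you):

python
# Sketch: load MySQL credentials from a .env file with python-dotenv
# so they are not hard-coded in the source. Key names are illustrative.
import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from a .env file in the working directory

DB_CONFIG = {
    "host": os.environ["MYSQL_HOST"],          # e.g. your_username.mysql.pythonanywhere-services.com
    "user": os.environ["MYSQL_USER"],
    "password": os.environ["MYSQL_PASSWORD"],
    "database": os.environ["MYSQL_DATABASE"],  # e.g. your_username$mydatabase
}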