Saturday, May 2, 2026

How to set up my AI project for observability using OTel and Arize Phoenix


Guide me, in layman's terms and an explanatory tone: "how to set up my project for observability using OTel/Arize Phoenix"

Setting up observability for your AI project is like installing a "flight recorder" (OpenTelemetry) and a "mission control dashboard" (Arize Phoenix). Since you are already building SPAs and working with story datasets, this setup will help you see exactly how your logic flows.

Here is a layman’s guide to getting it running in your project.


The "Big Picture" Flow

  1. The Application: Your code (React, Python, etc.).

  2. The Instrument (OTel): A small piece of code that "listens" to what your app is doing.

  3. The Collector: The pipe that carries that info.

  4. The Viewer (Phoenix): The dashboard where you watch the traces.
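
The four pieces above can be sketched as a toy pipeline in plain Python. This is not real OpenTelemetry; the `Recorder`, `Collector`, and `Viewer` classes below are made-up stand-ins, just to show how a span record travels from your app to the dashboard:

```python
# Toy illustration of the observability pipeline -- NOT real OpenTelemetry.
# Each class stands in for a real component: the OTel SDK, the collector
# pipe, and the Phoenix dashboard.

class Recorder:                      # "The Instrument (OTel)"
    """Listens to the app and turns each action into a span record."""
    def record(self, action, detail):
        return {"action": action, "detail": detail}

class Collector:                     # "The Collector"
    """Carries span records from the app to the viewer."""
    def __init__(self, viewer):
        self.viewer = viewer
    def export(self, span):
        self.viewer.receive(span)

class Viewer:                        # "The Viewer (Phoenix)"
    """Stores spans so you can look at them on a dashboard."""
    def __init__(self):
        self.traces = []
    def receive(self, span):
        self.traces.append(span)

# Wire the pipeline together, then let "the application" do something.
viewer = Viewer()
collector = Collector(viewer)
recorder = Recorder()
collector.export(recorder.record("classify_story", "Once upon a time..."))
```

In the real setup, the next steps replace each toy piece with the genuine article, but the shape of the flow stays the same.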


Step 1: Start Your Dashboard (Arize Phoenix)

Before your app can send data anywhere, you need a place for it to land. The easiest way is to run Phoenix locally on your machine.

In your terminal, run:

Bash
pip install arize-phoenix
phoenix serve

  • What just happened? You just started a local server. You can now open http://localhost:6006 in your browser to see your (currently empty) dashboard.
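
If you want to confirm the dashboard is actually up before wiring anything else, a small standard-library check like this works (the URL assumes Phoenix's default local port; adjust if you changed it):

```python
from urllib.request import urlopen
from urllib.error import URLError

def phoenix_is_up(url: str = "http://localhost:6006", timeout: float = 2.0) -> bool:
    """Return True if something answers HTTP at the Phoenix URL."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError):
        return False

print("Phoenix reachable:", phoenix_is_up())
```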


Step 2: Install the "Flight Recorder"

You need to install the OpenTelemetry (OTel) libraries that specifically understand AI and LLMs. Since you're likely using Python for your story-filtering logic, you’ll want the arize-phoenix-otel helper.

In your project folder, run:

Bash
pip install arize-phoenix-otel openinference-instrumentation-openai

(Note: If you use LangChain or LlamaIndex instead of raw OpenAI, there are specific "instrumentors" for those too.)
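
For example, each framework has its own instrumentor package (names as published on PyPI at the time of writing; double-check against the OpenInference docs for your version):

```shell
# One instrumentor per framework -- install only the one you actually use:
pip install openinference-instrumentation-langchain     # LangChain
pip install openinference-instrumentation-llama-index   # LlamaIndex
```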


Step 3: Turn on the "Auto-Recorder"

Now, you need to tell your code: "Hey, every time I talk to an LLM, record it and send it to Phoenix."

You can do this with just two lines of code at the very top of your main script:

Python
from phoenix.otel import register

# This sets up the plumbing and points it to your local Phoenix server
tracer_provider = register(project_name="my-story-app", auto_instrument=True)

Why this is cool:

By setting auto_instrument=True, you don't have to manually write "Start Recording" and "Stop Recording" around every function. The OTel library "wraps" your LLM calls automatically.
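
To see what "wrapping" means, here is a toy decorator that mimics what auto-instrumentation does under the hood. It is pure Python with no OTel involved, and `classify_story` is a made-up stand-in for a real LLM call:

```python
import functools
import time

TRACES = []  # stand-in for the data Phoenix would receive

def traced(fn):
    """Toy auto-instrumentation: wrap a function so every call is recorded."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACES.append({
            "name": fn.__name__,
            "latency_ms": round((time.perf_counter() - start) * 1000, 3),
            "input": args,
            "output": result,
        })
        return result
    return wrapper

@traced
def classify_story(text):
    # Pretend LLM call: classify a story by a crude keyword check.
    return "Fantasy" if "dragon" in text else "Real Life"

classify_story("A dragon guarded the hill.")
```

The real instrumentors apply this same wrapping idea to every LLM client call for you, which is why you never have to touch your business logic.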


Step 4: Run Your Code and Watch

Now, run your script (e.g., the one filtering your TinyStories dataset).

  1. Perform an LLM action (like classifying a story).

  2. Go back to http://localhost:6006.

  3. Magic: You will see a "Trace" appear.


How to use this for your specific projects:

If you are working on... focus your observability on...

  • Phonics SPAs: look at Latency. How long does it take for the phonics sound logic to return? Is it fast enough for a 5-year-old's attention span?

  • TinyStories Filtering: look at Inputs/Outputs. If a story was classified as "Fantasy" but it's clearly "Real Life," you can click the trace to see the exact prompt you sent and why the model got confused.

  • Classroom Games: look at Session Tracking. You can group all actions from one "Game Session" together to see whether the game crashed or slowed down after 10 minutes of play.

Pro Tip: If you want to move from your laptop to a "real" setup later, you just change the Endpoint URL in your code to point to a cloud version of Phoenix. The code itself stays exactly the same!
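
For example, Phoenix can read its destination from environment variables, so pointing the same code at a hosted instance is a configuration change rather than a code change (the variable names come from the Phoenix docs; the URL and key below are placeholders):

```shell
# Point the same code at a remote Phoenix instance instead of localhost.
export PHOENIX_COLLECTOR_ENDPOINT="https://your-phoenix-host.example.com"
# Hosted/cloud instances usually also require an API key:
export PHOENIX_API_KEY="your-api-key-here"
```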


