Open Source LLM Engineering Platform

Traces, evals, prompt management, and metrics to debug and improve your LLM application.
Python SDK
from langfuse import observe

# drop-in wrapper adds OpenTelemetry tracing to OpenAI
# many other llm/agent integrations are available
from langfuse.openai import openai

@observe()  # decorate any function; all nested calls are auto-linked
def handle_request(text: str) -> str:
    res = openai.chat.completions.create(
        model="gpt-5",
        messages=[
            {"role": "system", "content": "Summarize in one sentence."},
            {"role": "user", "content": text},
        ],
    )
    return res.choices[0].message.content
[Screenshot: Langfuse trace detail view showing nested observations with latency and cost]

Observability

Capture complete traces of your LLM applications and agents. Use traces to inspect failures and build evaluation datasets. Built on OpenTelemetry, with support for all popular LLM and agent libraries.