# LangChain Observability for SMB AI Workflow Monitoring
Plug-and-play tracing and cost observability for LangChain-based pipelines, built on REAA's open-source instrumentation stack.
## Problem
SMBs adopting LangChain for multi-step LLM workflows have no built-in way to see where latency piles up, which chain step costs the most, or why a particular prompt is bleeding tokens. They either fly blind or pay for a separate SaaS with complex setup.
## Architecture
```
LangChain App → ObservabilityCallbackHandler → OTel TracerProvider
                                               ├── SpanListener (budget tracking via agent-budget-otel-bridge)
                                               └── OTLP Exporter → Langfuse (via OpenTelemetry Collector)

Express Server (sidecar)
├── /health  → { status: "ok" }
└── /metrics → { total_cost, per_model, per_chain, token_usage, ... }
```
The callback handler wraps LangChain's `BaseCallbackHandler` to capture LLM, chain, and tool invocations as OpenTelemetry spans. The spans are simultaneously:
- Processed by `SpanListener` (from `@reaatech/agent-budget-otel-bridge`) for cost tracking
- Exported via OTLP to a collector endpoint (e.g., Langfuse OTLP endpoint)
A sidecar Express server exposes `/health` and `/metrics` endpoints for operational visibility.
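The span lifecycle the handler manages can be sketched as follows. This is a dependency-free stand-in: the method names mirror LangChain's `BaseCallbackHandler` hooks (`handleChainStart`/`handleChainEnd`, `handleLLMStart`/`handleLLMEnd`), but the real handler extends that class and emits genuine OpenTelemetry spans; the simplified signatures and the `RecordedSpan` shape here are illustrative assumptions.

```typescript
// Minimal stand-in for an OTel span: a name plus start/end timestamps.
interface RecordedSpan {
  name: string;
  startMs: number;
  endMs?: number;
}

// Sketch of the handler's span bookkeeping. LangChain invokes each start/end
// hook with a runId, which is how a start is paired with its matching end.
class ObservabilitySketch {
  private open = new Map<string, RecordedSpan>();
  readonly finished: RecordedSpan[] = [];

  handleChainStart(chainName: string, runId: string): void {
    this.open.set(runId, { name: `chain:${chainName}`, startMs: Date.now() });
  }

  handleChainEnd(runId: string): void {
    this.close(runId);
  }

  handleLLMStart(model: string, runId: string): void {
    this.open.set(runId, { name: `llm:${model}`, startMs: Date.now() });
  }

  handleLLMEnd(runId: string): void {
    this.close(runId);
  }

  private close(runId: string): void {
    const span = this.open.get(runId);
    if (!span) return; // an end without a matching start is ignored
    span.endMs = Date.now();
    this.open.delete(runId);
    this.finished.push(span); // a real handler would call span.end() here
  }
}

// Usage: simulate one chain run containing one nested LLM call.
const handler = new ObservabilitySketch();
handler.handleChainStart("qa-chain", "run-1");
handler.handleLLMStart("gpt-4o-mini", "run-2");
handler.handleLLMEnd("run-2");
handler.handleChainEnd("run-1");
```

Because the LLM run ends before the enclosing chain run, its span lands first in `finished`, which is also the order a span processor would see them.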
## Quick Start
```bash
pnpm install
cp .env.example .env
# Fill in your Langfuse credentials and OTLP endpoint
pnpm start
```
Point a LangChain chain at the callback:
```typescript
import { LLMChain } from "langchain/chains";
import { createLangChainCallback } from "langchain-observability-sm/src/langchain-callback.js";

// llm and prompt are your existing model and prompt-template instances
const chain = new LLMChain({ llm, prompt, callbacks: [createLangChainCallback()] });
```
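The sidecar's two routes can be sketched with Node's built-in `http` module. The real server uses Express, and the metric values below are placeholder assumptions; only the field names come from the architecture diagram above.

```typescript
import http from "node:http";

// Pure routing function: maps a request path to a status code and JSON body.
// Field names follow the architecture diagram; the values are placeholders.
function route(path: string): { status: number; body: object } {
  if (path === "/health") return { status: 200, body: { status: "ok" } };
  if (path === "/metrics") {
    return {
      status: 200,
      body: {
        total_cost: 0,
        per_model: {},
        per_chain: {},
        token_usage: { input: 0, output: 0 },
      },
    };
  }
  return { status: 404, body: { error: "not found" } };
}

// Thin HTTP wrapper around route().
const server = http.createServer((req, res) => {
  const { status, body } = route(req.url ?? "/");
  res.writeHead(status, { "Content-Type": "application/json" });
  res.end(JSON.stringify(body));
});

// server.listen(Number(process.env.PORT ?? "3000"));
```

Keeping the routing logic as a pure function makes it trivially unit-testable without starting a listener.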
## Environment Variables
| Variable | Description |
|---|---|
| `LANGFUSE_PUBLIC_KEY` | Langfuse project public key |
| `LANGFUSE_SECRET_KEY` | Langfuse project secret key |
| `LANGFUSE_HOST` | Langfuse host (e.g., `cloud.langfuse.com`) |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP collector endpoint |
| `OTEL_SERVICE_NAME` | Service name for traces (default: `langchain-observability-sm`) |
| `PORT` | Express server port (default: `3000`) |
| `LANGCHAIN_API_KEY` | LangChain API key (for LangSmith) |
| `LANGCHAIN_PROJECT` | LangChain project name |
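The two defaulted variables can be resolved with a small helper; this is a sketch, and the helper name is an assumption rather than part of the project's API.

```typescript
// Read an environment variable, falling back to the documented default
// when it is unset or empty.
function envOr(name: string, fallback: string): string {
  const value = process.env[name];
  return value !== undefined && value !== "" ? value : fallback;
}

const port = Number(envOr("PORT", "3000"));
const serviceName = envOr("OTEL_SERVICE_NAME", "langchain-observability-sm");
```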
## Langfuse OTLP Configuration
Langfuse accepts OpenTelemetry traces via its OTLP endpoint. Configure:
```
LANGFUSE_HOST=cloud.langfuse.com
LANGFUSE_PUBLIC_KEY=pk-...
LANGFUSE_SECRET_KEY=sk-...
OTEL_EXPORTER_OTLP_ENDPOINT=https://cloud.langfuse.com/api/public/otel
```
The exporter sends `Authorization: Basic base64(publicKey:secretKey)` headers automatically.
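The same header can be built by hand, e.g. when pointing an OpenTelemetry Collector at Langfuse directly instead of relying on the exporter. The key values below are hypothetical.

```typescript
// Hypothetical credentials; substitute your real Langfuse project keys.
const publicKey = "pk-example";
const secretKey = "sk-example";

// Basic auth: "Basic " + base64("publicKey:secretKey").
const authHeader = "Basic " + Buffer.from(`${publicKey}:${secretKey}`).toString("base64");

// Decoding the base64 payload recovers the original key pair.
const decoded = Buffer.from(authHeader.slice("Basic ".length), "base64").toString("utf8");
```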
## Development
```bash
pnpm typecheck # TypeScript validation
pnpm lint # ESLint with strict type-checked rules
pnpm test # Vitest with 90% coverage thresholds
```
## License
MIT