Pluggable Express middleware that scrubs PII, blocks prompt injections, and enforces content policies on Anthropic-powered chatbots — no vendor lock‑in.
Small businesses deploying AI chatbots face regulatory and reputational risks when user‑supplied PII leaks through, prompt injections manipulate the model, or the LLM produces harmful content. They lack an easy, self‑hosted way to enforce safety rules without relying on expensive API gateways.
A complete, working implementation of this recipe is available, downloadable as a zip or browsable file by file. It is generated by our build pipeline and tested with full coverage before publishing.
You’ll build an Express server and a Next.js API route that wrap every Anthropic chat request in a guardrail pipeline. The middleware intercepts user messages to strip PII and block prompt injections, calls the Anthropic API, then checks the response through output guardrails before returning OpenAI-compatible JSON. By the end you’ll have a running server that logs structured audit events to stdout and can be deployed on any infrastructure.
Prerequisites
Node.js >= 22 — The project uses node: imports and ES2022 features.
pnpm 10.0.0 — Package manager pinned in package.json.
Anthropic API key — Get one at console.anthropic.com.
Sentry DSN (optional) — Enables error tracking via @sentry/node.
Familiarity with TypeScript, Express middleware, and Next.js App Router conventions.
Step 1: Initialize the project
Create an empty working directory and scaffold package.json with all required scripts and dependencies.
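As a reference point, here is a sketch of what package.json ends up containing, inferred from the scripts and packages used in later steps. Version ranges and exact script bodies are assumptions (the tsx runner in particular is a placeholder for whatever TypeScript execution strategy the recipe actually uses):

```json
{
  "name": "anthropic-security-guardrails-for-smb-ai-chatbots",
  "version": "0.1.0",
  "type": "module",
  "packageManager": "pnpm@10.0.0",
  "engines": { "node": ">=22" },
  "scripts": {
    "dev": "next dev",
    "start:express": "tsx src/server.ts",
    "test": "vitest run --coverage"
  },
  "dependencies": {
    "@anthropic-ai/sdk": "*",
    "@reaatech/guardrail-chain": "*",
    "@reaatech/guardrail-chain-config": "*",
    "@reaatech/guardrail-chain-guardrails": "*",
    "@reaatech/guardrail-chain-observability": "*",
    "@sentry/node": "*",
    "express": "*",
    "next": "*",
    "zod": "*"
  },
  "devDependencies": {
    "tsx": "*",
    "typescript": "*",
    "vitest": "*",
    "@vitest/coverage-v8": "*"
  }
}
```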
Environment variable overrides take precedence over this file. Set GUARDRAIL_CHAIN_BUDGET_MAX_LATENCY_MS=2000 in .env to raise the latency budget without editing the YAML.
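For orientation, the override maps onto a nested key in the YAML. A hypothetical fragment of guardrail.config.yaml, where the key names are inferred from the env variable naming convention and should be checked against the @reaatech/guardrail-chain-config schema:

```yaml
budget:
  # Overridden by GUARDRAIL_CHAIN_BUDGET_MAX_LATENCY_MS when useEnv is true
  maxLatencyMs: 1000
```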
Step 6: Create the request schema
Create app/api/schemas.ts. This file defines the Zod schema that validates incoming chat requests for both the Express middleware and the Next.js route handler.
Step 7: Create the correlation ID utility
Create src/utils/headers.ts. This utility extracts or generates a correlation ID from the incoming request so audit logs can be correlated across input guardrail, Anthropic call, and output guardrail.
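A minimal sketch of what this utility can look like. The header name and fallback behavior are assumptions, not the recipe's exact implementation:

```typescript
// Sketch of src/utils/headers.ts; the header name is an assumption.
import { randomUUID } from "node:crypto";

const CORRELATION_HEADER = "x-correlation-id";

// Reuse the caller's correlation ID when present; otherwise mint a fresh UUID
// so every request can still be traced across guardrail and API-call logs.
export function extractCorrelationId(
  headers: Record<string, string | string[] | undefined>
): string {
  const raw = headers[CORRELATION_HEADER];
  const value = Array.isArray(raw) ? raw[0] : raw;
  return value && value.trim().length > 0 ? value : randomUUID();
}
```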
Step 8: Create the response formatter
Create src/utils/response.ts. This utility converts Anthropic’s native message format into an OpenAI-compatible chat completions response shape so existing clients work without changes.
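The conversion can be sketched as follows. The interfaces are trimmed to the fields the mapping needs, and the finish_reason mapping is an assumption; the recipe's real types may carry more detail:

```typescript
// Sketch of src/utils/response.ts; field shapes are assumptions.
export interface AnthropicMessage {
  id: string;
  model: string;
  content: Array<{ type: string; text?: string }>;
  stop_reason: string | null;
  usage: { input_tokens: number; output_tokens: number };
}

export interface OpenAIChatResponse {
  id: string;
  object: "chat.completion";
  created: number;
  model: string;
  choices: Array<{
    index: number;
    message: { role: "assistant"; content: string };
    finish_reason: string;
  }>;
  usage: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
}

// Flatten Anthropic text blocks into a single string and remap usage fields
// so existing OpenAI-style clients can parse the response unchanged.
export function formatAnthropicResponse(msg: AnthropicMessage): OpenAIChatResponse {
  const text = msg.content
    .filter((block) => block.type === "text")
    .map((block) => block.text ?? "")
    .join("");
  return {
    id: msg.id,
    object: "chat.completion",
    created: Math.floor(Date.now() / 1000),
    model: msg.model,
    choices: [
      {
        index: 0,
        message: { role: "assistant", content: text },
        finish_reason: msg.stop_reason === "max_tokens" ? "length" : "stop",
      },
    ],
    usage: {
      prompt_tokens: msg.usage.input_tokens,
      completion_tokens: msg.usage.output_tokens,
      total_tokens: msg.usage.input_tokens + msg.usage.output_tokens,
    },
  };
}
```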
Step 9: Create the guardrail chain factory
Create src/chain/factory.ts. This factory builds the GuardrailChain instance once and caches it. It loads config from guardrail.config.yaml and registers the full set of input and output guardrails.
ts
import { loadConfig } from "@reaatech/guardrail-chain-config";
import { GuardrailChain } from "@reaatech/guardrail-chain";
import {
  PIIRedaction,
  PromptInjection,
  TopicBoundary,
  ContentModeration,
  PIIScan,
  ToxicityFilter,
  HallucinationCheck,
} from "@reaatech/guardrail-chain-guardrails";

let chainInstance: GuardrailChain | null = null;

export async function createGuardrailChain(): Promise<GuardrailChain> {
  if (chainInstance) return chainInstance;

  const config = await loadConfig({
    filePath: "./guardrail.config.yaml",
    useEnv: true,
  });

  const chain = new GuardrailChain(config);

  // Input guardrails
  chain.addGuardrail(new PIIRedaction({ redactionStrategy: "mask" }));
  chain.addGuardrail(new PromptInjection());
  chain.addGuardrail(
    new TopicBoundary({
      allowedTopics: ["weather", "travel", "food", "shopping", "support"],
      blockedTopics: ["politics", "violence", "hate", "illegal"],
    })
  );
  chain.addGuardrail(new ContentModeration());

  // Output guardrails
  chain.addGuardrail(new PIIScan());
  chain.addGuardrail(new ToxicityFilter());
  chain.addGuardrail(new HallucinationCheck());

  chainInstance = chain;
  return chain;
}

export function getGuardrailChain(): GuardrailChain {
  if (!chainInstance) {
    throw new Error("Guardrail chain not initialized. Call createGuardrailChain() first.");
  }
  return chainInstance;
}
Step 10: Create the guardrail middleware
Create src/middleware/guardrail.ts. This Express middleware intercepts POST /v1/chat/completions requests, runs the input guardrail chain on each user message, calls the Anthropic SDK, then runs the output guardrail chain on each text block of the response.
ts
import type { Request, Response, NextFunction } from "express";
import type { MessageParam } from "@anthropic-ai/sdk/resources/messages/messages.js";
import { ChatRequestSchema } from "../types/schemas.js";
import { extractCorrelationId } from "../utils/headers.js";
import { formatAnthropicResponse, type AnthropicMessage } from "../utils/response.js";
import { getLogger } from "@reaatech/guardrail-chain-observability";

interface ChatMessage {
  role: "user" | "assistant" | "system" | "developer" | "tool";
  content: string;
}
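The middleware body itself is elided here, but its core per-message input pass can be sketched as a pure function. The runInput method and the result shape below are assumptions standing in for whatever the GuardrailChain API actually exposes:

```typescript
// Hypothetical sketch of the input-guardrail pass the middleware performs on
// each user message. `runInput` and GuardrailResult are assumed names.
interface GuardrailResult {
  passed: boolean;
  guardrail?: string;    // name of the guardrail that blocked the message
  transformed?: string;  // e.g. PII-masked text
}

interface InputChain {
  runInput(text: string): Promise<GuardrailResult>;
}

type ChatMessage = { role: string; content: string };

export async function screenMessages(
  chain: InputChain,
  messages: ChatMessage[]
): Promise<{ ok: true; messages: ChatMessage[] } | { ok: false; guardrail: string }> {
  const screened: ChatMessage[] = [];
  for (const message of messages) {
    // Only user-supplied content is screened; other turns pass through.
    if (message.role !== "user") {
      screened.push(message);
      continue;
    }
    const result = await chain.runInput(message.content);
    if (!result.passed) {
      return { ok: false, guardrail: result.guardrail ?? "unknown" };
    }
    // Use the transformed (e.g. PII-redacted) text when the chain rewrote it.
    screened.push({ ...message, content: result.transformed ?? message.content });
  }
  return { ok: true, messages: screened };
}
```

A blocked message short-circuits the whole request, which is what lets the middleware return a 400 before any tokens are spent on the Anthropic call.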
Step 11: Create the server entry points
Create src/index.ts to export everything from a single entry point.
ts
// Guardrail Chain exports
export { createGuardrailChain, getGuardrailChain } from "./chain/factory.js";

// Observability
export { initObservability, getLogger, getTracer } from "./observability.js";

// Types
export * from "./types/index.js";

// Utilities
export { extractTextFromContentBlocks } from "./utils/text.js";
export { extractCorrelationId } from "./utils/headers.js";
export { formatAnthropicResponse, type OpenAIChatResponse } from "./utils/response.js";
export { toHttpError, type HttpError } from "./utils/error.js";

// Middleware
export { createGuardrailMiddleware } from "./middleware/guardrail.js";

export const RECIPE_VERSION = "0.1.0";
export const RECIPE_NAME = "anthropic-security-guardrails-for-smb-ai-chatbots";
Create instrument.ts at the project root to initialize Sentry as a Node.js preload hook.
ts
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: process.env.SENTRY_DSN,
});
Create src/server.ts for the standalone Express server. This is the entry point used by pnpm start:express.
Create app/api/v1/chat/route.ts. This is the Next.js App Router handler that shares the same guardrail pipeline as the Express server. The file imports a local NextResponse shim from app/api/next-types.ts to avoid module resolution issues with the next/server package.
ts
import Anthropic from "@anthropic-ai/sdk";
import type { MessageParam } from "@anthropic-ai/sdk/resources/messages/messages.js";
import { createGuardrailChain } from "../../../../src/index.js";
import { ChatRequestSchema } from "../../schemas.js";
import { getLogger } from "@reaatech/guardrail-chain-observability";
import { NextResponse } from "../../next-types.js";
import { formatAnthropicResponse, type AnthropicMessage } from "../../../../src/utils/response.js";

type NextRequest = {
  headers: {
    get(name: string): string | null;
  };
};
Create app/api/next-types.ts. This is a local NextResponse shim that avoids importing next/server in test mocks and standalone server contexts.
ts
/**
 * Local Next.js types for the route handler.
 * Copied here to avoid module resolution issues with next/server.
 * These are minimal shims that match the Next.js App Router API surface.
 */
type ResponseInit = {
  status?: number;
  statusText?: string;
  headers?: Record<string, string>;
};

export class NextResponse {
  private _response: Response;
  private _headers: Record<string, string> = {};
  private _status: number;

  constructor(response: Response, status?: number) {
    this._response = response;
    this._status = status ?? response.status;
  }

  static json(data: unknown, init?: ResponseInit): NextResponse {
    const headers: Record<string, string> = { "content-type": "application/json" };
    if (init?.headers) {
      Object.assign(headers, init.headers);
    }
    const resp = new Response(JSON.stringify(data), {
      status: init?.status ?? 200,
      headers,
    });
    return new NextResponse(resp, init?.status);
  }

  static redirect(url: string, status = 307): NextResponse {
    return new NextResponse(Response.redirect(url, status));
  }

  static next(): NextResponse {
    return new NextResponse(new Response(null, { status: 200 }));
  }

  get headers(): Headers {
    return this._response.headers;
  }

  get status(): number {
    return this._status;
  }

  async json<T = unknown>(): Promise<T> {
    return this._response.json() as Promise<T>;
  }

  async text(): Promise<string> {
    return this._response.text();
  }

  clone(): NextResponse {
    return new NextResponse(this._response.clone());
  }
}

export interface NextRequest {
  headers: {
    get(name: string): string | null;
    has(name: string): boolean;
  };
  method: string;
  url: string;
  nextUrl: {
    pathname: string;
    search: string;
    searchParams: URLSearchParams;
  };
  json(): Promise<Record<string, unknown>>;
  text(): Promise<string>;
  clone(): NextRequest;
}
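A quick sanity check of the shim's json helper. The class here is a trimmed re-declaration of the shim, included only so the snippet runs standalone:

```typescript
// Trimmed re-declaration of the NextResponse shim for a standalone usage check.
class NextResponse {
  constructor(private _response: Response, private _status?: number) {}

  static json(data: unknown, init?: { status?: number }): NextResponse {
    const resp = new Response(JSON.stringify(data), {
      status: init?.status ?? 200,
      headers: { "content-type": "application/json" },
    });
    return new NextResponse(resp, init?.status);
  }

  get status(): number {
    return this._status ?? this._response.status;
  }

  async json<T = unknown>(): Promise<T> {
    return this._response.json() as Promise<T>;
  }
}

// Build the same error response shape the guardrail route returns on a block.
const res = NextResponse.json({ error: "Input guardrail failed" }, { status: 400 });
```

Because the shim wraps the standard Fetch Response available in Node 18+, it works identically in tests, the standalone Express server, and the Next.js route.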
Step 13: Run the tests
Run the full test suite with coverage reporting.
terminal
pnpm test
Expected output: Vitest prints a summary with pass/fail counts and a JSON coverage report at vitest-report.json. All tests pass and coverage is above the 90% thresholds.
Step 14: Start the Next.js dev server
Start the Next.js development server. The /v1/chat route is served alongside the main app.
terminal
pnpm dev
Expected output: The terminal prints Next.js ... ready, and then started server on ... localhost:3000.
Step 15: Start the Express server
Start the standalone Express server instead of the Next.js dev server.
terminal
pnpm start:express
Expected output: The terminal prints Express guardrail server started with the port number from .env.
Send a chat request that trips an input guardrail, such as a message containing an obvious prompt-injection attempt. Expected output: A 400 response with an X-Guardrail-Passed: false header and a JSON body containing "error": "Input guardrail failed" and a guardrail field naming the guardrail that triggered.
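For reference, the blocked-request body looks roughly like the following; the guardrail value depends on which check fired, so "prompt-injection" here is illustrative:

```json
{
  "error": "Input guardrail failed",
  "guardrail": "prompt-injection"
}
```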
Next steps
Add a topic allowlist for your business — Edit guardrail.config.yaml to replace the default allowedTopics with domains your chatbot actually serves (e.g., ["support", "orders", "refunds"]).
Wire up a production Sentry project — Set SENTRY_DSN in .env to start capturing structured error traces with correlation IDs from every blocked request.
Swap the Express server for your existing API gateway — The createGuardrailMiddleware function accepts any object with a messages.create() method, so you can plug it into Fastify, Koa, or a custom Hapi handler without changing the guardrail logic.