Mistral AI Multi-Agent Handoff for Linear Support Triage
Route, hand off, and escalate customer support issues in Linear with a Mistral-powered multi-agent system that preserves context across specialist agents.
SMB support teams lose context when transferring issues between agents, leading to repeated troubleshooting and delayed resolutions—especially when using Linear as their issue tracker.
A complete, working implementation of this recipe — downloadable as a zip or browsable file by file. Generated by our build pipeline; tested with full coverage before publishing.
In this tutorial, you’ll build a multi-agent support triage system that routes incoming Linear issues to specialist agents. The system uses Mistral AI for classification, LangGraph for workflow orchestration, and the REAA Agent Handoff Protocol to preserve context across agent handoffs. By the end, you’ll have a working Next.js application with two API routes: one that receives Linear webhook events and processes them through a classify → compress → budget-check → route → handoff pipeline, and another that returns the current agent registry and budget state.
Prerequisites
Node.js 22 or later
pnpm 10.x
Mistral AI API key
Linear account with a personal API key and a webhook secret
Familiarity with TypeScript and Next.js App Router
ngrok or a similar tunneling tool for local webhook testing
Step 1: Scaffold the project
Create a new Next.js project with the App Router and install all runtime dependencies.
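The exact commands are not reproduced in this recipe. Assuming pnpm and the packages imported later in the tutorial, the scaffold looks roughly like this; the project name is a placeholder, and the REAA packages beyond @reaatech/agent-budget-spend-tracker are not pinned down here, so check the recipe repository for the full dependency list:

```terminal
pnpm create next-app@latest linear-support-triage --typescript --app
cd linear-support-triage
pnpm add @mistralai/mistralai @linear/sdk @langchain/langgraph @reaatech/agent-budget-spend-tracker
```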
Expected output: Next.js creates the app/ directory with layout.tsx, page.tsx, and globals.css. The second command installs all runtime dependencies without errors. The project engines field pins "node": ">=22" in package.json.
Step 2: Configure environment variables
Create a .env.local file with your API keys and webhook secret. These are read at runtime by the Mistral client, Linear client, and webhook signature verifier.
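A minimal `.env.local` with the three variables read by src/lib/config.ts and the webhook signature verifier (the values shown are placeholders):

```bash
MISTRAL_API_KEY=your-mistral-api-key
LINEAR_API_KEY=your-linear-personal-api-key
LINEAR_WEBHOOK_SECRET=your-linear-webhook-secret
```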
Expected output: A new .env.local file exists in the project root.
Step 3: Create shared types
Define the domain types used throughout the codebase. The IssueCategory const-object drives routing, and LinearIssueEvent describes the shape of a parsed webhook payload.
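The original types file is not reproduced here. The sketch below for src/lib/types.ts is inferred from how the types are used later in this tutorial (the classifier in Step 6 and the webhook fixture in Step 17), so treat the exact fields as assumptions:

```typescript
// Domain types shared across the codebase. The IssueCategory members come
// from the classifier prompt in Step 6; the LinearIssueEvent fields mirror
// the webhook fixture in Step 17.
export const IssueCategory = {
  Technical: 'Technical',
  Billing: 'Billing',
  Account: 'Account',
} as const;

export type IssueCategory = (typeof IssueCategory)[keyof typeof IssueCategory];

export interface LinearIssueEvent {
  action: string; // e.g. "create"
  type: string;   // e.g. "Issue"
  data: {
    id: string;
    identifier: string;
    title: string;
    description?: string;
    team?: { name: string };
    labels?: { name: string }[];
  };
}
```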
Step 4: Create the Mistral client helper
The Mistral client is initialized lazily from the MISTRAL_API_KEY environment variable and reused across requests.
Create src/lib/config.ts:
```ts
import { Mistral } from '@mistralai/mistralai';

function requireEnv(key: string): string {
  const val = process.env[key];
  if (!val) {
    throw new Error(`${key} is not set`);
  }
  return val;
}

export function getConfig() {
  return {
    mistralApiKey: process.env.MISTRAL_API_KEY ?? '',
    linearApiKey: process.env.LINEAR_API_KEY ?? '',
    linearWebhookSecret: process.env.LINEAR_WEBHOOK_SECRET ?? '',
  };
}

let _mistralClient: Mistral | null = null;

export function getMistralClient(): Mistral {
  if (_mistralClient === null) {
    _mistralClient = new Mistral({ apiKey: requireEnv('MISTRAL_API_KEY') });
  }
  return _mistralClient;
}
```
Step 5: Create the Linear client helper
The Linear SDK client is initialized once from LINEAR_API_KEY and exported for use in webhook handlers.
Create src/lib/linear.ts:
```ts
import { LinearClient } from '@linear/sdk';

let _linearClient: LinearClient | null = null;

export function getLinearClient(): LinearClient {
  if (_linearClient === null) {
    const apiKey = process.env.LINEAR_API_KEY;
    if (!apiKey) {
      throw new Error('LINEAR_API_KEY is not set');
    }
    _linearClient = new LinearClient({ apiKey });
  }
  return _linearClient;
}
```
Step 6: Build the issue classifier
The classifier uses a keyword-based fast path to avoid an API call for obvious cases, then falls back to mistral-large-latest for ambiguous issues. The system prompt instructs the model to return a JSON object with a single category key.
Create src/classify/intent.ts:
```ts
import { getMistralClient } from '../lib/config.js';
import { IssueCategory } from '../lib/types.js';

const CLASSIFICATION_MODEL = 'mistral-large-latest';
const SYSTEM_PROMPT =
  'You are a support ticket classifier. Given a support issue title, description, and labels, respond with a JSON object containing the single key "category" with value "Technical", "Billing", or "Account".';

const BILLING_KEYWORDS = ['invoice', 'billing', 'payment', 'refund', 'subscription', 'charge', 'price', 'cost'];
const ACCOUNT_KEYWORDS = ['account', 'login', 'permission', 'access', 'password', 'profile', 'user', 'onboarding'];

function keywordClassify(title: string, description: string, labels: string[]): IssueCategory | null {
  const text = `${title} ${description ?? ''} ${labels.join(' ')}`.toLowerCase();
  if (BILLING_KEYWORDS.some((k) => text.includes(k))) return IssueCategory.Billing;
  if (ACCOUNT_KEYWORDS.some((k) => text.includes(k))) return IssueCategory.Account;
  return null;
}

export async function classifyIssue(
  title: string,
  description: string,
  labels: string[]
): Promise<IssueCategory> {
  const keywordResult = keywordClassify(title, description, labels);
  if (keywordResult) return keywordResult;
  try {
    const client = getMistralClient();
    const content = `Title: ${title}\nDescription: ${description || 'N/A'}\nLabels: ${labels.join(', ') || 'none'}`;
    const result = await client.chat.complete({
      model: CLASSIFICATION_MODEL,
      messages: [
        { role: 'system', content: SYSTEM_PROMPT },
        { role: 'user', content },
      ],
      temperature: 0.1,
      maxTokens: 50,
      responseFormat: { type: 'json_object' },
    });
    const raw = result.choices?.[0]?.message?.content ?? '';
    if (typeof raw !== 'string') return IssueCategory.Technical;
    const parsed = JSON.parse(raw) as { category?: string };
    const cat = parsed.category?.trim();
    const VALID_CATEGORIES: IssueCategory[] = [
      IssueCategory.Technical,
      IssueCategory.Billing,
      IssueCategory.Account,
    ];
    const normalized = VALID_CATEGORIES.find((c) => c === cat);
    if (normalized !== undefined) {
      return normalized;
    }
    return IssueCategory.Technical;
  } catch {
    return IssueCategory.Technical;
  }
}
```
Step 7: Build the context compressor
The compression module uses the REAA HybridCompressor to reduce conversation history into a structured summary with key facts and intents, fitting within a token budget.
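The REAA HybridCompressor itself is not reproduced here. To make the contract concrete, here is a self-contained sketch with the same input/output shape (recent-first message selection under a rough 4-characters-per-token budget); it is an illustration of the idea, not the REAA implementation:

```typescript
// Self-contained sketch of the compression contract described above; NOT the
// REAA HybridCompressor, just an illustration of its input/output shape.
interface CompressedContext {
  summary: string;
  keyFacts: string[];
  estimatedTokens: number;
}

// Rough token estimate: about 4 characters per token.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

export function compressHistory(history: string[], tokenBudget: number): CompressedContext {
  // Walk the history newest-first: recent messages usually carry the live intent.
  const keyFacts: string[] = [];
  let used = 0;
  for (const message of [...history].reverse()) {
    const cost = estimateTokens(message);
    if (used + cost > tokenBudget) break;
    keyFacts.unshift(message);
    used += cost;
  }
  return {
    summary: keyFacts.join(' '),
    keyFacts,
    estimatedTokens: used,
  };
}
```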
Step 8: Create the agent registry
The registry holds three specialist agents: Technical Support, Billing, and Account. Each agent declares its skills, domains, load, and concurrency limits. The registry is lazily initialized as a singleton.
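As an illustration of that shape, a lazily initialized singleton registry might look like the sketch below. The field names mirror the description above (skills, domains, load, concurrency), not the exact REAA types, and the specific skill/domain strings are placeholders:

```typescript
// Illustrative registry sketch: one entry per specialist agent, created once
// and reused on every call (singleton).
interface AgentEntry {
  id: string;
  skills: string[];
  domains: string[];
  currentLoad: number;
  maxConcurrent: number;
}

let _registry: Map<string, AgentEntry> | null = null;

export function getAgentRegistry(): Map<string, AgentEntry> {
  if (_registry === null) {
    _registry = new Map<string, AgentEntry>([
      ['technical-support', { id: 'technical-support', skills: ['debugging', 'performance', 'integration'], domains: ['api', 'sdk'], currentLoad: 0, maxConcurrent: 5 }],
      ['billing-agent', { id: 'billing-agent', skills: ['invoicing', 'refunds'], domains: ['billing', 'pricing'], currentLoad: 0, maxConcurrent: 3 }],
      ['account-agent', { id: 'account-agent', skills: ['account-management', 'onboarding'], domains: ['permissions'], currentLoad: 0, maxConcurrent: 3 }],
    ]);
  }
  return _registry;
}
```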
Step 9: Build the capability router
The router uses the REAA CapabilityBasedRouter with a minimum confidence threshold of 0.6 and a best_effort policy. It scores agents by skill and domain overlap with the capabilities required for a given category.
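The scoring idea can be sketched in plain TypeScript. The 0.6 threshold comes from the description above; the overlap formula, the equal weighting, and the reading of best_effort (return the top scorer only if it clears the threshold) are this sketch's assumptions, not the REAA router's documented behavior:

```typescript
// Capability scoring sketch: fraction of required skills/domains an agent
// covers, averaged with equal weight.
interface Capabilities {
  skills: string[];
  domains: string[];
}

export function scoreAgent(agent: Capabilities, required: Capabilities): number {
  const overlap = (have: string[], need: string[]) =>
    need.length === 0 ? 1 : need.filter((n) => have.includes(n)).length / need.length;
  return 0.5 * overlap(agent.skills, required.skills) + 0.5 * overlap(agent.domains, required.domains);
}

const MIN_CONFIDENCE = 0.6;

export function pickAgent<T extends Capabilities & { id: string }>(
  agents: T[],
  required: Capabilities
): { agentId: string; confidence: number } | null {
  let best: { agentId: string; confidence: number } | null = null;
  for (const agent of agents) {
    const confidence = scoreAgent(agent, required);
    if (!best || confidence > best.confidence) best = { agentId: agent.id, confidence };
  }
  // Only hand off when the best candidate clears the confidence threshold.
  return best && best.confidence >= MIN_CONFIDENCE ? best : null;
}
```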
Step 10: Add the budget controller
The budget controller enforces a $50 monthly limit for the linear-support scope. When usage hits 80% of the limit, it triggers a soft-cap action; at 100%, a hard-cap action blocks new requests. It also auto-downgrades from mistral-large-latest to mistral-small-latest when nearing the soft cap.
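The threshold logic reduces to a small pure function. The $50 limit and the 80%/100% caps come from the description above; the action names and return shape are illustrative, not the REAA budget-controller API:

```typescript
// Budget check sketch: $50/month limit, soft cap at 80% (downgrade model),
// hard cap at 100% (block new requests).
const MONTHLY_LIMIT_USD = 50;
const SOFT_CAP_RATIO = 0.8;

export type BudgetAction = 'allow' | 'downgrade' | 'block';

export function checkBudget(spentUsd: number): { action: BudgetAction; model: string } {
  if (spentUsd >= MONTHLY_LIMIT_USD) {
    // Hard cap: block new requests entirely.
    return { action: 'block', model: 'none' };
  }
  if (spentUsd >= MONTHLY_LIMIT_USD * SOFT_CAP_RATIO) {
    // Soft cap: keep serving, but on the cheaper model.
    return { action: 'downgrade', model: 'mistral-small-latest' };
  }
  return { action: 'allow', model: 'mistral-large-latest' };
}
```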
```ts
import { SpendStore } from '@reaatech/agent-budget-spend-tracker';

// In-memory adapter wrapping the real SpendStore from agent-budget-spend-tracker.
// Re-exports the same interface for dependency injection.
export { SpendStore };
```
Step 11: Define the workflow state
The HandoffState annotation defines the shape of the workflow state: session ID, issue event, classification result, compressed context, routing decision, budget check, and error tracking.
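As a plain-interface sketch of those fields (the real state.ts defines them as LangGraph annotation channels; the field names here follow this tutorial's usage and are assumptions where not shown elsewhere):

```typescript
// Sketch of the workflow state carried between nodes. Optional fields are
// filled in as the pipeline progresses.
export interface HandoffStateShape {
  sessionId: string;
  issueEvent: unknown;          // parsed Linear webhook payload
  category?: string;            // classification result
  compressedContext?: string;   // output of the compression step
  routingDecision?: { agentId: string; confidence: number };
  budgetCheck?: { action: string; model: string };
  error?: string;               // set by any node that fails
  done?: boolean;               // short-circuits routing and handoff
}
```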
Step 12: Implement the pipeline nodes
Each node in the graph corresponds to one step in the pipeline. The nodes are async functions that receive the current state and return a partial state update.
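A sketch of the classify node under those conventions. The real node imports classifyIssue (built in Step 6) directly; here the classifier is injected as a parameter so the sketch stays self-contained, which is a deviation from the actual node signature:

```typescript
// One pipeline node: take the current state, return a partial update that the
// graph merges back in. Failures are recorded in state.error rather than thrown.
type NodeState = {
  issueEvent: { data: { title: string; description?: string; labels?: { name: string }[] } };
  category?: string;
  error?: string;
};

export async function classifyNode(
  state: NodeState,
  classify: (title: string, description: string, labels: string[]) => Promise<string>
): Promise<Partial<NodeState>> {
  try {
    const { title, description, labels } = state.issueEvent.data;
    const category = await classify(title, description ?? '', (labels ?? []).map((l) => l.name));
    return { category };
  } catch (err) {
    return { error: err instanceof Error ? err.message : String(err) };
  }
}
```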
Step 13: Assemble the workflow graph
The workflow wires the five nodes into a directed graph: classify → compress → budget → route → handoff. The budget node has a conditional edge that routes to END when there is an error or the done flag is set, skipping routing and handoff entirely.
Create src/orchestration/graph.ts:
```ts
import { StateGraph, START, END } from '@langchain/langgraph';
import { HandoffState } from './state.js';
import { classifyNode } from './nodes/classify.js';
import { compressNode } from './nodes/compress.js';
import { budgetNode } from './nodes/budget.js';
import { routeNode } from './nodes/route.js';
import { handoffNode } from './nodes/handoff.js';

const workflow = new StateGraph({ state: HandoffState });

// Cast workflow to bypass strict node-name union checking.
// LangGraph accepts any string for node names at runtime.
const wf = workflow as {
  addNode(name: string, handler: (...args: unknown[]) => unknown): void;
  addEdge(from: string, to: string): void;
  addConditionalEdges(
    source: string,
    router: (state: Record<string, unknown>) => string | string[]
  ): void;
};

wf.addNode('classify', classifyNode as (...args: unknown[]) => unknown);
wf.addNode('compress', compressNode as (...args: unknown[]) => unknown);
wf.addNode('budget', budgetNode as (...args: unknown[]) => unknown);
wf.addNode('route', routeNode as (...args: unknown[]) => unknown);
wf.addNode('handoff', handoffNode as (...args: unknown[]) => unknown);

wf.addEdge(START, 'classify');
wf.addEdge('classify', 'compress');
wf.addEdge('compress', 'budget');
wf.addConditionalEdges('budget', (state: Record<string, unknown>) =>
  state.error || state.done ? END : 'route'
);
wf.addEdge('route', 'handoff');
wf.addEdge('handoff', END);

export const app = workflow.compile();
```
Create src/orchestration/entry.ts:
```ts
import { app } from './graph.js';
import type { LinearIssueEvent } from './state.js';

// Exported for testing — allows tests to inject a mock app.
let _app: typeof app = app;

export function setAppMock(mock: typeof app): void {
  _app = mock;
}

function generateSessionId(): string {
  return `session-${Date.now()}-${Math.random().toString(36).slice(2, 9)}`;
}

export async function processIssueEvent(
  event: LinearIssueEvent
): Promise<{ sessionId: string }> {
  const sessionId = generateSessionId();
  await _app.invoke({ sessionId, issueEvent: event });
  return { sessionId };
}
```
Step 14: Create the webhook route
This route receives Linear issue.created events, verifies the HMAC signature, parses the payload, and invokes the LangGraph workflow. It returns a sessionId that you can use to track the handoff through the pipeline.
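The signature check is the security-sensitive part of this route. Linear sends an HMAC-SHA256 hex digest of the raw request body in the linear-signature header; the sketch below (a simplification of the route described above, with the call into the Step 13 entry module elided) verifies it with a constant-time comparison:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Recompute the HMAC-SHA256 of the raw body with the shared secret and compare
// it to the signature header in constant time. Returns false on any mismatch.
export function verifySignature(rawBody: string, signature: string, secret: string): boolean {
  const expected = createHmac('sha256', secret).update(rawBody).digest('hex');
  if (expected.length !== signature.length) return false;
  return timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
}

// Minimal POST handler shape for app/api/linear-webhook/route.ts. The
// processIssueEvent call is commented out to keep this sketch self-contained.
export async function POST(req: Request): Promise<Response> {
  const rawBody = await req.text();
  const signature = req.headers.get('linear-signature') ?? '';
  const secret = process.env.LINEAR_WEBHOOK_SECRET ?? '';
  if (!secret || !verifySignature(rawBody, signature, secret)) {
    return new Response('invalid signature', { status: 401 });
  }
  const event = JSON.parse(rawBody) as unknown;
  // const { sessionId } = await processIssueEvent(event as LinearIssueEvent);
  return Response.json({ received: true });
}
```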
Step 15: Create the agent status route
This route returns the current list of registered agents and the budget state for the linear-support scope. It is useful for debugging and monitoring the system’s health.
Step 16: Replace the home page
Replace the default Next.js page with a simple status view that documents the API routes and agent roles. app/layout.tsx is untouched; only page.tsx is replaced.
Replace app/page.tsx:
```tsx
export default function Home() {
  return (
    <main style={{ padding: '2rem', fontFamily: 'system-ui, sans-serif' }}>
      <h1>Mistral AI Multi-Agent Handoff for Linear Support Triage</h1>
      <p>
        A LangGraph-powered agent orchestration system that classifies Linear
        support issues, compresses context, and routes them to specialist
        agents via the REAA Agent Handoff Protocol.
      </p>
      <h2>API Routes</h2>
      <ul>
        <li><code>POST /api/linear-webhook</code> — ingest Linear issue-created webhooks</li>
        <li><code>GET /api/agent-status</code> — query registered agents and budget state</li>
      </ul>
      <h2>Agent Roles</h2>
      <ul>
        <li><strong>Technical Support</strong> — debugging, performance, integration, API, SDK</li>
        <li><strong>Billing Agent</strong> — invoicing, pricing, refunds, subscription</li>
        <li><strong>Account Agent</strong> — account management, onboarding, permissions</li>
      </ul>
    </main>
  );
}
```
Step 17: Use the test fixture
A test fixture mirrors the shape of a real Linear webhook payload. The tests import it to exercise the webhook handler without hitting the Linear API.
Create tests/fixtures/linear-issue-create.json:
```json
{
  "action": "create",
  "type": "Issue",
  "data": {
    "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "identifier": "PROJ-123",
    "title": "Checkout page returns 500 error on payment submission",
    "description": "Users are unable to complete purchases. The checkout page throws a 500 error when submitting payment details. This is affecting all customers trying to use credit cards.",
    "team": { "name": "Engineering" },
    "labels": [{ "name": "bug" }, { "name": "payment" }, { "name": "high-priority" }]
  }
}
```
Step 18: Install dev dependencies and run the tests
Install the dev dependencies (including vitest and MSW for HTTP mocking), then run the test suite.
```terminal
pnpm install
pnpm test
```
Expected output: The vitest summary shows pass/fail counts per file, and a coverage table with line, branch, function, and statement percentages. The 90% coverage threshold applies to all files under src/ and app/**/route.ts; app/page.tsx (a JSX UI component) is excluded from coverage.
Step 19: Start the development server
```terminal
pnpm dev
```
Expected output: The terminal prints Ready followed by the local URL, for example http://localhost:3000. Visit the page in your browser to see the API documentation.
Step 20: Expose the webhook endpoint for testing
For local development, use ngrok to expose your dev server so Linear can send webhook events to it.
```terminal
ngrok http 3000
```
Expected output: ngrok prints a public https://*.ngrok-free.app URL. Append /api/linear-webhook to that URL and register it as a webhook destination in your Linear workspace settings, selecting the Issue → Create event type. When a new issue is created in Linear, Linear sends a POST request to the endpoint, and your dev server logs the resulting session ID.
Next steps
Wire up a real agent transport (MCP or HTTP) so the handoff actually delivers context to a running specialist agent instead of stopping at the HandoffManager.executeHandoff() call.
Add persistent storage (e.g., a database or Redis) for session history so you can resume long-running conversations across restarts.
Extend the classifier with more fine-grained sub-categories and additional specialist agents to handle a broader range of issue types.