
# @reaatech/agent-runbook-agent


AI agent layer for the Agent Runbook Generator. Provides LLM provider abstraction for Anthropic Claude, OpenAI, and Google Gemini, plus configurable prompt templates for automated code analysis and runbook generation.

## Installation

```bash
npm install @reaatech/agent-runbook-agent
# or
pnpm add @reaatech/agent-runbook-agent
```

## Feature Overview

- Multi-provider support — Anthropic Claude, OpenAI (GPT), and Google Gemini behind a unified interface
- Provider adapter — automatic message formatting and response parsing per provider
- Configurable prompts — pre-built templates for 8 analysis and generation tasks
- Custom prompts — programmatic creation of custom prompt templates
- Cost and performance controls — configurable temperature, max tokens, and rate limits
- Dynamic SDK loading — provider SDKs are loaded on demand to minimize bundle size
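The dynamic SDK loading idea can be sketched as follows. This is an illustrative assumption about the mechanism, not the package's actual loader; the module specifiers (`@anthropic-ai/sdk`, `openai`, `@google/generative-ai`) are the providers' published SDK packages.

```typescript
// Hypothetical sketch of on-demand SDK loading: each provider's SDK is
// dynamically imported the first time that provider is used, so unused
// SDKs never enter the bundle.
const SDK_MODULES: Record<string, string> = {
  claude: "@anthropic-ai/sdk",
  openai: "openai",
  gemini: "@google/generative-ai",
};

function sdkModuleFor(provider: string): string {
  const specifier = SDK_MODULES[provider];
  if (!specifier) throw new Error(`Unknown provider: ${provider}`);
  return specifier;
}

// Cache the import promise so each SDK is loaded at most once.
const sdkCache = new Map<string, Promise<unknown>>();

function loadSdk(provider: string): Promise<unknown> {
  const specifier = sdkModuleFor(provider);
  let loaded = sdkCache.get(specifier);
  if (!loaded) {
    loaded = import(specifier); // deferred until first use
    sdkCache.set(specifier, loaded);
  }
  return loaded;
}
```

Caching the promise (rather than the resolved module) also deduplicates concurrent first calls.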

## Quick Start

```typescript
import { createAnalysisAgent, generatePrompt } from "@reaatech/agent-runbook-agent";

const agent = createAnalysisAgent({
  provider: "claude",
  model: "claude-opus-4-5",
  apiKey: process.env.ANTHROPIC_API_KEY,
  temperature: 0.3,
});

// `analysisContext` is the repository analysis context built by the caller
const insights = await agent.analyzeRepository(analysisContext);
const failureModes = await agent.identifyFailureModes(analysisContext);
const section = await agent.generateRunbookSection("alerts", analysisContext);
```

## API Reference

### Analysis Agent

| Export | Kind | Description |
| --- | --- | --- |
| `AnalysisAgent` | class | LLM-powered repository analysis: analyzes code patterns, identifies failure modes, and generates runbook sections |
| `createAnalysisAgent` | function | `(config?: Partial<AgentConfig>) => AnalysisAgent` — factory with sensible defaults |

`AgentConfig`: `{ provider: string; model?: string; apiKey?: string; baseUrl?: string; temperature?: number; maxTokens?: number }`
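Since the factory takes a `Partial<AgentConfig>`, a defaults-merging step is implied. A minimal sketch of that idea, where the specific default values are invented for illustration and are not the package's documented defaults:

```typescript
// Hypothetical sketch of merging caller overrides over library defaults.
interface AgentConfig {
  provider: string;
  model?: string;
  apiKey?: string;
  baseUrl?: string;
  temperature?: number;
  maxTokens?: number;
}

const DEFAULT_CONFIG: AgentConfig = {
  provider: "claude", // assumed default provider
  temperature: 0.3,   // assumed default sampling temperature
  maxTokens: 4096,    // assumed default response budget
};

function resolveConfig(overrides: Partial<AgentConfig> = {}): AgentConfig {
  // Caller-supplied fields win; everything else falls back to defaults.
  return { ...DEFAULT_CONFIG, ...overrides };
}

const cfg = resolveConfig({ provider: "openai", model: "gpt-4o" });
```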

#### Instance Methods

| Method | Returns | Description |
| --- | --- | --- |
| `analyzeRepository(context)` | `Promise<AnalysisInsight[]>` | Analyze repository patterns for insights |
| `identifyFailureModes(context)` | `Promise<string[]>` | Identify potential failure modes |
| `generateRunbookSection(sectionType, context)` | `Promise<string>` | Generate a runbook section |

### Provider Adapter

| Export | Kind | Description |
| --- | --- | --- |
| `ProviderAdapter` | class | Handles prompt formatting and response parsing for each LLM provider |
| `createProviderAdapter` | function | `(config: AgentConfig) => ProviderAdapter` — factory |

#### Instance Methods

| Method | Description |
| --- | --- |
| `formatMessages(system, user)` | Format messages for the target provider |
| `formatForClaude(system, user)` | Anthropic-specific message formatting |
| `formatForOpenAI(system, user)` | OpenAI-specific message formatting |
| `formatForGemini(system, user)` | Gemini-specific message formatting |
| `parseResponse(provider, raw)` | Parse a raw provider response into an `AgentResponse` |
| `getFallbackProvider()` | Get the fallback provider to use if the primary fails |
| `supportsStreaming()` | Check whether the provider supports streaming |
| `getRateLimits()` | Get the provider's rate limits |
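The three `formatFor*` methods exist because each provider expects a different request shape. A hedged sketch of those shapes, mirroring the providers' public chat APIs (the adapter's actual output objects may differ):

```typescript
// Illustrative-only: per-provider message shapes. Anthropic keeps the system
// prompt outside the messages array; OpenAI sends it as the first message;
// Gemini uses systemInstruction plus a contents array of parts.
type Provider = "claude" | "openai" | "gemini";

function formatMessages(provider: Provider, system: string, user: string): unknown {
  switch (provider) {
    case "claude":
      return { system, messages: [{ role: "user", content: user }] };
    case "openai":
      return {
        messages: [
          { role: "system", content: system },
          { role: "user", content: user },
        ],
      };
    case "gemini":
      return {
        systemInstruction: system,
        contents: [{ role: "user", parts: [{ text: user }] }],
      };
  }
}
```

A single `formatMessages(system, user)` entry point dispatching on the configured provider is what lets the rest of the agent stay provider-agnostic.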

### Prompt Templates

| Function | Signature |
| --- | --- |
| `generatePrompt` | `(type: PromptType, variables: Partial<PromptVariables>) => string` |
| `getPromptTemplate` | `(type: PromptType) => PromptTemplate` |
| `getSystemPrompt` | `(type: PromptType) => string` |
| `createPromptTemplate` | `(type: PromptType, systemPrompt: string, userPrompt: string) => PromptTemplate` |

Prompt types: `repository-analysis`, `failure-mode-identification`, `runbook-alerts`, `runbook-dashboards`, `runbook-failure-modes`, `runbook-rollback`, `runbook-incident-response`, `runbook-health-checks`.
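How `generatePrompt` fills a template from `PromptVariables` is not specified above. A minimal sketch of the general technique, assuming `{{name}}`-style placeholders (the package's actual template syntax is an assumption here):

```typescript
// Hypothetical template renderer: substitutes {{name}} placeholders from a
// variables map, leaving unresolved placeholders empty.
type PromptVariables = Record<string, string>;

function renderTemplate(template: string, variables: Partial<PromptVariables>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, key: string) => variables[key] ?? "");
}

const rendered = renderTemplate(
  "Analyze the repository {{repoName}} ({{language}}) and list likely failure modes.",
  { repoName: "acme/api", language: "TypeScript" },
);
```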

## License

MIT