reaatech/prompt-injection-bench
These packages provide a standardized framework for benchmarking and evaluating the effectiveness of prompt-injection defenses in AI agent systems. They allow you to measure security posture by running a diverse corpus of adversarial attacks against pluggable defense adapters and calculating statistically significant performance scores. The system is designed as a modular pipeline where you can swap defense implementations, execute parallelized benchmarks, and generate reproducible reports through a unified CLI or MCP-compatible interface.
Packages
9 packages
@reaatech/pi-bench-adapters
- status
- awaiting publish
@reaatech/pi-bench-core
- status
- awaiting publish
@reaatech/pi-bench-corpus
- status
- awaiting publish
@reaatech/pi-bench-leaderboard
- status
- awaiting publish
@reaatech/pi-bench-mcp-server
- status
- awaiting publish
@reaatech/pi-bench-observability
- status
- awaiting publish
@reaatech/pi-bench-runner
- status
- awaiting publish
@reaatech/pi-bench-scoring
- status
- awaiting publish
prompt-injection-bench
- status
- awaiting publish
Comments
Sign in with GitHub to comment and vote.
