๐ŸŽฏReporting hits

So you found something with garak and you want to tell people! Good.

AVID, the AI Vulnerability Database, is keen to collect reports of issues like LLM vulnerabilities. You can file an AVID report manually, and garak also supports building reports automatically, using the program in analyze/report_avid.py.
