🔴What is red-teaming?

Red Teaming is roleplaying as an attacker. A practice dopted from the military into infosec and then info machine learning eval, in red teaming, humans try to get a system to fail. Humans are pretty creative, and usually up-to-date, and this works pretty fine.

Resources about red teaming:

One thing the human activity of red teaming doesn’t do is to scale. It’s great for intelligence gathering, and as a source of generative material for creativity, but it doesn’t scale great. Human expertise is expensive, and good red-teamers are few and far between. I’m not saying that many red teamers are bad — simply that there aren’t many people who can do this well in the first place.

What if we could automate some of the basics?

Last updated