๐Ÿ”ดWhat is red-teaming?

Red Teaming is roleplaying as an attacker. A practice dopted from the military into infosec and then info machine learning eval, in red teaming, humans try to get a system to fail. Humans are pretty creative, and usually up-to-date, and this works pretty fine.

Resources about red teaming:

One thing the human activity of red teaming doesnโ€™t do is to scale. Itโ€™s great for intelligence gathering, and as a source of generative material for creativity, but it doesnโ€™t scale great. Human expertise is expensive, and good red-teamers are few and far between. Iโ€™m not saying that many red teamers are bad โ€” simply that there arenโ€™t many people who can do this well in the first place.

What if we could automate some of the basics?

Last updated