🔐 What is LLM security?
LLM security is the investigation of the failure modes of LLMs in use, the conditions that lead to them, and their mitigations.
Large language models can fail to operate as expected or desired in a huge number of ways, and each of these failure modes can make them insecure. On top of that, they have to run on a software stack (such as PyTorch, ONNX, or CUDA), and that software can itself be insecure. Finally, the way LLMs are deployed and the way their outputs are consumed can break down when the model behaves unexpectedly, which presents a further security risk. LLM security covers all of this.
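To make that last point concrete, here is a minimal, hypothetical sketch of an application that trusts model output too much; the function names and canned reply are illustrative only, not from any particular library. Nothing in the application code or the serving stack is buggy, yet the system is insecure because the model's output is executed verbatim:

```python
import subprocess

def ask_llm(prompt: str) -> str:
    """Stand-in for a call to a real LLM API; here it just returns a canned reply."""
    return "ls -la /tmp"  # imagine this text came back from the model

def run_suggested_command(user_request: str) -> str:
    # Ask the model for a shell command that satisfies the user's request.
    command = ask_llm(f"Suggest a single shell command to: {user_request}")
    # Danger: the model's output is executed directly. A prompt-injected or
    # simply wrong completion (e.g. "rm -rf ~") becomes arbitrary code execution.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout

print(run_suggested_command("list the files in /tmp"))
```

The failure here lies neither purely in the model nor purely in the code, but in how the two are wired together; that interface is squarely within the scope of LLM security.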
LLM security is broader than what existing security knowledge and existing LLM/NLP knowledge already cover. It spans not the intersection of security and NLP, but the union: everything in information security and everything in natural language processing is in scope.