# PyRIT

Welcome to the Python Risk Identification Tool for generative AI (PyRIT)! PyRIT is designed to be a flexible and extensible tool for identifying security and safety risks in generative AI systems in a variety of ways.

Before starting with AI red teaming, we recommend reading the following article from Microsoft: “Planning red teaming for large language models (LLMs) and their applications”.

Generative AI systems introduce many categories of risk, which can be difficult to mitigate even with a red teaming plan in place. To quote the article above, “with LLMs, both benign and adversarial usage can produce potentially harmful outputs, which can take many forms, including harmful content such as hate speech, incitement or glorification of violence, or sexual content.” Additionally, a variety of security risks can be introduced by the deployment of an AI system.