TrustAI is on a mission to ensure the safety and integrity of AI systems and unlock the full potential of generative AI while maintaining control and trust. We believe in bringing security to the forefront of AI development, safeguarding against potential vulnerabilities, and promoting responsible AI innovation.
Our goal is to empower developers, researchers, and organizations to build secure and trustworthy AI systems.
- Website: http://www.trustai.pro/
- Blog: https://securaize.substack.com/
- Lab: https://lab.trustai.pro/
- Linkin: https://www.linkedin.com/company/trustai-sg
- ISC.AI 2024 -- LLM Jailbreaking Vulnerability Mining and Defefense
- SecGeek -- The Road Leading to LLM Security Alignment: Research on Vulnerability Mining and Alignment Defense for LLM
- Xcon 2024 -- Next-Generation Detectionand Respbonse Technology Driven by LLM Intelligent Agent
- S-tron China 2024 - S-Talent Talk
- AI x Security Summit - SG Antler
- AI Nexus Summit – GenAI for SEA - SG Antler
Here are some of main projects we've released:
-
Learn Prompt Hacking: The most comprehensive prompt hacking course available.
- Prompt Engineering technology.
- GenAI development technology.
- Prompt Hacking technology.
- LLM security defence technology.
- LLM Hacking resources
- LLM security papers.
-
TrustEval - LLM Security&Safety Evaluation: TRUST AI Security Labs - Evaluation. Quantifying. Securing AI.
- Discover: Reveal your AI Risk across your organisation, with the most comprehensive Evaluation Metrics.
- Red Teaming: Test your AI Model security against Adversarial Scenarios.
- CI/CD Model Testing: Run established security tests against benchmarks in your MLOps pipeline.
-
LLM Protection: A SDK API that basically a One-click Alignment Proxy for AI App Integration.
- Detect and address direct and indirect prompt injections in real-time, preventing potential harm to GenAI applications.
- Ensure your GenAI applications do not violate the policies by detecting harmful and insecure output.
- Safeguard sensitive PII and avoid data losses, ensuring compliance with privacy regulations.
- Prevent data poisoning attacks on your GenAI applications through real-time prompt filtering.
-
AI HackingClub: Powered by TrustAI, Al HackingClub is dedicated to fostering awareness,education,and engagement on Al safety to develop safer Al systems.
- Hack into Al
- Prompt Injection Al
- RealworId Jailbreaking Al Safety
-
LLM Security CTF: Learn LLM/AI Security through a series of vulnerable LLM CTF challenges. No sign ups, all fees, everything on the website.
- Stark Game: Very neat game to get intuitions for prompt injection, user need find ways to get Stark to tell the password for the level, except Stark is instructed not to reveal the word.
- Doc: Intro to Stark Game.