Trust and compliance engine for AI agents — OSS CLI, SDK, and audit tools.
Updated Jul 22, 2025 · Python
Scorton is an open-source behavioral cybersecurity framework that makes human trust measurable and programmable. Built with Rust, Python, and Node.js, it helps developers and security teams predict, score, and improve human-driven cyber risk and awareness.
Trust-minimized marketplace for content creators [🥇Lambda Hack Week '24]
airlock is a cryptographic handshake protocol for verifying AI model identity at runtime. It enables real-time attestation of model provenance, environment integrity, and agent authenticity - without relying on vendor trust or static manifests.
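A challenge-response attestation of the kind airlock describes can be sketched in a few lines. This is a hypothetical illustration, not airlock's actual API: the function names (`model_fingerprint`, `attest`, `verify`) are invented, and a shared HMAC key stands in for whatever key-exchange or signature scheme the real protocol uses.

```python
import hashlib
import hmac

# Hypothetical sketch of a runtime attestation handshake.
# All names here are illustrative, not airlock's real interface.

def model_fingerprint(weights: bytes) -> str:
    """Hash the model artifact so the attestation binds to exact weights."""
    return hashlib.sha256(weights).hexdigest()

def attest(fingerprint: str, nonce: bytes, key: bytes) -> str:
    """Prover side: MAC the fingerprint together with the verifier's
    fresh nonce, so a recorded response cannot be replayed later."""
    return hmac.new(key, nonce + fingerprint.encode(), hashlib.sha256).hexdigest()

def verify(fingerprint: str, nonce: bytes, key: bytes, tag: str) -> bool:
    """Verifier side: recompute the expected tag and compare in constant time."""
    expected = attest(fingerprint, nonce, key)
    return hmac.compare_digest(expected, tag)
```

The nonce is what makes this a runtime check rather than a static manifest: a new challenge per session means yesterday's attestation proves nothing about today's model.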
Secure ESLint + Prettier config for trust-grade TypeScript, React, and Tailwind apps. Built for AI, identity, and verification workflows. Maintained by Sequenxa.
Goal Modeling Language (GML) — a declarative, transparent, and auditable logic format maintained by The Covenant Trust. This schema defines how systems justify actions, preserve audit trails, and support human-aligned execution across domains, jurisdictions, and infrastructure.
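The "justify actions, preserve audit trails" idea can be sketched with a hash-chained log: each entry records a goal, an action, and its justification, and chains to the previous entry's digest so tampering is detectable. This is a minimal illustration in the spirit of the description, not GML's actual schema; the class and field names are invented.

```python
import hashlib
import json

# Hypothetical tamper-evident audit trail. Each entry carries its
# justification and the digest of the previous entry (hash chaining),
# so any edit to history breaks verification.

class AuditTrail:
    def __init__(self):
        self.entries = []

    def record(self, goal: str, action: str, justification: str) -> dict:
        prev = self.entries[-1]["digest"] if self.entries else "0" * 64
        body = {"goal": goal, "action": action,
                "justification": justification, "prev": prev}
        # Canonical JSON (sorted keys) so the digest is deterministic.
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        entry = {**body, "digest": digest}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("goal", "action", "justification", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or recomputed != e["digest"]:
                return False
            prev = e["digest"]
        return True
```

Making the justification part of the hashed body is the point: a system cannot quietly rewrite *why* it acted without invalidating the trail.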