Track emissions from Compute and recommend ways to reduce their impact on the environment.
Updated Mar 2, 2026 - Python
This repository aims to map the ecosystem of artificial intelligence guidelines, principles, codes of ethics, standards, regulation and beyond.
🌟 A curated collection of free, high-quality AI tools 🤖, APIs 🔗, datasets 📊, and learning resources 📚 covering machine learning 🧠, deep learning 🧩, generative AI 🎨, NLP 💬, and data science 📈. Designed to help developers 👩💻, researchers 🔬, and creators ✨ explore and build with AI faster ⚡.
Free and open source code of the https://tournesol.app platform. Meet the community on Discord https://discord.gg/WvcSG55Bf3
This AI fact-checking system, built with LangGraph, dissects text into verifiable claims, cross-referencing them with real-world evidence via web searches. It then generates detailed accuracy reports, ideal for combating misinformation in LLM outputs, news, or any text.
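Not code from that repository; a minimal sketch of the claim-decomposition-and-verification idea it describes, with the claim splitter, the keyword-overlap verifier, and the 0.5 threshold all hypothetical stand-ins for the LLM and web-search steps.

```python
from dataclasses import dataclass

@dataclass
class ClaimReport:
    claim: str
    supported: bool
    evidence: str

def split_into_claims(text: str) -> list[str]:
    # Naive stand-in for an LLM claim extractor: one claim per sentence.
    return [s.strip() for s in text.split(".") if s.strip()]

def verify_claim(claim: str, evidence_corpus: list[str]) -> ClaimReport:
    # Stand-in for a web search: pick the corpus document with the most
    # shared words, and call the claim supported above an overlap threshold.
    claim_words = set(claim.lower().split())
    best = max(evidence_corpus,
               key=lambda doc: len(claim_words & set(doc.lower().split())))
    overlap = len(claim_words & set(best.lower().split())) / max(len(claim_words), 1)
    return ClaimReport(claim=claim, supported=overlap >= 0.5, evidence=best)

def fact_check(text: str, evidence_corpus: list[str]) -> list[ClaimReport]:
    # The report is one verdict-plus-evidence entry per extracted claim.
    return [verify_claim(c, evidence_corpus) for c in split_into_claims(text)]
```

A real pipeline would replace both stand-ins with LLM calls and live search, but the shape (decompose, retrieve, score, report) is the same.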
Courses on Kaggle
List of references about Machine Learning bias and ethics
Higgsfield AI scam exposed: fraud, fake unlimited plans, mass bans, predatory billing, non-consensual deepfakes, Reddit spam, unpaid creators, and reselling Kling/Minimax models at 4.5x markup. Higgsfield AI research & evidence.
A long-form essay exploring the philosophy of minimalist AI and how future intelligent systems can be calm, ethical, and invisible. Inspired by calm technology, design minimalism, and cognitive science, Quiet Machines envisions a world where the best technology listens more than it speaks.
An in-depth exploration of the rise of human-centered, interactive machine learning. This article examines how Streamlit enables collaborative AI design by merging UX, visualization, and automation. Includes theory, architecture, and design insights from the ML Playground project.
A narrative and technical exploration of data authenticity through the four pillars of synthetic data realism: Fidelity, Coverage, Privacy, and Utility. This thought-leadership piece combines storytelling, mathematics, and code to explain how these metrics define the ethical and functional “soul” of data in AI systems.
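The piece above does not publish its metric definitions here; the following is an illustrative sketch of what one-dimensional Fidelity and Coverage proxies could look like, with both formulas chosen for simplicity rather than taken from the article.

```python
import statistics

def fidelity(real: list[float], synth: list[float]) -> float:
    # Illustrative fidelity proxy: how closely the synthetic mean tracks
    # the real mean, normalized by the real data's range (1.0 = identical).
    return 1.0 - abs(statistics.mean(real) - statistics.mean(synth)) / (max(real) - min(real))

def coverage(real: list[float], synth: list[float]) -> float:
    # Illustrative coverage proxy: fraction of the real value range that
    # the synthetic sample actually spans (0.0 = no overlap).
    span = max(real) - min(real)
    covered = min(max(synth), max(real)) - max(min(synth), min(real))
    return max(covered, 0.0) / span
```

Privacy and Utility need task-specific measurements (e.g. nearest-record distances, downstream model accuracy), which is why they do not reduce to a two-line formula.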
BMAD AI/ML Engineering Expansion Pack - Streamlined framework for AI Singapore programs (MVP, POC, SIP, LADP) with specialized agents, workflows, and templates for ML/LLM development
A long-form article introducing the Twin Test: a practical standard for high-stakes machine learning where models must show nearest “twin” examples, neighborhood tightness, mixed-vs-homogeneous evidence, and “no reliable twins” abstention. Argues similarity and evidence packets beat probability scores for trust and safety.
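The Twin Test described above can be sketched as a nearest-neighbor check with abstention; this is a minimal illustration under assumed parameters (k, a distance cutoff, Euclidean distance), not the article's own implementation.

```python
import math

def twin_test(query, examples, labels, k=3, max_dist=1.0):
    # Find the k nearest "twin" examples; abstain when no reliable twins
    # exist, and flag mixed evidence when the twins disagree.
    dists = sorted((math.dist(query, x), y) for x, y in zip(examples, labels))
    twins = [(d, y) for d, y in dists[:k] if d <= max_dist]
    if not twins:
        return {"decision": "abstain", "twins": []}
    votes = [y for _, y in twins]
    homogeneous = len(set(votes)) == 1
    return {"decision": votes[0] if homogeneous else "mixed-evidence",
            "twins": twins}
```

The returned twins list doubles as the "evidence packet": instead of a probability score, the caller gets the concrete neighbors the decision rests on.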
A long-form article and practical framework for designing machine learning systems that warn instead of decide. Covers regimes vs decimals, levers over labels, reversible alerts, anti-coercion UI patterns, auditability, and the “Warning Card” template, so ML preserves human agency while staying useful under uncertainty.
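A hypothetical shape for the "Warning Card" mentioned above, showing the anti-coercion idea: the card carries a coarse confidence band and suggested levers, never an auto-applied decision, and every interaction is audit-logged. Field names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class WarningCard:
    signal: str                  # what regime or lever change triggered the warning
    confidence: str              # coarse band ("low"/"medium"/"high"), not a decimal
    suggested_levers: list[str]  # actions a human may take; never applied automatically
    reversible: bool = True      # alerts can be withdrawn without side effects
    audit_log: list[str] = field(default_factory=list)

    def acknowledge(self, who: str) -> None:
        # Record who saw the warning, preserving auditability.
        self.audit_log.append(f"acknowledged by {who}")
```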
An analytical essay on why prediction-based models fail in reflexive, unstable systems. This article argues that accuracy collapses when models influence behavior, and proposes equilibrium and force-based modeling as a more robust framework for understanding pressure, instability, and transitions in AI-shaped systems.
An Introduction to Transparent Machine Learning
A beginner-friendly AI Governance & Risk Toolkit — risk register, governance templates, and audit-ready workflows for early-stage AI teams.
Paper lists about 'Constitutional AI System' or 'AI under Ethical Guidelines'