Incident Reports
There have been reports in the press about the results of a research project at Stanford University, according to which the LAION-5B training set contains potentially illegal content in the form of CSAM. We would like to comment on this as …
A Stanford Internet Observatory (SIO) investigation identified hundreds of known images of child sexual abuse material (CSAM) in an open dataset used to train popular AI text-to-image generation models, such as Stable Diffusion.
A popular training dataset for AI image generation contained links to child abuse imagery, Stanford’s Internet Observatory found, potentially allowing AI models to create harmful content.
LAION-5B, a dataset used by Stable Diffusion creat…
A massive public dataset that served as training data for a number of AI image generators has been found to contain thousands of instances of child sexual abuse material (CSAM).
In a study published today, the Stanford Internet Observatory …
This piece is published with support from The Capitol Forum.
The LAION-5B machine learning dataset used by Stable Diffusion and other major AI products has been removed by the organization that created it after a Stanford study found that i…
Stable Diffusion, one of the most popular text-to-image generative AI tools on the market from the $1 billion startup Stability AI, was trained on a trove of illegal child sexual abuse material, according to new research from the Stanford I…
There have been significant problems with AI training data, with various complaints already filed by those who claimed their work was stolen, but the most recent discovery found child sexual abuse images in a training dataset. In a recent study,…
A massive open-source AI dataset, LAION-5B, which has been used to train popular AI text-to-image generators like Stable Diffusion 1.5 and Google's Imagen, contains at least 1,008 instances of child sexual abuse material, a new report from …
Over 1,000 images of sexually abused children have been discovered inside the largest dataset used to train image-generating AI, shocking everyone except for the people who have warned about this exact sort of thing for years.
The dataset w…
Researchers from the Stanford Internet Observatory say that a dataset used to train AI image generation tools contains at least 1,008 validated instances of child sexual abuse material. The Stanford researchers note that the presence of CSA…
An influential machine learning dataset—the likes of which has been used to train numerous popular image-generation applications—includes thousands of suspected images of child sexual abuse, a new academic report reveals.
The report, put to…
A widely used artificial intelligence data set that was used to train Stable Diffusion, Imagen, and other AI image generator models has been removed by its creator after a study found it contained thousands of instances of suspected child sexual abus…
Child sexual abuse material (CSAM) has been located in LAION, a major data set used to train AI.
The Stanford Internet Observatory revealed thousands of images of child sexual abuse in the LAION-5B data set, which supports many different AI…
The integrity of a major AI image training dataset, LAION-5B, utilized by influential AI models like Stable Diffusion, has been compromised after the discovery of thousands of links to Child Sexual Abuse Material (CSAM). This revelation has…
Generative AI has been democratized. The toolkits to download, set up, use, and fine-tune a variety of models have been turned into one-click frameworks for anyone with a laptop to use. While this technology allows users to generate and exp…
In The Ones Who Walk Away From Omelas, the fiction writer Ursula K. Le Guin describes a fantastic city wherein technological advancement has ensured a life of abundance for all who live there. Hidden beneath the city, where nobody needs to …
Why are AI companies valued in the millions and billions of dollars creating and distributing tools that can make AI-generated child sexual abuse material (CSAM)?
An image generator called Stable Diffusion version 1.5, which was created by …
Artificial intelligence researchers said Friday they have deleted more than 2,000 web links to suspected child sexual abuse imagery from a dataset used to train popular AI image-generator tools.
The LAION research dataset is a huge index of…