Stanford Study Identifies Child Abuse Material in Major AI Image Dataset

The Stanford Internet Observatory has reported that a dataset used to train AI image generation tools contains at least 1,008 instances of validated child sexual abuse material (CSAM). The presence of this material raises concerns that AI models trained on the data could generate new, realistic instances of CSAM.

LAION, the non-profit responsible for creating the dataset, has stated that it has a “zero tolerance policy for illegal content” and is temporarily taking down the LAION datasets to ensure they are safe before republishing them. The organization claims to have implemented filters to detect and remove illegal content before publishing the datasets. However, LAION leaders have reportedly been aware since at least 2021 that there was a possibility of their systems capturing CSAM as they collected billions of images from the internet.

The LAION-5B dataset in question includes millions of images depicting pornography, violence, child nudity, racist memes, hate symbols, copyrighted art, and works scraped from private company websites. It comprises more than 5 billion images and associated descriptive captions, although LAION clarifies that the dataset itself does not contain the images, only links to scraped images and their alt text. Christoph Schuhmann, the founder of LAION, said earlier this year that he was not aware of any CSAM in the dataset, though he had not examined the data in great depth.
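
Because the dataset ships as metadata rather than pixels, working with it typically starts from tables of links and captions. The sketch below assumes a local parquet shard with “URL” and “TEXT” columns, which is how LAION’s published metadata is commonly organized; the file name is a placeholder.

```python
# Minimal sketch of inspecting LAION-style metadata, which stores image URLs
# and alt-text captions rather than the images themselves.
import pandas as pd

# Hypothetical shard name; real releases are split across many parquet files.
df = pd.read_parquet("laion_metadata_shard_00000.parquet", columns=["URL", "TEXT"])

# Each row points to an externally hosted image plus its scraped alt text;
# downloading the actual pixels is a separate step.
for url, caption in df.head(5).itertuples(index=False):
    print(f"{str(caption)[:60]!r} -> {url}")
```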

Legal constraints prevented the Stanford researchers from viewing suspected CSAM directly to verify it, so they relied on indirect techniques: perceptual hash-based detection, cryptographic hash-based detection, and nearest-neighbors analysis using the image embeddings included with the dataset. Through these methods they identified 3,226 entries suspected of containing CSAM, and third parties such as Microsoft’s PhotoDNA service and the Canadian Centre for Child Protection confirmed many of the images as CSAM.
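
To illustrate the hash-based part of this kind of methodology (not the researchers’ actual pipeline, which matches against hash lists held by child-safety organizations such as those behind PhotoDNA), here is a hedged Python sketch. The hash lists, file path, and distance threshold are placeholders.

```python
# Sketch of the two hash-based checks: a cryptographic hash catches
# byte-identical copies of known material, while a perceptual hash (pHash)
# tolerates resizing and re-encoding.
import hashlib

import imagehash                 # pip install imagehash
from PIL import Image

KNOWN_MD5S = {"<known-md5-digest>"}                 # placeholder hash list
KNOWN_PHASHES = [imagehash.hex_to_hash("0" * 16)]   # placeholder pHash list

def is_suspect(path: str, max_distance: int = 8) -> bool:
    """Flag a file if it matches a known cryptographic or perceptual hash."""
    with open(path, "rb") as f:
        if hashlib.md5(f.read()).hexdigest() in KNOWN_MD5S:
            return True
    phash = imagehash.phash(Image.open(path))
    # Hamming distance between 64-bit pHashes; a small distance means a near-duplicate.
    return any(phash - known <= max_distance for known in KNOWN_PHASHES)
```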

Stability AI founder Emad Mostaque used a subset of the LAION-5B data to train Stable Diffusion. While the initial research version of Google’s Imagen text-to-image model was trained on LAION-400M, Google asserts that subsequent iterations of Imagen do not use any LAION datasets. A spokesperson for Stability AI stated that its models were trained on a filtered subset of LAION-5B, with additional fine-tuning to mitigate any residual inappropriate behavior. Stable Diffusion 2, a more recent version, was trained on data with most ‘unsafe’ material filtered out, making it far harder to generate explicit images; Stable Diffusion 1.5, which remains available online, lacks those protections. The Stanford researchers recommend deprecating models based on Stable Diffusion 1.5 that have not undergone safety measures and ceasing their distribution where feasible.
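
As an illustration of the sort of ‘unsafe’ filtering described above (not Stability AI’s actual training pipeline), the sketch below drops metadata rows whose safety-classifier score exceeds a threshold before any images are downloaded for training. It assumes a “punsafe” probability column, which LAION provides with some metadata releases; the file names and the 0.1 threshold are illustrative only.

```python
# Drop rows the safety classifier scores as likely unsafe before the data
# ever reaches an image-download or training step.
import pandas as pd

df = pd.read_parquet("laion_metadata_shard_00000.parquet")
filtered = df[df["punsafe"] < 0.1]          # keep only low-risk rows
filtered.to_parquet("laion_metadata_shard_00000_filtered.parquet")
print(f"kept {len(filtered)} of {len(df)} rows")
```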
