AI Image Generators Are Being Trained On Explicit Photos Of Children: Report

A new study by the Stanford Internet Observatory has exposed a serious flaw in widely used artificial intelligence (AI) image generators: the tools are being trained on explicit images of minors.

What Happened: The study, headed by David Thiel, chief technologist at the Stanford Internet Observatory, found explicit images of child sexual abuse within the data used to train AI image-generators.

Over 3,200 images of suspected child abuse were discovered in the LAION database, a repository of online images used to train prominent AI image generators such as Stable Diffusion.

The Stanford Internet Observatory carried out the research in collaboration with the Canadian Centre for Child Protection and other child protection charities.


LAION, the Large-scale Artificial Intelligence Open Network, responded to the report by temporarily withdrawing its datasets. The network reiterated its stance against illegal content and vowed to ensure the safety of its datasets before making them available again.

The Stanford researchers warned that the presence of explicit images in the dataset could prompt AI tools to generate harmful content, perpetuating the abuse of victims who appear repeatedly in the images.

The report urges tech companies to address this serious flaw, while noting that the problem is complex and traces back to generative AI projects that were rushed to launch without thorough vetting.

Why It Matters: The use of AI technologies has been under scrutiny for a while.

AI misuse was also highlighted in September, when deepfake images of underage girls circulated in Spain. In October, the unfiltered AI stickers in Meta Platforms Inc.'s Facebook Messenger were criticized for generating inappropriate content.

The Stanford study underscores the need for more thorough scrutiny and ethical guidelines in AI deployment to prevent misuse and protect vulnerable communities.




Disclaimer: This content was partially produced with the help of Benzinga Neuro and was reviewed and published by Benzinga editors.

Article originally from www.benzinga.com.