
AI Image Generators Face Increasing Concerns Over Unsafe Images, Research Reveals
Jace Dela Cruz

AI image generators have seen a significant rise in popularity in recent times, letting users create a wide variety of images with ease. However, researchers are beginning to express concerns about the nature of some of the images being produced.

Yiting Qu, a researcher at CISPA, recently published a proposal for effective filters designed to address these concerns. Qu’s paper, “Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-To-Image Models,” investigates the prevalence of such images and proposes effective filters to prevent their creation.

Use and Misuse of New AI Image Generators

The report primarily looks into text-to-image models, detailing how they allow users to input text prompts that the AI then uses to create digital images. While these models open new possibilities for creativity, Yiting Qu’s investigation also uncovered a darker side to the technology: users sometimes use these tools to create distressing or explicit images. Qu classified such images by their meaning and found that a high percentage of images created by popular AI image generators fall into the “unsafe images” category.

Proposed Solutions

As a solution, Qu suggested a new filter that would measure the distance between a generated image and a set of unsafe words. Any image that crosses the threshold would be replaced with a black color field, so that pictures violating the specified limit would never be displayed.
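The idea of comparing images against unsafe words can be sketched in code. The snippet below is a minimal illustration, not the paper’s actual implementation: it assumes the image and the unsafe keywords have already been mapped into a shared embedding space (as a CLIP-style encoder would do), and it uses toy 3-dimensional vectors in place of real embeddings. The keyword list, threshold, and helper names are all hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_image(image_embedding, unsafe_text_embeddings, threshold=0.5):
    """Return the image embedding unchanged if it is far from every unsafe
    concept; otherwise return an all-zeros vector, standing in for the
    black color field that replaces a blocked image."""
    for text_emb in unsafe_text_embeddings:
        if cosine_similarity(image_embedding, text_emb) > threshold:
            return np.zeros_like(image_embedding)  # blocked: "black field"
    return image_embedding

# Toy demo: 3-D vectors stand in for real image/text embeddings.
unsafe = [np.array([1.0, 0.0, 0.0])]      # hypothetical unsafe concept
safe_img = np.array([0.0, 1.0, 0.0])      # orthogonal to the unsafe concept
risky_img = np.array([0.9, 0.1, 0.0])     # close to the unsafe concept

print(filter_image(safe_img, unsafe))     # passes through unchanged
print(filter_image(risky_img, unsafe))    # replaced with zeros
```

In a real system, the threshold would have to be tuned to balance catching unsafe content against wrongly blocking harmless images.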

Following the research findings, Qu recommended three main remedies to prevent the creation and dissemination of harmful images. Firstly, developers should carefully select and screen training data so that unsafe images are excluded. Secondly, developers could regulate user-input prompts by removing unsafe keywords. Lastly, platforms should be equipped with mechanisms to delete unsafe images, especially those with the potential to be widely circulated.
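The second remedy, screening user prompts for unsafe keywords, can be illustrated with a short sketch. The keyword list below is a hypothetical placeholder, and real systems would likely reject or rewrite flagged prompts rather than simply dropping words, but it shows the basic mechanism the article describes.

```python
# Hypothetical placeholder list; a real deployment would maintain a
# much larger, curated vocabulary of unsafe terms.
UNSAFE_KEYWORDS = {"gore", "violence"}

def sanitize_prompt(prompt: str) -> str:
    """Remove unsafe keywords from a user prompt before it reaches
    the image generator."""
    kept = [
        word for word in prompt.split()
        if word.lower().strip(".,!?") not in UNSAFE_KEYWORDS
    ]
    return " ".join(kept)

print(sanitize_prompt("a scene of violence in a city"))
# -> "a scene of in a city"
```

A simple word list like this is easy to evade with misspellings or paraphrases, which is one reason embedding-based filters on the generated images themselves are also proposed.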

While the study acknowledged the need to balance content freedom and security, the researchers emphasized the importance of strict regulations. Yiting Qu’s research aims to make a tangible difference in creating a safer digital landscape. The research findings were published on arXiv.

The increasing popularity of AI image generators raises concerns about the types of images being generated. This has led a researcher to propose filters to prevent the creation and proliferation of harmful images. Do you think the solutions proposed by Yiting Qu are promising? Let us know your thoughts.

© 2023 TECHTIMES.com All rights reserved. Do not reproduce without permission.