Technology

OpenAI claims it can detect images generated by its own AI …

The company is testing a new tool to determine whether images were generated by artificial intelligence.

Recognising AI-generated images: an ongoing battle
We all like to believe we can spot AI-generated images: strange text in the background, inaccuracies that defy the laws of physics and, above all, grotesque hands and fingers. However, the technology is constantly evolving, and it won’t be long before we can no longer distinguish the real from the artificial.

OpenAI and deceptive images
OpenAI, one of the industry leaders, is attempting to take the lead by creating a toolkit to detect images created by its own DALL-E 3 generator. Although the company claims to be able to “accurately detect 98% of the images created by DALL-E 3”, major reservations remain.

The results were mixed, and a number of problems were encountered.

First of all, the image must have been created by DALL-E, which is far from the only image generator in existence; alternatives are everywhere on the web. According to data provided by OpenAI, the system managed to correctly classify only 5-10% of images generated by other AI models.

In addition, the system struggles if the image has been modified in any way. While success rates of around 95-97% still hold for minor modifications such as cropping, compression and saturation changes, colour tone adjustment reduces the success rate to 82%.

The limits of detecting changes to an image
The toolkit fares far worse when more significant changes are made to the image, and OpenAI chooses not to publish the success rate in these cases at all, simply stating that “further changes will degrade performance”.

This is a real problem during elections, as AI-generated images are often modified precisely to elicit stronger reactions. In any case, OpenAI says it will continue working to improve its detection tools and to remain transparent about the limits of its technology.
