How easy is it to fool AI detection tools?

The pope didn’t wear Balenciaga. And the filmmakers didn’t fake the moon landing. In recent months, however, startlingly realistic images of these AI-created scenes have gone viral online, threatening society’s ability to separate fact from fiction.

To resolve the confusion, a rapidly growing group of companies is now offering services to detect what’s real and what’s not.

Their tools analyze content using sophisticated algorithms, picking up on subtle signals to distinguish images made by computers from those produced by human photographers and artists. But some tech leaders and disinformation experts have expressed concern that advances in AI will always stay one step ahead of the detection tools.

To gauge the effectiveness of current AI-detection technology, The New York Times tested five new services using more than 100 synthetic images and real photos. The results show that the services are advancing rapidly, but at times fall short.

Consider this example:

Generated by artificial intelligence


This image appears to show the billionaire entrepreneur Elon Musk hugging a lifelike robot. The image was created using Midjourney, the AI image generator, by Guerrero Art, an artist who works with AI technology.

Despite the implausibility of the image, it managed to fool several AI image detectors.

Test results from the image of Mr. Musk

The detectors, including paid services such as Sensity and free ones such as Umm-maybe's AI Art Detector, are designed to spot hard-to-see markers embedded in AI-generated images. They look for unusual patterns in how the pixels are arranged, including in their sharpness and contrast. Those signals tend to appear when AI programs create images.
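As a rough illustration of that idea, and not the method of Sensity, Hive or any other named service, the sketch below computes a few low-level pixel statistics with NumPy and feeds them to an off-the-shelf classifier. The file names, features and training setup are hypothetical placeholders.

```python
# Toy illustration of pixel-level detection signals. This is NOT how
# any named detector actually works; it only sketches the general idea:
# extract low-level image statistics, then fit a simple classifier.

import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

def pixel_features(path):
    """Summarize an image with a few low-level statistics."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    # Contrast: standard deviation of pixel intensities.
    contrast = img.std()
    # Sharpness proxy: mean magnitude of the intensity gradients.
    gy, gx = np.gradient(img)
    sharpness = np.mean(np.hypot(gx, gy))
    # High-frequency energy: variance of a simple Laplacian response.
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    hf_energy = lap.var()
    return np.array([contrast, sharpness, hf_energy])

# With labeled examples (0 = real photo, 1 = AI-generated), any
# off-the-shelf classifier can be fit on these features.
X = np.stack([pixel_features(p) for p in ["real1.png", "fake1.png"]])  # placeholder files
y = np.array([0, 1])
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X)[:, 1])  # estimated probability each image is AI-made
```

Real services rely on far richer signals and far more training data; the point of the sketch is only that the decision rests on pixel statistics rather than on what the image depicts.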

But the detectors ignore context clues, so they cannot flag the presence of a lifelike automaton in a photo with Mr. Musk as improbable. That is one of the shortcomings of relying on technology to detect fakes.

Several companies, including Sensity, Hive and Inholo, the company behind Illuminarty, did not dispute the findings and said their systems were continually improving to keep up with the latest advances in AI image generation. Hive added that its misclassifications can occur when analyzing lower-quality images. Umm-maybe and Optic, the company behind AI or Not, did not respond to requests for comment.

To conduct the tests, The Times gathered AI images from artists and researchers familiar with variations of generative tools like Midjourney, Stable Diffusion and DALL-E, which can create lifelike portraits of people and animals and realistic depictions of nature, real estate, food and more. The real photos used came from The Times's photo archive.

Here are seven examples:

Note: Images cropped from original size.

The detection technology has been heralded as one way to mitigate the harm from AI images.

Artificial intelligence experts like Chenhao Tan, an assistant professor of computer science at the University of Chicago and director of its Chicago Human+AI research lab, are less convinced.

“Overall, I don’t think they’re great, and I’m not optimistic that they will be,” he said. “In the short term, it is possible that they will be able to perform with some accuracy, but in the long run, anything special that a human does with images, AI will be able to recreate as well, and it will be very difficult to distinguish the difference.”

Much of the concern involves realistic portraits. Gov. Ron DeSantis of Florida, a Republican presidential candidate, came under fire after his campaign used AI-generated imagery in a post. Synthetic images that focus on scenery have also caused confusion in political races.

Many of the companies behind AI detectors acknowledged that their tools were flawed and warned of a technological arms race: detectors often have to catch up with AI systems that seem to be improving by the minute.

“Every time someone builds a better generator, people build better discriminators, and then people use the better discriminator to build a better generator,” said Cynthia Rudin, a professor of computer science and engineering at Duke University, where she is also a principal investigator at the Interpretable Machine Learning Lab. “The generators are designed to be able to fool a detector.”
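Dr. Rudin is describing the adversarial dynamic behind generative adversarial networks, or GANs. The sketch below is a toy version of that loop, not a real image model: the layer sizes, random data and architectures are arbitrary placeholders, but the alternation (train the detector, then train the generator to fool the improved detector) is the arms race she describes.

```python
# Minimal GAN-style training loop on toy data, for illustration only.

import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 64))   # "image" = 64 numbers
disc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))   # real-vs-fake score
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(128, 64)  # stand-in for a batch of real images

for step in range(200):
    # 1. The discriminator (the "detector") learns to separate real from fake.
    fake = gen(torch.randn(128, 16)).detach()
    d_loss = (bce(disc(real), torch.ones(128, 1))
              + bce(disc(fake), torch.zeros(128, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. The generator learns to fool the improved discriminator.
    fake = gen(torch.randn(128, 16))
    g_loss = bce(disc(fake), torch.ones(128, 1))  # wants the detector to say "real"
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```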

Sometimes, detectors fail even when an image is obviously fake.

Dan Lytle, an artist who works with artificial intelligence and runs a TikTok account called The_AI_Experiment, asked Midjourney to create a vintage image of a giant Neanderthal standing among normal men. It produced this aged portrait of a towering, Yeti-like creature beside a quaint couple.

Generated by artificial intelligence


Test results from the image of a giant

The erroneous results from every service tested demonstrate a drawback of current AI detectors: They tend to struggle with images that have been altered from their original output or are of low quality, according to Kevin Guo, founder and chief executive of Hive, which offers an image-detection tool.

When AI generators like Midjourney create photorealistic artwork, they pack the image with millions of pixels, each containing clues about its origins. “But if you distort it, if you resize it, lower the resolution, all that stuff, by definition you’re altering those pixels and that additional digital signal is going away,” Mr. Guo said.
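A short sketch of the kind of everyday processing Mr. Guo describes, using the Pillow imaging library. The file names are placeholders; the point is only that downscaling and lossy re-saving rewrite every pixel a detector might inspect.

```python
# Downscale and re-save an image, the way social platforms often do.
# Both steps re-encode the pixels and wash out faint statistical traces.

from PIL import Image

img = Image.open("ai_generated.png").convert("RGB")  # hypothetical AI-made image

# Shrink to a quarter of the original area.
small = img.resize((img.width // 2, img.height // 2), Image.Resampling.LANCZOS)

# Re-save with lossy JPEG compression; every pixel value is re-encoded.
small.save("resaved.jpg", "JPEG", quality=60)

# A detector run on "resaved.jpg" now sees different pixel statistics
# than it would have seen on the original file.
```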

When Hive, for example, ran a higher-resolution version of the Yeti artwork, it correctly determined that the image was AI-generated.

Such shortcomings can undermine the potential for AI detectors to become a weapon against fake content. As images go viral online, they are often copied, re-saved, shrunk or cropped, obscuring the important signals that AI detectors rely on. A new tool in Adobe Photoshop, known as generative fill, uses AI to expand a photo beyond its boundaries. (When tested on a photograph that was expanded using generative fill, the technology confused most detection services.)

The unusual portrait below, showing President Biden, is of much higher resolution. It was taken in Gettysburg, Pa., by Damon Winter, a photographer for The Times.

Many of the detectors correctly judged the portrait to be authentic, but not all did.

Real picture


Test results from a photograph of President Biden

Falsely labeling an authentic image as AI-generated is a significant risk with AI detectors. Sensity was able to correctly label most of the AI images as artificial. But the same tool wrongly labeled many real photographs as AI-generated.

Such risks extend to artists, who could be wrongly accused of using AI tools in creating their artwork.

This Jackson Pollock painting, called “Convergence,” features the artist’s familiar, colorful paint splatters. Most, but not all, of the AI detectors determined that it was a real image and not an AI-generated replica.

Real picture


Test results from a Pollock painting

The creators of Illuminarty said they wanted a detector that could identify fake works of art, such as paintings and drawings.

In the tests, Illuminarty correctly rated most real photos as authentic, but labeled only about half of the AI images as artificial. The tool, the creators said, has an intentionally cautious design to avoid falsely accusing artists of using AI.

Illuminarty’s tool, along with most of the other detectors, correctly identified a similar Pollock-style image created by The New York Times using Midjourney.

Generated by artificial intelligence


Test results from the image of a splatter painting

Companies offering AI detection say their services are designed to help promote transparency and accountability, helping to flag misinformation, fraud, nonconsensual pornography, artistic dishonesty and other abuses of the technology. Industry experts warn that financial markets and voters could become vulnerable to AI trickery.

This image, in the style of a black-and-white portrait, is quite convincing. It was created with Midjourney by Marc Fibbens, a New Zealand artist who works with artificial intelligence. Most AI detectors nonetheless managed to correctly identify it as fake.

Generated by artificial intelligence


Test results from an image of a man wearing Nike

Yet the detectors struggled once a bit of grain was introduced. Detectors like Hive suddenly judged the fake images to be real photos.

The fine texture, which was nearly invisible to the naked eye, interfered with the detectors' ability to analyze pixels for signs of AI-generated content. Some companies are now trying to identify AI use in images by assessing perspective and the size of subjects' limbs, in addition to analyzing pixels.
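The grain experiment is easy to approximate. The sketch below, with hypothetical file names, adds faint Gaussian noise to an image; the change is barely visible to a viewer but alters the pixel statistics a detector measures.

```python
# Add subtle grain to an image. The noise scale is an illustrative guess;
# the article does not specify how the grain in the tests was produced.

import numpy as np
from PIL import Image

img = np.asarray(Image.open("fake_portrait.png").convert("RGB"), dtype=np.float64)

rng = np.random.default_rng(0)
noise = rng.normal(loc=0.0, scale=6.0, size=img.shape)  # faint Gaussian grain

grainy = np.clip(img + noise, 0, 255).astype(np.uint8)
Image.fromarray(grainy).save("fake_portrait_grainy.png")
```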





Test results from the grain experiment: images that had been rated a 99 percent chance of being generated by AI were judged only 3.3 percent likely to be AI-generated after the grain was added.

Artificial intelligence is capable of generating more than realistic images. The technology has already created text, audio and video that have deceived professors, defrauded consumers and been used in attempts to turn the tide of war.

AI detection tools should not be the only defense, researchers said. Image makers should embed watermarks in their work, said S. Shyam Sundar, director of the Center for Socially Responsible Artificial Intelligence at Pennsylvania State University. Websites could incorporate detection tools into their back ends, he said, so they could automatically identify AI images and serve them to users with warnings and restrictions on how they are shared.
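Watermarking can take many forms, and the schemes being proposed for AI images are far more robust than this. As a toy illustration only, the sketch below hides a one-bit marker in an image's least significant bits; the file names are placeholders, and a resize or lossy re-save would destroy the mark, which is exactly why stronger designs are needed.

```python
# Toy least-significant-bit watermark: not a real, tamper-resistant scheme.

import numpy as np
from PIL import Image

MARK = np.uint8(1)  # one-bit "this image is AI-generated" flag

def embed(path_in, path_out):
    px = np.asarray(Image.open(path_in).convert("RGB")).copy()
    px[..., 0] = (px[..., 0] & 0xFE) | MARK   # set the red channel's low bit
    Image.fromarray(px).save(path_out, "PNG")  # lossless format preserves the bits

def detect(path):
    px = np.asarray(Image.open(path).convert("RGB"))
    return bool(np.all((px[..., 0] & 0x01) == MARK))  # marker intact everywhere?

embed("generated.png", "generated_marked.png")  # placeholder files
print(detect("generated_marked.png"))           # True if the image is untouched
```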

Images are especially powerful, Mr. Sundar said, because they have a tendency to provoke a visceral response. People are much more likely to believe their eyes.
