A photographer created ‘fake’ images of Russia with generative AI. Now he’s losing his biggest fans


When Russia’s invasion of Ukraine thwarted his travel plans, Belgian photographer Carl De Keyzer decided to take a virtual trip to Russia.

From his home, the famous documentary photographer began working on a collection of images about Russia using generative artificial intelligence (AI). He was not prepared for the consequences.

In the late 1980s, De Keyzer visited Russia 12 times in the space of a year. The USSR was in its death throes, and De Keyzer photographed the rituals and pastimes that would soon disappear. He returned in the 2000s to photograph the interiors of prison camps in Siberia.

In November, three decades after his first visit to Russia, De Keyzer published a series of AI-generated images in a book called Putin’s Dream. This time there were no human bodies, no moments in time, but rather a vision brought to life using computers.

Hours after posting Putin’s Dream online, De Keyzer was criticized for producing fake images and possibly contributing to disinformation.

De Keyzer says he is satisfied with the realism of the images generated by the AI. (Provided: Carl De Keyzer)

By August 2023, an estimated 15 billion images had already been created using text-to-image algorithms – a type of artificial intelligence in which written prompts are given to software to generate new images.

As generative AI imagery becomes ubiquitous, concerns about its ethics are growing. It has also become a thorny subject among photographers.

Putin’s dream

To create the Putin’s Dream series, De Keyzer fed the AI software his own photographs from previous projects, adjusting the results to suit his visual style.

He says the series is a “commentary on the horrors of [the Ukraine] war caused by one man’s dream” and that the use of generative AI was a way to achieve this.

Pleased with the results, De Keyzer says the “new images — illustrations” he published in Putin’s Dream reflect his previous photographic work, which often explored propaganda and systems of power.

De Keyzer says the AI-generated images in Putin’s Dream are a meditation on the horrors of the war in Ukraine. (Provided: Carl De Keyzer)

“I tried to get as close to ‘real’ images as possible,” he told ABC News.

“Of course, it remains artificial, but it was possible to get really close to almost realistic images and above all to present my way of composing and commenting. [using] irony, humor, doubt, wonder, surrealism… Many people say they see my style clearly in these images, that was the idea.”

De Keyzer says he has always been transparent about using AI to create Putin’s Dream.

But when he posted several images on Instagram to publicize his new book, the reaction was harsh.

Many people criticized him for publishing “fake” images, De Keyzer says.

“There were a lot of negative comments on my Insta post, like 600 in two hours. I wasn’t used to that. I’ve always had very good reactions to my posts… but this time it exploded… Some people said they were my biggest fans before but not anymore. AI always automatically causes disgust, regardless of the approach or progress made.”

For a while he feared the project was a mistake. But he also received encouragement from people praising his work, he said.

“An astonishing work which shows once again how photography can be done differently: not by traveling the world, but by navigating that other world, our double, this latent space of computer memories containing countless accumulated media layers,” Yves Malicy, a Belgian academic in digital culture, wrote on Facebook (translated from French).

For photographer Carl De Keyzer, “AI is just one tool among others with a great future.” (Provided: Carl De Keyzer)

Is the world ready for AI images?

The history of photography is marked by scandals of manipulation, staging or falsification. Yet photography’s status as a record of reality endures. As generative AI becomes more sophisticated, many fear it could trigger a tsunami of misinformation.

When artist Boris Eldagsen shocked the photography world by winning the Sony World Photography Prize with an AI-generated image, he said he wanted to provoke a debate about AI and photography.

“It was a test to see if photo competitions were prepared for [AI]… That’s not the case,” he told ABC Radio National.

Unlike Eldagsen, De Keyzer was not trying to deceive anyone. But he eventually deleted the images from Instagram because, he says, people started attacking Magnum Photos, the prestigious photography collective of which he has been a member since 1994.

A week after De Keyzer’s post, Magnum Photos issued a statement on AI-generated images.

“[Magnum] respects and values the creative freedom of our photographers,” the statement said. But its archives “will remain dedicated exclusively to photographic images taken by humans and which reflect real events and stories, in keeping with Magnum’s heritage and commitment to the documentary tradition.”

Putin’s Dream was published in November and De Keyzer says the book is selling well. (Provided: Carl De Keyzer)

De Keyzer is not the only Magnum Photos member to stir up controversy by experimenting with AI image generation.

Michael Christopher Brown used generative AI to produce a series of images about Cuban refugees. It was a way of telling otherwise inaccessible stories, he told PetaPixel.

In a complex meditation on AI, and a “prank” on his photographic community, Jonas Bendiksen used software to create 3D models of people and inserted them into landscape photographs he took for a series examining a Macedonian town that had become a notorious center of fake-news production. He published a photo book called The Book of Veles and used AI to generate the book’s accompanying text.

“By seeing that I lied and produced fake news myself, I have in some way undermined the credibility of my work,” he told Magnum Photos. “But I hope…this project will open people’s eyes to what lies ahead and the territory that photography and journalism are heading into.”

The Liar’s Dividend

Speaking at a Photography Ethics Centre symposium in December, Alex Mahadevan said the erosion of trust caused by AI-generated images, which lets people cast doubt on the veracity of real images and videos, is known as the “liar’s dividend”.

Mahadevan, director of digital media education project MediaWise, cites the Princess Catherine photo debacle as an example.

After news agencies published and then hastily retracted an AI-assisted image of the princess and her children when anomalies were spotted, the photo gave rise to wild speculation about Princess Catherine’s health. A video the princess later posted updating her followers about her health was dismissed by many. “Immediately people all over the internet were saying it wasn’t a video of Princess Kate, it was a deepfake, that she was dead… all these crazy conspiracy theories,” Mahadevan says.

This is why transparency is vital when it comes to using generative AI. But as symposium speakers explained, how AI use should be tagged or recorded in metadata, and at what point AI assistance (as opposed to fully generative AI) becomes significant enough to warrant disclosure, remain unresolved questions.

Savannah Dodd, founder and director of the Photography Ethics Centre, says there are other ethical considerations, beyond questions of truth, when it comes to generative AI technology.

“AI allows creators to create images of places they have never visited themselves or may not know much about,” she explains.

Dodd has written about how the biases of AI image generators, combined with a lack of consultation on the user’s part, can lead to the reproduction of stereotypes.

The question of which AI generator to use also needs to be carefully considered, says Dodd.

“Most of the larger generators scrape images from the Internet, regardless of copyright.”

Last year, images of Australian children were found in a dataset called LAION-5B, which was used to train a number of publicly available AI generators that produce hyper-realistic images.

In November, a parliamentary inquiry into AI released a report claiming the companies behind generative AI had committed “unprecedented theft” from creative workers in Australia.

The inquiry was presented with a “significant body of evidence” suggesting that generative AI was already having an impact on the creative industries in Australia.

Dodd says that creators working in photography or creating AI-generated images should question their motivations, the message they want to convey, and the medium they use to do so.

“I think it’s worth taking the time to understand how an image or set of images will function in the world, how they will be understood, and what their potential impact might be,” she says.

De Keyzer says he wanted the AI-generated images to look like “real images.” (Provided: Carl De Keyzer)

For De Keyzer, the fuss over his use of generative AI is overblown. While he says the world needs to learn about AI to prevent its misuse, he may well use it again.

“AI is just another tool with a great future. Why should I repeat what I’ve always done?” he says.

“I like that I can travel in my mind now. I’m getting older, and this might be a way to stay creative without the problems and costs that come with real travel. Of course, the real thing is always preferable… It is a fact that it is becoming more and more difficult to travel, to sell images, to have them published.”
