Google built its empire on media dollars. Now it is looking at the other side of the equation, creativity, with AI as its way in.
The signs have been accumulating over the last 18 months, but the clearest signal came last week at the company's annual Google Cloud Next event.
In Las Vegas, the tech giant unveiled an expanded suite of content creation tools: Veo 2 for video, Imagen 3 for images, Lyria for music and the Chirp 3 model for custom voice technology, all designed to turn minimal input into maximum output. Executives drove the point home throughout the event, not only talking up the tools' technical capabilities but also showing how brands and agencies are already adopting them to create and scale multimedia content.
Here are some of the examples mentioned on stage during Wednesday's opening keynote:
- L’Oréal is using generative AI to scale global content, producing 50,000 images and 500 videos per month with Imagen 3 and Veo 2, personalized for different products, locations and demographics. The beauty giant has also chosen not to let teams create AI-generated people for ads.
- Google used generative AI tools, including Veo, Imagen and Gemini, to transform the Las Vegas Sphere into the Emerald City of “The Wizard of Oz,” turning scenes from the original film into an immersive 360-degree experience ahead of a new show debuting there in August.
- Kraft Heinz’s “Tastemaker” platform integrated Veo 2 and Imagen 3 to cut content production from eight weeks to eight hours.
- Mondelez and Accenture are using Google Cloud content agents to create personalized text, images and videos across global markets in hours instead of weeks.
- Reddit is expanding its “Reddit Answers” feature, powered by Gemini on Vertex AI, to personalize home pages with AI-curated conversation summaries.
Google also showcased how agencies like Goodby Silverstein & Partners are using generative AI for content creation. The creative agency debuted a new trailer for an AI-generated film based on a screenplay by surrealist artist Salvador Dalí, “Giraffes on Horseback Salad.” In collaboration with the Dalí Museum, the agency used Veo 2 and Imagen 3 to turn Dalí’s notes and sketches into a cinematic experience that will also become a longer film.
By enabling quick proofs of concept and prototypes, the agency sees generative AI helping early in the creative process to visualize and sell ideas to clients. Jeff Goodby, co-founder and co-chairman of GS&P, also cautioned that the output has to reach a certain quality, and that poor output can fall into the “uncanny valley” and fail to sell an idea.
“I think people are still the referees and judges of whether the thing conveys an emotion or not,” said Goodby. “We are the end users who have to be satisfied. Maybe at some point that will change, but it hasn’t changed yet.”
A film about a surrealist artist made for the “perfect project to align with hallucinations and serendipity,” said Martin Pagh Ludvigsen, Goodby Silverstein & Partners’ head of AI.
“That happens when you play around with Veo 2, and things like that really contributed to a better product,” said Pagh Ludvigsen. “The thing is, if you end up with a red sky in a BMW ad, that’s not a good thing. But in ‘Giraffes on Horseback Salad,’ it is a good thing.”
Another example is WPP, which announced integrations of Veo 2 and Imagen 3 into WPP Open, the holding company’s proprietary AI-powered platform that brings together AI models, data and workflows for creative, strategy, media buying and other efforts.
One thing WPP is testing is using AI content creation tools and agents to iteratively test, with synthetic focus groups, which videos, images and text work best for various audiences. WPP chief technology officer Stephan Pretorius said the company has given broader internal access to new tools like Veo 2 and Imagen 3 to encourage employees in every department to experiment more and develop new ways to use AI across the company. He also noted that 28,000 agents had already been deployed to employees for a range of tasks.
“When you bring new capabilities to a group of people, you shouldn’t be too prescriptive up front about how they will use them,” said Pretorius. “You should let them figure it out for themselves and then think about how they use it.”
Agency leaders also see advantages in combining content tools with other parts of an integrated AI stack. That is why many were glad to see the range of partners Google announced, such as integrations with platforms like Adobe Firefly and Typeface, as well as more interoperability with other clouds like Azure and AWS.
Emily Wengart, head of AI at Huge, sees potential in generative AI being woven into the range of other data- and agent-related updates announced. Huge is also in the midst of a rebrand, as a newly merged agency with Hero Digital after being sold off last year by IPG. She noted that the agency’s heritage in design and technology creates a new sweet spot at a moment of reinvention: “It still feels like the cowboy days, that sense of heading out to the frontier and trying to figure out what you want to do next,” she said.
“We used to worry only about pixels, and now data is much more my new pixels,” said Wengart. “I care much more than I ever thought I would about technical infrastructure questions where I can challenge decisions … These are end-user experience questions that I’m raising.”
All this points to a bigger shift. Google no longer just wants to power the pipes of digital advertising; it wants to shape what flows through them. By investing in generative AI for creativity, Google is positioning itself as a co-pilot for content production. It also used Cloud Next to roll out more AI content safeguards, including improved watermarking, copyright indemnification and data governance.
It is a natural extension of the giant’s dominance in media, but with a twist. If Google can own both the tools that create content and the systems that distribute it, it redefines the economics of creativity itself.
That said, there are still lingering questions about how Google will execute this plan. Gartner analyst Andrew Frank was impressed by the AI leadership Google showcased across marketing applications, platform partners and creative agents. But he also noticed it seemed to sidestep two critical market tensions: compensation for the content creators whose work trained these genAI models, and consumer concerns about identifying synthetic media that looks remarkably realistic.
“What surprised me most was what seemed like a missed opportunity to address two related hot-button issues that create friction in the market,” said Frank. “… Google has tackled these issues elsewhere, but it was disappointing not to see them in the keynote.”