In this LevelUp episode, Melissa sits down with our very own maestro of creatives, Dan Greenberg, Chief Design Officer at ironSource. They talk AI image generation: what it is, how it can be used now, and what it needs to truly change the game.
Tune in here or keep reading for the highlights:
Where we are now
"There are a few emerging models which look really promising and I think create a lot of excitement around our industry and the creative community in general. When we look at image generation we're talking mainly about Midjourney and Dall-e from OpenAI that’s got to a point where you can actually prompt it to create an image.
The images themselves are so good and hit the right tone that they spark the imagination of so many people - whether you're a creator or someone who works with creators. In general, I think we're just at the beginning.
I usually compare it to dot matrix printers, the first printers that could spit out an image. The image was just made out of dots. It was very basic and the technology was crude, but you could see where it was going. I think that's where we are now. We know it's gonna be a very big part of our future because we can see the technology is there."
The role of the creator in an AI world and the role of AI in the creative world
"I agree with the notion that if you want to create something truly unique or special you have to be the person who actually understands how the thing works. In this use case, generative AI will always be a tool and not replace people. I think there are a few places generative AI will change some things.
First of all, one of the biggest impacts it could have is on speed. Sometimes you're looking for quantity or a shorter time to market, especially in our world of performance-based marketing. It doesn't have to be the most polished, creative, or best-written banner or whatever. It just needs to perform. So a computer-generated creative is going to be, by definition, faster.
It's gonna produce faster, and I think that will unlock interesting use cases where you can have a machine that spits out more and more creatives."
What AI can, and can’t, do for your app marketing
"The main question that you need to take into consideration is which ad format is ripe for generative AI. ChaGPT can write an amazing paragraph and Midjourney or DALL-E can create a beautiful still image, but most of UA is still videos and interactives. So it doesn't really apply. Even with an image, the nearest use case that we have today is icons.
It's still not production ready - it's 90% accurate. We did a cool experiment creating app icons for games, where we prompted the AI with a brief of a couple running away from zombies in a car.
What it generated was a very compelling image. The composition was great, and it showed a scared couple in a car. But when you look at the details, both of them were standing in the car - they were literally running from zombies inside the car. The model didn't really understand how to interpret the brief correctly. It might work, but would you publish that as your app icon?
If you're a smaller game and you only care about performance, you might do that. But still, 90% accurate, or 85% accurate, is very different from 100% accurate. That jump from 90% to even 95% accurate is very big, and it'll take time to close that gap."
The next level of image generation is coming, but it needs some help
"Right now what we're seeing is a general demonstration of an amazing technology that isn't really specialized fitted into a specific use case. Once that will be done correctly, I think it'll become very interesting and practical and start being part of our day-to-day. Again, it's not that far. But, where it'll become interesting is when it'll be applied to specific use cases. There are a lot of companies that are trying to crack that, how to create specific use cases. Most of these models are gonna be commodified.
One of the first things that needs to be addressed is how I feed my characters and my IP into it. These models generate amazing generic images, and that's actually their strength - creating something new out of thin air. But if you're looking at it from a production standpoint, at the actual use cases, the question is: how do I train the models to use my own characters? I think that's another immediate, near-future problem that needs to be solved before this really gets adopted."
AI is already a big part of our lives
"We forget about all of the AI tools that we're using all the time. All of us probably use Google Photos. Once in a while, you probably get this very cool video of your kids growing through the years or whatever. It’s super emotional. You see it, you cry a little bit and it triggers a response.
What they're doing is a lot of small, clever stuff that you don't notice - for example, choosing only the photos that have smiles in them, creating a small fade between them, and choosing the right music.
What you're seeing, essentially, is a video that is auto-generated by AI and triggers an emotional response. The use case is making sure that you continue to use Google Photos and store your photos there so they can train their own models.
You can see that we already live in an era where the products around us use machine learning to generate an emotional response from existing content. It's an amazing piece of technology that works behind the scenes. We never notice it because it's so ubiquitous."