Daily Generations: AI Air Shootout
Created with: DALL-E 3
I missed the last two days of Daily Generations while I was traveling, so I decided to turn the trip into inspiration for today’s entry.
I also took the opportunity to show the differences between generative image models. The image above is from DALL-E 3, the one below is from Adobe Photoshop, and the last was made with Adobe Firefly.
DALL-E 3 has done the best here. Its overall understanding of the prompt was good, and it nailed what I was asking for stylistically.
The passenger was meant to be in his seat looking down the aisle, and it had a hard time with that. I also feel it’s the model most likely to suddenly go way off base while iterating; in this case it kept wanting to face the seats in the wrong direction.
Created with: Adobe Photoshop
It’s not a totally fair comparison to put Photoshop up against DALL-E 3 in this kind of test. Photoshop’s Generative Fill was designed to make edits to existing images, and it is very good at that.
However, it falls behind when generating from scratch. The image is okay, but it completely ignored the style prompts and did not include the passenger. The iterations that did include passengers rendered them as vague, ghostly blobs. It may look like you took a photo and applied a cheesy filter, but we’re asking it to operate way outside its comfort zone here.
Firefly made a very nice image (below), but not the image I asked for. In its defense, it did tell me that parts of my prompt were against its guidelines and would be ignored. I assume it didn’t like the references to Georges Remi and Albert Uderzo as style inspiration, but there’s no way to know for sure. It also forgot to include my passenger.
It’s a beautiful image, though. I really like the lighting, and it seems to do a better job of hiding the weird “AI smudges” than DALL-E 3.
Of the three, I have the least experience with Firefly, but stay tuned for a more in-depth look in a future article.
Created with: Adobe Firefly