Article: Incorporating AI into your design workflow today.
If you haven’t been living under a rock for the last three years, you’re probably well aware of generative AI.
Even the most sheltered internet user has, by now, read all about the power, disruption, risks, and benefits of tools like ChatGPT, Runway, DALL-E, and the rest.
If you work in a creative field or run a company with creative needs, you’ve probably read that the robots are going to take all the jobs, destroy the industry, and rustle your cattle, all inside of two years. You’ve probably also heard that these tools will make your job easier, faster, and more efficient. Companies may be led to believe that individual designers or creative teams are no longer needed. When the dust finally settles in the creative industry, it’s likely the truth will land somewhere in between. Except for the cattle-rustling part.
I’m not here to tell you to run for the hills or fight back against Skynet. I also cannot say that you have nothing to worry about, the robots are our friends, your job is safe, and the savings that the company makes from using AI will go directly into your Christmas bonus.
What I am here to tell you is that generative AI will have a massive and irreversible impact, both good and bad, on every creative field. It’s going to happen, and there’s no turning it back; these tools are just too powerful to ignore, avoid, or try to legislate into oblivion.
Robotic life finds a way.
Mercifully, there are tremendous opportunities to benefit from these tools, letting you and your team reap the rewards of both experienced designers and fast-tracked results. Those who embrace AI now will be the industry leaders of the future.
So, here are some tips and ideas you can use today to start incorporating AI into your design workflow.
Adobe Generative Fill & Expand
We’re starting here because if you’re already using Photoshop then you have access to this right now. No need to even make a new account.
I love generative fill. It’s so fast and easy to do fiddly tasks that could take a designer hours to do themselves, or minutes to explain to a design intern who would have it back to you by the end of the day.
Ok, hours is a big exaggeration, but what used to take minutes can now take seconds, and if you’ve ever worked at a design agency that tried to get you to bill your bathroom breaks to the client, you know those minutes add up.
Generative Expand
Generative expand is likely the first one you’ll use. Let’s say you just need a little extra air around a subject, or the photographer cropped in a little closer than you want. This is the tool you’re looking for; you’ll find it as a dropdown amongst your regular crop tool options.
Generative Fill
Adding things is great, but sometimes you need to take something away. This is where I find myself using generative fill the most. In this example, I removed some text from a stock photo that needed to be location neutral. This took about 40 seconds rather than three minutes with the clone brush. Again, those minutes add up!
In my experience, generative fill shines brightest when editing existing elements. Below you’ll see a stock photo of a cactus and Adobe’s attempt to make it bloom. The result is acceptable depending on your use case and good enough for rapid concepting. However, when we look at examples from other companies, we’ll see that this is an area where Adobe can improve.
Adobe has a number of other stand-alone and integrated AI products for design, but we’ll take a look at those in another article.
Natural language generation
Natural language generation, or generative text, is arguably the simplest way to start using AI today. The only reason I put it second is that you have to create a new account.
ChatGPT from OpenAI is the big player in this area. There are other options, but we’re mostly going to focus on ChatGPT today.
You may think, “Text and grammar are copywriter stuff, nothing to do with design.” But there are so many possibilities for how you and your team can use this tool, from generating dummy copy to helping construct and write stakeholder pitches.
It’s also great at breaking you out of your creative block when you have a looming deadline. Bounce all your dumb ideas off it and distill a good idea out of the responses.
To prove my point about how useful it can be, I asked the robot itself how useful it could be to a design workflow.
While this is an extremely powerful tool, you’ll need to create your own policies for its use on your team and make sure everyone is aware of its quirks. You’ll notice in the example above that I tell it to use less detail and to keep it short. ChatGPT in particular can be overly verbose, and it can be a real trap to generate more content than you and your team have the ability to thoroughly review.
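If your team ever wants to script this rather than work in the chat window (say, batch-generating dummy copy for a set of mockups), the same brevity rules apply. Below is a minimal sketch using OpenAI’s official Python SDK; it assumes you have an API key set in your environment, and the model name, prompts, and word count are placeholders to swap for your own.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Ask for short, neutral placeholder copy and cap the response length,
# so you never generate more text than the team can actually review.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model your plan includes
    messages=[
        {
            "role": "system",
            "content": "You write short, neutral placeholder copy for design mockups.",
        },
        {
            "role": "user",
            "content": "Write a 25-word hero-banner blurb for a fictional travel app.",
        },
    ],
    max_tokens=80,  # hard cap on verbosity
)

print(response.choices[0].message.content)
```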
UX teams and other design specializations that create a lot of documentation may already be thinking of all the possibilities (user personas on tap, anyone?). However, after only a little experience with tools like this you’ll see that they all tend to have their own voice; they “sound” like AI. That’s why I, an actual human, am writing this article and not simply posting the response that ChatGPT would have generated in 13 seconds.
I find GPT-4o to be a lot more succinct than GPT-4o Mini. Stay tuned to this blog for a rundown of all the GPT models.
Generative images
This probably seems like the most intuitive example of how AI can help designers and creative teams. We covered it a little in the first section with Adobe’s offering, but things get wilder when using other apps.
If you’re an illustrator or similar, the first time you use one of these programs to create something you’ll probably feel a mix of things all at once. First you’ll be amazed that you just did eight hours of work in five seconds, then you’ll feel like you’re cheating, then your imposter syndrome will hit record highs and you'll wonder if you just became redundant.
Thankfully this rollercoaster won’t last, and you’ll soon see where generative imaging hits its limitations. Working with AI to generate images is a skill set in its own right and not just a cheap way to replace designers. You’ll have to learn which programs do which tasks best, how to combine them, and what prompts get the best results. It’s up to individual designers and creative leads to decide how much to develop this skill set, but wise teams will begin adopting these tools, and industry leaders will start now.
If you’re already using ChatGPT, the easiest way to get going is DALL-E. It’s now built into ChatGPT, so all you have to do is ask it to generate a picture for you. The free tier gets you three images a day.
It’s very powerful and the results are high quality, but it takes a lot of finessing with prompts to get exactly what you want. A final production-level image can be difficult to obtain, and asking it to make edits causes it to start over with an entirely new image. If you want edits, you’re going to have to make them yourself.
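If you’d rather pull images into a pipeline than go through the chat window, DALL-E is also reachable through OpenAI’s image endpoint. Here’s a minimal sketch with the official Python SDK; the model name, prompt, and size are placeholders, and API image generation is billed per image rather than covered by the free chat tier, so treat it as a starting point, not a production setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# One-shot generation; edits still mean re-prompting or fixing it yourself.
result = client.images.generate(
    model="dall-e-3",  # placeholder model name
    prompt="A saguaro cactus in full bloom at golden hour, photorealistic",
    size="1024x1024",  # other sizes are available
    n=1,
)

print(result.data[0].url)  # temporary URL to the generated image
```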
The two images below were both generated in DALL-E. The mythical creature on the left shows the awesome power of generative imaging used in creative endeavors. The bowling alley on the right demonstrates one of many reasons why we’re still paying human illustrators.
The example below shows a good combo of DALL-E and Photoshop Generative Fill used together. The image on the left was generated in DALL-E using a prompt similar to the flowering cactus we saw in the first section. However, the dimensions weren’t correct for the final use. The image on the right shows how Photoshop was able to resize the canvas and fill in the blanks in just a few seconds.
Generative Video
If you work in video, motion, or animation you’ll immediately see the benefits and also the limitations of generative video. After diving in, you’ll also see that this is where generative AI gets expensive.
OpenAI has shown some very impressive AI-generated videos with their application Sora. However, it is still in development and very few people have access to it.
If you want to dabble first without spending any budget, ImagineArt may be a good place to start, as they offer regenerating credits. Though I find that to get a quality result you will spend a lot of time finessing your prompts, and those free credits burn up fast. Once you’re on a paid plan, those used credits become real dollars. Its options can be somewhat limited as well.
Runway is a robust tool that gives you a lot of control and offers some strong features. It has not just the ability to generate crazy-looking monsters or celebrities eating spaghetti, but also tools you’ll use in your existing projects today, like frame interpolation, upscaling, finding and blurring faces, and a lot more.
I also find that it tends to get what I’m asking for faster than some of the others, which saves you precious compute time.
If you want to try before you buy, though, be aware that Runway’s free credits don’t renew: once you use them up, you have to pay to get more.
Fully AI-generated videos can deliver solid ROI for the right projects, especially when you need unique, license-free stock-style clips. However, to truly elevate the quality and ensure seamless integration with a larger campaign or creative work, a skilled hand is still required. As the technology currently stands, I think its immediate power for video producers lies in creating concept videos and pre-vis to present to stakeholders and creatives.
Wrap up
The tools we looked at today were pretty simple, but they can be applied and combined in intricate ways. How you use them is really up to you, but there are clear benefits to multiplying your output, making the most of your team’s talent, and freeing up your expertise for greater things. The field is moving fast but is still wide open. If you keep learning and experimenting, your team could be the first to invent and use entirely new techniques. You now have digital interns, and your three-person crew can produce the work of five.
These were just a couple of basic ways to get going with AI right now. There are a lot of other ways you can use AI in design work, increasing in both complexity and required skill. I hope to cover many of them in upcoming articles, so stay tuned!