
How to run Stable Diffusion to make awesome AI-generated art

Stable Diffusion is an exciting and increasingly popular AI generative art tool that takes simple text prompts and creates incredible images seemingly from nothing. While there are controversies over where it gets its inspiration from, it's proven to be a great tool for generating character model art for RPGs, wall art for those unable to afford artist commissions, and cool concept art to inspire writers and other creative endeavors.

If you're interested in exploring how to use Stable Diffusion on a PC, here's our guide on getting started.

If you're more of an Apple fan, we also have a guide on how to run Stable Diffusion on a Mac, instead.


Difficulty

Hard

Duration

20 minutes

What You Need

  • Desktop PC with a modern graphics card with at least 8GB of VRAM

  • An admin account that lets you install applications

How to run Stable Diffusion on your PC

You can use Stable Diffusion online easily enough by visiting any of the many online services, like StableDiffusionWeb. If you run Stable Diffusion yourself, though, you can skip the queues and use it as many times as you like, with the only delay being how fast your PC can generate the images.

Here's how to run Stable Diffusion on your PC.

Step 1: Download the latest version of Python from the official website. At the time of writing, this is Python 3.10.10. Look at the file links at the bottom of the page and select the Windows Installer (64-bit) version. When it's ready, install it like you would any other application.

Note: Your web browser may flag this file as potentially dangerous, but as long as you're downloading from the official website, you should be fine to ignore that.

Step 2: Download the latest version of Git for Windows from the official website. Install it as you would any other application, and keep all settings at their default selections. You will also likely need to add Python to the PATH variable. To do so, follow the instructions from Educative.io, here.
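Before moving on, it's worth confirming that Windows can actually find both tools from a fresh Command Prompt or Git Bash window, since the webui launcher relies on them being on the PATH. A quick sanity check (the exact version numbers printed will vary with what you installed):

```shell
# Both commands should print a version number. An error such as
# "'python' is not recognized" means the PATH step above still needs doing.
python --version 2>/dev/null || python3 --version
git --version
```

If either command errors out, close the window, re-check the PATH instructions above, and open a fresh terminal before trying again.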

Step 3: Download the Stable Diffusion project file from its GitHub page by selecting the green Code button, then select Download ZIP under the Local heading. Extract it somewhere memorable, like the desktop, or in the root of the C:\ directory.

Downloading the Stable Diffusion codebase from GitHub.

Step 4: Download the checkpoint file “768-v-ema.ckpt” from the AI company Hugging Face, here. It's a large download, so it might take a while to complete. When it does, move it into the "stable-diffusion-webui\models\Stable-diffusion\" folder. You'll know you've found the right folder because it contains a text file called "Put Stable Diffusion checkpoints here."

Step 5: Download the config yaml file (you might need to right-click the page and select Save as) and place it in the same folder as the checkpoint file. Rename it to match the checkpoint's name, 768-v-ema.yaml, and remove any .txt extension your browser added when saving.
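If you're comfortable in Git Bash, the file placement in steps 4 and 5 can be sketched as follows. Everything here is illustrative: the install location is an assumption (substitute wherever you extracted the ZIP), and the touch commands stand in for the files you actually downloaded, so the sketch is self-contained:

```shell
# Assumed extract location -- replace with your own path if different.
WEBUI="$HOME/stable-diffusion-webui"
MODELS="$WEBUI/models/Stable-diffusion"

# Create the models folder if it doesn't exist yet.
mkdir -p "$MODELS"

# These stand in for the real downloads: in practice you would move the
# downloaded checkpoint and the renamed config file here instead.
touch "$MODELS/768-v-ema.ckpt"   # the checkpoint from step 4
touch "$MODELS/768-v-ema.yaml"   # the renamed config file from step 5

# Both files should be listed side by side.
ls "$MODELS"
```

The key point is simply that the .ckpt and .yaml files share the same base name and sit next to each other in the Stable-diffusion models folder, which is how the web UI pairs them up.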

Installing Stable Diffusion checkpoints.

Step 6: Navigate back to the stable-diffusion-webui folder and run the webui-user.bat file. Wait until all the dependencies are installed. This can take some time, even on fast computers with high-speed internet, but the process is logged in the window, so you can watch its progress in real time.

Step 7: Once it's finished, you should see a Command Prompt window like the one above, with a URL at the end similar to "http://127.0.0.1:7860". Copy and paste that URL into a web browser of your choice, and you should be greeted with the Stable Diffusion web interface.

Installing Stable Diffusion on a local PC.

Step 8: Input your image text prompt and adjust any of the settings you like, then select the Generate button to create an image with Stable Diffusion. You can adjust the resolution of the image using the Width and Height settings, or increase the sampling steps for a higher-quality image. There are other settings that can change the end result of your AI artwork, too. Play around with it all to see what works for you.

If it all works out, you should have as much AI-generated art as you desire without needing to go online or queue with other users.

Running a local version of Stable Diffusion in your web browser.

Now that you've had a chance to play around with Stable Diffusion, how about trying out the ChatGPT natural language chatbot AI? If you're interested, here's how to use ChatGPT yourself.

Jon Martindale
Jon Martindale is a freelance evergreen writer and occasional section coordinator, covering how to guides, best-of lists, and…