
How to run Stable Diffusion to make awesome AI-generated art

Stable Diffusion is an exciting and increasingly popular AI generative art tool that takes simple text prompts and creates incredible images seemingly from nothing. While there are controversies over where it gets its inspiration from, it's proven to be a great tool for generating character model art for RPGs, wall art for those unable to afford artist commissions, and cool concept art to inspire writers and other creative endeavors.

If you're interested in exploring how to use Stable Diffusion on a PC, here's our guide on getting started.

If you're more of an Apple fan, we also have a guide on how to run Stable Diffusion on a Mac, instead.


Difficulty: Hard

Duration: 20 minutes

What You Need

  • Desktop PC with a modern graphics card that has at least 8GB of VRAM

  • An admin account that lets you install applications

How to run Stable Diffusion on your PC

You can use Stable Diffusion online easily enough by visiting any of the many online services, like StableDiffusionWeb. If you run Stable Diffusion yourself, though, you can skip the queues and use it as many times as you like, with the only delay being how fast your PC can generate the images.

Here's how to run Stable Diffusion on your PC.

Step 1: Download the latest version of Python from the official website. At the time of writing, this is Python 3.10.10. Look at the file links at the bottom of the page and select the Windows Installer (64-bit) version. When it's ready, install it like you would any other application.
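If you want to double-check the install before moving on, you can open a Command Prompt and ask Python to report its version. The exact number you see will depend on the release you downloaded.

rem Confirm Python is installed and reachable from the command line
python --version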

Note: Your web browser may flag this file as potentially dangerous, but as long as you're downloading from the official website, you should be fine to ignore that.

Step 2: Download the latest version of Git for Windows from the official website. Install it as you would any other application, keeping all settings at their default selections. You will also likely need to add Python to the PATH environment variable; Educative.io has step-by-step instructions for doing so.
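To confirm both tools are set up, open a new Command Prompt and run the quick checks below. If the second command returns nothing, Python isn't on the PATH yet; re-running the Python installer and ticking its Add python.exe to PATH option is usually the easiest fix.

rem Confirm Git installed correctly
git --version

rem Show where Windows finds Python; no output means it isn't on the PATH yet
where python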

Step 3: Download the Stable Diffusion project file from its GitHub page by selecting the green Code button, then select Download ZIP under the Local heading. Extract it somewhere memorable, like the desktop, or in the root of the C:\ directory.
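Alternatively, since Git is now installed, you can clone the project from a Command Prompt instead of downloading the ZIP. The URL below assumes the widely used AUTOMATIC1111 stable-diffusion-webui repository, which the folder names later in this guide point to; adjust the destination to wherever you want the files to live.

rem Clone the Stable Diffusion web UI into the root of the C:\ drive (example location)
cd /d C:\
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git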

Downloading the Stable Diffusion codebase from GitHub.

Step 4: Download the checkpoint file “768-v-ema.ckpt” from the AI company Hugging Face. It's a large download, so it might take a while to complete. When it does, move it into the "stable-diffusion-webui\models\Stable-diffusion\" folder. You'll know you've found the right folder because it contains a text file called "Put Stable Diffusion checkpoints here."
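If you'd rather do this from a Command Prompt, moving the file into place looks something like the example below. Both paths are placeholders: use wherever your browser saved the download and wherever you extracted or cloned the project.

rem Move the downloaded checkpoint into the web UI's model folder (example paths)
move "%USERPROFILE%\Downloads\768-v-ema.ckpt" "C:\stable-diffusion-webui\models\Stable-diffusion\"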

Step 5: Download the config YAML file (you might need to right-click the page and select Save as) and place it in the same folder as the checkpoint file. Rename it so it has the same base name as the checkpoint but with a .yaml extension (768-v-ema.yaml), and remove any .txt extension your browser added.
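The rename can also be done from a Command Prompt. The source filename below is only a guess at what your browser saved (the config file for this checkpoint is commonly distributed as v2-inference-v.yaml, sometimes with a .txt extension added on download), so check the actual name of your downloaded file first.

rem Rename the downloaded config so it sits next to the checkpoint with a matching name (example source name)
cd /d C:\stable-diffusion-webui\models\Stable-diffusion
ren "v2-inference-v.yaml.txt" "768-v-ema.yaml"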

Installing Stable Diffusion checkpoints.

Step 6: Navigate back to the stable-diffusion-webui folder and run the webui-user.bat file. Wait until all the dependencies are installed. This can take some time, even on fast computers with a high-speed internet connection, but the installer logs its progress in the Command Prompt window so you can follow along in real time.
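If your graphics card struggles or the launch fails with an out-of-memory error, webui-user.bat can be edited in Notepad to pass extra launch arguments to the web UI. The --medvram flag is a commonly used option in the AUTOMATIC1111 project for cards with limited VRAM; treat this as an optional tweak, not a required step.

rem Inside webui-user.bat: optional launch arguments go on this line
set COMMANDLINE_ARGS=--medvram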

Step 7: Once it's finished, the Command Prompt window should display a URL at the end similar to "http://127.0.0.1:7860". Copy and paste that URL into a web browser of your choice, and you should be greeted by the Stable Diffusion web interface.

Installing Stable Diffusion on a local PC.

Step 8: Input your image text prompt and adjust any of the settings you like, then select the Generate button to create an image with Stable Diffusion. You can adjust the resolution of the image using the Width and Height settings, or increase the sampling steps for a higher-quality image. There are other settings that can change the end result of your AI artwork, too. Play around with it all to see what works for you.

If it all works out, you should have as much AI-generated art as you desire without needing to go online or queue with other users.

Running a local version of Stable Diffusion in your web browser.

Now that you've had a chance to play around with Stable Diffusion, how about trying out the ChatGPT natural language chatbot AI? If you're interested, here's how to use ChatGPT yourself.

Jon Martindale