Stable Diffusion makes it easy to generate images from text prompts, but the hosted web services cost money, lock in your data, and can change overnight. This guide shows you how to run a ready-made Stable Diffusion web UI on your own machine so you can generate images privately, pay once (or not at all), and keep full control over your workflow.
What you’ll get
- A local web UI you can run on a laptop or desktop GPU
- Support for Stable Diffusion 1.5 (low VRAM) and SDXL (high quality)
- Offline image generation without uploading prompts or art to a cloud service
- A path to batch rendering, scripts, or hooking into local tools (VS Code, Obsidian, Blender)
What you need
Hardware
- GPU with 6–8GB VRAM: Stable Diffusion 1.5 (512×512) runs on 6–8GB, but will be slow. A 10–16GB card (RTX 4070/4080) is much smoother.
- GPU with 12+ GB VRAM: Needed for SDXL 1.0 at 1024×1024. Without it, use Stable Diffusion 1.5 or run 512×512.
- Disk space: Plan for 10–20GB for models + output.
Software
- A recent Linux distro (Ubuntu 22.04+ recommended) or Windows 10/11.
- Python 3.10 (the AUTOMATIC1111 web UI targets 3.10.x; newer versions may not work with its installer) or Docker (if you prefer container isolation).
- A browser to access the local web UI.
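Before installing, it can save time to confirm the basics are in place. A minimal preflight sketch (the version threshold and tool list here are assumptions based on the requirements above, not an official checker):

```python
import shutil
import sys

def preflight(min_python=(3, 10), tools=("git",)):
    """Return a list of problems; an empty list means the basics look OK."""
    problems = []
    if sys.version_info < min_python:
        problems.append(
            f"Python {min_python[0]}.{min_python[1]}+ required, "
            f"found {sys.version.split()[0]}"
        )
    for tool in tools:
        if shutil.which(tool) is None:
            problems.append(f"'{tool}' not found on PATH")
    return problems

if __name__ == "__main__":
    for problem in preflight():
        print("WARNING:", problem)
```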
Step 1: Pick a web UI (Automatic1111 is the easiest start)
The most popular local option is the AUTOMATIC1111 Stable Diffusion Web UI. It wraps the model, tokenizer, and GPU runtime into a single web interface.
Install from source (Linux)
cd ~
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
python launch.py --xformers
If you don’t have xformers installed, the web UI will still work, but generating large images (1024×1024) will be noticeably slower.
Install on Windows (PowerShell)
cd $HOME
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
.\webui-user.bat
To pass flags such as --xformers, edit webui-user.bat and add them to the COMMANDLINE_ARGS line before launching.
When the server starts, it will print a local URL (e.g., http://127.0.0.1:7860). Open it in your browser.
Step 2: Download a model checkpoint
Stable Diffusion doesn’t include weights by default. The web UI will create a models/Stable-diffusion folder; drop the .ckpt or .safetensors file there.
Recommended models
- Stable Diffusion 1.5 (RunwayML) — works on most consumer GPUs.
- Stable Diffusion XL 1.0 — higher fidelity, requires 12+ GB VRAM.
Download the model from Hugging Face (requires a free account and accepting the license):
- https://huggingface.co/runwayml/stable-diffusion-v1-5
- https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
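If you'd rather script the download than click through the browser, the huggingface_hub library can fetch a checkpoint straight into the web UI's model folder. A sketch, assuming the SD 1.5 repo id and the commonly used v1-5-pruned-emaonly.safetensors filename (check the model page's file listing for the exact name):

```python
from pathlib import Path

def checkpoint_dir(webui_root):
    """The folder AUTOMATIC1111 scans for .ckpt/.safetensors files."""
    return Path(webui_root) / "models" / "Stable-diffusion"

def fetch_sd15(webui_root):
    # Imported lazily so the path helper works without huggingface_hub.
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub
    # repo_id and filename are assumptions; verify them on the model page
    return hf_hub_download(
        repo_id="runwayml/stable-diffusion-v1-5",
        filename="v1-5-pruned-emaonly.safetensors",
        local_dir=str(checkpoint_dir(webui_root)),
    )

# usage: fetch_sd15(Path.home() / "stable-diffusion-webui")
```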
Step 3: Configure for your GPU
In the web UI, click Settings → Stable Diffusion → Stable Diffusion checkpoint and select the model you placed in models/Stable-diffusion.
Low VRAM tips
- Use the --medvram or --lowvram flags when starting launch.py:
python launch.py --medvram (best balance for 8GB)
python launch.py --lowvram (for 6GB or less)
- Use half-precision (float16) to save VRAM.
- Lower the image size (512×512 or 640×640) to avoid out-of-memory errors.
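The VRAM tips above amount to a simple decision table. This helper sketches it as code; the cutoffs are heuristics taken from this guide, not hard limits your card enforces:

```python
def pick_launch_settings(vram_gb):
    """Map GPU VRAM (GB) to a launch flag and a safe starting resolution."""
    if vram_gb >= 12:   # enough for SDXL at 1024x1024
        return {"flag": None, "size": 1024}
    if vram_gb >= 8:    # SD 1.5 with --medvram is the best balance
        return {"flag": "--medvram", "size": 640}
    # 6GB or less: --lowvram and small images to avoid out-of-memory errors
    return {"flag": "--lowvram", "size": 512}

for vram in (6, 8, 16):
    print(vram, "GB ->", pick_launch_settings(vram))
```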
Step 4: Generate images (replace Midjourney)
- Open the web UI in your browser.
- Set the prompt, width/height, and sampling method.
- Click Generate.
The interface lets you batch prompts, use templates, and save results locally. You can also export to a local folder and write scripts to automate rendering.
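Scripted rendering works because AUTOMATIC1111 exposes an HTTP API when launched with the --api flag. A minimal sketch using only the standard library, assuming the default port and the /sdapi/v1/txt2img endpoint (the payload fields shown are the common ones; the API accepts many more):

```python
import base64
import json
import urllib.request

API = "http://127.0.0.1:7860/sdapi/v1/txt2img"  # requires launching with --api

def build_payload(prompt, width=512, height=512, steps=20):
    """Assemble the JSON body for a txt2img request."""
    return {"prompt": prompt, "width": width, "height": height, "steps": steps}

def render(prompt, out_path="out.png"):
    req = urllib.request.Request(
        API,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # the API returns base64-encoded PNGs in result["images"]
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(result["images"][0]))

# usage (with the web UI running):
#   for i, prompt in enumerate(["a misty forest", "a neon city at night"]):
#       render(prompt, f"batch_{i:03}.png")
```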
Optional: Use InvokeAI for a polished “app-like” experience
If you want a more opinionated install that manages models and provides a cleaner UI, try InvokeAI:
InvokeAI is distributed as a Python package, so you don't need to clone the repository:
python -m pip install InvokeAI
invokeai-web
InvokeAI includes a model manager and comes with a small set of licensed models.
What This Means
Running Stable Diffusion locally gives you:
- Privacy: Prompts and images never leave your machine.
- Cost control: One-time compute (your GPU) instead of monthly subscription fees.
- Resilience: You won’t lose access if a service changes pricing, adds watermarking, or shuts down.
The downside is you need a capable GPU and some maintenance (updates, model downloads). But for frequent creators, it can save hundreds of dollars per year.
What You Can Do
- Try the same prompt in Midjourney or Runway and compare output quality and speed.
- Keep your model files on an encrypted disk if you’re concerned about theft.
- Use a tool like rclone or rsync to back up generated images automatically.
- Experiment with different samplers (Euler, DPM++ 2M Karras) to see which fits your style.
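If you'd rather stay in Python than configure rclone or rsync, a one-way copy of new images takes a few lines with the standard library. A sketch, assuming the web UI's default outputs folder (the paths in the usage line are placeholders):

```python
import shutil
from pathlib import Path

def backup_new_images(src, dst, exts=(".png", ".jpg")):
    """Copy image files from src to dst, skipping ones already backed up."""
    src, dst = Path(src), Path(dst)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in src.rglob("*"):
        if f.is_file() and f.suffix.lower() in exts:
            target = dst / f.relative_to(src)
            if not target.exists():
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)  # copy2 preserves timestamps
                copied.append(target)
    return copied

# usage: backup_new_images("~/stable-diffusion-webui/outputs", "/mnt/backup/sd")
```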