

Stable Diffusion web UI

A browser interface based on the Gradio library for Stable Diffusion.

Check the custom scripts wiki page for extra scripts developed by users.


Detailed feature showcase with images:

  • Original txt2img and img2img modes
  • One click install and run script (but you still must install python and git)
  • Outpainting
  • Inpainting
  • Color Sketch
  • Prompt Matrix
  • Stable Diffusion Upscale
  • Attention, specify parts of text that the model should pay more attention to
    • a man in a ((tuxedo)) - will pay more attention to tuxedo
    • a man in a (tuxedo:1.21) - alternative syntax
    • select text and press ctrl+up or ctrl+down to automatically adjust attention to selected text (code contributed by anonymous user)
  • Loopback, run img2img processing multiple times
  • X/Y plot, a way to draw a 2 dimensional plot of images with different parameters
  • Textual Inversion
    • have as many embeddings as you want and use any names you like for them
    • use multiple embeddings with different numbers of vectors per token
    • works with half precision floating point numbers
    • train embeddings on 8GB (also reports of 6GB working)
  • Extras tab with:
    • GFPGAN, neural network that fixes faces
    • CodeFormer, face restoration tool as an alternative to GFPGAN
    • RealESRGAN, neural network upscaler
    • ESRGAN, neural network upscaler with a lot of third party models
    • SwinIR and Swin2SR (see here), neural network upscalers
    • LDSR, Latent diffusion super resolution upscaling
  • Resizing aspect ratio options
  • Sampling method selection
    • Adjust sampler eta values (noise multiplier)
    • More advanced noise setting options
  • Interrupt processing at any time
  • 4GB video card support (also reports of 2GB working)
  • Correct seeds for batches
  • Live prompt token length validation
  • Generation parameters
    • parameters you used to generate images are saved with that image
    • in PNG chunks for PNG, in EXIF for JPEG
    • can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
    • can be disabled in settings
    • drag and drop an image/text-parameters to promptbox
  • Read Generation Parameters Button, loads parameters in promptbox to UI
  • Settings page
  • Running arbitrary python code from UI (must run with --allow-code to enable)
  • Mouseover hints for most UI elements
  • Possible to change defaults/min/max/step values for UI elements via text config
  • Random artist button
  • Tiling support, a checkbox to create images that can be tiled like textures
  • Progress bar and live image generation preview
  • Negative prompt, an extra text field that allows you to list what you don't want to see in the generated image
  • Styles, a way to save part of prompt and easily apply them via dropdown later
  • Variations, a way to generate the same image but with tiny differences
  • Seed resizing, a way to generate the same image but at a slightly different resolution
  • CLIP interrogator, a button that tries to guess prompt from an image
  • Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
  • Batch Processing, process a group of files using img2img
  • Img2img Alternative, reverse Euler method of cross attention control
  • Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
  • Reloading checkpoints on the fly
  • Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
  • Custom scripts with many extensions from community
  • Composable-Diffusion, a way to use multiple prompts at once
    • separate prompts using uppercase AND
    • also supports weights for prompts: a cat :1.2 AND a dog AND a penguin :2.2
  • No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
  • DeepDanbooru integration, creates danbooru style tags for anime prompts
  • xformers, major speed increase for select cards: (add --xformers to commandline args)
  • via extension: History tab: view, navigate, and delete images conveniently within the UI
  • Generate forever option
  • Training tab
    • hypernetworks and embeddings options
    • Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
  • Clip skip
  • Use Hypernetworks
  • Use VAEs
  • Estimated completion time in progress bar
  • API
  • Support for dedicated inpainting model by RunwayML.
  • via extension: Aesthetic Gradients, a way to generate images with a specific aesthetic by using CLIP image embeds
  • Stable Diffusion 2.0 support - see wiki for instructions
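
The `(text:weight)` attention syntax above can be pictured with a toy parser. This is a hypothetical illustration of how such weights might be tokenized, not the webui's actual implementation (the real grammar also handles nesting, escapes, and `[...]` de-emphasis):

```python
import re

# Toy illustration of the attention syntax described above, NOT the
# webui's real parser: ((word)) boosts attention (here 1.1 per paren
# pair) and (word:1.21) sets an explicit weight; plain text keeps 1.0.
def parse_emphasis(prompt):
    """Return (text, weight) pairs for a simplified, non-nested grammar."""
    pattern = r'\(([^():]+):([\d.]+)\)|\(\(([^()]+)\)\)|([^()]+)'
    out = []
    for explicit, weight, doubled, plain in re.findall(pattern, prompt):
        if explicit:
            out.append((explicit, float(weight)))
        elif doubled:
            out.append((doubled, 1.1 * 1.1))  # two nested paren pairs
        elif plain.strip():
            out.append((plain.strip(), 1.0))
    return out
```

For example, `parse_emphasis("a man in a (tuxedo:1.21)")` yields `tuxedo` with weight 1.21 and the surrounding text with weight 1.0.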
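
The generation-parameters feature above embeds the settings in the image file itself ("in PNG chunks for PNG"). A minimal stdlib sketch of pulling such a chunk back out — the tEXt chunk type and the "parameters" keyword are assumptions for illustration:

```python
import struct

# Sketch of extracting generation parameters from a PNG file's text
# chunks. Assumes (for illustration) that the UI writes a tEXt chunk
# whose keyword is "parameters"; tEXt payloads are Latin-1 per the
# PNG specification.
def read_png_parameters(data: bytes):
    """Return the 'parameters' tEXt payload from raw PNG bytes, or None."""
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    pos = 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            keyword, _, text = body.partition(b"\x00")
            if keyword == b"parameters":
                return text.decode("latin-1")
        pos += 8 + length + 4  # skip payload and 4-byte CRC
        if ctype == b"IEND":
            break
    return None
```

This mirrors what the PNG info tab does conceptually: walk the chunk stream and recover the stored parameter string.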

Installation and Running

Make sure the required dependencies are met and follow the instructions available for both NVidia (recommended) and AMD GPUs.

Alternatively, use online services such as Google Colab.

Automatic Installation on Windows

  1. Install Python 3.10.6, checking "Add Python to PATH"
  2. Install git.
  3. Download the stable-diffusion-webui repository, for example by running git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
  4. Place model.ckpt in the models directory (see dependencies for where to get it).
  5. (Optional) Place GFPGANv1.4.pth in the base directory, alongside webui.py (see dependencies for where to get it).
  6. Run webui-user.bat from Windows Explorer as normal, non-administrator, user.
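
Step 1 pins Python 3.10.6; a quick sanity check (a hypothetical helper, not part of the repo) that the interpreter found on PATH is a matching 3.10.x:

```python
import sys

# Hypothetical helper, not part of the repo: step 1 above installs
# Python 3.10.6, so verify the interpreter on PATH is some 3.10.x.
def is_supported(version_info=sys.version_info, required=(3, 10)):
    """True when the interpreter's major.minor matches the required pair."""
    return tuple(version_info[:2]) == required

if __name__ == "__main__":
    print("Python on PATH is supported:", is_supported())
```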

Automatic Installation on Linux

  1. Install the dependencies:
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
  2. To install in /home/$(whoami)/stable-diffusion-webui/, run:
bash <(wget -qO-

Installation on Apple Silicon

Find the instructions here.


Here's how to add code to this repo: Contributing


The documentation was moved from this README over to the project's wiki.