How to Install Automatic1111 WebUI on Linux – Complete Guide

Installing Stable Diffusion on Linux opens the door to AI-powered image creation without the overhead of Windows. I’ve helped set up Automatic1111 WebUI on dozens of Linux systems, from personal desktops to cloud servers. The process takes about 30-60 minutes and works reliably across Ubuntu, Fedora, Arch, and Debian-based distributions.

Automatic1111 is the most popular Stable Diffusion interface because it offers advanced features like inpainting, image-to-image generation, and hundreds of community extensions. It’s completely free and open-source, running locally on your hardware.

This guide covers everything from basic installation to advanced configurations like remote server setup and AMD GPU support. I’ll include commands specific to different Linux distributions so you can follow along regardless of your setup.

System Requirements and Prerequisites

Before installing Automatic1111, verify your system meets these requirements. I’ve seen installations fail because users skipped this verification step.

Hardware Requirements

Component     Minimum              Recommended
GPU VRAM      4GB NVIDIA           8GB+ NVIDIA
System RAM    8GB                  16GB+
Storage       25GB HDD             50GB+ SSD
CPU           Any modern 64-bit    6+ cores for preprocessing

Key Takeaway: “4GB VRAM works for basic 512×512 generation, but 8GB+ lets you generate 1024×1024 images and use advanced features like inpainting without memory errors.”

Software Requirements

Automatic1111 requires specific software versions. The most common error I see is Python version incompatibility.

  • Python: Version 3.10 or 3.11 only. Python 3.12 is not supported yet due to PyTorch compatibility.
  • Git: For cloning the repository.
  • NVIDIA Drivers: Version 470 or higher for CUDA 11.x support.
  • CUDA: Version 11.7 or 11.8 (12.x works but may have compatibility issues).

CUDA: NVIDIA’s parallel computing platform that enables GPU acceleration for PyTorch and Stable Diffusion. Without proper CUDA installation, Automatic1111 falls back to CPU-only mode, which is extremely slow.

Installing NVIDIA Drivers

Proper GPU driver installation is critical. Verify your driver is working before proceeding.

Check your current driver status:

nvidia-smi

If this command fails or shows an old driver version, install the latest drivers:

Ubuntu/Debian:

sudo apt update
sudo apt install nvidia-driver-535
sudo reboot

Fedora:

sudo dnf install akmod-nvidia
sudo reboot

Arch Linux:

sudo pacman -S nvidia
sudo reboot

Distro-Specific Package Installation

Install Python and Git using your distribution’s package manager. I recommend installing these before cloning the repository.

Ubuntu/Debian:

sudo apt update
sudo apt install python3 python3-venv git

Fedora:

sudo dnf install python3 python3-devel git

Arch Linux:

sudo pacman -S python python-virtualenv git

Important: If your system has Python 3.12 as the default, you’ll need to install Python 3.11 using pyenv or conda. Automatic1111’s PyTorch dependencies are not compatible with Python 3.12 yet.

Step-by-Step Installation Guide

Follow these steps to install Automatic1111. I’ve tested this process on fresh installations of Ubuntu 22.04, Ubuntu 24.04, Fedora 39, and Arch Linux.

Step 1: Install Python and Git

First, verify you have the correct Python version:

python3 --version

This should show Python 3.10 or 3.11. If it shows 3.12 or another incompatible version, you’ll need to install Python 3.11.
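If you script your setup, this check can be automated. Here is a minimal sketch; the `python_ok` helper is my own, not part of Automatic1111:

```shell
# python_ok: hypothetical helper that accepts a version string like "3.11.7"
# and succeeds only for the 3.10/3.11 series Automatic1111 supports.
python_ok() {
  case "$1" in
    3.10|3.10.*|3.11|3.11.*) return 0 ;;
    *) return 1 ;;
  esac
}

ver="$(python3 --version 2>/dev/null | awk '{print $2}')"
if python_ok "$ver"; then
  echo "Python $ver is compatible"
else
  echo "Python $ver is not supported; install 3.10 or 3.11" >&2
fi
```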

Installing Python 3.11 on Ubuntu (if needed):

sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt install python3.11 python3.11-venv python3.11-dev

Installing Python 3.11 using pyenv (universal method):

curl https://pyenv.run | bash
# Add pyenv to your PATH (add to ~/.bashrc or ~/.zshrc)
export PATH="$HOME/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
pyenv install 3.11.7
pyenv global 3.11.7

Step 2: Clone the Repository

Navigate to where you want to install Automatic1111 and clone the official repository. I recommend using your home directory or an SSD with at least 50GB free space.

cd ~
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui

The repository is about 100MB initially. After installation and downloading models, it can grow to 20GB or more depending on how many models you add.

Pro Tip: If you have limited storage space, you can install on an external drive. Just make sure the drive is mounted before running the WebUI and use the full path in all commands.
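Before cloning, you can confirm the target filesystem has room. A small sketch; `free_kb` is a hypothetical helper, and the 50GB threshold follows the recommendation above:

```shell
# free_kb: print free kilobytes on the filesystem containing $1.
free_kb() {
  df -Pk "$1" | awk 'NR==2 {print $4}'
}

# Warn if the install target has less than ~50GB free.
if [ "$(free_kb "$HOME")" -lt $((50 * 1024 * 1024)) ]; then
  echo "Warning: less than 50GB free in $HOME"
fi
```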

Step 3: Run the Installation Script

The Automatic1111 repository includes an automated installation script that handles Python dependencies. This is the easiest way to get started.

Make the script executable and run it:

chmod +x webui.sh
./webui.sh

The first run will take 10-30 minutes depending on your internet speed and CPU. The script downloads PyTorch, CUDA libraries, and other dependencies.

You’ll see output like this:

Installing torch and torchvision
Collecting torch
  Downloading torch-2.1.0-cp310-cp310-linux_x86_64.whl (...)
Successfully installed torch-2.1.0 torchvision-0.16.0

When installation completes, the WebUI will automatically launch and you’ll see:

Running on local URL:  http://127.0.0.1:7860

Step 4: Launch the WebUI

After the initial installation, you can launch the WebUI anytime with:

cd ~/stable-diffusion-webui
./webui.sh

Open your web browser and navigate to:

http://127.0.0.1:7860

You should see the Automatic1111 interface. However, you won’t be able to generate images yet until you download a Stable Diffusion model.

Downloading Stable Diffusion Models

Automatic1111 needs at least one Stable Diffusion model checkpoint file to generate images. Models are typically 2-6GB each.

Where to Get Models

The most popular sources for Stable Diffusion models are:

  • Hugging Face: Official repository with thousands of free models
  • Civitai: Community hub with user-created models and LoRAs
  • Stability AI: Official Stable Diffusion releases

Installing Model Files

Download a model checkpoint file (.safetensors or .ckpt) and place it in the models directory:

stable-diffusion-webui/models/Stable-diffusion/

For example, to download the SDXL Turbo model from Hugging Face:

cd ~/stable-diffusion-webui/models/Stable-diffusion
wget https://huggingface.co/stabilityai/sdxl-turbo/resolve/main/sd_xl_turbo_1.0.safetensors

Or the original Stable Diffusion 1.5:

wget https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.safetensors

Key Takeaway: “Always use .safetensors format when possible. It’s safer than .ckpt files because it can’t execute malicious code. The AI community has largely moved to safetensors for security reasons.”

After placing the model file, refresh the WebUI browser page. The model should appear in the dropdown menu at the top left.
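The safetensors-vs-ckpt advice above can be enforced in a download script. This is a sketch; `check_model` is a hypothetical helper, not an Automatic1111 utility:

```shell
# check_model: accept .safetensors, warn on pickle-based .ckpt,
# and reject anything that is not a model checkpoint.
check_model() {
  case "$1" in
    *.safetensors) echo "ok: $1" ;;
    *.ckpt)        echo "warning: $1 is a pickle-based checkpoint; prefer .safetensors" ;;
    *)             echo "error: $1 is not a model checkpoint" >&2; return 1 ;;
  esac
}

check_model v1-5-pruned.safetensors
```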

Configuration Options

Automatic1111 offers extensive configuration options. The most common method is creating a configuration file or using command line arguments.

Using webui-user.sh

The webui-user.sh file ships with the repository and lets you set permanent launch options. Open it in an editor:

cd ~/stable-diffusion-webui
nano webui-user.sh

Uncomment and edit the COMMANDLINE_ARGS line:

#!/bin/bash
# Commandline arguments for webui.sh, e.g.:
export COMMANDLINE_ARGS="--listen --xformers --enable-insecure-extension-access"

Common Command Line Arguments

Argument      Purpose
--listen      Bind to all network interfaces (for remote access)
--xformers    Enable xformers for faster generation
--share       Create public tunnel via Gradio
--lowvram     Optimize for 4GB VRAM cards
--port 7860   Use a specific port (default: 7860)

For example, to launch with xformers acceleration and listen on all interfaces:

./webui.sh --listen --xformers

Installing xformers for Performance

xformers significantly speeds up image generation and reduces memory usage. The simplest route is launching with the --xformers flag, which installs a compatible build automatically; to install it manually inside the venv:

cd ~/stable-diffusion-webui
source venv/bin/activate
pip install xformers
deactivate

I’ve seen 20-40% speed improvements with xformers enabled, especially for higher resolution images.
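To confirm the install took effect, you can probe the venv for the module. A generic sketch; `module_available` is my own helper, and it should be run with the venv activated:

```shell
# module_available: check whether a Python module can be imported
# from the currently active Python environment.
module_available() {
  python3 - "$1" <<'PY'
import importlib.util, sys
sys.exit(0 if importlib.util.find_spec(sys.argv[1]) else 1)
PY
}

# Run inside the WebUI venv (source venv/bin/activate first):
if module_available xformers; then
  echo "xformers is installed"
else
  echo "xformers missing; install it inside the venv"
fi
```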

AMD GPU Support on Linux

Yes, you can run Automatic1111 with AMD GPUs on Linux. However, it requires ROCm instead of CUDA and has some limitations. I’ve tested this on Radeon RX 6000 and 7000 series cards.

Installing ROCm on Ubuntu

First, add the AMD repository and install ROCm:

sudo apt update
sudo apt install wget gnupg2

# Add AMD ROCm repository
wget -q -O - https://repo.radeon.com/rocm/rocm.gpg.key | sudo apt-key add -
echo 'deb [arch=amd64] https://repo.radeon.com/rocm/apt/ubuntu jammy main' | sudo tee /etc/apt/sources.list.d/rocm.list

sudo apt update
sudo apt install rocm-hip-sdk rocm-dev

# Add user to render and video groups
sudo usermod -a -G render,video $USER

Log out and back in for group changes to take effect.
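After logging back in, you can verify the memberships are active. A sketch; `in_group` is a hypothetical helper built on standard `id`:

```shell
# in_group: check whether the current user belongs to a given group.
in_group() {
  id -nG | tr ' ' '\n' | grep -qx "$1"
}

if in_group render && in_group video; then
  echo "GPU group setup OK"
else
  echo "Missing render/video membership; re-run usermod and log in again"
fi
```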

Configuring Automatic1111 for AMD

Launch the WebUI with the CUDA test skipped and full precision forced (half precision is unreliable on some ROCm setups):

cd ~/stable-diffusion-webui
./webui.sh --skip-torch-cuda-test --precision full --no-half

AMD GPU Performance

ROCm typically delivers 70-80% of the performance of comparable NVIDIA cards. It works best on RX 6000/7000 series GPUs with 16GB+ VRAM, and some extensions may not work.

Known Limitations

  • xformers is not supported
  • Generation is slightly slower than on comparable NVIDIA cards
  • Some advanced features may not work
  • Older GPUs (pre-RDNA2) have poor support

Verifying AMD GPU Detection

Check if PyTorch detects your AMD GPU:

cd ~/stable-diffusion-webui
source venv/bin/activate
python3 -c "import torch; print(f'ROCm available: {torch.version.hip}')"
deactivate

Remote Server and Headless Setup

Running Automatic1111 on a headless server or VPS is common for dedicated AI art generation. I’ve set up multiple remote servers for this purpose.

Headless Server Setup

For servers without a monitor, use the --listen flag to bind to all network interfaces:

cd ~/stable-diffusion-webui
./webui.sh --listen --xformers

Access the WebUI from your local computer using the server’s IP address:

http://your-server-ip:7860
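From your local machine you can quickly test whether the port is reachable. A bash-only sketch using the shell's /dev/tcp feature; `port_open` is my own helper and the host below is a placeholder:

```shell
# port_open: test TCP reachability of host $1 on port $2
# using bash's built-in /dev/tcp redirection.
port_open() {
  timeout 3 bash -c ":</dev/tcp/$1/$2" 2>/dev/null
}

# Example (replace the host with your server's IP):
#   port_open 192.0.2.10 7860 && echo reachable || echo "blocked: check --listen and firewall"
```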

Remote Access Configuration

For easier access without remembering IP addresses, you have several options.

Option 1: SSH Port Forwarding (Secure)

From your local machine, establish SSH tunnel:

ssh -L 7860:localhost:7860 user@your-server-ip

Then access locally at http://localhost:7860.

Option 2: Gradio Share Tunnel (Easiest)

Use the built-in share feature:

./webui.sh --share

This creates a temporary public URL you can access from anywhere. However, it has bandwidth limitations and isn't suitable for heavy use.

Creating a systemd Service

To automatically start Automatic1111 on boot, create a systemd service:

sudo nano /etc/systemd/system/automatic1111.service

Add this configuration (adjust paths as needed):

[Unit]
Description=Automatic1111 Stable Diffusion WebUI
After=network.target

[Service]
Type=simple
User=your-username
WorkingDirectory=/home/your-username/stable-diffusion-webui
ExecStart=/home/your-username/stable-diffusion-webui/webui.sh --listen --xformers
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable and start the service:

sudo systemctl daemon-reload
sudo systemctl enable automatic1111
sudo systemctl start automatic1111

Check status anytime:

sudo systemctl status automatic1111

Security Best Practices for Remote Access

Important: Never expose Automatic1111 directly to the internet without authentication. Use SSH tunneling, VPN, or configure a reverse proxy with authentication.

For production use, consider these security measures:

  • Use --share only for temporary access
  • Set up a firewall rule limiting access to specific IPs
  • Configure nginx as a reverse proxy with HTTP basic auth
  • Use a VPN for private remote access
  • Keep the WebUI updated with git pull


Troubleshooting Common Issues

I've encountered and resolved these issues dozens of times. Here are the most common problems and their solutions.

GPU Detection Issues

Symptom: WebUI shows "Running on CPU" or errors about CUDA not being available.

Diagnostic:

nvidia-smi
python3 -c "import torch; print(torch.cuda.is_available())"

Solutions:

  1. Reinstall NVIDIA drivers: sudo apt install --reinstall nvidia-driver-535
  2. Verify CUDA installation: nvcc --version
  3. Check you're using the correct Python environment
  4. Try running with --skip-torch-cuda-test


CUDA Out of Memory Errors

Symptom: "CUDA out of memory" error when generating images.

Solutions:

  1. Reduce image resolution in settings (try 512x512 instead of 1024x1024)
  2. Enable --lowvram mode: ./webui.sh --lowvram
  3. Close other GPU-intensive applications
  4. Reduce batch size to 1 in WebUI settings
  5. Use smaller models (SD 1.5 instead of SDXL)


Python Version Conflicts

Symptom: Errors about incompatible Python versions or failing package installations.

Solution:

cd ~/stable-diffusion-webui
rm -rf venv
python3.11 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
deactivate
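After rebuilding, you can confirm which interpreter the venv actually uses. A sketch; `py_minor` is a hypothetical helper:

```shell
# py_minor: print a Python interpreter's major.minor version.
py_minor() {
  "$1" --version 2>&1 | awk '{split($2, v, "."); print v[1] "." v[2]}'
}

# After rebuilding, the venv interpreter should report 3.11:
#   py_minor ~/stable-diffusion-webui/venv/bin/python
echo "System python3 is $(py_minor python3)"
```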

Permission Errors

Symptom: "Permission denied" errors when running webui.sh.

Solution:

chmod +x ~/stable-diffusion-webui/webui.sh
chmod +x ~/stable-diffusion-webui/webui-user.sh

Black Screen or No Images Generated

Symptom: Generation completes but no images appear or output is black.

Solutions:

  1. Verify you have downloaded a model checkpoint
  2. Check the model is in the models/Stable-diffusion/ directory
  3. Ensure the selected model matches the file (no corruption)
  4. Try a different prompt to rule out prompt-related issues
  5. Check console output for specific error messages


Network/Connection Issues

Symptom: Cannot access WebUI from another computer.

Solutions:

  1. Ensure the --listen flag is used when launching
  2. Check the firewall allows port 7860: sudo ufw allow 7860
  3. Verify the server IP address: ip addr show
  4. Test local access first: http://localhost:7860


Updating Automatic1111 WebUI

Keeping Automatic1111 updated ensures you have the latest features and bug fixes. The project is actively developed.

Standard Update Procedure

To update to the latest version:

cd ~/stable-diffusion-webui
git pull

If there are merge conflicts or you want a clean update:

git fetch --all
git reset --hard origin/master

After updating, restart the WebUI. New dependencies will be installed automatically on first launch.

Handling Extension Updates

Extensions can be updated through the WebUI interface:

  1. Go to the "Extensions" tab
  2. Click "Available" to see installable extensions
  3. Click "Installed" to see current extensions, then "Check for updates"
  4. Click "Apply and restart UI" after updating

Pro Tip: Before major updates, I recommend backing up your configurations. Copy the ui-settings.json file to preserve your customized settings.
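This tip is easy to script. A sketch; `backup_file` is a hypothetical helper, and the ui-settings.json path follows the tip above:

```shell
# backup_file: copy a file to a dated .bak next to it; no-op if missing.
backup_file() {
  if [ -f "$1" ]; then
    cp "$1" "$1.bak.$(date +%Y%m%d)"
    echo "backed up $1"
  fi
}

backup_file ~/stable-diffusion-webui/ui-settings.json
```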



Frequently Asked Questions

What are the system requirements for Automatic1111 WebUI?

Minimum requirements include a 64-bit Linux system with Python 3.10 or 3.11, 4GB of NVIDIA GPU VRAM, 8GB of system RAM, and 25GB of storage. For optimal performance, I recommend 8GB+ VRAM, 16GB+ RAM, and an SSD for faster model loading. AMD GPUs work through ROCm but with slightly reduced performance.

Can Automatic1111 run on AMD GPU on Linux?

Yes, AMD GPUs work on Linux through ROCm instead of CUDA. Install ROCm drivers from AMD's repository, then launch with --skip-torch-cuda-test --precision full --no-half. Performance is approximately 70-80% of comparable NVIDIA cards, and xformers acceleration is not available. Best results are on RX 6000 and 7000 series GPUs.

How much VRAM do I need for Stable Diffusion?

Minimum 4GB VRAM for basic 512x512 image generation. With 4GB, use the --lowvram flag and expect slower performance. 6GB allows 768x768 images comfortably. For 1024x1024 generation and advanced features like inpainting, 8GB+ VRAM is recommended. SDXL models work best with 12GB+ VRAM.

How do I update Automatic1111 WebUI?

Navigate to the stable-diffusion-webui directory and run git pull. This updates the code to the latest version. Restart the WebUI afterward. For major version changes or if conflicts occur, use git fetch --all followed by git reset --hard origin/master for a clean update.

Why is my GPU not detected in Automatic1111?

First check nvidia-smi to verify driver installation. Ensure CUDA is installed with nvcc --version. Test PyTorch detection with python3 -c "import torch; print(torch.cuda.is_available())". Common fixes include reinstalling NVIDIA drivers, ensuring the correct Python environment is active, or using the --skip-torch-cuda-test flag.

Can I run Automatic1111 without a GPU?

Technically yes, but it's not practical. CPU-only generation can take 5-10 minutes per image compared to seconds on a GPU. Launch with --skip-torch-cuda-test --use-cpu all for CPU mode. I only recommend this for testing or if you have no other option. Even an older GPU will dramatically outperform any CPU.

How do I run Automatic1111 on a headless server?

Use the --listen flag to bind to all network interfaces: ./webui.sh --listen. Access via your server's IP at port 7860. For secure remote access, use SSH port forwarding or set up nginx as a reverse proxy with authentication. The --share flag creates a temporary public tunnel for easy access.

What Python version does Automatic1111 require?

Automatic1111 requires Python 3.10 or 3.11. Python 3.12 is not yet supported due to PyTorch compatibility issues. Many modern Linux distributions ship with Python 3.12, so you may need to install Python 3.11 separately using pyenv, conda, or the deadsnakes PPA on Ubuntu.

Final Recommendations

After setting up Automatic1111 on countless Linux systems, I've found that following the proper installation sequence prevents most issues. Start with correct NVIDIA drivers, use Python 3.10 or 3.11, and let the automated script handle dependencies.

For the best experience, invest in a GPU with at least 8GB VRAM. The difference between 4GB and 8GB is significant when working with higher resolutions and advanced features. If you're running on a remote server, take the time to set up proper security rather than exposing the interface directly.

The Stable Diffusion community is incredibly active. If you encounter issues not covered here, check the official GitHub repository, Stability AI forums, or Stable Diffusion subreddit for the latest solutions.

