Build Your Own Private AI Image Generator with Docker and Open WebUI
Introduction
We've all been there: you need a few images for a project, fire up a cloud AI service, and suddenly you're worrying about credit limits, data privacy, and overly strict content filters rejecting your perfectly reasonable request for a dragon in a business suit. What if you could skip all that and run everything on your own machine with a polished chat interface? That's exactly what Docker Model Runner now makes possible. With just a few commands, you can pull an image-generation model, connect it to Open WebUI, and start creating images from a chat interface—fully local, fully private, and fully yours. This guide walks you through setting up your own private DALL‑E alternative, no cloud subscription required.

What You Need
- Docker Desktop (macOS) or Docker Engine (Linux)
- ~8 GB of free RAM for a small model (more is recommended for smoother performance)
- GPU (optional but highly recommended): NVIDIA (CUDA), Apple Silicon (MPS), or CPU fallback
- A terminal with `docker` and `docker model` commands available (test with `docker model version`)

If you can run `docker model version` without errors, you're ready to proceed.
Step 1: Pull an Image Generation Model
Docker Model Runner uses a compact packaging format called DDUF (DDUF Diffusion Unified Format) to distribute image generation models through Docker Hub, just like any other OCI artifact. Start by pulling the stable-diffusion model, which is a great all‑rounder for realistic and artistic images.
```shell
docker model pull ai/stable-diffusion
```
This command downloads the model and its dependencies (text encoder, VAE, UNet/DiT, scheduler config) bundled into a single DDUF file. The download size is around 7 GB, so grab a coffee while it completes.
Step 2: Verify the Model
Once the pull finishes, confirm the model is ready by inspecting it:
```shell
docker model inspect ai/stable-diffusion
```
You'll see output similar to this (truncated for clarity):
```json
{
  "id": "sha256:5f60862074a4c585126288d08555e5ad9ef65044bf490ff3a64855fc84d06823",
  "tags": ["docker.io/ai/stable-diffusion:latest"],
  "config": {
    "format": "diffusers",
    "architecture": "diffusers",
    "size": "6.94GB"
  }
}
```
This output confirms the model is stored locally and ready to run. The DDUF format means Docker Model Runner can unpack it at runtime without extra work on your part.
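If you want the same check in a setup script rather than by eye, you can parse the JSON that `docker model inspect` prints. Here's a minimal sketch, assuming the output shape shown above; the helper names are my own:

```python
import json
import subprocess

def get_model_info(name: str) -> dict:
    """Run `docker model inspect` and parse its JSON output."""
    out = subprocess.run(
        ["docker", "model", "inspect", name],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

def is_pulled(info: dict, repo: str) -> bool:
    """True if any local tag contains the given repository name."""
    return any(repo in tag for tag in info.get("tags", []))
```

A script could call `is_pulled(get_model_info("ai/stable-diffusion"), "ai/stable-diffusion")` and trigger a pull only when it returns False.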
Step 3: Launch Open WebUI
Here's the magic step. Docker Model Runner includes a built-in command that automatically wires up Open WebUI against your local model's API endpoint. Run:
```shell
docker model launch openwebui
```
This command does several things at once:
- Starts a background container running the model inference engine
- Exposes an OpenAI‑compatible API (including `POST /v1/images/generations`)
- Opens Open WebUI in your default web browser, already connected to that API

After a few seconds, you'll see the Open WebUI interface. No configuration files, no environment variables—it just works. If you already have Open WebUI running, you can point it manually to the local model server (usually at http://localhost:8000), but the launch command is the easiest path.
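The launch command is all you need for the UI, but because the endpoint is OpenAI‑compatible, you can also script against it directly. Here's a minimal sketch using only the standard library; the base URL (`http://localhost:8000`, as noted above) and the model name are assumptions that should match your local setup:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # adjust to wherever your model server listens

def build_image_request(prompt: str, n: int = 1, size: str = "512x512") -> dict:
    """Payload for the OpenAI-compatible POST /v1/images/generations endpoint."""
    return {"model": "stable-diffusion", "prompt": prompt, "n": n, "size": size}

def generate_image(prompt: str) -> dict:
    """Send the request and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/images/generations",
        data=json.dumps(build_image_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Anything that speaks the OpenAI images API (SDKs, plugins, other UIs) should be able to target the same endpoint.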
Step 4: Generate Your First Image
With Open WebUI open, you're ready to create. Here's how:
- In the chat input, type a description of the image you want (e.g., "a dragon wearing a business suit, photorealistic style")
- Press Enter or click the send button
- Wait a few seconds while the model generates the image—progress appears in the chat window
- Once complete, the image appears inline. You can download it or copy it directly
Because everything runs locally, your prompts and generated images never leave your machine. No credit system, no content filters beyond what the model itself enforces.
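If you script the API instead of using the chat window, images typically come back base64‑encoded in a `b64_json` field, following the OpenAI images response shape. Here's a sketch of decoding such a response to PNG files; the exact response structure is an assumption based on that API:

```python
import base64
from pathlib import Path

def save_images(response: dict, prefix: str = "image") -> list[Path]:
    """Decode each b64_json entry in an images response and write it to a PNG file."""
    paths = []
    for i, item in enumerate(response.get("data", [])):
        path = Path(f"{prefix}_{i}.png")
        path.write_bytes(base64.b64decode(item["b64_json"]))
        paths.append(path)
    return paths
```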
Tips for Best Results
- Use a GPU if possible. CPU generation works but is significantly slower. NVIDIA GPUs with CUDA or Apple Silicon MPS provide the best speed.
- Adjust memory settings. If you run out of RAM, try a smaller model (e.g., stable-diffusion-2-1) or reduce the image resolution in the prompt.
- Experiment with negative prompts. Open WebUI supports negative prompts to avoid unwanted elements. For example, listing "blurry, low quality, text" as a negative prompt can improve results.
- Keep Docker updated. Newer versions of Docker Model Runner add support for more models and improve performance.
- Monitor disk usage. Each model takes several gigabytes. Use `docker model list` to see what's downloaded and `docker model rm <name>` to remove unneeded ones.
You now have a fully private, self‑hosted AI image generator that you can control completely. No subscriptions, no data leaks, no surprises. Happy creating!