Deploying Stable Diffusion Locally

Deploying Stable Diffusion locally (using SD3.5 Medium as an example) and serving it via an API: a practical guide with pitfalls

1. Environment and Hardware

  • GPU: a single NVIDIA A10 (24 GB) is enough.
  • Driver/CUDA: driver 550.x, CUDA 12.x (just keep it consistent with your torch==2.5.x+cu121/+cu124 build).
  • Python & dependencies: create a fresh venv; core packages: torch, diffusers, transformers, fastapi, uvicorn, safetensors.
  • Diffusers memory optimizations: it is strongly recommended to enable CPU offload (enable_model_cpu_offload()) together with low_cpu_mem_usage=True; this is the officially recommended way to save VRAM/RAM. (Zhihu)

Tip: the official Diffusers "Create a server" guide recommends copying the scheduler for each request (to avoid thread contention); the server code below follows that pattern. (Hugging Face)
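As a minimal sketch of those two recommendations (the model path and the helper name are assumptions, not taken from the official docs):

# Sketch: CPU offload at load time + a per-request scheduler copy.
# MODEL_DIR is an assumed local path; adjust to your download location.
import torch
from diffusers import StableDiffusion3Pipeline

MODEL_DIR = "/data/sd/sd3.5-medium"  # assumption

base = StableDiffusion3Pipeline.from_pretrained(
    MODEL_DIR, torch_dtype=torch.float16, low_cpu_mem_usage=True
)
base.enable_model_cpu_offload()  # weights stay on CPU; moved to GPU layer by layer

def pipeline_for_request():
    # copy the scheduler so concurrent requests don't share mutable state
    scheduler = base.scheduler.from_config(base.scheduler.config)
    return StableDiffusion3Pipeline.from_pipe(base, scheduler=scheduler)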

2. Getting the Model: Hugging Face or ModelScope

1) Downloading SD3 from Hugging Face

  • Some models (such as SD3 Medium) are gated repos: you must accept the license / be granted access on the model page, otherwise downloads fail with 401/403 (GatedRepoError). Fix: hf auth login plus clicking "Request access / Agree to license" on the model page; a download sketch follows this list. (Hugging Face)
  • A mainland-China mirror (HF_ENDPOINT=https://hf-mirror.com) can speed things up, but the mirror does not bypass authorization; you still need to log in and have access. (Zhihu)
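For reference, a hedged sketch of pulling a gated repo with huggingface_hub once access has been granted (the repo id, mirror, and target directory are assumptions; substitute your own):

# Sketch: download a gated HF repo after access has been granted.
# Assumptions: repo id and target directory; the token comes from `hf auth login`.
import os
os.environ.setdefault("HF_ENDPOINT", "https://hf-mirror.com")  # optional mirror; set before import, auth still required

from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="stabilityai/stable-diffusion-3.5-medium",  # assumption: use the repo you were granted
    local_dir="/data/sd/sd3.5-medium",
)
print("downloaded to:", local_dir)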

2) Switching to ModelScope to download SD3.5 Medium

In practice, AI-ModelScope/stable-diffusion-3.5-medium can be downloaded directly (no HF authorization needed). Using the SDK is recommended:

# download_sd35_medium.py
import os
os.environ.setdefault("MODELSCOPE_CACHE", "/data/sd/ms-cache")

from modelscope import snapshot_download

model_id = "AI-ModelScope/stable-diffusion-3.5-medium"
model_dir = snapshot_download(
    model_id,
    cache_dir=os.environ["MODELSCOPE_CACHE"],
    local_dir="/data/sd/sd3.5-medium",
)
print("SD3.5 Medium downloaded to:", model_dir)
  • MODELSCOPE_CACHE lets you pick a custom cache location (documented officially). (ModelScope)

Note: the behaviour of filtering parameters such as ignore_file_pattern has changed across ModelScope releases; if a version mismatch makes the parameter ineffective, try dropping it first, as in the sketch below. (GitHub)
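If you do want to filter files, a defensive sketch that simply retries without the parameter when the installed ModelScope version rejects it (the pattern is only an example):

# Sketch: try ignore_file_pattern, fall back if the installed ModelScope
# version rejects it (behaviour has changed across releases).
from modelscope import snapshot_download

model_id = "AI-ModelScope/stable-diffusion-3.5-medium"
try:
    model_dir = snapshot_download(model_id, ignore_file_pattern=[r".*\.ckpt$"])  # example pattern
except TypeError:
    model_dir = snapshot_download(model_id)  # signature without the parameter
print(model_dir)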

3) Environment setup

Create a dedicated environment with uv:

uv venv --python 3.12 .venv
# activate the environment
source .venv/bin/activate
uv pip install --index-url https://download.pytorch.org/whl/cu121 torch torchvision torchaudio
uv pip install diffusers transformers accelerate safetensors fastapi uvicorn python-multipart
uv pip install bitsandbytes --index-url https://pypi.tuna.tsinghua.edu.cn/simple
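Optionally, a quick sanity check (a sketch; your version numbers will differ) to confirm the venv has a CUDA-enabled torch and the key libraries:

# Sketch: verify the environment before loading the model.
import torch, diffusers, transformers

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
print("diffusers:", diffusers.__version__, "| transformers:", transformers.__version__)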

3. The Server: FastAPI + Uvicorn (with CPU Offload and T5 Disabled)

Note: memory was tight on this machine, so several load-shedding strategies are used; if you have enough RAM and VRAM there is no need for CPU offload or disabling T5.

Key ideas

  • SD3/3.5's StableDiffusion3Pipeline ships with three text encoders by default (CLIP-L, CLIP-G, T5-XXL). At inference time you can disable T5-XXL to significantly cut memory use and time to first image (the Diffusers SD3 docs support text_encoder_3=None, tokenizer_3=None). (CSDN)
  • CPU offload (enable_model_cpu_offload) moves weights onto the GPU layer by layer during inference, making 1024×1024 feasible on 24 GB of VRAM. (Zhihu)
  • Copy the scheduler for every request, per the official "Create a server" guide. (Hugging Face)
  • If many torch.compile compile processes (compile_worker) appear during loading and cause hanging requests / high memory use, disable compilation (TORCH_COMPILE_DISABLE=1 or torch.compiler.disable()). (Zhihu)
  • The script below exposes an API compatible with the OpenAI image-generation API and returns b64_json.

Code

# app_openai_images.py
import os, io, re, time, asyncio, logging, random, base64
from typing import Optional, Dict, Any, List, Tuple
import torch
from fastapi import FastAPI, HTTPException, Request, Depends
from pydantic import BaseModel, Field
from PIL import Image
from diffusers import StableDiffusion3Pipeline
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials

LOGLEVEL = os.getenv("LOGLEVEL", "INFO").upper()
logging.basicConfig(level=LOGLEVEL, format="%(asctime)s [%(levelname)s] %(message)s")
log = logging.getLogger("sd35-openai-b64-auth")

MODEL_DIR = os.getenv("MODEL_DIR", "/data/sd/sd3.5-medium")
MAX_CONCURRENCY = int(os.getenv("MAX_CONCURRENCY", "1"))
MODE = os.getenv("MODE", "eco").lower()  # eco=cpu offload, fast=gpu
API_KEY = os.getenv("API_KEY", "your_key")  # ← override via the API_KEY env var

try:
    torch.set_float32_matmul_precision("high")
except Exception:
    pass

_state: Dict[str, Any] = {"loaded": False, "steps": []}
pipe_base: Optional[StableDiffusion3Pipeline] = None  # filled in by load_pipe()

def mark(step: str): _state["steps"].append(step); log.info(step)

def _check_tokenizer_ok(sub: str, name: str):
    p = os.path.join(MODEL_DIR, sub)
    if not os.path.isdir(p): raise RuntimeError(f"Missing {name} folder: {p}")
    fs = set(os.listdir(p))
    ok = ("tokenizer.json" in fs) or (("merges.txt" in fs) and (("vocab.json" in fs) or ("vocab.txt" in fs)))
    if not ok: raise RuntimeError(f"{name} tokenizer needs tokenizer.json OR merges.txt+vocab.json in: {p}")

def preflight():
    _check_tokenizer_ok("tokenizer", "CLIP-L")
    _check_tokenizer_ok("tokenizer_2", "CLIP-G")
    _check_tokenizer_ok("tokenizer_3", "T5-XXL")
    import google.protobuf  # noqa
    try: import sentencepiece  # noqa
    except Exception: log.warning("sentencepiece not found; T5 may need it")

def load_pipe():
    global pipe_base
    preflight(); mark("✔ preflight ok")

    kw = dict(low_cpu_mem_usage=True, local_files_only=True)
    try:
        pipe_base = StableDiffusion3Pipeline.from_pretrained(
            MODEL_DIR, dtype=torch.float16, text_encoder_3=None, tokenizer_3=None, **kw
        )
    except TypeError:
        # older diffusers releases only accept torch_dtype
        pipe_base = StableDiffusion3Pipeline.from_pretrained(
            MODEL_DIR, torch_dtype=torch.float16, text_encoder_3=None, tokenizer_3=None, **kw
        )
    pipe_base.set_progress_bar_config(disable=True)
    mark("✔ pipeline created")

    if MODE == "fast":
        pipe_base.to("cuda"); mark("✔ moved to CUDA (fast mode)")
    else:
        pipe_base.enable_model_cpu_offload(); mark("✔ model cpu offload enabled (eco mode)")

    # warm-up pass
    try:
        scheduler = pipe_base.scheduler.from_config(pipe_base.scheduler.config)
        warm = StableDiffusion3Pipeline.from_pipe(pipe_base, scheduler=scheduler)
        gdev = "cuda" if torch.cuda.is_available() else "cpu"
        gen = torch.Generator(device=gdev).manual_seed(1)
        with torch.inference_mode():
            _ = warm(prompt="warmup", width=512, height=512,
                     num_inference_steps=2, guidance_scale=3.5, generator=gen).images[0]
        mark("✔ prewarm done")
    except Exception as e:
        log.warning(f"prewarm skipped: {e}")

    _state["loaded"] = True

# -------- OpenAI Images-compatible (b64_json only) + Bearer auth --------

class OpenAIImageGenRequest(BaseModel):
    model: Optional[str] = Field(default=None)
    prompt: str
    size: Optional[str] = "1024x1024"
    n: Optional[int] = 1
    response_format: Optional[str] = "b64_json"  # ignored; always b64_json
    user: Optional[str] = None
    quality: Optional[str] = None
    style: Optional[str] = None
    # extensions
    negative_prompt: Optional[str] = ""
    seed: Optional[int] = None
    steps: Optional[int] = None

def _parse_size(sz: str) -> Tuple[int, int]:
    m = re.match(r"^(\d+)x(\d+)$", (sz or "").strip())
    if not m: raise HTTPException(400, detail=f"invalid size: {sz}")
    w, h = int(m.group(1)), int(m.group(2))
    if w <= 0 or h <= 0: raise HTTPException(400, detail=f"invalid size: {sz}")
    return w, h

def _img_to_b64_png(img: Image.Image) -> str:
    buf = io.BytesIO(); img.save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode("utf-8")

GPU_SEMA = asyncio.Semaphore(MAX_CONCURRENCY)
app = FastAPI(title="OpenAI-compatible Images API (SD3.5, b64-only, auth)")

# Bearer auth dependency
_http_bearer = HTTPBearer(auto_error=False)

def _token_ok(token: str) -> bool:
    expected = API_KEY
    # also accept the configured key with a leading "sk=" stripped
    return token == expected or token == expected.replace("sk=", "", 1)

async def auth(credentials: HTTPAuthorizationCredentials = Depends(_http_bearer)):
    if credentials is None or not credentials.scheme.lower() == "bearer":
        raise HTTPException(status_code=401, detail="Missing Authorization", headers={"WWW-Authenticate": "Bearer"})
    if not _token_ok(credentials.credentials):
        raise HTTPException(status_code=401, detail="Invalid API key", headers={"WWW-Authenticate": "Bearer"})
    return True

@app.on_event("startup")
def _startup():
    try:
        mark("… loading model, this can take a while")
        load_pipe()
        mark("✔ model ready")
    except Exception as e:
        log.exception("startup failed")
        _state["error"] = str(e)

@app.get("/status")
def status():  # add auth here too if you need it
    return _state

@app.post("/v1/images/generations", dependencies=[Depends(auth)])
async def images_generations(_: Request, body: OpenAIImageGenRequest):
    if not _state.get("loaded"):
        raise HTTPException(503, detail=f"Model not ready: {_state.get('error')}")

    width, height = _parse_size(body.size or "1024x1024")
    n = int(body.n or 1)
    if n < 1 or n > 10:
        raise HTTPException(400, detail="n must be between 1 and 10")

    num_steps = body.steps if body.steps else 28
    guidance = 7.0

    async with GPU_SEMA:
        try:
            scheduler = pipe_base.scheduler.from_config(pipe_base.scheduler.config)
            pipe = StableDiffusion3Pipeline.from_pipe(pipe_base, scheduler=scheduler)

            if body.seed is None:
                body.seed = random.randint(1, 10_000_000)
            gen = torch.Generator(device="cuda" if torch.cuda.is_available() else "cpu").manual_seed(body.seed)

            def run_one() -> Image.Image:
                with torch.inference_mode():
                    out = pipe(
                        prompt=body.prompt,
                        negative_prompt=body.negative_prompt or "",
                        width=width, height=height,
                        num_inference_steps=num_steps,
                        guidance_scale=guidance,
                        generator=gen,
                    )
                return out.images[0]

            loop = asyncio.get_running_loop()
            images: List[Image.Image] = []
            for _ in range(n):
                img = await loop.run_in_executor(None, run_one)
                images.append(img)

            created = int(time.time())
            data = [{"b64_json": _img_to_b64_png(img), "revised_prompt": body.prompt} for img in images]
            return {"created": created, "data": data}
        except HTTPException:
            raise
        except Exception as e:
            log.exception("generation failed")
            raise HTTPException(500, detail=str(e))

Startup

# avoid torch.compile blocking / eating memory (if you run into it)
export TORCH_COMPILE_DISABLE=1
export MODEL_DIR=/data/sd/sd3.5-medium
export MODE=eco  # or fast
export API_KEY='your_key'  # ← replace with your own long random string
uvicorn app_openai_images:app --host 0.0.0.0 --port 8000 --log-level info

# run in the background
nohup env TORCH_COMPILE_DISABLE=1 \
    MODEL_DIR=/data/sd/sd3.5-medium \
    MODE=eco \
    API_KEY='your_key' \
    /data/sd/.venv/bin/uvicorn app_openai_images:app \
    --host 0.0.0.0 --port 8000 --log-level info \
    > /data/sd/logs/sd35.out 2>&1 & echo $! > /data/sd/logs/sd35.pid

Related docs: the Diffusers SD3/3.5 pipeline and its usage, disabling T5 (the third text encoder), and CPU offload. (CSDN)

4. Calling the Service

1) Client test script (with auth)

Save it as /data/sd/test_b64_client_auth.py:

# test_b64_client_auth.py
import argparse, base64, json, os, time
import requests
from pathlib import Path

def main():
    ap = argparse.ArgumentParser()
    ap.add_argument("--host", default="127.0.0.1")
    ap.add_argument("--port", type=int, default=8000)
    ap.add_argument("--api-key", default=os.getenv("API_KEY", "your_key"))
    ap.add_argument("--prompt", default="a cozy cat reading a book by the fireplace, 1024x1024")
    ap.add_argument("--size", default="1024x1024")
    ap.add_argument("--n", type=int, default=1)
    ap.add_argument("--steps", type=int, default=24)
    ap.add_argument("--neg", default="")
    ap.add_argument("--outdir", default="./outputs_client")
    args = ap.parse_args()

    url = f"http://{args.host}:{args.port}/v1/images/generations"
    payload = {
        "model": "gpt-image-1",
        "prompt": args.prompt,
        "size": args.size,
        "n": args.n,
        "response_format": "b64_json",
        "steps": args.steps,
        "negative_prompt": args.neg,
    }
    headers = {"Authorization": f"Bearer {args.api_key}"}

    print("POST", url)
    t0 = time.time()
    r = requests.post(url, json=payload, headers=headers, timeout=600)
    dt = time.time() - t0
    print("status", r.status_code, "time", f"{dt:.2f}s")
    r.raise_for_status()
    data = r.json()

    outdir = Path(args.outdir); outdir.mkdir(parents=True, exist_ok=True)
    saved = []
    for i, item in enumerate(data.get("data", []), start=1):
        b64 = item.get("b64_json")
        if not b64: continue
        img_bytes = base64.b64decode(b64)
        fn = outdir / f"openai_b64_{int(time.time()*1000)}_{i}.png"
        with open(fn, "wb") as f:
            f.write(img_bytes)
        saved.append(str(fn))
        print("saved:", fn)

    print("done, files:", json.dumps(saved, ensure_ascii=False, indent=2))

if __name__ == "__main__":
    main()

Run it:

uv pip install -U requests
python /data/sd/test_b64_client_auth.py --api-key 'your_key' --n 1 --steps 24 --size 1024x1024
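Since the endpoint mimics the OpenAI Images API, you can also point the official openai Python client at it; a sketch (base_url, key, and model name are placeholders, and the model field is ignored by our server anyway):

# Sketch: call the local server with the official OpenAI client.
# Assumptions: server on 127.0.0.1:8000, api_key matching the server's API_KEY.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="your_key")
resp = client.images.generate(
    model="sd3.5-medium",
    prompt="a cozy cat reading a book by the fireplace",
    size="1024x1024",
    n=1,
    response_format="b64_json",
)
with open("out.png", "wb") as f:
    f.write(base64.b64decode(resp.data[0].b64_json))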

2) Watching the GPU and processes

  • Live GPU view: watch -n 1 nvidia-smi or nvidia-smi dmon. (Juejin)
  • Processes/memory: ps -eo pid,cmd,%mem,rss --sort=-rss | head
  • Run the server with --log-level debug to make troubleshooting easier.

Why is VRAM usage sometimes almost 0?
With CPU offload enabled at startup, the weights live in CPU RAM and are only moved to the GPU layer by layer during inference; when idle, VRAM usage can be close to 0, and you will see it fluctuate while an image is being generated. (Zhihu)
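To confirm this from inside the serving process, a small sketch that prints PyTorch-allocated VRAM against what the driver reports (purely illustrative):

# Sketch: inspect VRAM from the serving process (e.g. in /status or a debug log).
import torch

if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info()    # bytes reported by the driver
    allocated = torch.cuda.memory_allocated()  # bytes held by PyTorch tensors
    print(f"allocated by torch: {allocated / 1e9:.2f} GB")
    print(f"free / total on device: {free / 1e9:.2f} / {total / 1e9:.2f} GB")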

5. Common Problems and Quick Fixes

1) Downloads fail with "401/403 Unauthorized/Forbidden"

  • You are not logged in or have not been granted access to a gated model (such as SD3 Medium).
    Fix: log in with hf auth login; request access / accept the license on the model page; or switch to the SD3.5 repo on ModelScope. (Hugging Face)

2) "tokenizer files missing / TypeError: expected str, bytes or os.PathLike"

  • The CLIP tokenizer needs either tokenizer.json or the merges.txt + vocab.json pair; make sure each of the three tokenizer directories in the model folder (tokenizer/tokenizer_2/tokenizer_3) contains one of them. (Hugging Face)

3) "ImportError: requires the protobuf library / SentencePiece not found"

  • Some Transformers tokenizers/models need protobuf, and T5 commonly needs sentencepiece.
    Fix: pip install -U protobuf; and pip install sentencepiece if needed. (Hugging Face)

4) "Requests hang, CPU usage is high, many compile_worker subprocesses appear"

  • You have probably triggered the PyTorch compilation path (torch.compile).
    Fix: set TORCH_COMPILE_DISABLE=1 or call torch.compiler.disable() in code. (Zhihu)

5) "Killed / dmesg shows oom-killer / Memory cgroup out of memory"

  • This means system RAM (not VRAM) hit a cgroup hard limit (memory.max, or memory.limit_in_bytes on v1) and the process was OOM-killed; the logs typically contain "invoked oom-killer" and "Killed process …".
    Fix: reduce concurrency / lower the resolution / disable T5-XXL / enable CPU offload; or raise the memory limit of the container/VM (a larger memory.max, or enable swap). In cgroup v2, memory.max is a hard limit: once it is exceeded, reclaim or termination kicks in; see the sketch below for checking it. (Linux kernel docs)
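To see how close you are to a cgroup v2 limit, a sketch that reads the controller files directly (it assumes the unified hierarchy is mounted at /sys/fs/cgroup):

# Sketch: compare current cgroup memory usage with its hard limit (cgroup v2).
# Assumption: the unified hierarchy is mounted at /sys/fs/cgroup.
from pathlib import Path

def read(name: str) -> str:
    return Path("/sys/fs/cgroup", name).read_text().strip()

limit = read("memory.max")            # "max" means no limit is set
current = int(read("memory.current"))
print(f"memory.current = {current / 1e9:.2f} GB, memory.max = {limit}")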

6) "VRAM stays at 0 and the GPU is doing no work"

  • If the request never returns and VRAM stays at 0, first check whether you are stuck compiling (see #4), or whether a proxy/mirror is timing out while pulling weights; otherwise check the vGPU driver status.
  • With CPU offload enabled, idle VRAM close to 0 is expected; it rises during inference. (Zhihu)

7) "Diffusers argument naming: torch_dtype vs dtype"

  • Newer versions standardize on dtype; torch_dtype from older examples still works but emits a deprecation warning. Follow the official SD3/3.5 docs. (CSDN)

