Inference of [Stable Diffusion](https://github.com/CompVis/stable-diffusion) in pure C/C++

- Plain C/C++ implementation based on [ggml](https://github.com/ggerganov/ggml), working in the same way as [llama.cpp](https://github.com/ggerganov/llama.cpp)
- Super lightweight and without external dependencies
- SD1.x and SD2.x support
- [SD-Turbo](https://huggingface.co/stabilityai/sd-turbo) support

Using Metal makes the computation run on the GPU. There are currently some issues with Metal when performing operations on very large matrices, making it highly inefficient; performance improvements are expected in the near future.

```
cmake .. -DSD_METAL=ON
cmake --build . --config Release
```

### Using Flash Attention
Enabling flash attention reduces memory usage by at least 400 MB. At the moment, it is not supported when CUBLAS is enabled because the kernel implementation is missing.
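
The build flag below is an assumption rather than something shown in this excerpt; as a sketch, enabling it would mirror the Metal build above, with a flash-attention option substituted:

```
# SD_FLASH_ATTN is an assumed option name; check the project's CMake options for the actual flag
cmake .. -DSD_FLASH_ATTN=ON
cmake --build . --config Release
```
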
You can use ESRGAN to upscale the generated images. At the moment, only the [RealESRGAN_x4plus_anime_6B.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth) model is supported. Support for more models of this architecture will be added soon.
- Specify the model path using the `--upscale-model PATH` parameter; a sketch of such a command is shown below.
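
In this sketch, the binary name, checkpoint file, and prompt are placeholders, not values taken from this excerpt:

```
# Placeholders throughout: adjust the model paths and prompt to your setup,
# and point --upscale-model at the downloaded RealESRGAN_x4plus_anime_6B.pth weights
./bin/sd -m ../models/v1-5-pruned-emaonly.safetensors -p "a lovely cat" --upscale-model ../models/RealESRGAN_x4plus_anime_6B.pth
```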