Releases: leejet/stable-diffusion.cpp
master-2027b16
feat: add vulkan backend support (#291)

* Fix includes and init vulkan the same as llama.cpp
* Add Windows Vulkan CI
* Update ggml submodule
* Support epsilon as a parameter for ggml_group_norm

Co-authored-by: Cloudwalk <cloudwalk@icculus.org>
Co-authored-by: Oleg Skutte <00.00.oleg.00.00@gmail.com>
Co-authored-by: leejet <leejet714@gmail.com>
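The Vulkan backend is selected at configure time. A minimal build sketch, assuming the Vulkan SDK is installed; the `SD_VULKAN` option name is taken from the project README and should be verified against your checkout:

```shell
# Hedged sketch: configure and build stable-diffusion.cpp with the
# Vulkan backend enabled. SD_VULKAN is assumed from the project README.
git clone --recursive https://github.com/leejet/stable-diffusion.cpp
cd stable-diffusion.cpp
mkdir build && cd build
cmake .. -DSD_VULKAN=ON
cmake --build . --config Release
```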
master-0362cc4
fix: fix some typos (#361)
master-f5997a1
fix: do not force using f32 for some flux layers

Forcing f32 here sometimes leads to worse results.
master-8847114
fix: fix issue when applying lora
master-79c9fe9
feat: do not convert some tensors
master-5c561ea
feat: do not convert more flux tensors
master-1bdc767
feat: force using f32 for some layers
master-c837c5d
style: format code
master-64d231f
feat: add flux support (#356)

* add flux support
* avoid build failures in non-CUDA environments
* fix schnell support
* add k quants support
* add support for applying lora to quantized tensors
* add inplace conversion support for f8_e4m3 (#359), in the same way it is done for bf16: just as bf16 converts losslessly to fp32, f8_e4m3 converts losslessly to fp16
* add xlabs flux comfy converted lora support
* update docs

Co-authored-by: Erik Scholz <Green-Sky@users.noreply.github.com>
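The lossless f8_e4m3 → fp16 claim in the flux release note can be checked exhaustively: every finite e4m3 value has at most 4 significant mantissa bits and an exponent range that fits inside fp16's normal range, so a round trip through fp16 is exact. A minimal sketch, with the decoder written from the e4m3 "fn" layout (1 sign, 4 exponent bits with bias 7, 3 mantissa bits, no infinities); function names are illustrative, not from the codebase:

```python
import struct

def decode_e4m3(byte):
    """Decode one float8 e4m3 (fn variant: no inf, exp=15/mant=7 is NaN)."""
    s = (byte >> 7) & 1
    e = (byte >> 3) & 0xF
    m = byte & 0x7
    if e == 0xF and m == 0x7:
        return float('nan')
    if e == 0:
        val = (m / 8.0) * 2.0 ** -6        # subnormal
    else:
        val = (1 + m / 8.0) * 2.0 ** (e - 7)
    return -val if s else val

def roundtrip_fp16(v):
    # struct's 'e' format code packs/unpacks IEEE half precision
    return struct.unpack('<e', struct.pack('<e', v))[0]

# Every finite e4m3 bit pattern must survive the fp16 round trip exactly.
lossless = all(
    roundtrip_fp16(decode_e4m3(b)) == decode_e4m3(b)
    for b in range(256)
    if decode_e4m3(b) == decode_e4m3(b)    # skip the NaN patterns
)
print(lossless)  # True
```

The same argument does not hold for f8_e5m2 → fp16 in general only because of rounding of subnormals; for e4m3 the exhaustive check above confirms the release note's reasoning.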
master-697d000
feat: add SYCL Backend Support for Intel GPUs (#330)

* update ggml and add SYCL CMake option
* hacky CMakeLists.txt for updating ggml in cpu backend
* rebase and clean code
* add sycl in README
* rebase ggml commit
* refine README
* update ggml for supporting sycl tsembd op

Signed-off-by: zhentaoyu <zhentao.yu@intel.com>
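The SYCL backend is likewise a configure-time option. A hedged build sketch, assuming the Intel oneAPI Base Toolkit is installed; the `SD_SYCL` option and the icx/icpx compiler choice mirror the README this commit adds and should be checked against your checkout:

```shell
# Hedged sketch: build with the SYCL backend for Intel GPUs.
# Option and compiler names are assumptions based on the project README.
source /opt/intel/oneapi/setvars.sh
mkdir build && cd build
cmake .. -DSD_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
cmake --build . --config Release
```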