Commit Graph

8325 Commits

Author SHA1 Message Date
layerdiffusion 5452bc6ac3 All Forge Spaces Now Pass 4GB VRAM
and they all 100% reproduce the authors' results
2024-08-20 08:01:10 -07:00
Panchovix f136f86fee Merge pull request #1340 from DenOfEquity/fix-for-new-samplers
Fix for new samplers
2024-08-20 10:12:37 -04:00
DenOfEquity c127e60cf0 Update sd_samplers_kdiffusion.py
add new samplers here
2024-08-20 15:01:58 +01:00
DenOfEquity 8c7db614ba Update alter_samplers.py
move new samplers from here
2024-08-20 15:00:56 +01:00
layerdiffusion 14ac95f908 fix 2024-08-20 01:37:01 -07:00
layerdiffusion 6f411a4940 fix LoRAs on nf4 models when "loras in fp16" is activated 2024-08-20 01:29:52 -07:00
layerdiffusion 65ec461f8a revise space 2024-08-19 22:43:09 -07:00
layerdiffusion fef6df29d9 Update README.md 2024-08-19 22:33:20 -07:00
layerdiffusion 6c7c85628e change News to Quick List 2024-08-19 22:30:56 -07:00
layerdiffusion 5ecc525664 fix #1322 2024-08-19 20:19:13 -07:00
layerdiffusion 475524496d revise 2024-08-19 18:54:54 -07:00
Panchovix 8eeeace725 Merge pull request #1316 from lllyasviel/more_samplers1
Add samplers: HeunPP2, IPNDM, IPNDM_V, DEIS
2024-08-19 20:49:37 -04:00
Panchovix 2fc1708a59 Add samplers: HeunPP2, IPNDM, IPNDM_V, DEIS
Pending: CFG++ Samplers, ODE Samplers
The latter is probably easy to implement; the former needs modifications in sd_samplers_cfg_denoiser.py
2024-08-19 20:48:41 -04:00
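A minimal sketch of how the new samplers above are typically registered in a webui-style codebase. The `SamplerData` tuple and `samplers_k_diffusion` list are assumptions modeled on the A1111/Forge layout, not the exact Forge code; the `sample_*` function names are illustrative.

```python
# Hedged sketch: registering k-diffusion samplers the way
# sd_samplers_kdiffusion.py-style code usually does it.
from collections import namedtuple

SamplerData = namedtuple("SamplerData", ["name", "funcname", "aliases", "options"])

# Each entry maps a UI display name to a k_diffusion sampling function name.
samplers_k_diffusion = [
    SamplerData("HeunPP2", "sample_heunpp2", ["heunpp2"], {}),
    SamplerData("IPNDM", "sample_ipndm", ["ipndm"], {}),
    SamplerData("IPNDM_V", "sample_ipndm_v", ["ipndm_v"], {}),
    SamplerData("DEIS", "sample_deis", ["deis"], {}),
]

def find_sampler(name):
    """Look up a sampler entry by display name or alias."""
    for s in samplers_k_diffusion:
        if s.name == name or name in s.aliases:
            return s
    return None
```

New samplers are added by appending one entry here, which is why the companion commit below moves them out of `alter_samplers.py` and into the central list.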
Panchovix 9bc2d04ca9 Merge pull request #1310 from lllyasviel/more_schedulers
Add Align Your Steps GITS, AYS 11 Steps and AYS 32 Steps Schedulers.
2024-08-19 16:58:52 -04:00
Panchovix 9f5a27ca4e Add Align Your Steps GITS, AYS 11 Steps and AYS 32 Steps Schedulers. 2024-08-19 16:57:58 -04:00
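Schedulers like the AYS 11/32-step variants above stretch a fixed sigma table to an arbitrary step count. A common way to do that, sketched here under assumptions, is log-linear interpolation over the table; the 5-entry `table` below is hypothetical, not the published Align Your Steps values.

```python
import numpy as np

def loglinear_interp(t_steps, num_steps):
    """Interpolate a decreasing sigma schedule in log-space to num_steps values.

    This mirrors the interpolation trick used to stretch a fixed
    Align-Your-Steps table to arbitrary step counts.
    """
    xs = np.linspace(0.0, 1.0, len(t_steps))
    ys = np.log(np.asarray(t_steps, dtype=float)[::-1])  # reverse: np.interp needs increasing ys over xs
    new_xs = np.linspace(0.0, 1.0, num_steps)
    new_ys = np.interp(new_xs, xs, ys)
    return np.exp(new_ys)[::-1].tolist()  # reverse back to decreasing order

# Hypothetical sigma table (illustrative only, not real AYS values):
table = [14.6, 4.0, 1.3, 0.4, 0.03]
sigmas_11 = loglinear_interp(table, 11)  # an "11 Steps" style schedule
```

The endpoints of the interpolated schedule match the table's endpoints, so changing the step count never changes the noise range, only the spacing in between.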
layerdiffusion d7151b4dcd add low vram warning 2024-08-19 11:08:01 -07:00
layerdiffusion 2f1d04759f avoid some mysterious problems when using lots of Python local delegations 2024-08-19 09:47:04 -07:00
layerdiffusion 0b70b7287c gradio 2024-08-19 09:12:38 -07:00
layerdiffusion 584b6c998e #1294 2024-08-19 09:09:22 -07:00
layerdiffusion 054a3416f1 revise space logics 2024-08-19 08:06:24 -07:00
layerdiffusion 96f264ec6a add a way to save models 2024-08-19 06:30:49 -07:00
layerdiffusion 4e8ba14dd0 info 2024-08-19 05:13:28 -07:00
layerdiffusion d03fc5c2b1 speed up a bit 2024-08-19 05:06:46 -07:00
layerdiffusion d38e560e42 Implement some rethinking about the LoRA system
1. Add an option that lets users run the UNet in fp8/gguf but LoRAs in fp16.
2. FP16 LoRAs never need patching. Others are re-patched only when LoRA weights change.
3. FP8 UNet + fp16 LoRA is now available in Forge (and, somewhat, only in Forge). This also solves some “LoRA too subtle” problems.
4. Significantly speed up all gguf models (in Async mode) by using an independent thread (CUDA stream) to compute and dequantize at the same time, even when the low-bit weights are already on the GPU.
5. Treat “online lora” as a module similar to ControlLoRA so that it is moved to the GPU together with the model when sampling, achieving significant speedup and perfect low-VRAM management simultaneously.
2024-08-19 04:31:59 -07:00
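The compute/dequantize overlap described in item 4 of the commit above can be sketched in plain Python with a background thread standing in for the second CUDA stream: while the current layer computes, the next layer's low-bit weights are being dequantized. Every name below (`dequant`, `forward`, the scale factor) is a hypothetical stand-in, not Forge's actual GGUF code.

```python
import threading
import queue

def dequant(qweight):
    """Stand-in for dequantizing a low-bit (e.g. GGUF) weight to fp16.
    Hypothetical: just applies an integer scale of 2."""
    return [w * 2 for w in qweight]

def forward(x, weight):
    """Stand-in for a layer's compute against dequantized weights."""
    return x + sum(weight)

def pipelined_forward(x, quant_layers):
    """Overlap dequantization of layer i+1 with compute of layer i,
    analogous to running dequant on a separate CUDA stream."""
    q = queue.Queue(maxsize=1)  # at most one layer dequantized ahead

    def producer():
        for qw in quant_layers:
            q.put(dequant(qw))   # dequantize while the consumer computes
        q.put(None)              # sentinel: no more layers

    threading.Thread(target=producer, daemon=True).start()
    while (w := q.get()) is not None:
        x = forward(x, w)
    return x
```

The `maxsize=1` queue bounds memory: only one dequantized layer is held ahead of the compute, which is the same trade-off a one-stream-ahead CUDA pipeline makes.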
layerdiffusion e5f213c21e upload some GGUF support 2024-08-19 01:09:50 -07:00
layerdiffusion 95bc586547 geowizard 2024-08-19 00:27:21 -07:00
layerdiffusion 00115ae02a revise space 2024-08-19 00:05:09 -07:00
layerdiffusion 3bef4e331a change space path order 2024-08-18 23:55:28 -07:00
layerdiffusion deca20551e pydantic==2.8.2 2024-08-18 22:58:36 -07:00
lllyasviel 0024e41107 Update install 2024-08-18 22:55:48 -07:00
layerdiffusion 631a097d0b change Florence-2 default to base 2024-08-18 21:31:24 -07:00
layerdiffusion ca2db770a9 ic light 2024-08-18 21:16:58 -07:00
layerdiffusion 4751d6646d Florence-2 2024-08-18 20:51:44 -07:00
layerdiffusion 9ec8eabeab too many models
see also https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/1270
2024-08-18 20:25:38 -07:00
layerdiffusion 1cbd13c85a Animagine XL 3.1 Official User Interface 2024-08-18 20:05:16 -07:00
layerdiffusion 4e7c8b5f84 add gitignore 2024-08-18 20:04:51 -07:00
layerdiffusion 60dfcd0464 revise space 2024-08-18 19:25:39 -07:00
layerdiffusion 22a19943d2 fix lcm 2024-08-18 19:23:12 -07:00
layerdiffusion 128a793265 gradio 2024-08-18 03:57:25 -07:00
layerdiffusion da72cd6fab IllusionDiffusion 2024-08-18 03:27:44 -07:00
layerdiffusion 1e64709d41 gradio 2024-08-18 03:27:24 -07:00
layerdiffusion 608af2e64c fix progressbar missing 2024-08-18 02:02:15 -07:00
layerdiffusion 2292f9a100 Technically Correct PhotoMaker V2
that can actually reproduce the authors' results and can be used in serious research and academic writing
2024-08-18 01:47:27 -07:00
layerdiffusion 101b556ee5 revise space logics 2024-08-18 01:44:29 -07:00
layerdiffusion 72ab92f83e upload meta files 2024-08-18 00:12:53 -07:00
layerdiffusion 0ccbac5389 revise space logics 2024-08-18 00:04:02 -07:00
layerdiffusion 53cd00d125 revise 2024-08-17 23:03:50 -07:00
layerdiffusion db5a876d4c completely solve all LoRA OOMs 2024-08-17 22:43:20 -07:00
DenOfEquity 93bfd7f85b invalidate cond cache if distilled CFG changed (#1240)
* Update processing.py

add distilled_cfg_scale to params that invalidate cond cache

* Update ui.py

distilled CFG and CFG step size 0.1 (from 0.5)
2024-08-17 19:34:11 -07:00
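The fix in the commit above amounts to widening the tuple of parameters that keys the conditioning cache. A minimal sketch of that idea, assuming illustrative names (`get_conds`, `encode_prompt`, the `calls` counter) rather than the actual `processing.py` code:

```python
# Hedged sketch: the cached conditioning is reused only when every
# parameter that affects it matches, and the fix adds
# distilled_cfg_scale to that parameter tuple.
_cond_cache = {"params": None, "cond": None}
calls = {"n": 0}  # counts expensive text-encoder invocations

def encode_prompt(prompt):
    """Stand-in for the (expensive) text-encoder call."""
    calls["n"] += 1
    return f"cond({prompt})"

def get_conds(prompt, steps, cfg_scale, distilled_cfg_scale):
    params = (prompt, steps, cfg_scale, distilled_cfg_scale)
    if _cond_cache["params"] != params:   # any changed param invalidates
        _cond_cache["params"] = params
        _cond_cache["cond"] = encode_prompt(prompt)
    return _cond_cache["cond"]
```

Before this commit the bug was equivalent to leaving `distilled_cfg_scale` out of `params`: changing only the distilled CFG reused the stale conditioning.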
layerdiffusion 0f266c48bd make clear_prompt_cache a function 2024-08-17 19:00:13 -07:00