Commit 71f34fc
[Flux LoRA] fix issues in flux lora scripts (huggingface#11111)
* remove custom scheduler
* update requirements.txt
* log_validation with mixed precision
* add intermediate embeddings saving when checkpointing is enabled
* remove comment
* fix validation
* add unwrap_model for accelerator, torch.no_grad context for validation, fix accelerator.accumulate call in advanced script
* revert unwrap_model change temp
* add .module to address distributed training bug + replace accelerator.unwrap_model with unwrap_model
* changes to align advanced script with canonical script
* make changes for distributed training + unify unwrap_model calls in advanced script
* add module.dtype fix to dreambooth script
* unify unwrap_model calls in dreambooth script
* fix condition in validation run
* mixed precision
* Update examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* smol style change
* change autocast
* Apply style fixes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>

1 parent c51b6bd · commit 71f34fc
File tree

4 files changed: +154 −171 lines
- examples
  - advanced_diffusion_training
  - dreambooth