This repository was archived by the owner on Dec 14, 2023. It is now read-only.

Commit 9b4c7a8
Merge pull request #24 from ExponentialML/feat/gradient-checkpointing

Add Gradient Checkpointing

2 parents: 98177c5 + 21273ac

File tree

7 files changed: +1292 −3 lines

configs/my_config.yaml (1 addition, 0 deletions)

@@ -37,6 +37,7 @@ trainable_modules:
 seed: 64
 mixed_precision: "fp16"
 use_8bit_adam: False # This seems to be incompatible at the moment.
+gradient_checkpointing: False
 enable_xformers_memory_efficient_attention: False

 # Use scaled dot product attention (Only available with >= Torch 2.0)

configs/my_config_hq.yaml (1 addition, 1 deletion)

@@ -37,6 +37,6 @@ trainable_modules:
 seed: 64
 mixed_precision: "fp16"
 use_8bit_adam: False # This seems to be incompatible at the moment.
-
+gradient_checkpointing: False
 # Xformers must be installed
 enable_xformers_memory_efficient_attention: True

configs/offset_noise_finetune.yaml (1 addition, 0 deletions)

@@ -42,6 +42,7 @@ trainable_modules:
 seed: 64
 mixed_precision: "fp16"
 use_8bit_adam: False # This seems to be incompatible at the moment.
+gradient_checkpointing: False

 # Xformers must be installed
 enable_xformers_memory_efficient_attention: True

configs/single_video_config.yaml (1 addition, 0 deletions)

@@ -37,6 +37,7 @@ trainable_modules:
 seed: 64
 mixed_precision: "fp16"
 use_8bit_adam: False # This seems to be incompatible at the moment.
+gradient_checkpointing: False
 enable_xformers_memory_efficient_attention: False

 # Use scaled dot product attention (Only available with >= Torch 2.0)
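A flag like `gradient_checkpointing` is typically consumed in the training loop by wrapping forward passes with `torch.utils.checkpoint`, which discards intermediate activations and recomputes them during backward, trading compute for memory. A minimal PyTorch sketch of that pattern, assuming nothing about this repo's actual model code (the `Block` and `Model` classes below are illustrative, not taken from the repository):

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class Block(nn.Module):
    """A small residual MLP block standing in for a transformer/UNet block."""

    def __init__(self, dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.net(x)


class Model(nn.Module):
    def __init__(self, gradient_checkpointing: bool = False, dim: int = 32, depth: int = 4):
        super().__init__()
        self.gradient_checkpointing = gradient_checkpointing
        self.blocks = nn.ModuleList(Block(dim) for _ in range(depth))

    def forward(self, x):
        for block in self.blocks:
            if self.gradient_checkpointing and self.training:
                # Don't store this block's activations; recompute them in backward.
                x = checkpoint(block, x, use_reentrant=False)
            else:
                x = block(x)
        return x


model = Model(gradient_checkpointing=True).train()
x = torch.randn(2, 32, requires_grad=True)
loss = model(x).sum()
loss.backward()  # gradients flow through the checkpointed blocks
```

Checkpointing is only applied in `train()` mode, since inference keeps no activations and gains nothing from recomputation. Note that libraries such as Hugging Face `diffusers` expose the same trade-off through a model method (`enable_gradient_checkpointing()`), so a config flag like the one added here usually just calls that when set.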

0 commit comments