FIX Error when trying to load non-LoRA PEFT #42663
Conversation
This PR fixes a bug that prevented non-LoRA PEFT adapters from being loaded into a transformers model. A test for this has been added. Additionally, a test now checks that a non-LoRA adapter can be added to a transformers model; this was not broken but still lacked test coverage.
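For readers following along, here is a minimal sketch of the scenario that previously failed. The model name, adapter type (IA3), and save path are illustrative assumptions, not taken from the PR or its tests.

```python
# Minimal sketch of the previously failing scenario (model, adapter type, and path are
# illustrative assumptions). Before this fix, loading any non-LoRA PEFT adapter raised,
# even when no hotswapping was requested.
from peft import IA3Config, get_peft_model
from transformers import AutoModelForCausalLM

# Create and save a non-LoRA adapter (IA3 is just one example of a non-LoRA PEFT method).
base = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-gpt2")
peft_model = get_peft_model(base, IA3Config(task_type="CAUSAL_LM"))
peft_model.save_pretrained("/tmp/ia3-adapter")

# Load the saved adapter back into a plain transformers model.
model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-gpt2")
model.load_adapter("/tmp/ia3-adapter")  # raised before this PR; expected to work now
```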
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Not sure who would be best suited to review this from among the transformers devs, pinging @Cyrilvallez because you reviewed the original hotswap PR. Personally, I'd consider this a regression introduced by #41297, so ideally it could be part of the v5 release :)
Cyrilvallez left a comment
Looks like the check is already performed above, no? But I still think we should probably update the default value of hotswap in the function? See also my comment here: #41297 (comment)
```diff
  peft_config.inference_mode = not is_trainable

- if peft_config.peft_type != PeftType.LORA:
+ if hotswap and (peft_config.peft_type != PeftType.LORA):
```
Actually, we can fully remove this check, no? It's already checked above at L206.
Indeed, great catch. I removed it and confirmed that the corresponding test still passes. Please review again. The failing test_tokenization_mistral_common.py tests seem to be unrelated.
Cyrilvallez left a comment
Alright, LGTM then! Will merge, as the failing test is unrelated.

Fantastic, thanks. Anything I should do to ensure that this is included in v5?

Nope, it will automatically be included in the next rc1 release!
* FIX Error when trying to load non-LoRA PEFT

  This PR fixes a bug that prevented non-LoRA PEFT adapters from being loaded into a transformers model. A test for this has been added. Additionally, a test now checks that a non-LoRA adapter can be added to a transformers model; this was not broken but lacked test coverage.

* Reviewer feedback: Remove check completely
What does this PR do?
This PR fixes a bug that prevented non-LoRA PEFT adapters from being loaded into a transformers model. It was a simple logical error: a check was applied unconditionally when it is actually only relevant for hotswapping. A test for this has been added.
Additionally, a test was added to check that a non-LoRA adapter can be added to a transformers model. This was not broken but still lacked test coverage; a rough sketch of that path follows below.
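As a rough illustration of the newly covered add_adapter path (the model and adapter configuration below are assumptions for the sketch, not the actual test code):

```python
# Rough sketch of the add_adapter path covered by the new test (illustrative only;
# the actual test may use a different model and adapter configuration).
from peft import IA3Config
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-gpt2")

# Attach a freshly initialized non-LoRA adapter directly to the transformers model.
model.add_adapter(IA3Config(task_type="CAUSAL_LM"))

# The adapter's parameters should now be present on the model.
assert any("ia3" in name for name, _ in model.named_parameters())
```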