
Conversation

@jardinetsouffleton (Collaborator)

…and update tokenizer logic

# Load the tokenizer from the base model when one is given (e.g. for LoRA adapter
# checkpoints, whose directory does not contain tokenizer files).
if base_model_name is None:
    self.tokenizer = AutoTokenizer.from_pretrained(model_name)
else:
    self.tokenizer = AutoTokenizer.from_pretrained(base_model_name)
Collaborator

I don't get what's the point of that...

@jardinetsouffleton (Collaborator, Author) commented Dec 19, 2024

Yes, sorry, a little cryptic. I made this to support LoRA checkpoints, which are stored in the model_name directory. They are applied to a base model stored in the base_model_name directory. The directory where the adapters are stored contains only the adapters, while the model's safetensors, tokenizer, etc. are located in base_model_name; see the sketch below.
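
For anyone landing here later, here is a minimal sketch of how such an adapter checkpoint is typically applied on top of its base model with the peft library. The directory paths are hypothetical placeholders that mirror the model_name / base_model_name arguments above; this is an illustration, not the exact AgentLab code.

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "path/to/base_model"   # hypothetical: full weights + tokenizer live here
model_name = "path/to/lora_checkpoint"   # hypothetical: contains only the LoRA adapter files

# Load the full base model, then apply the LoRA adapter stored under model_name.
model = AutoModelForCausalLM.from_pretrained(base_model_name)
model = PeftModel.from_pretrained(model, model_name)

# The adapter directory has no tokenizer files, so the tokenizer comes from the
# base model, which is why the snippet above falls back to base_model_name.
tokenizer = AutoTokenizer.from_pretrained(base_model_name)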

@jardinetsouffleton merged commit 0e060fc into main Dec 20, 2024
2 of 3 checks passed
@jardinetsouffleton deleted the fix-parallel-adapter-evals branch December 20, 2024 20:17
@jardinetsouffleton restored the fix-parallel-adapter-evals branch December 20, 2024 20:17
@jardinetsouffleton deleted the fix-parallel-adapter-evals branch December 20, 2024 20:17
ludunjie1219 pushed a commit to agenttrek/AgentLab that referenced this pull request Dec 29, 2024
