GRPO loss #454
base: jlp_entropy_loss
Conversation
```python
# Policy loss
# TODO: advantages or rewards?
log_ratio_old = torch.exp(target_logprobs - old_logprobs)
```
This would read better as `ratio_new_old = torch.exp(target_logprobs - old_logprobs)`: after the `exp`, the value is a probability ratio, not a log-ratio, so the `log_` prefix is misleading.
```python
target_logprobs = torch.gather(logprobs, dim=2, index=labels.unsqueeze(2)).squeeze(2)
# ...
# Policy loss
# TODO: advantages or rewards?
```
answer: advantages
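For context on why advantages rather than raw rewards: GRPO standardizes each completion's reward against the other completions sampled for the same prompt, so the advantage is relative within the group. A minimal sketch of that computation (the `grpo_advantages` helper and `group_size` argument are illustrative, not from this PR):

```python
import torch

def grpo_advantages(rewards: torch.Tensor, group_size: int) -> torch.Tensor:
    """Group-normalized advantages in the GRPO style: each completion's
    reward is standardized against the other completions sampled for
    the same prompt."""
    grouped = rewards.view(-1, group_size)        # (num_prompts, group_size)
    mean = grouped.mean(dim=1, keepdim=True)
    std = grouped.std(dim=1, keepdim=True)
    advantages = (grouped - mean) / (std + 1e-8)  # epsilon avoids division by zero
    return advantages.view(-1)                    # back to flat (batch,)
```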
```python
)
# ...
# TODO: tokens_weights = 1/batch_size ?
# TODO: Reduce loss?
```
need to sum over tokens and apply mask
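A minimal sketch of that reduction, assuming a `mask` tensor shaped like the per-token loss with 1 for completion tokens and 0 for prompt/padding (the names `per_token_loss` and `mask` are illustrative):

```python
# per_token_loss: (B, L); mask: (B, L), 1.0 where the token counts
masked = per_token_loss * mask
# sum over tokens, normalize per sequence, then average over the batch
# (one common convention; dividing by the total token count is another)
loss = (masked.sum(dim=1) / mask.sum(dim=1).clamp(min=1)).mean()
```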
```python
    torch.clamp(log_ratio_old, 1 - self.epsilon_low, 1 + self.epsilon_high) * advantage,
)
# ...
# TODO: tokens_weights = 1/batch_size ?
```
I think so; we do that for the simple case.
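If that convention is kept, the weighting might look like this (a sketch under the assumption that "the simple case" means uniform `1/batch_size` weights; `per_token_loss` and `mask` are as in the earlier sketch):

```python
# uniform per-token weights, matching the TODO's 1/batch_size suggestion
tokens_weights = torch.full_like(per_token_loss, 1.0 / batch_size)
loss = (per_token_loss * tokens_weights * mask).sum()
```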
```python
# TODO: tokens_weights = 1/batch_size ?
# TODO: Reduce loss?
loss = loss / batch_size  # 1 x (BxL) x 1
```
`loss = -loss`: we want to maximise the objective.
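Putting the review comments together, a hedged sketch of what the corrected objective could look like inside the loss module's forward, reusing names from the diff (the `mask` tensor and the reduction convention are assumptions, not taken from this PR):

```python
ratio_new_old = torch.exp(target_logprobs - old_logprobs)  # (B, L)
unclipped = ratio_new_old * advantage
clipped = torch.clamp(
    ratio_new_old, 1 - self.epsilon_low, 1 + self.epsilon_high
) * advantage
surrogate = torch.min(unclipped, clipped)  # PPO-style pessimistic bound
# negate: the optimizer minimizes, but the surrogate should be maximized
loss = -(surrogate * mask).sum() / mask.sum().clamp(min=1)
```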
✨ Description