fix: avoid log(0) in KL divergence (Fixes #12233) #13635
Open
Monasri29-hub wants to merge 2 commits into TheAlgorithms:master from
Conversation
Fixes TheAlgorithms#12233
- Filter out zero entries from y_true before computing the logarithm
- Add doctests demonstrating correct behavior with zeros
- Mathematically correct per information theory conventions
[pre-commit.ci] auto fixes from pre-commit hooks; for more information, see https://pre-commit.ci
mindaugl approved these changes on Oct 20, 2025
Description
Fixes #12233
This PR fixes a critical bug in the `kullback_leibler_divergence` function where entries with `y_true = 0` caused the function to return `nan`.

Problem
The current implementation computes `y_true * np.log(y_true / y_pred)` for all entries, including when `y_true = 0`. This results in `0 * log(0) = 0 * (-inf) = nan`, which breaks the function; a minimal reproduction is sketched below.
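A minimal sketch of the failure, assuming the pre-fix behavior of applying the log ratio to every entry; the `y_pred` values here are illustrative, not taken from the PR:

```python
import numpy as np

# Pre-fix style computation: every entry of y_true multiplies the log ratio.
y_true = np.array([0.0, 0.3, 0.7])
y_pred = np.array([0.2, 0.3, 0.5])  # illustrative prediction

with np.errstate(divide="ignore", invalid="ignore"):
    terms = y_true * np.log(y_true / y_pred)  # 0 * log(0) -> 0 * (-inf) -> nan

print(terms)        # the zero entry of y_true produces a nan term
print(terms.sum())  # nan propagates into the final divergence
```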
Solution
Filter out zero entries from `y_true` before computing the logarithm. This is mathematically correct because:
- `0 * log(0)` is taken to be `0` by convention, since `lim(p→0+) p·log(p) = 0`
- SciPy's `rel_entr` and other standard implementations handle this case the same way
A sketch of the masked computation follows this list.
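A minimal sketch of the masked computation; the function name, signature, and `y_pred` values are illustrative assumptions, not the exact patch:

```python
import numpy as np


def kl_divergence(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Illustrative KL divergence that drops zero entries of y_true.

    By convention 0 * log(0) contributes 0, so masked-out terms are omitted.
    """
    mask = y_true != 0  # keep only entries that contribute to the sum
    return float(np.sum(y_true[mask] * np.log(y_true[mask] / y_pred[mask])))


# With the mask in place, a zero entry no longer poisons the result:
print(kl_divergence(np.array([0.0, 0.0, 1.0]), np.array([0.2, 0.3, 0.5])))
# 0.6931...  (= ln 2 for this illustrative y_pred)
```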
Changes Made
- Apply a mask `mask = y_true != 0` before taking the logarithm

Testing
New Test Cases Added:
- `[0.0, 0.3, 0.7]` → returns `0.0237...` ✓
- `[0.0, 0.0, 1.0]` → returns `0.6931...` ✓

All Existing Tests Pass:
Verification
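A hedged verification sketch one could run against the patched function; the import path `machine_learning.loss_functions` is an assumption about the repository layout, and the `y_pred` values are illustrative:

```python
import numpy as np

# Assumed location of the patched function inside TheAlgorithms/Python.
from machine_learning.loss_functions import kullback_leibler_divergence

y_true = np.array([0.0, 0.3, 0.7])
y_pred = np.array([0.2, 0.3, 0.5])  # illustrative prediction

result = kullback_leibler_divergence(y_true, y_pred)
assert np.isfinite(result), "zero entries in y_true should no longer yield nan"
print(result)
```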