Added micro/macro averaging option to MetricsWrapper #54
Conversation
I pushed my adjusted metric now to this draft PR :)
Micro/macro averages don't make sense for Shannon entropy @hzavadil98. I have added some averaging functionality for a batchwise mean or sum of the entropy. As long as the option is passed as an argument, it shouldn't make any difference.
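As a minimal sketch of the batchwise mean/sum option described above (the `aggregation` parameter name is an assumption for illustration, not necessarily the project's actual API):

```python
import torch
import torch.nn.functional as F

# Hedged sketch: per-sample Shannon entropy of the softmax distribution,
# aggregated over the batch by mean or sum. `aggregation` is an assumed name.
def shannon_entropy(logits: torch.Tensor, aggregation: str = "mean") -> torch.Tensor:
    probs = F.softmax(logits, dim=-1)                         # (N, C)
    per_sample = -(probs * torch.log(probs + 1e-12)).sum(-1)  # (N,) entropy per sample
    return per_sample.mean() if aggregation == "mean" else per_sample.sum()

# Uniform logits give the maximal entropy log(C) for every sample:
uniform = torch.zeros(4, 3)  # N=4, C=3
```

Because the aggregation choice only rescales the batch statistic, passing it as an argument leaves the per-sample computation untouched.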
…al-split: Implementing @salomaestro's changes to the downloading process.
Merging the metrics updates with dataloader updates
Yep, I'm working on mine sporadically; will push before long. Could someone clarify what shapes the inputs to the metric should have? I do the one-hot encoding within the metric, but we should maybe decide on this.
Logits are not one-hot encoded. The shape you should expect is N x C, where N is the batch size and C is the number of classes.
That's what one-hot encoding is.
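To make the shape convention concrete, a small sketch of the two representations being discussed (illustrative names only):

```python
import torch
import torch.nn.functional as F

# Agreed convention from the thread: logits have shape (N, C),
# while integer class labels have shape (N,).
N, C = 4, 3
logits = torch.randn(N, C)           # raw network output, not one-hot
y_true = torch.randint(0, C, (N,))   # integer class labels

# One-hot encoding expands the labels into an (N, C) indicator matrix,
# with exactly one 1 per row:
y_onehot = F.one_hot(y_true, num_classes=C)
```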
Okay, good. I've noticed that this ambiguity w.r.t. the shape of y_true and y_pred means that some of us expect different input shapes. So I guess we should specify whether the sizes you are describing are what we want to use, or whether both y_pred and y_true should already have shape (…)
Maybe we're not referring to the same thing. Yeah, we should specify the sizes. I think working with the logits and labels in the shapes N x C and N x 1 is easier, as it allows us to send the input sample through the network and use the raw output directly on the metric instance.
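A hypothetical metric illustrating that point: it accepts raw (N, C) logits and integer labels, so the network's output can be passed straight in without any post-processing (the `Accuracy` class here is illustrative, not the project's code):

```python
import torch

# Hypothetical metric: takes raw (N, C) logits and (N,) integer labels,
# so the forward pass output feeds the metric directly.
class Accuracy:
    def __call__(self, y_true: torch.Tensor, logits: torch.Tensor) -> torch.Tensor:
        y_pred = logits.argmax(dim=-1)           # (N, C) -> (N,)
        return (y_pred == y_true).float().mean()

metric = Accuracy()
logits = torch.tensor([[2.0, 0.1, 0.1],
                       [0.1, 3.0, 0.2]])
y_true = torch.tensor([0, 1])
score = metric(y_true, logits)  # both argmaxes match the labels
```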
I see, I was mixing it up with the softmax output. I think we could go for that 👍🏻
This will hopefully simplify the arguments to each metric slightly.
Note that the precision metric needs renaming: we agreed to use the argument `macro_averaging = True/False` as input, but that one currently has the argument `micro_averaging`.
I just pushed my updated metric + tests to this PR. Note that the test will purposefully fail until the Precision metric is updated (see this commit message @Johanmkr). The tests will likely need to be changed a bit to accommodate the differing input arguments to @Seilmast's entropy metric, but this works for now 😉 See here for the test on the metric wrapper.
I am resolving this now, but I am still confused about the shapes. So the logits will have shape N x C, but should we enforce y_true to have shape N x 1 rather than just N, which it seems to have in the metric-wrapper test @salomaestro? I will try to fix my code in the breaks of the ethics course today.
@Johanmkr as long as your inputs and outputs follow the agreed-upon shape, you may modify the vectors to your liking. For instance, if you want to use y_true with a shape of N, you could squeeze the dimensions using `squeeze`.
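A minimal sketch of that squeeze, assuming labels arrive with shape (N, 1):

```python
import torch

# Labels with a trailing singleton dimension, shape (3, 1):
y_true = torch.tensor([[0], [2], [1]])
# Squeeze the last dimension to get the flat (3,) label vector:
y_flat = y_true.squeeze(-1)
```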
Yes we can discuss during today's meeting, because it seems to me that the shape we all use for y_true in the tests is in fact (N) and not (N,1), but let's have a look later today :) |
@Johanmkr |

In addition to #52, where logits are now passed directly to metrics, I added, as proposed, the micro/macro averaging option to be passed to each metric (except for entropy, where I believe it doesn't make sense, right? @Seilmast). Here is a pretty thorough description of the Micro/Macro issue. This resolves #30 for Johan. Please push your adjusted metrics directly to this draft PR before we merge them.
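To clarify the difference the new option makes, here is a hedged sketch of micro vs. macro averaging for precision (the `macro_averaging` keyword follows the thread's naming convention; the implementation is illustrative, not the project's code):

```python
import torch

# Illustrative sketch: micro pools every decision across the batch,
# macro computes precision per class and then averages the classes equally.
def precision(y_true: torch.Tensor, logits: torch.Tensor,
              macro_averaging: bool = False) -> torch.Tensor:
    y_pred = logits.argmax(dim=-1)  # (N, C) -> (N,)
    if macro_averaging:
        per_class = []
        for c in range(logits.shape[-1]):
            picked = y_pred == c  # samples predicted as class c
            per_class.append((y_true[picked] == c).float().mean()
                             if picked.any() else torch.tensor(0.0))
        return torch.stack(per_class).mean()
    # Micro-averaged precision: for single-label multiclass classification
    # this collapses to plain accuracy.
    return (y_pred == y_true).float().mean()

y_true = torch.tensor([0, 0, 1, 1])
logits = torch.tensor([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])
# predictions: [0, 1, 1, 1] -> micro = 3/4, macro = (1 + 2/3) / 2 = 5/6
```

The gap between the two scores in the example shows why the option matters on imbalanced predictions: macro weights each class equally, micro weights each sample equally.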