
Conversation

@CarinaFo
Contributor

#11705

Implement an Epochs method that allows for channel-specific epoch rejection

@dominikwelke
Contributor

Apart from the main functionality and some tests (TBD), we would probably also want some kind of record that this was done.
drop_log comes to mind, but I think as is it wouldn't be suitable, because no epoch is dropped.

Any good ideas?

@dominikwelke
Contributor

BTW, I don't know about the state of the PR, but for me this code doesn't work so far.
I think it's the overwriting with NaN values using the mask, but the behavior was not really consistent; I have to track this down :)

CarinaFo and others added 2 commits November 17, 2023 17:06
@CarinaFo
Contributor Author

Update: the code works after implementing a preload check. TBD: do we want channel-specific epoch rejection, meaning we kick out epoch outliers based on the channel-specific standard deviation, OR do we calculate the standard deviation over all channels to define outlier epochs?
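For concreteness, the two candidate criteria could look like this in a minimal NumPy sketch (the array shapes, the peak-to-peak statistic, and the 2-SD threshold are illustrative assumptions, not the PR's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.standard_normal((10, 4, 100))  # (n_epochs, n_channels, n_times)

# Peak-to-peak amplitude of each epoch in each channel
ptp = data.max(axis=2) - data.min(axis=2)  # shape: (n_epochs, n_channels)

# Option A: channel-specific threshold (std computed within each channel)
thresh_per_ch = ptp.mean(axis=0) + 2 * ptp.std(axis=0)  # shape: (n_channels,)
bad_per_ch = ptp > thresh_per_ch  # boolean mask, (n_epochs, n_channels)

# Option B: global threshold (std computed over all channels together)
thresh_global = ptp.mean() + 2 * ptp.std()
bad_global = ptp > thresh_global
```

Option A adapts to each channel's own noise level (useful when channels differ in gain or noise), while option B applies one criterion uniformly.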

@dominikwelke
Contributor

Related to one of my previous comments:
Currently the data has to be preloaded so that chosen epochs can be overwritten with NaN. As mentioned, this change is currently not logged anywhere, which we should probably do :).
If we were to extend drop_log in a way that it can handle individual channel/epoch combos, this would enable us to process without preloading, as I think is the case for all other rejection methods.
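As a rough sketch of what such a channel-aware record could look like (the channel-aware structure below is hypothetical; only the tuple-of-tuples shape of the current drop_log is taken from MNE):

```python
# Current convention (roughly): one tuple of reason strings per epoch,
# an empty tuple meaning the epoch was kept.
drop_log = (("AMPLITUDE",), (), ())

# Hypothetical channel-aware variant: record (channel, reason) pairs per
# epoch, so a rejection can be noted without dropping the whole epoch or
# overwriting its data -- which would also remove the need to preload.
drop_log_by_channel = (
    (("EEG 001", "AMPLITUDE"),),  # epoch 0: only EEG 001 rejected
    (),                           # epoch 1: clean
    (),                           # epoch 2: clean
)
```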

But this would be a deeper change, and basically a design decision.
Any thoughts, @drammock @larsoner?

@dominikwelke
Contributor

TBD: do we want channel-specific epoch rejection, meaning we kick out epoch outliers based on the channel-specific standard deviation, OR do we calculate the standard deviation over all channels to define outlier epochs?

+1 for this question!
You are right @carina - the requested functionality in #11705 was std within channels, not across all data.

@drammock
Member

Sorry it took me so long to look at this PR

TBD: do we want channel-specific epoch rejection, meaning we kick out epoch outliers based on the channel-specific standard deviation, OR do we calculate the standard deviation over all channels to define outlier epochs?

This implementation is actually a bit more specific than I was expecting ("specific" in the sense that it bakes in the idea of using standard deviation to decide what to do). I was expecting @larsoner's idea (2) from here, as I tried to indicate in my subsequent comment:

marking as NaN and using nanmean / interpolation seems like the right approach to me.

In other words, I was thinking the implementation should be something like:

  • make it easy to replace channel data with nan on a per-epoch basis. Currently if a channel is bad it is bad for all epochs in the Epochs object, so epochs.interpolate_bads can't do nan-replacement on a per-trial basis. The solution might be to modify interpolate_bads or it might be a new function, not sure.
  • modify epochs.average to use np.nanmean
  • deal with the "effective n_averaged" issue somehow

That way, the user's job is to somehow create the inputs to that new function (or the modified interpolate_bads function) --- I'm imagining a dict of {epoch_idx: list_of_bad_chs} as one possible way --- and they can decide how to populate that (std deviation or something else, within-channel vs. across-channel, etc.).
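A minimal NumPy sketch of that proposed flow (the dict format and the nave_per_channel name come from this discussion, not from an existing API):

```python
import numpy as np

# Hypothetical inputs: epochs data plus a user-built map of bad channels
data = np.random.default_rng(0).standard_normal((5, 3, 8))  # (epochs, chs, times)
ch_names = ["EEG 001", "EEG 002", "EEG 003"]
bads = {0: ["EEG 002"], 3: ["EEG 001", "EEG 003"]}  # {epoch_idx: list_of_bad_chs}

# Step 1: replace bad channel data with NaN on a per-epoch basis
for epo_idx, bad_chs in bads.items():
    idx = [ch_names.index(ch) for ch in bad_chs]
    data[epo_idx, idx, :] = np.nan

# Step 2: average with nanmean so NaN'd epochs are skipped per channel
evoked_data = np.nanmean(data, axis=0)

# Step 3: the "effective n_averaged" now differs per channel
nave_per_channel = (~np.isnan(data[:, :, 0])).sum(axis=0)
```

With 5 epochs and one bad epoch per channel here, each channel ends up averaged over 4 epochs; with real data the counts would generally differ across channels.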

@larsoner was this more or less what you were expecting, or am I way off base here?

@larsoner
Member

larsoner commented Dec 1, 2023

Yes that sounds reasonable to me!

@CarinaFo CarinaFo closed this Feb 7, 2024
@CarinaFo CarinaFo force-pushed the channel_specific_epoch_rejection branch from 7d17f89 to 9f0dfef Compare February 7, 2024 04:27
…o/mne-python into channel_specific_epoch_rejection
mne/epochs.py Outdated
ch_names = [evoked.ch_names[p] for p in picks_idx]
evoked.pick(ch_names)

# Attach per-channel nave for picked channels only (match by ch names)
Contributor

I think this should happen within the evoked.pick method (and probably in the epochs.pick method as well).
That should resolve the current failure of test_drop_bad_epochs, too :)


Contributor Author

Agreed, I will add this logic to the pick method in channels.py.

Contributor Author

I am unsure whether we should add a nave_per_channel attribute to the Evoked object. I think it is misleading to just store the minimum number of epochs over all channels (currently stored in nave). However, nave_per_channel only makes sense if epochs have been rejected on a channel basis, so I am not sure where to add the attribute in the code base. Any ideas/thoughts?

Contributor

@dominikwelke dominikwelke Nov 21, 2025

I think I saw your code already checking if the attribute nave_per_channel is present before accessing it.

  1. One option would be to only have the attribute if channel-specific rejection was indeed applied, but not otherwise.
    It could then even function as the main indicator of whether this was the case.

  2. Another option: just use the existing nave attribute by turning it into an array. In the standard rejection mode, each channel would simply have the same entry.

If choosing 2) we would need to update all functions that currently access the attribute (such as plots).

I just did a git grep "\.nave" to check which functions and methods access the attribute; there are a few (e.g. in dipole, tfr) but not too many.
It's possible that we will have to enable all these functions to work with nave_per_channel anyway; then it wouldn't make a big difference in terms of effort.
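Option 2 could be sketched like this (the names and the min() fallback are illustrative assumptions, not proposed API):

```python
import numpy as np

# Standard rejection (option 2): the scalar nave becomes an array with
# one identical entry per channel, e.g. scalar nave == 30 for 4 channels
nave = np.full(4, 30)

# Channel-specific rejection: entries may now differ per channel
nave = np.array([30, 28, 30, 25])

# Code that still expects a scalar (dipole, tfr, plots, ...) could fall
# back to a conservative summary such as the minimum across channels
nave_scalar = int(nave.min())
```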

Member

yes indeed, we must do some simulation / phantom proof-of-concept before proceeding. Sorry, I should have said that before.

Contributor Author

I agree that simulations before implementation are crucial. I had a look at the code, and as far as I can tell there is only one line in mne/cov.py in whiten_evoked() that needs to be adjusted (see last commit). The Brainstorm Elekta phantom data looks good, so I could run some simulations with that data.

Just one last thing before I proceed: by implementing channel-specific epoch rejection, we do violate the "implicit" assumption of an identical number of valid trials across all EEG/MEG channels in an evoked data object.
Do we agree that it is justified to challenge that assumption and implement channel-specific epoch rejection for EEG/MEG, which results in downstream source reconstruction adjustments?

Alternatively, we could restrict channel-specific epoch rejection to a certain type of data (e.g. ECoG data).

Member

I think it's okay to violate that assumption as long as we say in the docstring that we do.

there is only one line in mne/cov.py in whiten_evoked() that needs to be adjusted (see last commit)

If the only public endpoint is just for evoked.plot(..., noise_cov=noise_cov) and it has no effect (other than a global, non-channel-specific nave being changed) on make_inverse_operator or apply_inverse_operator, then I don't think we need any simulations. I assumed this would affect any covariance-based whitening we did (like in compute_whitener and associated private functions), but if that's not the case then I think the situation is much less "risky", as we would only modify viz functions under the hood, not source estimation...

Contributor Author

Sorry, that was my bad (I thought the scaled covariance would be used in source reconstruction directly), BUT nave is also heavily used in prepare_inverse_operator.

I am currently trying to make sense of lines 650ff:

# Scale some of the stuff
scale = float(inv["nave"]) / nave
inv["noise_cov"]["data"] = scale * inv["noise_cov"]["data"]

# Deal with diagonal case
if inv["noise_cov"]["data"].ndim == 1:
    logger.info("    Diagonal noise covariance found")
    inv["noise_cov"]["eig"] = inv["noise_cov"]["data"]
    inv["noise_cov"]["eigvec"] = np.eye(len(inv["noise_cov"]["data"]))

inv["noise_cov"]["eig"] = scale * inv["noise_cov"]["eig"]
inv["source_cov"]["data"] = scale * inv["source_cov"]["data"]

if inv["eigen_leads_weighted"]:
    inv["eigen_leads"]["data"] = sqrt(scale) * inv["eigen_leads"]["data"]

logger.info(
    "    Scaled noise and source covariance from nave = %d to nave = %d",
    inv["nave"],
    nave,
)
inv["nave"] = nave
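For context on why that rescaling uses the ratio of the two nave values: under the usual assumption of independent noise across trials, the noise covariance of an average of nave epochs scales as 1/nave. A quick simulation (illustrative only, not MNE code) confirms this:

```python
import numpy as np

rng = np.random.default_rng(1)

def var_of_average(nave, n_trials=20000):
    """Empirical variance of the mean of `nave` unit-variance noise samples."""
    noise = rng.standard_normal((n_trials, nave))
    return noise.mean(axis=1).var()

v10 = var_of_average(10)   # approximately 1/10
v40 = var_of_average(40)   # approximately 1/40

# Going from nave=10 to nave=40 therefore multiplies the noise
# covariance by scale = 10 / 40, matching scale = inv["nave"] / nave
scale = 10 / 40
```

This is exactly the part that becomes unclear with a channel-wise nave: the scalar ratio would turn into a per-channel vector, and the scaled covariance would no longer factor out cleanly.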

Member

This... I think we can't directly use anything channel-specific without working out the math. I was thinking we could do it during the computation of the whitener, but I forgot that's done before any interaction with nave, at least at this stage of the inv code I guess.

I'm not sure there's going to be an easy way to incorporate this. I don't think we can just multiply / divide by the channel-specific values even though the shapes match; again, we would have to work out the math somehow.

mne/epochs.py Outdated
self._check_consistency()
self.set_annotations(annotations, on_missing="ignore")

def drop_bad_epochs(self, reject_mask=None):
Contributor

We could add an inplace argument (either modify self -or- return a modified copy).
@larsoner - is this desirable from an API perspective?


assert len(ev.ch_names) == len(ch_subset)
assert len(ev.nave_per_channel) == len(ch_subset)

Contributor

Perhaps add something like:
assert ev == ep.average(picks=ch_subset)

Contributor Author

Not sure we have to test that here (I think this is covered in the test for averaging epochs).

mne/epochs.py Outdated
self._check_consistency()
self.set_annotations(annotations, on_missing="ignore")

def drop_bad_epochs(self, reject_mask=None):
Contributor

...and a naming comment:
To me, the method's name drop_bad_epochs sounds like it would do what drop_bad does: rejecting entire epochs.
Maybe we should look for a more telling name? drop_bad_epochs_by_channel / reject_bad_epochs_by_channel or so?

Contributor Author

Agreed. I am trying to come up with a short name that specifies that we reject channel-wise. How about drop_bad_epochs_per_channel()? Pretty long, though.

Contributor

@dominikwelke dominikwelke Nov 21, 2025

Perhaps it should be based on the current standard method drop_bad?
That would make it drop_bad_per_channel or drop_bad_channelwise.

Contributor Author

I vote for drop_bad_channelwise
