Problem with the feature explainability methods #35
So for the first method, the spatial filters are extracted from the trained model:

```python
model = EEGNet(...)  # define some EEGNet configuration
model.fit(...)       # fit the model
```

You can use `model.layers` to inspect the model structure. You'll see that the DepthwiseConv2D layer is the spatial filter layer; its kernel weights are the spatial filters.
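As a rough sketch of what extracting and reshaping those weights could look like (shapes and names here are illustrative stand-ins, not the repository's exact code; in Keras the depthwise kernel has shape `(Chans, 1, F1, D)`):

```python
import numpy as np

# Illustrative shapes only: for EEGNet-8,2 the spatial (DepthwiseConv2D)
# kernel has shape (Chans, 1, F1, D), i.e. F1 * D = 16 spatial filters
# defined over the EEG channels.
Chans, F1, D = 64, 8, 2
rng = np.random.default_rng(0)
w = rng.standard_normal((Chans, 1, F1, D))  # stand-in for layer.get_weights()[0]

# One (Chans,)-length spatial filter per (temporal filter, depth) pair:
spatial_filters = w.reshape(Chans, F1 * D).T
print(spatial_filters.shape)  # (16, 64)
```

Each row of `spatial_filters` can then be paired with the EEG channel locations for topographic plotting.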
This gets you the spatial filter weights, which we then use together with the EEG channel locations to plot a topoplot; this is what we show in Fig 6A in the paper. The spatial filters are not defined for a single time point; rather, they are trained using all the data, so you learn just one filter for all time points. The number of spatial filters you learn depends on the EEGNet model configuration you train: EEGNet-8,2 specifically learns 2 spatial filters for each of 8 temporal filters, for a total of 16 spatial filters.

For the second method, the convolutional kernel filter weights (Fig 7, top row) are from the first Conv2D layer, which represents the temporal filter layer.
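Extracting those temporal kernels could be sketched the same way (again, shapes are illustrative assumptions; the Keras Conv2D kernel is stored as `(1, kernLength, 1, F1)` for this layer):

```python
import numpy as np

# Illustrative shapes only: the first Conv2D (temporal filter) layer in
# EEGNet-8,2 holds F1 = 8 filters of length kernLength (e.g. 64 samples).
kernLength, F1 = 64, 8
rng = np.random.default_rng(0)
k = rng.standard_normal((1, kernLength, 1, F1))  # stand-in for conv.get_weights()[0]

# One length-kernLength temporal filter per row after transposing:
temporal_filters = k.reshape(kernLength, F1).T
print(temporal_filters.shape)  # (8, 64)
```

Each row of `temporal_filters` can be plotted as a time series, as in the top row of Fig 7.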
The middle and bottom rows are the spatial filter weights, extracted using the method described above. Figure 8 shows spatial filters from two different methods, Filter-Bank CSP (https://www.frontiersin.org/articles/10.3389/fnins.2012.00039/full) and EEGNet.

Hope this helps.
Hi,
I have got DeepLIFT to work and understood the method, though I have not managed to implement the two other methods mentioned in [1].
For the first method, summarizing averaged outputs of hidden unit activations:
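In case it helps to fix ideas, here is a minimal numpy sketch of one way such a summary could be computed (everything here is a hypothetical stand-in: the activations would in practice come from an intermediate Keras sub-model, and the shapes and class count are made up):

```python
import numpy as np

# Hypothetical hidden-unit activations: (n_trials, n_units), e.g. obtained
# from a sub-model truncated at the layer of interest.
rng = np.random.default_rng(0)
acts = rng.standard_normal((100, 16))
labels = rng.integers(0, 4, size=100)  # hypothetical 4-class task

# Average the unit activations within each class to summarize them:
class_means = np.stack([acts[labels == c].mean(axis=0) for c in range(4)])
print(class_means.shape)  # (4, 16)
```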
For the second method, visualizing the convolutional kernel weights:
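For the kernel-weight visualization, one small preprocessing step that is sometimes useful is rescaling each filter for side-by-side display; a sketch under assumed shapes (the plotting itself is omitted, and `k` is a stand-in for the extracted Conv2D kernels):

```python
import numpy as np

# Illustrative: scale each temporal kernel to [0, 1] so filters with very
# different amplitudes can be compared visually.
F1, kernLength = 8, 64
rng = np.random.default_rng(1)
k = rng.standard_normal((F1, kernLength))  # stand-in for extracted kernels

k_min = k.min(axis=1, keepdims=True)
k_max = k.max(axis=1, keepdims=True)
k_scaled = (k - k_min) / (k_max - k_min)  # each row now spans [0, 1]
```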