y an active p
"active" is an unnecessary word
As already done for component (ii), we strive to make the other two components publicly available to the BCI community in the near future as well.
well not really for (iii) right?
Decoding Method
Font extremely small below barplots (xticklabels)
two-sided sign test,
Why a t-test now? And why not signed-rank or sign test, if a permutation test is too expensive?
This was done by randomly assigning the true classification labels to the trials in the test data and calculating the resulting accuracy of this randomly created classification.
I would rewrite: This was done by randomly permuting the assignment of test labels to test trials and calculating the resulting accuracy on these randomly permuted labels. [.. as before]
To test for significance, a random permutation test was used
So now you do after all? great! :D :) happy :)
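The permutation test under discussion could be sketched as follows, following the suggested rewrite (permuting the assignment of test labels to test trials). This is a minimal illustration, not the paper's actual code; `y_true`, `y_pred`, and the permutation count are assumptions:

```python
import numpy as np

def permutation_p_value(y_true, y_pred, n_permutations=10000, seed=0):
    """Estimate a p-value for the observed accuracy by randomly
    permuting the assignment of test labels to test trials."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    observed = np.mean(y_true == y_pred)
    # Accuracies obtained when the label-to-trial assignment is random
    null_accs = np.array([
        np.mean(rng.permutation(y_true) == y_pred)
        for _ in range(n_permutations)
    ])
    # Fraction of random assignments at least as accurate as observed
    # (+1 correction so the p-value is never exactly zero)
    return (1 + np.sum(null_accs >= observed)) / (1 + n_permutations)
```

With perfectly matching labels the observed accuracy is 1.0 and essentially no random permutation reaches it, so the p-value collapses to roughly 1/(n_permutations + 1).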
Models were trained for a maximum of 800 iterations, stopping early if accuracy did not increase for 80 iterations in a row. For a detailed description of this early stopping training method see [6]
This is confusing — are you using my method now? So early stopping plus retraining on the validation set? Then the description is incomplete. And do you also do this for EEGNet? I thought you optimize EEGNet the way the original EEGNet paper did?
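For reference, the rule as literally quoted (at most 800 iterations, stop when accuracy has not increased for 80 iterations in a row) could look like the sketch below. `train_step` is a hypothetical stand-in for one training iteration returning an accuracy; any retraining on the validation set that the comment asks about is deliberately not included:

```python
def train_with_early_stopping(train_step, max_iters=800, patience=80):
    """Call train_step(i) until max_iters is reached, or until the
    returned accuracy has not increased for `patience` iterations
    in a row. Returns the best accuracy and iterations run."""
    best_acc = float("-inf")
    iters_without_improvement = 0
    for i in range(max_iters):
        acc = train_step(i)
        if acc > best_acc:
            best_acc = acc
            iters_without_improvement = 0
        else:
            iters_without_improvement += 1
            if iters_without_improvement >= patience:
                break
    return best_acc, i + 1
```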
whereas the other three networks use max pooling
? The shallow network also uses mean pooling, doesn't it?
Hence it is more similar to the Braindecode networks than to the original EEGNet approach.
.... I wouldn't write it like that... if anything, only the beginning is more similar. I'd maybe write: "Hence the initial layers are now more similar to the Braindecode networks compared to the original EEGNet". Otherwise I find the sentence unnecessarily strong, and I also think it isn't true — EEGNet (v2) is still different...
20
So what about this now? Tonio's opinion on the 20?
overestimated
should be overestimate
260
250 or 260?
, it shows the two phase means to be in counterphase in windows starting
?!
plot_channel_avg(X_baseline,36,'Baseline input windows')
The baseline looks confusing.
plot_channel_avg(X_RF_cropped,48,'Maximizing input windows')
Beautiful, beautiful, beautiful
plot_channel_avg(X_RF_cropped,48,'Maximizing input windows')
Beautiful, right? :)
plot_channels(RF_data_03,sensor_names)
Better to write the channel names vertically on the x-axis, and reduce the font size if they overlap. The numbers don't help much there.
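One way to implement the suggested fix (vertical channel names on the x-axis, smaller font so they don't overlap) in Matplotlib. Function and argument names here are illustrative, not the notebook's actual `plot_channels`:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe for scripts
import matplotlib.pyplot as plt

def plot_channel_bars(values, channel_names, fontsize=6):
    """Bar plot with channel names written vertically on the x-axis;
    a small fontsize keeps many labels from overlapping."""
    fig, ax = plt.subplots()
    ax.bar(range(len(values)), values)
    ax.set_xticks(range(len(values)))
    # rotation=90 turns the labels vertical
    ax.set_xticklabels(channel_names, rotation=90, fontsize=fontsize)
    fig.tight_layout()
    return fig, ax
```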
Mean activation for each filter in the layer over all units and inputs of a specific class minus the mean activation for the remaining classes.
OK, clear, but which class?
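A sketch of how the quoted quantity could be computed, with the class made explicit via a `target_class` argument. The array layout is an assumption, not necessarily what the notebook uses:

```python
import numpy as np

def filter_class_contrast(activations, labels, target_class):
    """For each filter: mean activation over all units and all inputs
    of `target_class`, minus the mean activation over all inputs of
    the remaining classes.

    activations: array of shape (n_inputs, n_filters, n_units)
    labels:      array of shape (n_inputs,)
    """
    activations = np.asarray(activations, dtype=float)
    labels = np.asarray(labels)
    in_class = labels == target_class
    mean_class = activations[in_class].mean(axis=(0, 2))
    mean_rest = activations[~in_class].mean(axis=(0, 2))
    return mean_class - mean_rest
```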
1720 trials
? Could it be that you are counting the trials twice or something? :) I.e., that this is rather the number of crops? Trials should be <1200, even including test; ~860 for train. So 1720/2 could be right.
subplots_4_features(features_class,features_base,sort_mean_diff[:4])
Hm, shouldn't the distribution over phases be uniform for the baseline/randomly sampled windows? This is strange.
subplots_4_features(features_class,features_base,sort_mean_diff[:4])
very nice plots
Features that are most distinct between the maximizing input windows and randomly sampled windows of the same size
clear, good
Average input signal
I don't understand: average of what? The 30 inputs leading to maximum activation? Maybe rewrite.
How often specific channels are contributing the highest activating input
A bit vague/unclear. I don't fully understand: what does "contribute" mean, and what does "how often" mean exactly?
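One possible precise reading of "how often" and "contributing", offered as an assumption rather than the authors' actual definition: for each maximizing input window, take the channel with the largest absolute signal, and count how many windows each channel wins:

```python
import numpy as np

def channel_contribution_counts(inputs, n_channels):
    """Count, per channel, how many input windows have their largest
    absolute value in that channel.

    inputs: array of shape (n_windows, n_channels, n_samples)
    """
    # Per window: peak |signal| per channel, then the winning channel
    winners = np.abs(np.asarray(inputs, dtype=float)).max(axis=2).argmax(axis=1)
    return np.bincount(winners, minlength=n_channels)
```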
[I,F,U] -> [F]
please write this out in full :)
The goal is to be able to investigate which features were learned by the different filters.
I always prefer goal to be named before procedure/method. Easier to follow then :)
From those
"From those inputs." Otherwise unclear/hard to read.
Maybe also rewrite the whole thing: "Procedure for finding discriminative features:"
Also, I don't quite understand this with the classes: are now all inputs from the same class, or just all within the trials?
This notebook shows for each Convolutional Layer from a ConvNet trained to classify EEG data into classes of 4 different movements (right hand, left hand, idle, right foot) the on average most active filter.
Difficult to read.
also "most active" is a bit vague, I guess you mean highest mean activation? Alternative: This notebook shows for each layer of a ConvNet: Filter with highest mean activation. ConvNet was trained to classify EEG data into 4 classes of different movements (right hand, left hand, rest, right foot).
-> "idle" is actually good, but confusing because I write "rest" everywhere
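The "highest mean activation" interpretation suggested in the comment above could be pinned down with a small sketch (the array layout is assumed):

```python
import numpy as np

def most_active_filter(layer_activations):
    """Index of the filter with the highest mean activation, averaged
    over all inputs and all units of one convolutional layer.

    layer_activations: array of shape (n_inputs, n_filters, n_units)
    """
    acts = np.asarray(layer_activations, dtype=float)
    # Mean over inputs (axis 0) and units (axis 2) leaves one value per filter
    return int(acts.mean(axis=(0, 2)).argmax())
```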
c = g( Σ_{k=1}^{K} F( x + Σ_{…≠…} x + Σ_{i=1}^{N} …_i v_i ) F( x + … ) )
Wrong sign after F? I.e., a plus instead of a minus in the second F?
liner
-> linear
73.0
Expected this to be lower, not better than square
59.7
Expected this to be higher
add new references
Andrew (whatever his name was), Jennifer Collinger, or whoever... can also look it up in the presentation PDF
broken
Not broken anymore! We will make it!
some test note
A test note, yeah