Number of epochs / time length used for calibration of training and testing data #10
Replies: 4 comments
-
Correct! "Calibration" here means estimating the feature means and variances for each brain state, and using those values to standardize the feature scores. Training sets are fully labeled, so we can use all epochs for this calculation. New data sets are typically not fully labeled (otherwise there would be no need for automated scoring), so we just use a small number of labeled epochs provided by the user.
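To make the calibration step above concrete, here is a minimal sketch in Python (the function names, the NumPy array representation, and the mixture-standardization details are my own assumptions for illustration; the project's actual code may differ):

```python
import numpy as np

def calibration_stats(features, labels):
    # Per-state mean and variance of each feature, estimated from
    # labeled epochs. features: (n_epochs, n_features); labels: (n_epochs,)
    stats = {}
    for state in np.unique(labels):
        x = features[labels == state]
        stats[state] = (x.mean(axis=0), x.var(axis=0))
    return stats

def standardize(features, stats, weights):
    # Pool the per-state statistics into a single mixture mean/variance,
    # weighted by the assumed class prevalences, then z-score every epoch.
    # weights: mapping from state label -> assumed prevalence (sums to 1)
    states = sorted(stats)
    w = np.asarray([weights[s] for s in states])
    mus = np.stack([stats[s][0] for s in states])
    variances = np.stack([stats[s][1] for s in states])
    mix_mu = (w[:, None] * mus).sum(axis=0)
    # Mixture variance: E[X^2] - E[X]^2 over the weighted mixture
    mix_var = (w[:, None] * (variances + mus**2)).sum(axis=0) - mix_mu**2
    return (features - mix_mu) / np.sqrt(mix_var)
```

The asymmetry discussed in this thread then amounts to which epochs feed `calibration_stats`: for a training recording, all labeled epochs; for a new recording, only the small user-labeled subset.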
-
Thanks for clarifying. I have an additional question about the fixed weights used in the code. Is the setting weights = [.1 .35 .55] close to the class prevalence of your training dataset (what the paper calls the "reference dataset")? I am curious how much this parameter affects the generalizability of test performance. I was trying to apply this to video data, so I would like to know more about this. Thanks.
-
Good question, those are roughly the class prevalences in the training set. We don't believe that the parameter setting matters very much. In our very limited testing, changing these values only produced tiny changes in the model's behavior, but we didn't examine this systematically.
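One plausible way such fixed prevalence weights can enter a scoring pipeline is as a class prior that rescales a classifier's per-epoch probabilities. This is a hypothetical illustration of that idea, not the project's actual code path:

```python
import numpy as np

def apply_prevalence_prior(probs, weights=(0.1, 0.35, 0.55)):
    # probs: (n_epochs, n_classes) raw classifier outputs.
    # Multiply by the assumed class prevalences, then renormalize each row
    # so the adjusted values are again valid probabilities.
    adjusted = probs * np.asarray(weights)
    return adjusted / adjusted.sum(axis=1, keepdims=True)
```

Under this interpretation, small perturbations of the weights shift class probabilities smoothly, which is consistent with the observation above that modest changes to the values barely alter the model's behavior.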
-
Thanks for the prompt response.
-
In the code, it seems that the number of epochs (time length) used to calibrate the training and the testing data differs. Is that correct? For training data, all the labeled epochs in a recording are used, while only a limited number of epochs in a testing recording are used for calibration.