Public models have more test benchmarks than non-public models #2322
Description
My recent research involves testing the impact of a specific biologically-inspired processing constraint on the brain-like properties of a model. I think BrainScore is a brilliant tool for this, so I used the framework to compare my models' representations against macaque neural data. Because each experimental condition involves several models trained with different random seeds, I initially made only one of these models public.
I noticed today that BrainScore has added a number of new benchmarks (for instance, the IT benchmark count has increased from 6 to 13), but these tests appear to have been run only on the public models. Do you plan to run the new tests on the non-public models in the future, and if so, roughly how long might that take? Alternatively, if you do not intend to run them automatically, would these additional tests be performed if I changed my currently non-public models to public? 🤔 I expect that the updated benchmark set will change my results somewhat. 😵💫
I would also like to conduct a fine-grained analysis of how features change in each layer during model training. My understanding of the BrainScore project is as follows: if a specific layer is specified, only the alignment results for that layer are returned; if no layer is specified, the system first scores all candidate layers (via get_layer()) on certain public benchmarks, then selects the best-scoring layer to evaluate on the remaining private benchmarks. However, running this per-layer analysis on your servers might place too much of a burden on your resources, so I intend to run all public benchmarks locally instead. Is there currently a simple and convenient way within the project to download the public benchmarks in bulk? Previously, I could only download them one by one as described in the tutorial, and then wait to find out whether each one failed due to network issues or private data.
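To make sure I understand the layer-commitment procedure correctly, here is a minimal sketch of what I believe happens. All names here (`select_and_score`, `public_scores`, `private_scorer`) are illustrative placeholders, not the actual BrainScore API:

```python
def select_and_score(public_scores, private_scorer):
    """Sketch of the two-stage layer-commitment procedure (assumed behavior).

    public_scores: dict mapping layer name -> list of scores on public benchmarks.
    private_scorer: callable taking a layer name and returning its score on the
                    remaining private benchmarks.
    """
    # Stage 1: pick the layer with the best mean score across public benchmarks.
    best_layer = max(
        public_scores,
        key=lambda layer: sum(public_scores[layer]) / len(public_scores[layer]),
    )
    # Stage 2: evaluate only the committed layer on the private benchmarks.
    return best_layer, private_scorer(best_layer)


# Illustrative usage with made-up scores:
scores = {"conv1": [0.20, 0.30], "conv5": [0.50, 0.60]}
best, private_score = select_and_score(scores, lambda layer: 0.7 if layer == "conv5" else 0.1)
```

If this matches the actual behavior, then for my per-layer analysis I would need to bypass stage 1 and score every layer on every benchmark, which is why I want to run the public benchmarks locally.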
Thank you very much 🙏