VGG16 scoring model taking up > 500 GB of RAM #1305
Hi,
I was trying to score a VGG16 model on our own cluster, i.e. by running a local instance.
When I use the following high-level layer names
['avgpool', 'features', 'classifier']
the RAM consumption seems to be OK and scoring works fine.
However, when I switch to the detailed layer names
['features.1', 'features.2', 'features.3', 'features.4', 'features.5', 'features.6', 'features.7', 'features.8', 'features.9', 'features.10', 'features.11', 'features.12', 'features.13', 'features.14', 'features.15', 'features.16', 'features.17', 'features.18', 'features.19', 'features.20', 'features.21', 'features.22', 'features.23', 'features.24', 'features.25', 'features.26', 'features.27', 'features.28', 'features.29', 'features.30', 'classifier.0', 'classifier.1', 'classifier.2', 'classifier.3', 'classifier.4', 'classifier.5', 'classifier.6']
it consumes > 500 GB of RAM and runs out of memory (OOM).
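For scale, a back-of-the-envelope estimate shows why storing every layer's activations explodes: summing the output sizes of all 31 `features` submodules of the standard VGG16 "D" configuration at 224x224 input gives roughly 28.6M floats per image, and multiplying by thousands of stimuli quickly reaches hundreds of GB. This is a rough sketch; the stimulus count below is a placeholder, not the exact size of any benchmark:

```python
# Standard VGG16 "D" feature config: channel counts, 'M' = 2x2 max pool.
cfg = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
       512, 512, 512, 'M', 512, 512, 512, 'M']

def vgg16_activation_elements(side=224):
    """Output element count for each of the 31 'features' submodules
    (each 3x3 conv with padding 1 keeps spatial dims; its ReLU output
    has the same shape; each pool halves the spatial dims)."""
    sizes = []
    channels = 0
    for c in cfg:
        if c == 'M':
            side //= 2
            sizes.append(channels * side * side)   # pool output
        else:
            channels = c
            sizes.append(channels * side * side)   # conv output
            sizes.append(channels * side * side)   # ReLU output
    return sizes

per_image = sum(vgg16_activation_elements())
n_stimuli = 3000  # placeholder order of magnitude, NOT the exact benchmark size
bytes_total = per_image * n_stimuli * 4  # float32
print(f"{per_image:,} floats per image across all feature layers")
print(f"~{bytes_total / 1e9:.0f} GB to hold them for {n_stimuli} stimuli")
```

So even before the classifier layers, keeping all intermediate activations in memory at float32 is already in the hundreds-of-GB range, consistent with the OOM you are seeing.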
The other issue: with the high-level layers, my scores are
"imagenet_trained": {
"V4": "0.3685106454430227",
"IT": "0.5185169380743393",
"V1": "0.09256884647589658",
"V2": "0.2600441204932774"
}
In V1 the score without training is higher than the ImageNet-trained score, which is a weird effect since
the weights are random. I know a random-weight model could sometimes match due to
a statistical artefact, but this occurs in two separate iterations:
"no_training": {
"V4": "0.3413502290434312",
"IT": "0.2947047868783302",
"V1": "0.2026004427555423",
"V2": "0.1448800686541028"
}
"no_training_2": {
"V4": "0.33954039465787034",
"IT": "0.29491768114613165",
"V1": "0.1974565275931902",
"V2": "0.15089219267469867"
}
I am using the following public benchmarks to score my model:
benchmark_identifiers = ['MajajHong2015public.V4-pls', 'MajajHong2015public.IT-pls',
'FreemanZiemba2013public.V1-pls', 'FreemanZiemba2013public.V2-pls']
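As a possible workaround for the OOM, one could score the detailed layers in small chunks so that only a few layers' activations are held at once. The chunking helper below is plain Python; how each chunk is passed to the scoring call (e.g. one model commitment per chunk) depends on your Brain-Score setup and is only an assumption here:

```python
# The 37 detailed layer names from the report above.
layer_names = [f'features.{i}' for i in range(1, 31)] + \
              [f'classifier.{i}' for i in range(7)]

def chunks(seq, size):
    """Yield successive fixed-size slices of seq."""
    for start in range(0, len(seq), size):
        yield seq[start:start + size]

for chunk in chunks(layer_names, 4):
    # Hypothetical usage: build one scoring run per chunk and free its
    # activations before moving on, keeping peak RAM bounded.
    print(chunk)
```

Scores per chunk can then be collected and compared afterwards, trading wall-clock time (repeated forward passes) for memory.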
Any help would be greatly appreciated.
Best regards,
Shreya