
Different results for different batch sizes when evaluating trained models #176

@AxelMueller

Description


Hi,
First of all, thanks for making your great code and models available.
I am currently trying out two of your models (MP-CNN and VDPWI) and noticed that when evaluating trained models (via --skip-training), different batch sizes give different results.
For example,

python -m mp_cnn ../Castor-models/mp_cnn/mpcnn.sick.model --dataset sick --batch-size 16 --skip-training

returns different results than

python -m mp_cnn ../Castor-models/mp_cnn/mpcnn.sick.model --dataset sick --batch-size 64 --skip-training

Have you encountered this behavior before, and do you know what the reason might be? Which result is the correct one?
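For what it's worth, one common (hypothetical, not confirmed for this repo) cause of batch-size-dependent evaluation numbers is averaging per-batch metric means without weighting by batch size: when the dataset size is not divisible by the batch size, the smaller final batch gets over-weighted, and the amount of over-weighting changes with the batch size. The sketch below, using made-up per-example losses, shows how the unweighted scheme varies with batch size while the example-weighted scheme does not:

```python
# Hypothetical sketch (not Castor's actual evaluation code): comparing an
# unweighted mean of per-batch means against an example-weighted mean.

def eval_unweighted(losses, batch_size):
    # Mean of per-batch means: over-weights a smaller final batch,
    # so the result can depend on the batch size.
    batch_means = []
    for i in range(0, len(losses), batch_size):
        batch = losses[i:i + batch_size]
        batch_means.append(sum(batch) / len(batch))
    return sum(batch_means) / len(batch_means)

def eval_weighted(losses, batch_size):
    # Total loss divided by dataset size: invariant to batch size.
    total = 0.0
    for i in range(0, len(losses), batch_size):
        total += sum(losses[i:i + batch_size])
    return total / len(losses)

# 101 made-up per-example losses: not divisible by 16 or 64,
# with an outlier in the final (partial) batch.
losses = [0.1 * i for i in range(100)] + [50.0]

print(eval_unweighted(losses, 16))  # differs from the batch-size-64 value
print(eval_unweighted(losses, 64))
print(eval_weighted(losses, 16))    # same for any batch size
print(eval_weighted(losses, 64))
```

Other usual suspects are padding-length effects on the model itself (e.g. pooling over padded positions) and any batch-statistics layers left in training mode, but the averaging issue above is the easiest one to rule out first.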
