Implementation of MAML and Prototypical Networks using torchmeta (additional methods will be added soon).
First, install the torchmeta package:
pip install torchmeta
or
git clone https://github.com/tristandeleu/pytorch-meta.git
cd pytorch-meta
python setup.py install
Next, download the datasets (links referenced from https://github.com/renmengye/few-shot-ssl-public):
miniImageNet: [Google Drive Link]
tieredImageNet: [Google Drive Link]
CIFAR_FS: [Google Drive Link] (or available for download via code)
CUB: [Google Drive Link]
- In the `datasets.py` file located in the `utils` folder, you will find the import statements for datasets such as miniImagenet and tieredImagenet. In the imported file, you need to change `torchmeta.datasets.utils` → `torchvision.datasets.utils` because of the integrity check.
- Make sure that `download` is set to `True` in `datasets.py` (`CIFAR_FS` is available). → Due to a change in Google Drive's policy, the datasets must now be downloaded manually. Place them in the dataset paths shown below.
- `DATA_PATH`: your own datasets folder path
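The import change described above would look roughly like this (a sketch; the exact names imported from `torchmeta.datasets.utils` may differ in your version of the code):

```diff
-from torchmeta.datasets.utils import download_file_from_google_drive
+from torchvision.datasets.utils import download_file_from_google_drive
```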
DATA_PATH
└─cub
| CUB_200_2011.tgz
└─cifar100
| cifar-fs
| data.hdf5
| fine_names.json
└─miniimagenet
| mini-imagenet.tar.gz
└─tieredimagenet
| tiered-imagenet.tar
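Once the archives are in place, a quick sanity check like the following can confirm the layout (a hypothetical helper, not part of the repo; it assumes the tree above, and in particular that the CIFAR files live in a `cifar-fs` subfolder of `cifar100`):

```python
import os

def check_datasets(data_path, expected=None):
    """Return the expected dataset files that are missing under data_path."""
    if expected is None:
        # Paths assumed from the directory tree above.
        expected = [
            os.path.join("cub", "CUB_200_2011.tgz"),
            os.path.join("cifar100", "cifar-fs", "data.hdf5"),
            os.path.join("cifar100", "cifar-fs", "fine_names.json"),
            os.path.join("miniimagenet", "mini-imagenet.tar.gz"),
            os.path.join("tieredimagenet", "tiered-imagenet.tar"),
        ]
    return [rel for rel in expected
            if not os.path.exists(os.path.join(data_path, rel))]
```

Running `check_datasets(DATA_PATH)` should return an empty list when everything has been moved into place.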
python train_maml.py --datasets [DATASETS] --epoch 60000 --num_shots 5 --batch_size 2
python train_proto.py --datasets [DATASETS] --num_ways_proto 20 --num_shots 5 --epoch 200 --batch_size 100
python eval_meta.py --[OPTIONS]
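To make the inner/outer loop behind `train_maml.py` concrete, here is a minimal first-order MAML sketch on 1-D linear regression (pure NumPy, illustration only; the repo's actual training uses torchmeta/PyTorch models, and all names below are hypothetical):

```python
import numpy as np

def loss_grad(w, x, y):
    """Gradient of the mean squared error for the model y_hat = w * x."""
    return 2.0 * np.mean((w * x - y) * x)

def fomaml_step(w, tasks, update_lr=1e-2, meta_lr=1e-3, update_step=5):
    """One first-order meta-update over a batch of (support, query) tasks."""
    meta_grad = 0.0
    for (xs, ys), (xq, yq) in tasks:
        w_adapt = w
        for _ in range(update_step):              # inner loop on the support set
            w_adapt -= update_lr * loss_grad(w_adapt, xs, ys)
        meta_grad += loss_grad(w_adapt, xq, yq)   # outer gradient at adapted params
    return w - meta_lr * meta_grad / len(tasks)
```

Each call adapts a copy of the initialization per task on the support set, then moves the shared initialization using the query-set gradients, which mirrors the roles of `--update_lr`/`--update_step` (inner loop) and `--meta_lr` (outer loop) above.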
option arguments:
--epoch: epoch number (default: 60000)
--num_ways: N-way (default: 5)
--num_ways_proto: N-way for Proto-Net (default: 30)
--num_shots: k shots for support set (default: 5)
--num_shots_test: number of query examples per class (default: 15)
--imgc: number of image channels (RGB) (default: 3)
--filter_size: size of convolution filters (default: 64)
--batch_size: meta-batch size (default: 2)
--max_test_task: number of tasks for evaluation (default: 1000)
--meta_lr: outer-loop learning rate (default: 1e-3)
--update_lr: inner-loop learning rate (default: 1e-2)
--update_step: number of inner-loop update steps while training (default: 5)
--update_test_step: number of inner-loop update steps while evaluating (default: 10)
--update: update method: MAML, ANIL, BOIL (default: MAML)
--scale_factor: Scaling factor for the cosine classifier (default: 10)
--dropout: dropout probability (default: 0.2)
--gpu_id: gpu device number (default: 0)
--model: model architecture: Conv-4, ResNet12 (default: conv4)
--datasets: datasets: miniimagenet, tieredimagenet, cifar-fs, CUB (default: miniimagenet)
--version: file version (default: 0)
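The option list above suggests an argparse setup along these lines (a sketch mirroring the README's names and defaults; the actual parser in the repo may differ):

```python
import argparse

def get_parser():
    # Defaults copied from the option list; the choices lists are assumptions.
    p = argparse.ArgumentParser(description="Meta-learning options (sketch)")
    p.add_argument("--epoch", type=int, default=60000, help="epoch number")
    p.add_argument("--num_ways", type=int, default=5, help="N-way")
    p.add_argument("--num_ways_proto", type=int, default=30, help="N-way for Proto-Net")
    p.add_argument("--num_shots", type=int, default=5, help="k shots for the support set")
    p.add_argument("--num_shots_test", type=int, default=15, help="queries per class")
    p.add_argument("--batch_size", type=int, default=2, help="meta-batch size")
    p.add_argument("--meta_lr", type=float, default=1e-3, help="outer-loop learning rate")
    p.add_argument("--update_lr", type=float, default=1e-2, help="inner-loop learning rate")
    p.add_argument("--update", default="MAML", choices=["MAML", "ANIL", "BOIL"])
    p.add_argument("--datasets", default="miniimagenet")
    return p
```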
| Datasets | | 5 ways - 1 shot | 5 ways - 5 shot |
|---|---|---|---|
| mini-ImageNet (MAML) | Original | 48.70 | 63.11 |
| | Ours | 48.79 | 62.43 |
| tiered-ImageNet (TPN) | Original | 52.54 | 70.97 |
| | Ours | 50.01 | 65.58 |
| CIFAR_FS (R2-D2) | Original | 58.90 | 71.50 |
| | Ours | 57.36 | 72.41 |
| CUB (FEAT) | Original | 55.92 | 72.09 |
| | Ours | 56.98 | 73.64 |
Euclidean
| Datasets | | 5 ways - 1 shot | 5 ways - 5 shot |
|---|---|---|---|
| mini-ImageNet (ProtoNet) | Original | 49.42 | 68.20 |
| | Ours | 49.45 | 66.17 |
| tiered-ImageNet (TPN) | Original | 53.31 | 72.69 |
| | Ours | 52.54 | 71.97 |
| CIFAR_FS (R2-D2) | Original | 55.50 | 72.00 |
| | Ours | 54.33 | 73.60 |
| CUB (FEAT) | Original | 51.31 | 70.77 |
| | Ours | 51.13 | 70.23 |
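The Euclidean-distance Prototypical classifier behind the table above can be sketched as follows (a minimal NumPy illustration of the technique, not the repo's torchmeta-based implementation; function names are hypothetical):

```python
import numpy as np

def prototypes(support, labels, num_ways):
    """Class prototypes: the mean embedding of each class's support examples."""
    return np.stack([support[labels == c].mean(axis=0) for c in range(num_ways)])

def classify(queries, protos):
    """Assign each query to the class with the nearest prototype (Euclidean)."""
    # Pairwise squared Euclidean distances, shape (num_queries, num_ways).
    d2 = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)
```

In the actual model the inputs are embeddings produced by the Conv-4 or ResNet12 backbone rather than raw features, and the negative distances are fed to a softmax for training.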