MCG-NJU/SpikeTAD

News

[2026.3.30] Training code featuring ANN-to-SNN conversion capabilities is now available.

[2026.3.27] The SNN models of SpikeTAD for THUMOS14 and ActivityNet-1.3 are updated. Training code will be released soon.
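ANN-to-SNN conversion typically replaces each ReLU activation with an integrate-and-fire (IF) neuron whose firing rate over many timesteps approximates the original activation. As a minimal sketch of that idea only (not the repository's actual implementation), a soft-reset IF neuron can be written as:

```python
def if_neuron_rate(x, threshold=1.0, timesteps=100):
    """Simulate a soft-reset integrate-and-fire neuron driven by a
    constant input x; the firing rate approximates ReLU(x) / threshold."""
    v = 0.0          # membrane potential
    spikes = 0
    for _ in range(timesteps):
        v += x                   # integrate the input current
        if v >= threshold:
            spikes += 1          # emit a spike
            v -= threshold       # soft reset keeps the residual charge
    return spikes / timesteps
```

With enough timesteps the rate converges to the (clipped, non-negative) activation, which is the intuition behind rate-based conversion.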

Overview

Pipeline

Environment preparation

1: Create environment

conda env create -f environment.yml

2: Activate environment

conda activate spiketad

Data preparation

1: Download videos

For THUMOS14, please check ./tools/prepare_data/thumos for downloading videos.

Suppose these videos are organized in the following structure:

data
└── raw_data
    └── video
        ├── training
        │   ├── video_validation_0000051.mp4
        │   └── .....
        └── validation
            ├── video_test_0000004.mp4
            └── .....

For ActivityNet-1.3, please check ./tools/prepare_data/activitynet for downloading videos.

Suppose these videos are organized in the following structure:

data
└── anet
    └── anet_1.3_video_val
        ├── NjTk2naIaac.avi
        └── .....
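To catch layout mistakes before launching any script, it can help to verify that the expected directories exist. A minimal sketch (the helper name is illustrative; the checked paths match the trees above, and you only need the entries for the dataset you prepared):

```python
import os

# Expected sub-directories from the layouts above (THUMOS14 and ActivityNet-1.3).
EXPECTED_DIRS = [
    "data/raw_data/video/training",
    "data/raw_data/video/validation",
    "data/anet/anet_1.3_video_val",
]

def missing_dirs(root=".", expected=EXPECTED_DIRS):
    """Return the expected sub-directories that do not exist under `root`."""
    return [d for d in expected if not os.path.isdir(os.path.join(root, d))]
```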

2: Prepare checkpoint weights

We adopt the pre-trained ViT-S model from VideoMAE v2.

You can download the SNN checkpoints for SpikeTAD from the Google Drive link.
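Checkpoints saved from wrapped models (e.g. DistributedDataParallel) often carry a "module." prefix on every parameter name, which makes loading into an unwrapped model fail. A framework-agnostic sketch of the usual fix (the prefix is an assumption; inspect your checkpoint's keys first):

```python
def strip_prefix(state_dict, prefix="module."):
    """Return a copy of state_dict with `prefix` removed from any key
    that starts with it; other keys are kept unchanged."""
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }
```

You would then load the cleaned dictionary, e.g. `model.load_state_dict(strip_prefix(torch.load(path)["state_dict"]))`; the `"state_dict"` key is also an assumption that depends on how the checkpoint was saved.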

How to use

Please run the following command for inference. Tip: it requires 4 GPUs with at least 32 GB of VRAM each.

For THUMOS14,

bash scripts/spiketad_thumos.sh

For ActivityNet-1.3,

bash scripts/spiketad_anet.sh

Please run the following command to execute the complete training and inference pipeline on THUMOS14.

bash scripts/spiketad_thumos_train.sh

Credits

We especially thank the contributors of OpenTAD for providing helpful code.

About

SpikeTAD: Spiking Neural Networks for End-to-End Temporal Action Detection
