Create a virtual environment and install the dependencies:

```bash
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

After installation, replace the `modeling_grounding_dino.py` file in the transformers library with the provided custom file:
```bash
# Find the transformers installation path
python -c "import transformers; import os; print(os.path.dirname(transformers.__file__))"

# Copy the custom file (replace <TRANSFORMERS_PATH> with the path printed above)
cp modeling_grounding_dino.py <TRANSFORMERS_PATH>/models/grounding_dino/modeling_grounding_dino.py
```

Training is performed using Jupyter notebooks located in `Train_Notebooks/`:
- CoOp Training: `coop.ipynb`
- CoCoOp Training: `cocoop.ipynb`
- FixMatch Training: `fixmatch.ipynb`
Open and run the respective notebook for your desired training method.
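If you prefer to run a notebook end to end without opening it, jupyter's `nbconvert` CLI can execute it from the command line. A minimal sketch, assuming `jupyter` (with nbconvert) is installed in the same environment; the helper function here is illustrative, not part of the repository:

```python
# Sketch: execute a training notebook non-interactively via the real
# `jupyter nbconvert` CLI, instead of running it in the browser.
import subprocess

def nbconvert_cmd(notebook: str) -> list[str]:
    """Build the command line that executes a notebook in place."""
    return [
        "jupyter", "nbconvert", "--to", "notebook",
        "--execute", "--inplace", f"Train_Notebooks/{notebook}",
    ]

# Example: run the CoOp notebook end to end (uncomment to actually run)
# subprocess.run(nbconvert_cmd("coop.ipynb"), check=True)
```

`--inplace` overwrites the notebook with its executed outputs; drop it and add `--output <name>.ipynb` to keep the original untouched.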
```bash
python test_coop.py --dataset_root /path/to/dataset --model_path /path/to/models --n_ctx 4
```

Arguments:

- `--dataset_root`: Path to the dataset directory (default: `/mammography`)
- `--model_path`: Path to the trained model files (default: `./trained_models`)
- `--n_ctx`: Number of learnable context tokens (default: `4`)
```bash
python test_cocoop.py --dataset_root /path/to/dataset --model_path /path/to/models --n_ctx 4 --feature_level highest
```

Arguments:

- `--dataset_root`: Path to the dataset directory (default: `/mammography`)
- `--model_path`: Path to the trained model files (required)
- `--n_ctx`: Number of learnable context tokens (default: `4`)
- `--feature_level`: Feature level, one of `highest`, `lowest`, or `middle` (required)
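To make the `--feature_level` choices concrete, here is an illustrative sketch of picking one feature map out of a multi-scale stack by level. The function name, the list of maps, and the assumption that `lowest`/`middle`/`highest` index the stack from first to last are all hypothetical, not the repository's actual API:

```python
# Illustrative only: map the three --feature_level choices onto indices
# of a multi-scale feature stack (assumed ordered from first to last).
def select_feature(feature_maps: list, level: str):
    """Pick a feature map by level name; raises KeyError for other values."""
    index = {
        "lowest": 0,                       # first map in the stack
        "middle": len(feature_maps) // 2,  # midpoint of the stack
        "highest": len(feature_maps) - 1,  # last map in the stack
    }[level]
    return feature_maps[index]

# Example with four hypothetical pyramid levels:
maps = ["p2", "p3", "p4", "p5"]
print(select_feature(maps, "highest"))  # -> p5
```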
```bash
python zero_shot.py --dataset_root /path/to/dataset
```

Arguments:

- `--dataset_root`: Path to the dataset directory (default: `/mammography`)