In the text-encoder implementations of several CLIP variants (e.g., MobileCLIP-S1/S2, SigLIP/SigLIP2, CLIPA), it appears that neither attn_mask nor key_padding_mask is used to handle padding tokens.
Is this a common practice in model structure design?
This is concerning because, without such masks, the attention mechanism can distribute weight over padding positions as well as valid tokens, which could degrade model performance.
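To make the concern concrete, here is a minimal PyTorch sketch (not taken from any of the repos above) showing that `nn.MultiheadAttention` assigns nonzero attention weight to padded positions unless a `key_padding_mask` is supplied; the toy dimensions and padding layout are assumptions for illustration only:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy batch: one sequence of length 6 whose last 3 positions are padding.
embed_dim, seq_len, n_pad = 8, 6, 3
x = torch.randn(1, seq_len, embed_dim)

attn = nn.MultiheadAttention(embed_dim, num_heads=2, batch_first=True)

# Without a mask, attention weights are spread over all key positions,
# including the padded ones.
_, w_unmasked = attn(x, x, x, need_weights=True)

# key_padding_mask: True marks key positions to be ignored.
pad_mask = torch.zeros(1, seq_len, dtype=torch.bool)
pad_mask[:, -n_pad:] = True
_, w_masked = attn(x, x, x, key_padding_mask=pad_mask, need_weights=True)

# Total attention weight that query 0 places on the padded keys:
print(w_unmasked[0, 0, -n_pad:].sum().item())  # nonzero without the mask
print(w_masked[0, 0, -n_pad:].sum().item())    # ~0 with the mask
```

This illustrates the mechanism in isolation; whether it measurably hurts a given CLIP variant in practice would depend on its pooling strategy and training setup.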