CLAP: Isolating Content from Style through Contrastive Learning with Augmented Prompts

Australian Institute for Machine Learning (AIML), The University of Adelaide
ECCV 2024

A qualitative comparison of image features for zero-shot classification using 'a photo of a [class name]' prompts shows that CLIP's features initially emphasize style over content. While both image and text augmentations shift the focus towards content-specific features, text augmentation is significantly more effective in enhancing this focus.

Abstract

Contrastive vision-language models, such as CLIP, have garnered considerable attention for various downstream tasks, mainly due to the remarkable generalization ability of the learned features. However, the learned features often blend content and style information, which somewhat limits their generalization capability under distribution shifts. To address this limitation, we adopt a causal generative perspective for multimodal data and propose contrastive learning with data augmentation to disentangle content features from the original representations. We begin by exploring image augmentation techniques and develop a method to seamlessly integrate them into pretrained CLIP-like models to extract pure content features. Taking a step further, recognizing the inherent semantic richness and logical structure of text data, we explore the use of text augmentation to isolate latent content from style features. This enables the encoders of CLIP-like models to concentrate on latent content information, refining the representations learned by pre-trained CLIP-like models. Our extensive experiments across diverse datasets demonstrate significant improvements in zero-shot and few-shot classification tasks, alongside enhanced robustness to various perturbations. These results underscore the effectiveness of our proposed methods in refining vision-language representations and advancing the state-of-the-art in multimodal learning.


A Causal Generative Model for Multi-Modal Data


Image and text data, derived from a unified latent space with content and style, follow distinct deterministic processes. The class label is determined solely by the latent content. (a) Soft interventions on style variables generate augmented images. (b) Similar interventions produce augmented text due to the shared latent space.
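The soft interventions on style in (b) can be mimicked directly in text: vary the style wording of a prompt while keeping the class name, i.e., the latent content, fixed. A toy sketch of such prompt augmentation (the style list here is illustrative, not the paper's actual templates):

```python
import random

# Hypothetical style descriptors: each acts as a soft intervention on the
# latent style variable, while the class name (latent content) is unchanged.
STYLES = [
    "a photo of",
    "a sketch of",
    "a painting of",
    "a cartoon of",
    "a blurry photo of",
]

def augment_prompt(class_name: str, rng: random.Random) -> str:
    """Sample a style-perturbed prompt whose content (class name) is fixed."""
    return f"{rng.choice(STYLES)} a {class_name}"

rng = random.Random(0)
prompts = [augment_prompt("dog", rng) for _ in range(3)]
```

Because all augmented prompts share the same class name, any representation trained to be invariant across them is pushed toward the content and away from the style.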


Refining CLIP via Isolating Content from Style with Data Augmentation


Contrastive learning with data augmentation in one modality benefits both, with text data being more amenable to style changes due to its semantic structure. The trained adapting network can be seamlessly applied to both modalities for zero-shot inference.


Experimental Results

BibTeX

@article{cai2023clap,
  title={CLAP: Isolating Content from Style through Contrastive Learning with Augmented Prompts},
  author={Cai, Yichao and Liu, Yuhang and Zhang, Zhen and Shi, Javen Qinfeng},
  journal={arXiv preprint arXiv:2311.16445},
  year={2023}
}