CLAP: Isolating Content from Style
through Contrastive Learning with Augmented Prompts

Australian Institute for Machine Learning (AIML), The University of Adelaide

TL;DR: We disentangle content from style in CLIP by leveraging contrastive learning with augmented prompts.
Our method yields more robust and generalizable vision-language representations.

Abstract

Contrastive vision-language models, such as CLIP, have garnered considerable attention for various downstream tasks, mainly due to the remarkable generalization ability of their learned features. However, the features they learn often blend content and style information, which limits their generalization under distribution shifts. To address this limitation, we adopt a causal generative perspective on multimodal data and propose contrastive learning with data augmentation to disentangle content features from the original representations. We begin by exploring image augmentation techniques and develop a method to seamlessly integrate them into pre-trained CLIP-like models so that they extract pure content features. Going a step further, and recognizing the inherent semantic richness and logical structure of text data, we explore text augmentation to isolate latent content from style features. This enables the encoders of CLIP-like models to concentrate on latent content information, refining the representations learned by pre-trained CLIP-like models. Our extensive experiments across diverse datasets demonstrate significant improvements in zero-shot and few-shot classification, alongside enhanced robustness to various perturbations. These results underscore the effectiveness of the proposed methods in refining vision-language representations and advancing the state of the art in multimodal learning.
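The core idea above can be illustrated with a minimal sketch (not the paper's implementation; `STYLE_WORDS`, `augment_prompt`, and `info_nce` are our own illustrative names): treat a prompt and a style-altered version of it as a positive pair, so a symmetric InfoNCE-style loss pulls their embeddings together and thereby discards style information.

```python
import numpy as np

# Hypothetical style vocabulary for illustration only; the paper's actual
# text augmentations are richer than a simple word swap.
STYLE_WORDS = ["photo", "sketch", "painting", "cartoon"]

def augment_prompt(prompt, rng):
    """Swap any style word for a random alternative, keeping content intact."""
    return " ".join(
        rng.choice(STYLE_WORDS) if w in STYLE_WORDS else w
        for w in prompt.split()
    )

def info_nce(z_a, z_b, temperature=0.07):
    """Symmetric InfoNCE loss: row i of z_a and row i of z_b are positives."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature                # cosine similarities / T
    m = logits.max(axis=1, keepdims=True)             # stabilized log-softmax (rows)
    log_p = logits - m - np.log(np.exp(logits - m).sum(axis=1, keepdims=True))
    loss_ab = -np.mean(np.diag(log_p))
    m_t = logits.max(axis=0, keepdims=True)           # other direction (columns)
    log_p_t = logits - m_t - np.log(np.exp(logits - m_t).sum(axis=0, keepdims=True))
    loss_ba = -np.mean(np.diag(log_p_t))
    return 0.5 * (loss_ab + loss_ba)
```

Embeddings of a prompt and its augmentation incur a low loss, mismatched pairs a high one, driving the trained encoder (or an adapter on top of frozen CLIP features) toward style-invariant content features.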

Key Contributions

Teaser Figure
Qualitative comparison: text augmentations (bottom) enable CLIP to focus on content rather than style.

Latent Variable Modeling

Causal Generative Model
Latent generative processes: Both image and text modalities share a latent space of content and style.

Method Overview

Method Overview
Approach: Augmenting prompts in either modality helps CLIP learn robust content-aware features.

Experimental Results

Zero-shot performance
t-SNE qualitative visualization
Zero-shot classification accuracy (top) and t-SNE visualization of representations (bottom). See more in the paper.
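The zero-shot evaluation reported above follows the standard CLIP recipe: an image is assigned to the class whose embedded prompt is most cosine-similar to the image feature. A minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def zero_shot_predict(image_feats, class_text_feats):
    """Predict class indices by cosine similarity, CLIP-style.

    image_feats:      (n_images, d) image feature matrix
    class_text_feats: (n_classes, d) one embedded prompt per class
    """
    img = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    txt = class_text_feats / np.linalg.norm(class_text_feats, axis=1, keepdims=True)
    return (img @ txt.T).argmax(axis=1)  # index of the most similar class prompt
```

With disentangled content features, these similarities depend less on rendering style, which is what the improved zero-shot accuracy reflects.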

📚 Cite this paper:
@inproceedings{Cai2024CLAP,
  title     = {CLAP: Isolating Content from Style through Contrastive Learning with Augmented Prompts},
  author    = {Yichao Cai and Yuhang Liu and Zhen Zhang and Javen Qinfeng Shi},
  booktitle = {European Conference on Computer Vision (ECCV)},
  pages     = {130--147},
  year      = {2024}
}