Build multimodal embedding models

Some embedding models, such as CLIP, come as a pair: a model and a compatible_model that embed different modalities (for example, text and images) into the same vector space, so their outputs can be compared directly. Otherwise, a single model is enough:
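To make "compatible" concrete: the two encoders of a CLIP-style pair map text and images into one shared space, so a text vector and an image vector can be scored against each other with cosine similarity. A minimal sketch with hypothetical toy vectors (numpy only, no real encoders):

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors in the shared embedding space
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for the outputs of a text encoder and an image encoder
text_vec = np.array([0.9, 0.1, 0.0])   # e.g. "a photo of a cat"
image_vec = np.array([0.8, 0.2, 0.1])  # e.g. a cat image
other_vec = np.array([0.0, 0.1, 0.9])  # e.g. an unrelated image

# The matching image scores higher than the unrelated one
print(cosine(text_vec, image_vec) > cosine(text_vec, other_vec))  # → True
```

This shared-space property is what lets a single vector index serve queries from either modality.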

from superduper_sentence_transformers import SentenceTransformer

# Load the pre-trained sentence transformer model
model = SentenceTransformer(
    identifier='all-MiniLM-L6-v2',
    postprocess=lambda x: x.tolist(),
)
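The postprocess hook converts the model's raw numpy output into plain Python lists, which serialize cleanly for storage and transport. A small standalone illustration of what that callable does:

```python
import numpy as np

# Same postprocess as passed to SentenceTransformer above
postprocess = lambda x: x.tolist()

# A raw embedding as a numpy array (stand-in for real model output)
raw = np.array([0.25, -0.5, 1.0])

print(postprocess(raw))        # → [0.25, -0.5, 1.0]
print(type(postprocess(raw)))  # → <class 'list'>
```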