Text embeddings with LLMs // synthetic data

sbagency
3 min read · Jan 3, 2024

From the paper's abstract:

In this paper, we introduce a novel and simple method for obtaining high-quality text embeddings using only synthetic data and less than 1k training steps. Unlike existing methods that often depend on multi-stage intermediate pre-training with billions of weakly-supervised text pairs, followed by fine-tuning with a few labeled datasets, our method does not require building complex training pipelines or relying on manually collected datasets that are often constrained by task diversity and language coverage. We leverage proprietary LLMs to generate diverse synthetic data for hundreds of thousands of text embedding tasks across nearly 100 languages. We then fine-tune open-source decoder-only LLMs on the synthetic data using standard contrastive loss. Experiments demonstrate that our method achieves strong performance on highly competitive text embedding benchmarks without using any labeled data. Furthermore, when fine-tuned with a mixture of synthetic and labeled data, our model sets new state-of-the-art results on the BEIR and MTEB benchmarks.

Here is a summary of the key points from the research paper:

- The paper proposes a simple and efficient method to obtain high-quality text embeddings by leveraging large language models (LLMs) to generate synthetic training data.

- They use proprietary LLMs like GPT-4 to generate diverse synthetic data covering hundreds of thousands of embedding tasks across 93 languages. A two-step prompting strategy is used: first brainstorm candidate tasks, then generate examples conditioned on each task (a rough sketch of this follows the list).

- The synthetic data is used to fine-tune open-source decoder-only LLMs like Mistral-7B with a standard contrastive loss, without needing complex multi-stage pre-training (an illustrative loss function is also sketched after the list).

- Experiments show this approach achieves state-of-the-art results on text embedding benchmarks like BEIR and MTEB, outperforming previous best methods by 2%, while requiring fewer than 1k training steps.

- The model also demonstrates strong performance when using only synthetic data, without any manually labeled datasets. This shows the potential of using LLMs for both generating training data and serving as the embedding model.

- For multilingual retrieval, the model excels on high-resource languages but still lags for low-resource ones, suggesting room for improvement as multilingual LLMs advance.

- Overall, the work demonstrates how leveraging recent advances in LLMs and synthetic data generation can substantially boost the quality and efficiency of training text embeddings.
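To make the two-step prompting strategy from the second bullet concrete, here is a minimal sketch in Python. The prompt wording, the JSON field names (`user_query`, `positive_document`, `hard_negative_document`), and the `gpt-4` model string are illustrative assumptions, not the paper's exact templates; only the overall brainstorm-then-generate structure follows the paper.

```python
import json
from openai import OpenAI  # assumes the official OpenAI Python client (>=1.0) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def brainstorm_tasks(n_tasks: int = 20) -> list[str]:
    """Step 1: ask the LLM to brainstorm a pool of retrieval-style embedding tasks."""
    prompt = (
        f"Brainstorm a list of {n_tasks} potentially useful text retrieval tasks. "
        "Return a JSON list of short task descriptions."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    # for brevity we assume the model returns valid JSON
    return json.loads(resp.choices[0].message.content)


def generate_example(task: str, language: str = "English") -> dict:
    """Step 2: condition on one brainstormed task and generate a training triple."""
    prompt = (
        f"You have been assigned a retrieval task: {task}\n"
        f"Write one example in {language} as a JSON object with the keys "
        '"user_query", "positive_document", and "hard_negative_document". '
        "The hard negative should look relevant to the query but not actually answer it."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    return json.loads(resp.choices[0].message.content)


if __name__ == "__main__":
    tasks = brainstorm_tasks()
    example = generate_example(tasks[0])
    print(example["user_query"])
```

Asking the generator for a hard negative in the same call is what gives the contrastive objective below something non-trivial to separate.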
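And a minimal PyTorch sketch of the fine-tuning objective: last-token pooling over a decoder-only LLM's hidden states, followed by a standard InfoNCE contrastive loss with in-batch negatives. The temperature value and function names here are assumptions for illustration, not the paper's exact hyperparameters.

```python
import torch
import torch.nn.functional as F


def last_token_pool(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Use the hidden state of the last non-padding token as the sequence embedding.

    Assumes right-padded batches.
    """
    seq_lengths = attention_mask.sum(dim=1) - 1                      # index of last real token
    batch_idx = torch.arange(hidden_states.size(0), device=hidden_states.device)
    return hidden_states[batch_idx, seq_lengths]


def info_nce_loss(q: torch.Tensor, d_pos: torch.Tensor, d_neg: torch.Tensor,
                  temperature: float = 0.02) -> torch.Tensor:
    """Standard InfoNCE: each query against its positive, its hard negative, and in-batch negatives."""
    q = F.normalize(q, dim=-1)
    docs = F.normalize(torch.cat([d_pos, d_neg], dim=0), dim=-1)     # (2B, dim)
    logits = q @ docs.T / temperature                                # (B, 2B)
    labels = torch.arange(q.size(0), device=q.device)                # positive for query i sits at row i
    return F.cross_entropy(logits, labels)
```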

https://huggingface.co/intfloat/e5-mistral-7b-instruct
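A quick way to try the released model, loosely adapted from the pattern on its Hugging Face model card. Treat this as a hedged sketch rather than the canonical snippet: exact preprocessing details (EOS handling, padding side, instruction template, max length) should be checked against the model card.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel


def last_token_pool(last_hidden: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # if every sequence ends with a real token, the batch is left-padded
    left_padded = attention_mask[:, -1].sum() == attention_mask.shape[0]
    if left_padded:
        return last_hidden[:, -1]
    seq_lengths = attention_mask.sum(dim=1) - 1
    return last_hidden[torch.arange(last_hidden.size(0)), seq_lengths]


tokenizer = AutoTokenizer.from_pretrained("intfloat/e5-mistral-7b-instruct")
# 7B parameters in fp16 needs roughly 14 GB of memory; move to GPU if available
model = AutoModel.from_pretrained("intfloat/e5-mistral-7b-instruct", torch_dtype=torch.float16)

task = "Given a web search query, retrieve relevant passages that answer the query"
texts = [
    f"Instruct: {task}\nQuery: how much protein should a female eat",  # query with instruction
    "As a general guideline, the recommended daily protein intake for women is about 46 grams.",  # plain passage
]

batch = tokenizer(texts, max_length=4096, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**batch)

embeddings = last_token_pool(outputs.last_hidden_state, batch["attention_mask"])
embeddings = F.normalize(embeddings, dim=-1)
print((embeddings[0] @ embeddings[1]).item())  # cosine similarity between query and passage
```

Queries are prefixed with a task instruction while documents are embedded as-is; that asymmetry is how the model distinguishes the two roles.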

Synthetic data is a great boost for model training and fine-tuning.
