LINQ's Embedding Model Outperforms Giants on the MTEB Leaderboard

Source: LINQ

Cambridge, MA - The generative AI startup LINQ has announced that its large-scale embedding model, "LINQ," has taken the top position in the text retrieval category of Hugging Face's Massive Text Embedding Benchmark (MTEB) leaderboard.

The model outperformed competing models from Nvidia, Salesforce, Google, and OpenAI.

LINQ was established in 2022 by Dr. Jacob Choi, a Ph.D. graduate of MIT, and provides generative AI solutions in various specialized fields, including law, insurance, finance, and healthcare. 

The MTEB leaderboard is a ranking system that assesses the performance of embedding models based on evaluation data across seven categories. 

The LINQ model topped the text retrieval category with a score of 60 points, while ranking third overall on the leaderboard.

Embedding models are central to mitigating the hallucination problem in generative AI's large language models (LLMs) and are a fundamental component of Retrieval-Augmented Generation (RAG) technology. 
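To illustrate the role embeddings play in RAG, here is a minimal sketch of embedding-based retrieval: documents and queries are mapped to vectors, and the most relevant documents are found by cosine similarity. The toy vectors and helper names below are purely illustrative assumptions, not LINQ's model or implementation.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: dot product over the
    # product of their Euclidean norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, doc_vecs, top_k=1):
    # Rank documents by similarity to the query and return the
    # indices of the top_k closest ones.
    scored = sorted(
        enumerate(doc_vecs),
        key=lambda item: cosine_similarity(query_vec, item[1]),
        reverse=True,
    )
    return [idx for idx, _ in scored[:top_k]]

# Toy 3-dimensional "embeddings" standing in for real model output.
docs = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.9, 0.1, 0.0)]
query = (1.0, 0.05, 0.0)
print(retrieve(query, docs, top_k=2))  # → [0, 2]
```

In a full RAG pipeline, the retrieved documents would then be appended to the LLM prompt so the model can ground its answer in them, which is why retrieval accuracy, and hence embedding quality, matters so much.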

The company stated in its press release that it had efficiently generated high-quality training data using LLMs, which enabled it to achieve the best retrieval performance on the MTEB benchmark dataset.

In a statement to the press, LINQ CEO Jacob Choi underscored the importance of accurate search over internal company data for businesses adopting generative AI and expressed pride in having developed the core embedding model. 

The company plans to expand and advance the model, focusing on specialized fields where text search accuracy is crucial.

In 2022, LINQ, formerly known as Wecovr, received early-stage investment and was selected for the MassChallenge accelerator program in the United States. 

The company maintains a collaborative relationship with KPMG US.