Twelve Labs Raises $50M for AI-Driven Video Understanding

Source: Twelve Labs

Seoul, South Korea - Twelve Labs Inc., a developer of generative AI foundation models for video understanding, has announced it secured $50 million in Series A funding.

The round was co-led by New Enterprise Associates (NEA) and NVIDIA's venture arm, NVentures, with participation from existing investors Index Ventures, Radical Ventures, Wndr Co, and Korea Investment Partners.

The Series A follows a $12 million seed round extension in late 2022 and brings the company's total funding to more than $77 million.

Twelve Labs plans to use the funds to expand its team across various functions, with a focus on research and development of its multimodal models.

Twelve Labs' flagship models, Marengo 2.6 and Pegasus 1.0, along with its newly introduced Embeddings API, represent the company's multimodal AI offering.

These models let users search, classify, and generate text about videos using natural language prompts.

The company's technology has attracted partner interest across several industries, including media and entertainment, advertising, and automotive.

"We posit that AI systems must learn from video to comprehend the world in a manner analogous to that of humans," stated Jae Lee, co-founder and CEO of Twelve Labs. "A video-first multimodal approach is essential for addressing perceptual reasoning problems at the human level."

The company currently serves tens of thousands of users across various industries.

It aims to continue advancing multimodal AI to help more users solve problems across a broader range of sectors.