Adaptive raises a $20M Seed to help companies build singular GenAI experiences

Company
November 4, 2024

In the past two decades, tremendous value has been unlocked from bringing deep personalization to a broad range of tech applications: across content platforms, social media, and even marketplaces. At the heart of what are now household names (e.g., Amazon, Netflix, Twitter), recommender systems carefully craft singular feeds, helping capture unprecedented value and growth. 

Concurrently, at long last, generative AI has empowered machines to create and think, ultimately to emulate and complement human creativity and ingenuity. Yet generative AI is still one-size-fits-all: companies too often deploy generic models, ill-matched to their use cases and ignorant of their values, while users struggle with impersonal models, blind to their culture, intents, and desires.

Surprisingly, generative AI shares little with the tech success stories of the past decades, except for an appetite for oversized compute! Meanwhile, much like ourselves, recommender systems constantly redefine themselves with every unique user interaction, absorbing rich, ever-growing context. In contrast, current generative AI models are stubbornly static.

The techniques to make models understand users’ preferences exist. What distinguishes a worldwide cultural hit like ChatGPT from the anonymous release of GPT-3.5 are precisely these techniques (i.e., reinforcement learning from human feedback). What’s missing is democratizing these preference tuning methods, enabling every enterprise to deliver generative AI experiences worthy of ChatGPT, and then to go further, unlocking truly unique experiences with user-level personalization.

We believe the next big leap for generative AI is to become adaptive, enabling companies to build singular generative AI experiences. We are joined in this vision by Index Ventures, which is leading our $20M seed, and by ICONIQ Capital. Along with them, we welcome Motier Ventures, Databricks Ventures, IRIS, HuggingFund by Factorial, and individual investors such as Xavier Niel, Olivier Pomel (Datadog), Dylan Patel (SemiAnalysis), and Tri Dao (FlashAttention). Our founding team has contributed to the training of significant open-source models, and we look forward to helping enterprises deliver deeply personal and singular AI models to their customers, in turn driving improved business outcomes.

We are already shipping a first version of our enterprise platform, enabling companies to continuously improve their large language models by learning directly from users’ interactions. If you are interested in joining our waitlist for deployments in 2024, reach out to contact@adaptive-ml.com.

We are also hiring for roles in New York and Paris, across our Technical, Product, and Commercial Staff: if you are interested in joining us, see our list of open positions.

The Adaptive ML founding team. From left to right: Daniel Hesslow (Research Scientist), Axel Marmet (RL Wizard), Olivier Cruchant (Lead Technical Product Manager), Baptiste Pannier (CTO), Julien Launay (CEO), Alessandro Cappelli (Research Scientist).

From frontier research to an enterprise platform

Preference tuning allows models to better capture intent and to align answers with users’ preferences. Such methods have helped us go from coercing answers out of obtuse models through intricate prompting to seamless chatting. For enterprises, optimizing for the preferences of users within a specific use case enables more intuitive, engaging applications, directly improving user experience and business outcomes.

Unfortunately, the most powerful of these methods, such as reinforcement learning from human or AI feedback (RLHF/RLAIF), are extraordinarily convoluted to deploy. Accordingly, they remain out of reach of most practitioners, in the realm of so-called frontier research.

At Adaptive, we first want to democratize these methods, allowing every company to leverage state-of-the-art preference tuning to drive improved outcomes for their unique use cases and products. Then, we want to expand upon these methods, towards per-user personalization of models. This will enable truly singular generative AI experiences, and unlock another order of magnitude in value for enterprises. 

To get there, we are solving a three-body problem:

  • Engineering. Advanced preference tuning workflows are complex workloads, blending inference and training across many large distributed models. We have built a codebase, Adaptive Harmony, designed from the ground up for preference tuning. Methods like PPO, DPO, RLOO, or constitutional AI can be implemented in just a few lines of Python, focusing solely on the high-level logic. Under the hood, we use Rust to coordinate operations and distribution, delivering increased robustness and high performance. This combination allows unprecedented flexibility for exploring novel ideas while ensuring production-grade reliability, all the more so when combined with the proven recipes we have built for adapting models.
  • Data. A "secret" of generative AI is that it relies on an unprecedented workforce of annotators producing dazzling amounts of data around the clock. These large data annotation contracts are expensive, sometimes rivaling pretraining compute costs, and cumbersome to manage. Instead, we are putting the emphasis on RLAIF, wherein human annotators are replaced by models themselves; this can bootstrap new use cases, rapidly getting a first model off the ground. We are also building more robust preference tuning methods that can learn directly from users’ interactions once a use case is in production, thus directly driving user satisfaction and relevant business outcomes.
  • Deployment. We want every model to perpetually learn from its environment and interactions, without requiring expert implementation. To this end, we expose simple abstractions for metrics collection, and use the feedback collected for automated A/B testing and model adaptation. With in-depth visibility over their deployments, our customers can be confident their models are always delivering the best results. Notably, they can easily “rightsize” a model, by exploring the Pareto frontier of increased model size (and costs) against the benefits for their unique business outcomes and use cases.
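To illustrate the kind of high-level logic involved, here is a minimal, hypothetical sketch of the Direct Preference Optimization (DPO) loss for a single preference pair, one of the methods named above. This is not Adaptive Harmony code; the function name and arguments are illustrative, and it assumes the summed log-probabilities of each response have already been computed under the policy and a frozen reference model.

```python
import math

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one (chosen, rejected) preference pair.

    Each argument is the summed log-probability of a full response under
    either the trained policy or the frozen reference model. Lower loss
    means the policy prefers the chosen response more strongly than the
    reference model does.
    """
    # Implicit reward margins: how much more each model favors "chosen".
    chosen_margin = policy_logp_chosen - ref_logp_chosen
    rejected_margin = policy_logp_rejected - ref_logp_rejected
    # beta scales how far the policy may drift from the reference.
    logits = beta * (chosen_margin - rejected_margin)
    # Negative log-sigmoid of the margin (Bradley-Terry preference model).
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

When the policy matches the reference model exactly, the margin is zero and the loss sits at log 2; training pushes it below that by widening the gap in favor of the chosen response.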

All of this, and more, is packaged into Adaptive Engine, our enterprise platform. Deployed in our customers’ secure environments, it helps them accelerate their generative AI journey, bridging the gap between proof-of-concept and production. With select partners, we are also exploring direct access to our internal codebase, Adaptive Harmony, to streamline their own experiments and research on preference tuning.

Towards singular generative AI

We are just getting started, and are excited to be building the foundational research and products that will enable deeply personalized generative AI. We believe there is a tremendous overhang in value for singular generative AI experiences, building upon the success of recommender systems. 

We are engaging with design partners for our first deployments, and will be expanding to broader availability in 2024. If this sounds interesting, reach out to contact@adaptive-ml.com. Oh, and we are also hiring!

Learn more about Adaptive ML.

Get started with Adaptive Engine.

Register your interest and join the waitlist for our next deployments.
