Thoughts on OpenAI Dev Day

OpenAI announced Custom Models. Only a handful of researchers have ever trained ML models from scratch. Read on for why a custom model is beneficial, and how you can train one.

OpenAI Dev Day and the Custom Models announcement

Written by Clio AI
Published on December 13, 2023

On Nov 6th, OpenAI hosted their first ever Dev Day. Most of it has been covered extensively in the press, on Substacks, Twitter, newsletters, and everywhere else, so I won't go into great detail about what they offer and what you can do with it. Instead, here are some quick thoughts on the day, ending with the one important takeaway that almost everyone missed.

1. Assistants (your custom GPT)

This seems to be the headline feature: users and developers get the ability to create their own assistants and configure them however they want. There is also an app store for these agents, and devs can likely earn some revenue share for building assistants. Everyone predicts OpenAI will go deep into agents, likely obliterating many of the startups that have sprung up around them. I feel this is the first step towards abstracting agents away. OpenAI is moving more and more towards the consumer market, and one thing consumers hate is having to make more choices. So, instead of going all-in on agents, they will simply abstract those lengthy multi-step prompts away and give users the output they want without making them choose an assistant.
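To make it concrete, here is a minimal sketch of the Assistants API as it shipped at Dev Day, using the openai Python client (v1). The assistant's name, instructions, and question are invented for illustration:

```python
# A rough sketch of creating and running an assistant; names and content
# here are hypothetical, not from OpenAI's docs.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

assistant = client.beta.assistants.create(
    name="Support Bot",  # hypothetical assistant
    instructions="Answer questions about our product using the uploaded docs.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],  # built-in retrieval over uploaded files
)

# Conversations live in threads; a run executes the assistant on a thread.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="How do I reset my password?"
)
run = client.beta.threads.runs.create(
    thread_id=thread.id, assistant_id=assistant.id
)
```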

2. GPT-4 Turbo with 128k context length

I was more excited about this. The key question is whether the model uses the full context length to generate output or just does chunking and retrieval in the background. If it's the former, it's going to be very powerful, and it takes away the pain of querying longer documents directly. Only Claude 2 comes close in terms of context length, and it will be interesting to see how GPT-4 Turbo's output compares.
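If it is the former, the developer experience collapses to something like the hedged sketch below: paste the whole document into a single request instead of chunking it first. The file name is hypothetical; gpt-4-1106-preview is the GPT-4 Turbo identifier announced at Dev Day, and 128k tokens is roughly 300 pages of text:

```python
# A hedged sketch of querying a long document directly, assuming the model
# really attends over the full 128k-token window. "report.txt" is hypothetical.
from openai import OpenAI

client = OpenAI()
long_doc = open("report.txt").read()

resp = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "Answer using only the attached document."},
        {"role": "user", "content": f"{long_doc}\n\nQuestion: What were the key findings?"},
    ],
)
print(resp.choices[0].message.content)
```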

3. Retrieval

I was surprised they highlighted this. On the plus side, it takes away the pain of chunking a text, creating embeddings, and searching across them. On the other hand, it takes away control. I think it would be more powerful for end users and developers if OpenAI let users define how to chunk, generated the code, and ran that code in a sandboxed environment at query time. That would reduce latency, remove the need for a dedicated machine since OpenAI provides the compute, and still give devs control.
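For context, here is a minimal sketch of the do-it-yourself pipeline the managed retrieval feature replaces: chunk the text, embed the chunks, and rank them against the query by cosine similarity. The chunk size, overlap, file name, and top-k are illustrative choices, not recommendations:

```python
# A minimal DIY retrieval sketch; parameters are illustrative.
from openai import OpenAI
import numpy as np

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def chunk(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    # Fixed-size character chunks with overlap so sentences aren't cut cold.
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

document = open("handbook.txt").read()  # hypothetical document
chunks = chunk(document)
chunk_vecs = embed(chunks)

query = "What is our refund policy?"
query_vec = embed([query])[0]

# ada-002 embeddings are unit-normalized, so a dot product is cosine similarity.
scores = chunk_vecs @ query_vec
top_chunks = [chunks[i] for i in np.argsort(scores)[::-1][:3]]
```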

There are other things like text-to-speech, multimodal capabilities with visual inputs, and managing threads. I like the seed parameter, which basically lets you reproduce the same output that was generated before.
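A small sketch of the seed parameter with the openai Python client: with a fixed seed and temperature 0, repeated calls should return (mostly) deterministic output, and the system_fingerprint field indicates whether the backend changed between calls:

```python
# Reproducible-output sketch; prompt and seed value are arbitrary examples.
from openai import OpenAI

client = OpenAI()
for _ in range(2):
    resp = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[{"role": "user", "content": "Name three prime numbers."}],
        seed=42,         # fixes sampling for (best-effort) reproducibility
        temperature=0,
    )
    print(resp.system_fingerprint, resp.choices[0].message.content)
```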

The Other Big Thing

One thing I saw that got about a minute's coverage in the keynote, and about which I did not find much information elsewhere, was how OpenAI is willing to help enterprises train models on their own data. Here is the exact quote:

The fine-tuning API is great for adapting our models to achieve better performance in a wide variety of applications with a relatively small amount of data, but you may want a model to learn a completely new knowledge domain, or to use a lot of proprietary data.

Today we're launching a new program called Custom Models. With Custom Models, our researchers will work closely with a company to help them make a great custom model, especially for them, and their use case using our tools. This includes modifying every step of the model training process, doing additional domain-specific pre-training, a custom RL post-training process tailored for specific domain, and whatever else.

We won't be able to do this with many companies to start. It'll take a lot of work, and in the interest of expectations, at least initially, it won't be cheap. But if you're excited to push things as far as they can currently go, please get in touch with us, and we think we can do something pretty great.

Okay, that was three quotes, not one. They occur around the 10-minute mark, when Sam Altman talks about the fine-tuning API.

There is plenty to unpack in this little snippet. Some implications as to why they are making the offering:

  • They recognize that finetuning large models is difficult, given the number of FLOPs and the amount of training data these models go through. Changing a model's behavior is never easy: you never know how much data you need for finetuning, or which boundary cases the finetuning misses.
  • It seems not many took up finetuning GPT-3.5 or GPT-4, given 1. the costs involved, and 2. zero-shot and few-shot learning already gave great results.
  • They were approached by larger companies who wanted a specific output but could not finetune the model to get the desired results. Maybe there was too much data, or the data was proprietary and they were not comfortable sharing it.
  • Finetuning likely failed when the data came from a very different domain than what GPTs are trained on. That is where the lack of knowledge[1] shows up and the UX worsens. Given how vast the knowledge is, I am guessing it could not easily be solved by retrieval either. A good example would be consulting firms with their years of data about cases, frameworks, and markets.

From the application page, there is this too: 

This program is particularly applicable to domains with extremely large proprietary datasets—billions of tokens at minimum.

Coming to the offering: OpenAI would supply a foundation (pretrained) model like a GPT, then assign a researcher to help with supervised finetuning on the company's data and supply frameworks for reward modeling and reinforcement learning, helping the enterprise take its custom model to the same state as ChatGPT, but on its own data. This would not be a generic model; it would be trained and built to understand what the company does and to produce output accordingly.

Figure: the pipeline from base model to assistant (pretraining, supervised finetuning, reward modeling, reinforcement learning). Source: State of GPT by Andrej Karpathy.

This is pretty much how a Gen AI model is trained to go from a base model to something like ChatGPT. It is still an abstract representation, and in practice it takes many more iterations. GPT-4 is reportedly trained somewhat differently, using a Mixture of Experts (MoE) architecture, if rumors are to be believed. This is also the approach for training most instruction-following LLMs.
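To make the supervised finetuning (SFT) stage concrete, here is a minimal sketch using Hugging Face transformers on a Llama 2 base checkpoint. The (prompt, response) pair and hyperparameters are illustrative stand-ins for a company's proprietary data; a real run needs far more data, distributed training, and evaluation:

```python
# A minimal SFT sketch: train a causal LM on prompt+response, with the loss
# computed only on the response tokens. Everything below is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # base model, not the chat variant
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Stand-in for proprietary (prompt, response) pairs.
pairs = [
    ("Summarize the 2019 retail expansion case.\n", "The case covered ..."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for prompt, response in pairs:
    enc = tokenizer(prompt + response, return_tensors="pt",
                    truncation=True, max_length=2048)
    labels = enc.input_ids.clone()
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    labels[:, :prompt_len] = -100  # mask prompt tokens: loss only on the response
    loss = model(input_ids=enc.input_ids, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Reward modeling and the RL post-training step would sit on top of a checkpoint like this, which is exactly the part OpenAI says its researchers would tailor per domain.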

To go from a pretrained model like Llama or davinci to a model like ChatGPT, you need three things: lots of data, lots of compute, and lots of money and resources for labelling (taking expertise for granted). Hence the offer to enterprises, the warning about cost, and OpenAI supplying the expert.

As an aside, we offer the same thing on Llama 2:

At Clio AI, we can do this for a fraction of the cost, on a Llama 2 base model. It's not the same as GPT-4, but it's good enough for most corporate use cases. You still have to spend on compute and data, but you get two experts at a fraction of the researcher cost.

Why go with us?

  • Expertise: Only a handful of people have trained and deployed an ML model like this from scratch. Clio AI's team counts one of them as a cofounder: Abhinav did it for Tokopedia in the past.
  • Costs: It would cost you a fraction of what it would with OpenAI. OpenAI says pricing starts at $2-$3M at the minimum, with a timeline of 2-3 months. With Clio AI, you would be looking at a similar timeline, but at a far lower cost. A lot of the cost is offset by using Llama 2 instead of GPT-4.
  • Hosting and Control: We help you host the models on your own cloud, under the deals you have negotiated.

Why should Enterprises like it?

So, we know custom models are data-hungry and costly. Is there still an upside for enterprises in an approach like this? Definitely yes.

Competitive Edge + Exclusivity

Regular LLMs can't dive into entirely new knowledge domains beyond their original training. In a market full of LLM users, having a vertical model that cranks out ready-to-use results beats a generic one, boosting productivity and profits.

Data Security and Compliance

I think most decisions in the Gen AI space will end up being made on the basis of compliance and security. While business teams may push for the initial training, most big decisions will come from the policy teams. A nearly-new model trained from scratch gives enterprises the opportunity to bake compliance and policy requirements into the model itself, and it also makes later finetuning much easier.

Automation

Right now, GPT-4 is good at certain generic tasks but poor at specific, domain-heavy ones. With a custom model, you can automate aspects of your business far better than you could with GPT-4. A custom model lets you build a knowledge base tuned precisely to your needs, ensuring more accurate and efficient automation. So while others grapple with the limitations of generic solutions, your custom model is streamlining processes, saving time, and elevating your business.

You can get in touch with us here if you would like a custom Gen AI model trained for your organization.

[1]: At this point, I want to reiterate the distinction between two related but often-confused concepts: knowledge and behavior. You want your model to have knowledge about the topic you are querying; finetuning is how you teach a model what a good answer looks like.
