Helpful Resources

Gen AI has been a revelation over the past year. We've collected the resources we think are most useful for navigating the maze.
Tech

Transformers Paper

The research paper from Google researchers that started it all, titled "Attention Is All You Need".
Product

Transformers - Primer

Stephen Wolfram explains how the transformer architecture works. Really important if you want to go deep into Gen AI.
Talks

State of GPT

Andrej Karpathy's great talk at Microsoft Build on how GPT-3, ChatGPT, and GPT-4 came to be.

Enterprise Deep-dives

Clio AI dives deep into use cases, deployment strategies, and more to bring you the best perspectives on implementing your AI strategy.
All deep-dives

DSPy: A Programming Model for Self-Improving Language Model Pipelines

DSPy is an optimization framework for LLM pipelines. You define a signature, a declarative pairing of input and output fields, and DSPy optimizes your prompts to get the desired output.
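
To make the signature idea concrete, here is a minimal sketch using DSPy's Python API. The model name is a placeholder, and the exact configuration call varies across DSPy versions.

```python
import dspy

# A signature declares the input/output fields; DSPy builds and tunes
# the actual prompt behind it.
class AnswerQuestion(dspy.Signature):
    """Answer the question concisely."""
    question = dspy.InputField()
    answer = dspy.OutputField()

# Placeholder model name; swap in whatever your stack uses.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

qa = dspy.ChainOfThought(AnswerQuestion)
print(qa(question="What does DSPy optimize?").answer)
```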

Chain of Thought Prompting Demystified

CoT prompting is an effective way to get larger models to solve complex tasks beyond the scope of simple instructions. This deep dive helps you develop an intuition, discusses the different techniques, and helps you figure out where CoT applies.
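
As a quick illustration of the trick itself, here is a minimal sketch; `call_llm` is a hypothetical stand-in for your model client, and the only difference between the two prompts is the request to reason step by step.

```python
# `call_llm` is hypothetical; plug in your own model client.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with your model client")

question = ("A bat and a ball cost $1.10 together. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

# Direct prompting: the model must jump straight to an answer.
direct_prompt = f"Q: {question}\nA:"

# Chain-of-thought prompting: elicit intermediate reasoning first,
# which tends to improve accuracy on multi-step problems.
cot_prompt = f"Q: {question}\nA: Let's think step by step."
```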

Generative AI for Enterprises - Use Cases, Experimentation, Iterations, and Deployments

Generative AI is transforming employees' habits and workflows across the board and changing the way customers engage with enterprises. We take a deep dive into enterprise AI strategy to help you grasp the essence of Generative AI and put these models to work. We end with how to think about deployment, decode build-vs-buy decisions, and highlight use cases.

A custom model for your use case

Supercharge your company's productivity by harnessing the massive reservoir of untapped data and insights.

Research Insights

Our insights into the latest research and publications, providing context on their features, breakthroughs, and business applications.
All Research Insights

Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More?

This paper from Google DeepMind introduces the LOFT benchmark to test long-context models against techniques like retrieval, in-context learning, and SQL.

Mixture of A Million Experts

This paper from Google DeepMind scales MoE to a million experts, each a single layer, and uses vector-based retrieval to pick the top-k experts for any given query at runtime.
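
A simplified sketch of the routing idea, assuming single-neuron experts and brute-force scoring; the paper actually uses product-key retrieval precisely so it never has to score every expert, and all sizes here are toys.

```python
import torch

n_experts, d, k = 100_000, 64, 8   # toy sizes; the paper scales to ~1M
keys = torch.randn(n_experts, d)   # one retrieval key per expert
u = torch.randn(n_experts, d)      # expert "down" vectors
v = torch.randn(n_experts, d)      # expert "up" vectors

def peer_layer(x: torch.Tensor) -> torch.Tensor:
    scores = keys @ x               # score the query against every key
    topv, topi = scores.topk(k)     # retrieve the top-k experts
    gate = torch.softmax(topv, dim=0)
    h = torch.relu(u[topi] @ x)     # each expert is a single neuron
    return (gate * h) @ v[topi]     # gated sum of expert outputs

out = peer_layer(torch.randn(d))
```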

Transformers meet Neural Algorithmic Reasoners

This paper from Google DeepMind combines Transformers with Neural Algorithmic Reasoners, resulting in an architecture that performs well on algorithmic reasoning tasks.

Mixture-of-Agents Enhances Large Language Model Capabilities

Together AI introduces Mixture-of-Agents: layered groups of LLM experts that, combined, can outperform GPT-4 and other top LLMs.
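
A hedged sketch of one proposer-aggregator layer; `call_model` is a hypothetical client, and the actual method stacks several such layers before the final aggregation.

```python
# `call_model` is hypothetical; plug in your own LLM client.
def call_model(model: str, prompt: str) -> str:
    raise NotImplementedError("replace with your LLM client")

def mixture_of_agents(question: str, proposers: list[str], aggregator: str) -> str:
    # Each proposer drafts an answer independently.
    drafts = [call_model(m, question) for m in proposers]
    # The aggregator synthesizes the drafts into one answer.
    prompt = (
        f"Question: {question}\n\nCandidate answers:\n"
        + "\n---\n".join(drafts)
        + "\n\nSynthesize the best single answer from the candidates above."
    )
    return call_model(aggregator, prompt)
```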

CRAG - Comprehensive RAG Benchmark

CRAG, by Meta AI (FAIR), proposes a comprehensive evaluation framework for RAG systems, covering both straightforward pipelines and SOTA industry-level systems.

Contextual Position Encoding: Learning to Count What’s Important

CoPE enables LLMs to get better at counting tasks by computing positional encodings from context, rather than from raw token positions as in traditional approaches.
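
The core computation is small enough to sketch: sigmoid gates from query-key similarity decide which tokens "count", and a token's position becomes a sum of gates rather than its raw offset. This is a toy rendering of the paper's equations, not its implementation (the paper goes on to interpolate position embeddings at these fractional positions).

```python
import torch

def cope_positions(q: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """q, k: (seq, d). Returns fractional positions p[i, j] for j <= i."""
    seq = q.shape[0]
    gates = torch.sigmoid(q @ k.T)                    # g[i, t]: does t count?
    gates = gates * torch.tril(torch.ones(seq, seq))  # causal mask
    # p[i, j] = sum of g[i, t] for t in [j, i]: a reverse cumulative sum
    return gates.flip(-1).cumsum(-1).flip(-1)
```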

LoRA Learns Less and Forgets Less

This paper analyzes LoRA to understand whether it can add new knowledge to an LLM.

MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning

MoRA builds on the ideas of LoRA and PEFT to efficiently finetune an LLM with high-rank updates, using a square matrix instead of low-rank matrices.
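
A hedged sketch of the contrast with LoRA: LoRA's update is a product of two thin matrices (rank at most r), while MoRA trains one square matrix between parameter-free compress and decompress operators. The reshape-based compression below is a simplification of the paper's variants, with toy dimensions.

```python
import torch

d, r = 1024, 8

# LoRA: delta_W = B @ A has rank <= r (B starts at zero, so delta starts at 0).
A = torch.randn(r, d)
B = torch.zeros(d, r)
def lora_update(x: torch.Tensor) -> torch.Tensor:   # x: (d,)
    return B @ (A @ x)

# MoRA: one trainable r_hat x r_hat square matrix; here r_hat**2 == 2*d*r,
# so the parameter budget matches LoRA's, but the update can be high-rank.
r_hat = 128
M = torch.zeros(r_hat, r_hat)
def mora_update(x: torch.Tensor) -> torch.Tensor:
    xc = x.view(-1, r_hat).sum(0)    # compress d -> r_hat (parameter-free)
    y = M @ xc                       # square, potentially high-rank update
    return y.repeat(d // r_hat)      # decompress r_hat -> d
```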

RLHF Workflow: From Reward Modeling to Online RLHF

Online Iterative RLHF builds on reward modeling as an approximation of human preference, iteratively refreshing the dataset and aligning the model better than previous offline techniques.

Better & Faster Large Language Models via Multi-token Prediction

This multi-token prediction paper from Meta shows that multiple prediction heads are memory-efficient, perform better, and train faster than current next-token predictors.
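
The architecture change is simple to sketch: a shared trunk with several output heads, each predicting a different future offset. This is a toy re-implementation of the idea, not Meta's code; the causal mask is omitted for brevity.

```python
import torch
import torch.nn as nn

class MultiTokenLM(nn.Module):
    def __init__(self, vocab: int = 32000, d: int = 512, n_future: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.trunk = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, nhead=8, batch_first=True),
            num_layers=2,
        )
        # One unembedding head per future offset (+1 ... +n_future).
        self.heads = nn.ModuleList(nn.Linear(d, vocab) for _ in range(n_future))

    def forward(self, tokens: torch.Tensor) -> list[torch.Tensor]:
        h = self.trunk(self.embed(tokens))   # shared representation
        # Head i predicts the token at position t + i + 1.
        return [head(h) for head in self.heads]

logits = MultiTokenLM()(torch.randint(0, 32000, (1, 16)))
```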

OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework

OpenELM by Apple uses layer-wise scaling to allocate parameters non-uniformly across layers, resulting in more efficient LLMs.

AutoCodeRover: Autonomous Program Improvement

AutoCodeRover from NUS provides a novel framework that looks beyond code generation to genuine problem solving with the help of AI.

The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions

This paper from OpenAI research covers training an LLM to prioritize instructions in a hierarchical order: system prompt and alignment first, then user prompt, tool output, and so on.

Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length

Megalodon by Meta AI is a new LLM architecture that tackles the efficiency problems of transformers and can support unlimited context length using a new attention technique.

Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention

Infini-attention uses a compressive memory cache and efficient retrieval to enable practically unbounded context for a transformer within a bounded memory footprint.
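
The memory is just a d-by-d matrix updated with linear-attention-style outer products. A toy sketch of one segment step, following the paper's update equations in spirit:

```python
import torch
import torch.nn.functional as F

def elu1(x: torch.Tensor) -> torch.Tensor:
    return F.elu(x) + 1.0            # keeps activations positive

def infini_step(Q, K, V, M, z):
    """Q, K, V: (seq, d). M: (d, d) compressive memory. z: (d,) normalizer."""
    # Retrieve from memory with the current segment's queries.
    A_mem = (elu1(Q) @ M) / (elu1(Q) @ z).unsqueeze(-1)
    # Fold this segment's keys/values into the fixed-size memory.
    M = M + elu1(K).T @ V
    z = z + elu1(K).sum(0)
    return A_mem, M, z

d = 64
out, M, z = infini_step(*(torch.randn(16, d) for _ in range(3)),
                        torch.zeros(d, d), torch.ones(d))
```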

ReFT: Representation Finetuning for Language Models

ReFT changes representations at different layers of an LLM using a technique called intervention, instead of changing weights/parameters as PEFT does. It gives better performance on common benchmarks and tasks.

Mixture-of-Depths: Dynamically allocating compute in transformer-based language models

Mixture-of-Depths (MoD), by Google DeepMind, dynamically allocates compute across input tokens instead of spending the same compute on every one.
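
A toy sketch of the routing, assuming a per-block capacity of 12.5%: a learned router scores each token, only the top-k tokens pass through the block, and the rest ride the residual stream unchanged. Scaling by the router score keeps the routing decision differentiable.

```python
import torch
import torch.nn as nn

d, capacity = 512, 0.125   # process only ~12.5% of tokens in this block

router = nn.Linear(d, 1)
block = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

def mod_layer(x: torch.Tensor) -> torch.Tensor:     # x: (seq, d)
    scores = router(x).squeeze(-1)                  # one scalar per token
    k = max(1, int(capacity * x.shape[0]))
    topi = scores.topk(k).indices                   # tokens that get compute
    out = x.clone()
    gate = torch.sigmoid(scores[topi]).unsqueeze(-1)
    out[topi] = x[topi] + gate * block(x[topi])     # others skip via residual
    return out

y = mod_layer(torch.randn(64, d))
```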

RAFT: Adapting Language Model to Domain Specific RAG

With RAFT, an LLM is finetuned to ignore distracting documents and focus on relevant information for a given task. Combined with CoT, this enables the model to weight relevant information correctly, significantly improving generation and downstream-task output.
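
The training-data recipe is easy to sketch: pair each question with its golden document plus sampled distractors, and drop the golden document in a fraction of examples so the model also learns to cope when retrieval fails. Field names and the drop probability below are illustrative.

```python
import random

def make_raft_example(question: str, golden_doc: str, corpus: list[str],
                      n_distractors: int = 3, p_drop_golden: float = 0.2) -> dict:
    # Sample distractor documents irrelevant to the question.
    distractors = random.sample(corpus, n_distractors)
    docs = list(distractors)
    if random.random() >= p_drop_golden:
        docs.append(golden_doc)      # most examples keep the golden doc
    random.shuffle(docs)
    prompt = "\n\n".join(docs) + f"\n\nQuestion: {question}"
    # The target is a chain-of-thought answer grounded only in golden_doc.
    return {"prompt": prompt, "docs": docs}
```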

Gecko - Versatile Text Embeddings Distilled from Large Language Models

Gecko, from Google DeepMind, is a new embedding model architecture that uses a two-step LLM distillation process to create a high-quality training dataset, leading to better model performance.

Jamba - A hybrid Transformer-Mamba Language Model

Jamba by AI21 Labs interleaves transformer layers with Mamba (SSM) layers and adds MoE layers in between, yielding a compute-efficient model with high throughput.

Evolutionary Optimization of Model Merging Recipes

Evolutionary Model Merge uses evolutionary algorithms to automatically discover optimal ways of combining diverse open-source models. The resulting model harnesses the capabilities of its parents without requiring extensive additional training data or compute, making foundation-model development more accessible and efficient.

Dense X Retrieval: Proposition based Retrieval for RAG

Proposition-based retrieval performs significantly better than existing techniques like paragraph-based and sentence-based retrieval in RAG applications. This paper from Tencent investigates and quantifies the improvement on Wikipedia articles.

PERL: Parameter Efficient RLHF

PERL, or Parameter-Efficient Reinforcement Learning, could be a groundbreaking technique for reducing the memory and time it takes to align a model before releasing it to the world. This paper shows that using LoRA gets you close to the same benchmarks as standard RLHF techniques, and hence the same quality of output. The business implication is cost: alignment can be done efficiently on premises with any open-source model.

CaLM - Composition of LLMs by augmentation

CaLM provides composition for LLMs, similar to what libraries provide in a programming language. It's a powerful method for combining the skills of multiple LLMs depending on the use case.

Latest From Clio AI's Blog

We blog about interesting topics. You should sign up for our newsletter.

Why Clio AI?

Unlock the most obvious-yet-hidden-in-plain-sight growth hack: enable your employees to work on important things, and reduce their cognitive load and time to resolve blockers.

Fast, efficient, and in-context information to make every employee a super performer.

Spend time thinking, not searching. Get a demo today.
