Posts by Imad Dabbura

Jul 7, 2024 | Searching for Best Practices in Retrieval-Augmented Generation
May 30, 2024 | Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
Apr 29, 2024 | ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT
Apr 25, 2024 | Dense Passage Retrieval for Open-Domain Question Answering
Apr 19, 2024 | Internet-augmented language models through few-shot prompting for open-domain question answering
Apr 11, 2024 | Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
Apr 9, 2024 | OLMo: Accelerating the Science of Language Models
Apr 1, 2024 | REALM: Retrieval-Augmented Language Model Pre-Training
Mar 21, 2024 | Self-Instruct: Aligning Language Models with Self-Generated Instructions
Mar 17, 2024 | Mixtral of Experts
Mar 10, 2024 | Mistral-7B
Feb 27, 2024 | Code Llama: Open Foundation Models for Code
Feb 20, 2024 | Efficient Training of Language Models to Fill in the Middle
Feb 15, 2024 | Chinchilla: Training Compute-Optimal Large Language Models
Jan 5, 2024 | Scaling Laws for Neural Language Models
Dec 9, 2023 | Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling
Nov 16, 2023 | T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Oct 5, 2023 | Llama 2: Open Foundation and Fine-Tuned Chat Models
Sep 7, 2023 | LLaMA: Open and Efficient Foundation Language Models
Jun 23, 2022 | InstructGPT: Training language models to follow instructions with human feedback
Feb 3, 2022 | GPT3: Language Models are Few-Shot Learners
Nov 10, 2021 | RoBERTa: A Robustly Optimized BERT Pretraining Approach
Oct 21, 2021 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Apr 11, 2021 | GPT2: Language Models are Unsupervised Multitask Learners
Jan 16, 2021 | GPT: Improving Language Understanding by Generative Pre-Training