Collections including paper arxiv:2404.19737

- Hydra: Sequentially-Dependent Draft Heads for Medusa Decoding
  Paper • 2402.05109 • Published • 1
- Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads
  Paper • 2401.10774 • Published • 59
- Better & Faster Large Language Models via Multi-token Prediction
  Paper • 2404.19737 • Published • 79
- Exploring the Latent Capacity of LLMs for One-Step Text Generation
  Paper • 2505.21189 • Published • 62

- Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models
  Paper • 2405.01535 • Published • 124
- StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation
  Paper • 2405.01434 • Published • 57
- WildChat: 1M ChatGPT Interaction Logs in the Wild
  Paper • 2405.01470 • Published • 63
- A Careful Examination of Large Language Model Performance on Grade School Arithmetic
  Paper • 2405.00332 • Published • 33

- RARR: Researching and Revising What Language Models Say, Using Language Models
  Paper • 2210.08726 • Published • 1
- Hypothesis Search: Inductive Reasoning with Language Models
  Paper • 2309.05660 • Published • 2
- In-context Learning and Induction Heads
  Paper • 2209.11895 • Published • 2
- ReAct: Synergizing Reasoning and Acting in Language Models
  Paper • 2210.03629 • Published • 27

- Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting
  Paper • 2404.18911 • Published • 31
- Accelerating LLM Inference with Staged Speculative Decoding
  Paper • 2308.04623 • Published • 25
- An Emulator for Fine-Tuning Large Language Models using Small Language Models
  Paper • 2310.12962 • Published • 13
- The Curious Case of Neural Text Degeneration
  Paper • 1904.09751 • Published • 3

- STaR: Bootstrapping Reasoning With Reasoning
  Paper • 2203.14465 • Published • 8
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
  Paper • 2401.06066 • Published • 55
- DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
  Paper • 2405.04434 • Published • 21
- Prompt Cache: Modular Attention Reuse for Low-Latency Inference
  Paper • 2311.04934 • Published • 34

- Instruction Pre-Training: Language Models are Supervised Multitask Learners
  Paper • 2406.14491 • Published • 95
- Better & Faster Large Language Models via Multi-token Prediction
  Paper • 2404.19737 • Published • 79
- RAFT: Adapting Language Model to Domain Specific RAG
  Paper • 2403.10131 • Published • 73
- The Prompt Report: A Systematic Survey of Prompting Techniques
  Paper • 2406.06608 • Published • 66

- Better & Faster Large Language Models via Multi-token Prediction
  Paper • 2404.19737 • Published • 79
- Is Bigger Edit Batch Size Always Better? -- An Empirical Study on Model Editing with Llama-3
  Paper • 2405.00664 • Published • 21
- LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report
  Paper • 2405.00732 • Published • 122

- Iterative Reasoning Preference Optimization
  Paper • 2404.19733 • Published • 50
- Better & Faster Large Language Models via Multi-token Prediction
  Paper • 2404.19737 • Published • 79
- ORPO: Monolithic Preference Optimization without Reference Model
  Paper • 2403.07691 • Published • 67
- KAN: Kolmogorov-Arnold Networks
  Paper • 2404.19756 • Published • 114