Collections including paper arxiv:2307.14995

- Mamba: Linear-Time Sequence Modeling with Selective State Spaces (Paper • 2312.00752 • Published • 144)
- Elucidating the Design Space of Diffusion-Based Generative Models (Paper • 2206.00364 • Published • 17)
- GLU Variants Improve Transformer (Paper • 2002.05202 • Published • 3)
- StarCoder 2 and The Stack v2: The Next Generation (Paper • 2402.19173 • Published • 147)

- The Impact of Depth and Width on Transformer Language Model Generalization (Paper • 2310.19956 • Published • 10)
- Retentive Network: A Successor to Transformer for Large Language Models (Paper • 2307.08621 • Published • 172)
- RWKV: Reinventing RNNs for the Transformer Era (Paper • 2305.13048 • Published • 19)
- Attention Is All You Need (Paper • 1706.03762 • Published • 73)

- Self-Alignment with Instruction Backtranslation (Paper • 2308.06259 • Published • 42)
- ReCLIP: Refine Contrastive Language Image Pre-Training with Source Free Domain Adaptation (Paper • 2308.03793 • Published • 11)
- From Sparse to Soft Mixtures of Experts (Paper • 2308.00951 • Published • 21)
- Revisiting DETR Pre-training for Object Detection (Paper • 2308.01300 • Published • 9)

- Efficient Memory Management for Large Language Model Serving with PagedAttention (Paper • 2309.06180 • Published • 25)
- LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models (Paper • 2308.16137 • Published • 40)
- Scaling Transformer to 1M tokens and beyond with RMT (Paper • 2304.11062 • Published • 3)
- DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models (Paper • 2309.14509 • Published • 20)