Bootstrapping World Models from Dynamics Models in Multimodal Foundation Models Paper • 2506.06006 • Published Jun 6 • 13
The Sparse Frontier: Sparse Attention Trade-offs in Transformer LLMs Paper • 2504.17768 • Published Apr 24 • 14
Hierarchical Transformers Are More Efficient Language Models Paper • 2110.13711 • Published Oct 26, 2021 • 1
No Train No Gain: Revisiting Efficient Training Algorithms For Transformer-based Language Models Paper • 2307.06440 • Published Jul 12, 2023 • 3
nanoT5: A PyTorch Framework for Pre-training and Fine-tuning T5-style Models with Limited Resources Paper • 2309.02373 • Published Sep 5, 2023 • 1
Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference Paper • 2403.09636 • Published Mar 14, 2024 • 3