VOCABTRIM: Vocabulary Pruning for Efficient Speculative Decoding in LLMs. arXiv:2506.22694, Jun 28, 2025.
Direct Alignment of Draft Model for Speculative Decoding with Chat-Fine-Tuned LLMs. arXiv:2403.00858, Feb 29, 2024.
CAOTE: KV Caching through Attention Output Error based Token Eviction. arXiv:2504.14051, Apr 18, 2025.
KeDiff: Key Similarity-Based KV Cache Eviction for Long-Context LLM Inference in Resource-Constrained Environments. arXiv:2504.15364, Apr 21, 2025.
On Speculative Decoding for Multimodal Large Language Models. arXiv:2404.08856, Apr 13, 2024.