Collections
Discover the best community collections!
Collections including paper arxiv:2405.05904

- Can Large Language Models Understand Context?
  Paper • 2402.00858 • Published • 24
- OLMo: Accelerating the Science of Language Models
  Paper • 2402.00838 • Published • 84
- Self-Rewarding Language Models
  Paper • 2401.10020 • Published • 152
- SemScore: Automated Evaluation of Instruction-Tuned LLMs based on Semantic Textual Similarity
  Paper • 2401.17072 • Published • 24

- A Survey on Data Selection for Language Models
  Paper • 2402.16827 • Published • 4
- Instruction Tuning with Human Curriculum
  Paper • 2310.09518 • Published • 3
- Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs
  Paper • 2312.05934 • Published • 1
- Language Models as Agent Models
  Paper • 2212.01681 • Published

- RAFT: Adapting Language Model to Domain Specific RAG
  Paper • 2403.10131 • Published • 73
- Enhancing Large Language Model Performance To Answer Questions and Extract Information More Accurately
  Paper • 2402.01722 • Published
- Fine Tuning vs. Retrieval Augmented Generation for Less Popular Knowledge
  Paper • 2403.01432 • Published • 2
- Instruction-tuned Language Models are Better Knowledge Learners
  Paper • 2402.12847 • Published • 27

- Iterative Reasoning Preference Optimization
  Paper • 2404.19733 • Published • 50
- Better & Faster Large Language Models via Multi-token Prediction
  Paper • 2404.19737 • Published • 80
- ORPO: Monolithic Preference Optimization without Reference Model
  Paper • 2403.07691 • Published • 68
- KAN: Kolmogorov-Arnold Networks
  Paper • 2404.19756 • Published • 114

- WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia
  Paper • 2305.14292 • Published
- Harnessing Retrieval-Augmented Generation (RAG) for Uncovering Knowledge Gaps
  Paper • 2312.07796 • Published
- RAGAS: Automated Evaluation of Retrieval Augmented Generation
  Paper • 2309.15217 • Published • 3
- Improving the Domain Adaptation of Retrieval Augmented Generation (RAG) Models for Open Domain Question Answering
  Paper • 2210.02627 • Published