Oct 03, 2024 - QCRD: Quality-guided Contrastive Rationale Distillation for Large Language Models
Sep 09, 2024 - Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling
Aug 20, 2024 - Knowledge-Augmented Reasoning Distillation for Small Language Models in Knowledge-Intensive Tasks (KARD)
Jan 30, 2024 - Lion: Adversarial Distillation of Proprietary Large Language Models
Sep 12, 2023 - A Systematic Study of Knowledge Distillation for Natural Language Generation with Pseudo-Target Training