- formatting
- images
- links
- math
- code
- blockquotes
- external-services
-
Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling
Paper review - research on Knowledge Distillation, LLMs, and Limited Budget
-
Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal Language Models
Paper review - research on VLMs and Safety
-
Many-shot jailbreaking
Paper review - research on ICL and Safety
-
LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
Paper review - research on Embeddings and LLMs
-
Knowledge-Augmented Reasoning distillation for Small Language Models in Knowledge-Intensive Tasks (KARD)
Paper review - research on Reasoning and Knowledge Distillation