2024

An archive of posts from this year

Oct 17, 2024 Rule Based Rewards for Language Model Safety
Oct 17, 2024 KNOWLEDGE ENTROPY DECAY DURING LANGUAGE MODEL PRETRAINING HINDERS NEW KNOWLEDGE ACQUISITION
Oct 10, 2024 FAITHEVAL: CAN YOUR LANGUAGE MODEL STAY FAITHFUL TO CONTEXT, EVEN IF “THE MOON IS MADE OF MARSHMALLOWS”
Oct 03, 2024 QCRD: Quality-guided Contrastive Rationale Distillation for Large Language Models
Sep 23, 2024 Training Language Models to Self-Correct via Reinforcement Learning
Sep 23, 2024 SUPER: Evaluating Agents on Setting Up and Executing Tasks from Research Repositories
Sep 09, 2024 Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling
Sep 09, 2024 Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal Language Models
Sep 02, 2024 Many-shot jailbreaking
Sep 02, 2024 LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
Aug 20, 2024 Knowledge-Augmented Reasoning distillation for Small Language Models in Knowledge-Intensive Tasks (KARD)
Aug 13, 2024 Physics of Language Models: Part 2.1, Grade-School Math and the Hidden Reasoning Process
Aug 13, 2024 Knowledge conflict survey
Jul 30, 2024 In-Context Retrieval-Augmented Language Models
Jul 23, 2024 Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning
Jul 23, 2024 Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs
Jul 23, 2024 PySpark - How to Preprocess Large-Scale Data with Python
Jul 22, 2024 LLAVA - Visual Instruction Tuning
Jul 02, 2024 RL-JACK: Reinforcement Learning-powered Black-box Jailbreaking Attack against LLMs
Jul 02, 2024 Llama3 Tokenizer
Jun 11, 2024 Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet
Jun 11, 2024 Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations?
Jun 11, 2024 Contextual Position Encoding: Learning to Count What’s Important
Jun 04, 2024 Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training
May 28, 2024 SimPO: Simple Preference Optimization with a Reference-Free Reward
May 27, 2024 Understanding the performance gap between online and offline alignment algorithms
May 21, 2024 LLAMA PRO: Progressive LLaMA with Block Expansion
May 07, 2024 How to Train LLM? - From Data Parallel To Fully Sharded Data Parallel
May 07, 2024 How to Inference Big LLM? - Using Accelerate Library
Apr 30, 2024 Training Diffusion Models with Reinforcement Learning
Apr 30, 2024 Many-Shot In-Context Learning
Apr 23, 2024 ORPO: Monolithic Preference Optimization without Reference Model
Apr 23, 2024 Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers?
Apr 16, 2024 Understanding Emergent Abilities of Language Models from the Loss Perspective
Apr 13, 2024 Scaling Laws for Data Filtering - Data Curation Cannot be Compute Agnostic
Apr 02, 2024 Preference-free Alignment Learning with Regularized Relevance Reward
Mar 26, 2024 Search-in-the-Chain: Interactively Enhancing Large Language Models with Search for Knowledge-intensive Tasks
Mar 19, 2024 Unveiling the Generalization Power of Fine-Tuned Large Language Models
Mar 12, 2024 A Simple and Effective Pruning Approach for Large Language Models
Mar 11, 2024 BitNet: Scaling 1-bit Transformers for Large Language Models
Mar 05, 2024 Beyond Memorization: Violating Privacy Via Inferencing With LLMs
Feb 27, 2024 SELF-RAG: LEARNING TO RETRIEVE, GENERATE, AND CRITIQUE THROUGH SELF-REFLECTION
Feb 20, 2024 WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia
Feb 20, 2024 KNOWLEDGE CARD: FILLING LLMS’ KNOWLEDGE GAPS WITH PLUG-IN SPECIALIZED LANGUAGE MODELS
Feb 13, 2024 LLM AUGMENTED LLMS: EXPANDING CAPABILITIES THROUGH COMPOSITION
Feb 13, 2024 CAN SENSITIVE INFORMATION BE DELETED FROM LLMS? OBJECTIVES FOR DEFENDING AGAINST EXTRACTION ATTACKS
Feb 06, 2024 Self-Rewarding Language Models
Jan 30, 2024 Lion: Adversarial Distillation of Proprietary Large Language Models
Jan 23, 2024 OVERTHINKING THE TRUTH: UNDERSTANDING HOW LANGUAGE MODELS PROCESS FALSE DEMONSTRATIONS
Jan 23, 2024 IN-CONTEXT PRETRAINING: LANGUAGE MODELING BEYOND DOCUMENT BOUNDARIES
Jan 16, 2024 Mistral 7B & Mixtral (Mixtral of Experts)
Jan 16, 2024 BENCHMARKING COGNITIVE BIASES IN LARGE LANGUAGE MODELS AS EVALUATORS
Jan 09, 2024 Making Large Language Models A Better Foundation For Dense Retrieval
Jan 03, 2024 vLLM: Easy, Fast, and Cheap LLM Serving with PagedAttention
Jan 02, 2024 DETECTING PRETRAINING DATA FROM LARGE LANGUAGE MODELS