QeRL: Beyond Efficiency - Quantization-enhanced Reinforcement Learning for LLMs

Abstract

We propose QeRL, a Quantization-enhanced Reinforcement Learning framework for large language models (LLMs). While RL is essential for LLMs' reasoning capabilities, it is resource-intensive, requiring substantial GPU memory and long rollout durations. QeRL addresses these issues by combining NVFP4 quantization with Low-Rank Adaptation (LoRA), accelerating the rollout phase of RL while reducing memory overhead. Beyond efficiency, our findings show that quantization noise increases policy entropy, enhancing exploration and enabling the discovery of better strategies during RL. To further optimize exploration, QeRL introduces an Adaptive Quantization Noise (AQN) mechanism, which dynamically adjusts noise throughout training. Experiments demonstrate that QeRL delivers around a 1.2×–1.5× speedup over BF16 LoRA in end-to-end RL training while drastically reducing memory usage, and a 1.5×–2.0× speedup over QLoRA. Moreover, QeRL is the first framework to enable RL training of a 32B LLM on a single H100 80GB GPU, while delivering overall speedups for RL training. It also achieves faster reward growth and higher final accuracy than 16-bit LoRA and QLoRA, while matching the performance of full-parameter fine-tuning on mathematical benchmarks such as GSM8K (90.8%) and MATH 500 (77.4%) with the 7B model. These results establish QeRL as an efficient and effective framework for RL training in LLMs.
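The exploration effect described above can be illustrated with a toy sketch. The snippet below is not the paper's implementation: it simply injects zero-mean Gaussian noise into the logits of a peaked softmax policy (a stand-in for quantization noise) and shows that entropy rises on average, alongside a hypothetical decaying noise schedule in the spirit of AQN. The function names, the noise magnitudes, and the exponential-decay schedule are all illustrative assumptions.

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    # Shannon entropy in nats.
    return -sum(p * math.log(p) for p in probs if p > 0)

def aqn_sigma(step, total_steps, sigma_start=0.05, sigma_end=0.005):
    # Hypothetical AQN-style schedule: exponentially decay the noise
    # scale from sigma_start to sigma_end over training (assumption,
    # not the paper's exact mechanism).
    ratio = sigma_end / sigma_start
    return sigma_start * ratio ** (step / total_steps)

random.seed(0)
logits = [5.0, 0.0, 0.0, 0.0]  # a confident (low-entropy) policy
clean_h = entropy(softmax(logits))

# Average entropy after injecting zero-mean Gaussian noise into logits,
# mimicking the perturbation that quantization introduces.
sigma = 1.0
draws = 10_000
noisy_h = sum(
    entropy(softmax([x + random.gauss(0.0, sigma) for x in logits]))
    for _ in range(draws)
) / draws

# For a peaked policy, the noise flattens the distribution on average,
# so noisy_h exceeds clean_h: more entropy, hence more exploration.
print(f"clean entropy: {clean_h:.3f}, avg noisy entropy: {noisy_h:.3f}")
```

The design intuition is the same as in entropy-regularized RL: a higher-entropy policy samples a wider range of actions during rollouts, which can uncover better strategies; the decaying schedule then reduces the perturbation as the policy converges.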

Publication
International Conference on Learning Representations
Yujun Lin
Senior Research Scientist, NVIDIA Research

Yao (Jason) Lu
Senior Research Scientist, NVIDIA Research

Song Han
Associate Professor, MIT EECS

Yukang Chen
Senior Research Scientist, NVIDIA Research