News
[2025-05] “Token-Budget-Aware LLM Reasoning” has been accepted to ACL 2025 (Findings)!
[2024-12] Selected as HuggingFace Daily Paper Top-1!
🚀 1. Overview
This is the repository for our paper “Token-Budget-Aware LLM Reasoning” (ACL 2025 Findings).
Reasoning is crucial for LLMs to perform complex tasks, but methods like Chain-of-Thought (CoT) reasoning often incur significant token overhead and increased cost. We identify substantial token redundancy in the reasoning process of state-of-the-art LLMs and propose a token-budget-aware reasoning framework that dynamically allocates a token budget to each problem based on its complexity and uses that budget to guide the reasoning process. Experiments show that our method reduces the token cost of CoT reasoning with only a minor performance trade-off, striking a practical balance between efficiency and accuracy.
📖 2. Environment
Please see requirements.txt (install with `pip install -r requirements.txt`).
🏗️ 3. Inference
We provide implementations for Directly Answering and Vanilla CoT.
⚡ Directly Answering

```bash
# for gpt-4o-mini on GSM8K-Test
python -u inference.py --data_name GSM8K-Test --model gpt-4o-mini

# for local model on GSM8K-Test
python -u inference.py --model <local_model_name> --data_name GSM8K-Test --output_path <your_outdir> --batch_size 256

# example
python -u inference.py --model Llama-3.1-8B-Instruct --data_name GSM8K-Test --output_path results --batch_size 256
```
🔗 Vanilla CoT

```bash
# for gpt-4o-mini on GSM8K-Test
python -u inference.py --data_name GSM8K-Test --model gpt-4o-mini --reasoning

# for local model on GSM8K-Test
python -u inference.py --model <local_model_name> --data_name GSM8K-Test --output_path <your_outdir> --batch_size 256 --reasoning

# example
python -u inference.py --model Llama-3.1-8B-Instruct --data_name GSM8K-Test --output_path results --batch_size 256 --reasoning
```
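For intuition, the `--reasoning` flag is what separates the two modes: Directly Answering requests only the final answer, while Vanilla CoT elicits step-by-step reasoning first. A minimal sketch of the two prompt styles (the wording here is illustrative, not the repo's exact templates):

```python
def build_prompt(question: str, reasoning: bool) -> str:
    """Build a query in one of two styles: direct answer vs. chain-of-thought."""
    if reasoning:
        # Vanilla CoT: elicit intermediate reasoning before the final answer.
        return f"{question}\nLet's think step by step, then state the final answer."
    # Directly Answering: ask for the answer alone, with no intermediate steps.
    return f"{question}\nGive only the final answer, with no explanation."

q = "What is 17 * 24?"
print(build_prompt(q, reasoning=False))
print(build_prompt(q, reasoning=True))
```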
💰 Output token costs
The output token costs of Directly Answering and Vanilla CoT are compared below:
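To reproduce such a comparison offline, you can count tokens in the saved completions directly. A minimal sketch using tiktoken with the o200k_base encoding (the tokenizer behind gpt-4o / gpt-4o-mini); the helper name and toy strings are ours:

```python
import tiktoken

def count_output_tokens(completions: list[str]) -> int:
    """Total number of output tokens across a list of model completions."""
    enc = tiktoken.get_encoding("o200k_base")  # encoding used by gpt-4o / gpt-4o-mini
    return sum(len(enc.encode(text)) for text in completions)

direct = ["408"]
cot = ["Let's think step by step. 17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408."]
print("Directly Answering tokens:", count_output_tokens(direct))
print("Vanilla CoT tokens:", count_output_tokens(cot))
```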
🔍 4. Search for optimal budget
💰 Output token costs
The output token costs of Vanilla CoT and CoT with the optimal searched budget are compared below:
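As a rough illustration of what a budget search can look like: the sketch below greedily halves the budget while the budget-constrained answer stays correct, keeping the smallest budget that still succeeds. This is a simplified reading of the idea, not this repo's implementation; `answer_with_budget` and `is_correct` are hypothetical stand-ins for a budget-constrained inference call and an answer checker.

```python
from typing import Callable

def search_optimal_budget(
    question: str,
    answer_with_budget: Callable[[str, int], str],
    is_correct: Callable[[str], bool],
    init_budget: int = 512,
) -> int:
    """Greedily halve the token budget while the budgeted answer stays correct."""
    if not is_correct(answer_with_budget(question, init_budget)):
        return init_budget  # even the starting budget fails; do not shrink further
    best = init_budget
    budget = init_budget // 2
    while budget >= 1:
        if not is_correct(answer_with_budget(question, budget)):
            break  # this budget is too tight; keep the last one that worked
        best = budget
        budget //= 2
    return best

# Toy usage with stub callables; real versions would call an LLM with a
# budget-constrained prompt and check the extracted answer.
fake_infer = lambda q, b: "408" if b >= 16 else "unsure"
fake_check = lambda a: a == "408"
print(search_optimal_budget("What is 17 * 24?", fake_infer, fake_check))  # -> 16
```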
⚙️ 5. TALE
We provide two implementations of TALE: TALE-EP and TALE-PT.
🧠 TALE-EP
TALE with a zero-shot estimator: the model first estimates a token budget for the question, and the estimated budget is then included in the prompt to guide reasoning.
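Conceptually, TALE-EP is a two-stage prompting scheme. A minimal sketch using the openai Python client (the prompt wording is illustrative, not the repo's; gpt-4o-mini matches the model used in the commands above):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def tale_ep(question: str) -> str:
    # Stage 1: zero-shot budget estimation. The model itself estimates how many
    # output tokens the reasoning for this question should need.
    budget = ask(
        "Estimate how many output tokens are needed to reason through and answer "
        f"the following question. Reply with a single integer only.\n\n{question}"
    ).strip()
    # Stage 2: budget-aware CoT. The estimated budget is injected into the
    # prompt to steer the model toward concise reasoning.
    return ask(
        f"Answer the following question step by step, "
        f"using at most {budget} tokens:\n\n{question}"
    )

print(tale_ep("What is 17 * 24?"))
```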
🎯 TALE-PT
TALE-PT internalizes budget awareness into the model through post-training, with two variants:
📚 TALE-PT-SFT
TALE-PT-SFT applies supervised fine-tuning on budget-constrained reasoning outputs.
🔄 TALE-PT-DPO
TALE-PT-DPO applies direct preference optimization, preferring concise budgeted responses over verbose ones.
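To make the two variants concrete, here is a sketch of how post-training records could be constructed under this setup. It illustrates the general recipe rather than this repo's exact pipeline; the file name and record schema are our own choices:

```python
import json

def make_sft_record(question: str, budgeted_cot: str) -> dict:
    # SFT: train the model to produce the concise, budget-constrained reasoning directly.
    return {"prompt": question, "completion": budgeted_cot}

def make_dpo_record(question: str, budgeted_cot: str, vanilla_cot: str) -> dict:
    # DPO: prefer the short budgeted reasoning ("chosen") over the verbose
    # vanilla CoT reasoning ("rejected") for the same question.
    return {"prompt": question, "chosen": budgeted_cot, "rejected": vanilla_cot}

question = "What is 17 * 24?"
budgeted = "17 * 24 = 340 + 68 = 408. The answer is 408."
vanilla = ("Let's think step by step. Split 24 into 20 + 4. 17 * 20 = 340. "
           "17 * 4 = 68. 340 + 68 = 408. The answer is 408.")

with open("tale_pt_pairs.jsonl", "w") as f:
    f.write(json.dumps(make_sft_record(question, budgeted)) + "\n")
    f.write(json.dumps(make_dpo_record(question, budgeted, vanilla)) + "\n")
```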
🤝 6. Cite our work