[ACL 2026] FTRL
Feedback-Driven Tool-Use Improvements in Large Language Models via Automated Build Environments
Junjie Ye
jjye23@m.fudan.edu.cn
Aug. 03, 2025
Introduction
Effective tool use is essential for large language models (LLMs) to interact meaningfully with their environment. However, progress is limited by the lack of efficient reinforcement learning (RL) frameworks specifically designed for tool use, due to challenges in constructing stable training environments and designing verifiable reward mechanisms. To address this, we propose an automated environment construction pipeline, incorporating scenario decomposition, document generation, function integration, complexity scaling, and localized deployment. This enables the creation of high-quality training environments that provide detailed and measurable feedback without relying on external tools. Additionally, we introduce a verifiable reward mechanism that evaluates both the precision of tool use and the completeness of task execution. When combined with trajectory data collected from the constructed environments, this mechanism integrates seamlessly with standard RL algorithms to facilitate feedback-driven model training. Experiments on LLMs of varying scales demonstrate that our approach significantly enhances the models’ tool-use performance without degrading their general capabilities, regardless of inference mode or training algorithm. Our analysis suggests that these gains result from improved context understanding and reasoning, driven by updates to the models’ lower-layer MLP parameters.
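To make the reward design concrete, here is a minimal Python sketch of a reward that blends tool-call precision with task completeness. Every name and the equal 0.5/0.5 weighting below are hypothetical illustrations, not the paper’s actual implementation.

# A minimal sketch of a verifiable reward blending tool-call precision with
# task completeness. All names and the weighting are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolCall:
    name: str
    arguments: tuple  # sorted (key, value) pairs, so calls compare by value

def tool_use_reward(predicted: list[ToolCall], expected: list[ToolCall],
                    completed_subtasks: int, total_subtasks: int,
                    alpha: float = 0.5) -> float:
    # Precision: fraction of predicted calls that match a gold call.
    correct = sum(1 for call in predicted if call in expected)
    precision = correct / len(predicted) if predicted else 0.0
    # Completeness: fraction of subtasks the environment marks as done.
    completeness = completed_subtasks / max(total_subtasks, 1)
    return alpha * precision + (1 - alpha) * completeness

# Example: 2 of 3 calls are correct and 3 of 4 subtasks are completed.
pred = [ToolCall("search", (("q", "llm"),)),
        ToolCall("open", (("id", "1"),)),
        ToolCall("noop", ())]
gold = [ToolCall("search", (("q", "llm"),)), ToolCall("open", (("id", "1"),))]
print(tool_use_reward(pred, gold, 3, 4))  # 0.5 * 2/3 + 0.5 * 3/4 ≈ 0.708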
What’s New
[2026/04/06] The paper has been accepted by ACL 2026.
Main Results
We evaluate the performance of various LLMs and present the average performance across scenarios for each dataset. Based on these results, we make the following observations.
Our approach consistently enhances the model’s tool-use capabilities across various conditions.
Performance gains achieved by our method appear to stem primarily from updates to the model’s lower-layer MLP parameters.
Current open-source LLMs do not necessarily exhibit stronger tool-use performance in reasoning mode than in non-reasoning mode.
Usage
Requirement
Run the following command to install the required packages.
pip install -r requirements.txt
Training with Our Data
We provide trajectories collected from our constructed environments.
Transform the training data from JSONL format to Parquet format, as sketched below.
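If you do not already have a conversion script at hand, a minimal Python sketch using pandas follows; the file paths are placeholders for the trajectory files in this repo.

# Minimal conversion sketch; "data/train.jsonl" and "data/train.parquet"
# are placeholder paths, not the repo's actual file names.
import pandas as pd

df = pd.read_json("data/train.jsonl", lines=True)  # one JSON object per line
df.to_parquet("data/train.parquet", index=False)   # needs pyarrow or fastparquet
print(f"wrote {len(df)} rows to data/train.parquet")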
Train the model with the transformed data.
Merge the trained model into SAFETENSORS format, as sketched below.
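As a rough illustration, the following Python sketch exports an already-consolidated Hugging Face-format checkpoint to safetensors via transformers; both paths are hypothetical. For the sharded checkpoints VeRL writes during training, use VeRL’s own checkpoint-merging tooling first.

# Export sketch assuming the trained weights are already consolidated into a
# Hugging Face-format directory; both paths below are hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("checkpoints/merged")
tokenizer = AutoTokenizer.from_pretrained("checkpoints/merged")

# safe_serialization=True writes model.safetensors instead of pytorch .bin files
model.save_pretrained("checkpoints/safetensors", safe_serialization=True)
tokenizer.save_pretrained("checkpoints/safetensors")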
Evaluation for Open-Source LLMs
Evaluation for Closed-Source LLMs
License
The code is licensed under the Apache License 2.0.
Acknowledgement
We employ the VeRL 0.3.1.dev framework for training.
Citation
If you find this project useful in your research, please cite:
@misc{FTRL,
  title={Feedback-Driven Tool-Use Improvements in Large Language Models via Automated Build Environments},
  author={Junjie Ye and Changhao Jiang and Zhengyin Du and Yufei Xu and Xuesong Yao and Zhiheng Xi and Xiaoran Fan and Qi Zhang and Xuanjing Huang and Jiecao Chen},
  year={2025},
  eprint={2508.08791},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2508.08791},
}