
EVOLVE-VLA:
Test-Time Training from Environment Feedback
for Vision-Language-Action Models

Enabling VLAs to continuously adapt through autonomous environment interaction.
Learning by doing, not just watching.

Zechen Bai Chen Gao Mike Zheng Shou
Show Lab, National University of Singapore
Corresponding Author

Abstract

Achieving truly adaptive embodied intelligence requires agents that learn not just by imitating static demonstrations, but by continuously improving through environment interaction, much as humans master skills through practice.

Vision-Language-Action (VLA) models have advanced robotic manipulation by leveraging large language models, yet they remain fundamentally limited by supervised fine-tuning (SFT): they require hundreds of demonstrations per task, rigidly memorize trajectories, and fail to adapt when deployment conditions deviate from training.

We introduce EVOLVE-VLA, a test-time training framework that enables VLAs to continuously adapt through environment interaction with minimal or zero task-specific demonstrations. The key technical challenge is replacing oracle reward signals, which are unavailable at test time, with autonomous feedback. We address this with a learned progress estimator that provides dense feedback and, critically, we design the framework to "tame" this inherently noisy signal via two mechanisms: (1) accumulative progress estimation, which smooths noisy point-wise estimates, and (2) progressive horizon extension, which enables gradual policy evolution.

EVOLVE-VLA achieves substantial gains: +8.6% on long-horizon tasks, +22.0% in 1-shot learning, and cross-task generalization, reaching 20.8% success on unseen tasks without training on task-specific demonstrations (vs. 0% for pure SFT). Qualitative analysis reveals emergent capabilities absent from the demonstrations, including error recovery and novel strategies.

Overview

EVOLVE-VLA Overview
Figure 1: EVOLVE-VLA enables test-time training for VLAs. Unlike traditional supervised fine-tuning that requires extensive demonstrations and produces rigid policies, our framework needs minimal supervision and continues to learn autonomously during deployment, achieving substantial performance gains and emergent capabilities like error recovery.

Method

EVOLVE-VLA addresses the fundamental limitation of static supervised fine-tuning by enabling VLAs to continue learning at test time through autonomous environment interaction. The framework consists of four key components:

🎯 Progress-Based Reward

We replace impractical oracle rewards with a learned progress estimator that evaluates task completion based on visual observations. This provides dense, continuous feedback crucial for sample-efficient learning.
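
Conceptually, the estimator maps a visual observation and the task description to a scalar in [0, 1]. Below is a minimal sketch of such an interface, assuming pooled visual and text features; the architecture, feature dimensions, and the delta-based reward at the end are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class ProgressEstimator(nn.Module):
    """Maps pooled visual features and a task embedding to progress in [0, 1]."""

    def __init__(self, vis_dim: int = 512, txt_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(vis_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, vis_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        # vis_feat: (B, vis_dim) pooled image features; txt_feat: (B, txt_dim) task embedding
        x = torch.cat([vis_feat, txt_feat], dim=-1)
        return torch.sigmoid(self.head(x)).squeeze(-1)  # (B,) progress in [0, 1]

# Dense reward as the change in estimated progress between consecutive steps
# (one plausible reading; the paper's exact reward definition may differ).
estimator = ProgressEstimator()
prev = estimator(torch.randn(1, 512), torch.randn(1, 512))
curr = estimator(torch.randn(1, 512), torch.randn(1, 512))
reward = (curr - prev).item()
```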

📊 Accumulative Estimation

To handle noisy progress estimates, we introduce an accumulative mechanism with interval-based sampling that aggregates and smooths point-wise estimates into stable, reliable signals over long horizons.
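
The exact aggregation rule is not spelled out on this page; the sketch below shows one plausible reading, assuming point-wise scores sampled at a fixed frame interval and accumulated via clipped forward differences.

```python
import numpy as np

def accumulative_progress(frame_scores: np.ndarray, interval: int = 5) -> np.ndarray:
    """Smooth noisy per-frame progress estimates into a stable accumulated curve.

    frame_scores: (T,) raw point-wise progress estimates in [0, 1].
    """
    sampled = frame_scores[::interval]             # interval-based sampling
    deltas = np.diff(sampled, prepend=sampled[0])  # step-to-step change
    deltas = np.clip(deltas, 0.0, None)            # discard spurious drops
    return np.cumsum(deltas)                       # accumulated (monotone) estimate

# Usage: a noisy but roughly increasing progress signal over a 60-step rollout.
noisy = np.clip(np.linspace(0.0, 1.0, 60) + 0.1 * np.random.randn(60), 0.0, 1.0)
print(accumulative_progress(noisy, interval=5)[-1])  # final smoothed progress
```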

📈 Progressive Horizon Extension

We gradually increase the exploration horizon during training, allowing the policy to first master shorter sub-goals before tackling complete tasks. This curriculum makes learning more stable and efficient.
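
As an illustration, such a curriculum can be as simple as a staged horizon schedule; the stage size and the promotion criterion below are assumptions for illustration, not values from the paper.

```python
def horizon_schedule(stage: int, base: int = 50, step: int = 50, max_horizon: int = 300) -> int:
    """Exploration horizon (in environment steps) for a given curriculum stage."""
    return min(base + stage * step, max_horizon)

# Advance to the next stage once the policy is reliable at the current horizon
# (a success-rate threshold is assumed here purely for illustration).
stage = 0
for success_rate in [0.2, 0.55, 0.7, 0.4]:
    print(f"stage {stage}: horizon {horizon_schedule(stage)}, success {success_rate:.2f}")
    if success_rate > 0.5:
        stage += 1
```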

🔄 Online Reinforcement Learning

The VLA autonomously explores, receives progress-based feedback, and refines its behavior via GRPO optimization—mirroring the trial-and-error process through which humans develop skills.
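
GRPO scores each rollout relative to a group of rollouts sampled for the same task, removing the need for a separate value function. The sketch below shows this group-relative normalization; the clipped policy-ratio update that consumes these advantages is standard and omitted here.

```python
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalize rewards within a group of rollouts sampled for the same task."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: 8 rollouts for one task instruction, scored by the progress-based reward.
rewards = np.array([0.9, 0.1, 0.4, 0.7, 0.2, 0.8, 0.5, 0.3])
print(group_relative_advantages(rewards).round(2))
```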

Method Framework
Figure 2: Framework overview. During test-time training, the VLA generates diverse rollout trajectories through environment interaction. A task progress estimation module assigns rewards using our accumulative strategy, which are then used for GRPO optimization with progressive horizon extension.

Key Results

We validate EVOLVE-VLA on the challenging LIBERO benchmark, demonstrating substantial improvements across diverse settings:

+8.6% on long-horizon tasks (LIBERO-Long)
+22.0% in 1-shot learning (SFT with minimal data)
0% → 20.8% on cross-task transfer (tasks unseen during SFT)

Demo Videos

Below are example rollouts demonstrating EVOLVE-VLA's capabilities after test-time training. Notice the error recovery and adaptive behaviors that emerge through autonomous learning.

Error Recovery: Pick and place alphabet soup in basket. The policy demonstrates the ability to recover from unsuccessful grasp attempts through repeated trials, a behavior learned autonomously during test-time training.
Adapting to State Changes: Turn on stove and place moka pot. Shows the model's ability to handle unexpected object state changes and adjust its manipulation strategy accordingly.
Novel Manipulation Strategies: Alternative grasping approach. The model develops novel motion patterns not present in demonstrations, such as grasping the cup body directly instead of the handle, demonstrating creative problem-solving learned through autonomous exploration.

More demo videos and failure-case analyses are available in the paper and supplementary materials.

Citation

If you find our work useful, please cite:

@article{bai2025evolve,
  title={EVOLVE-VLA: Test-Time Training from Environment Feedback for Vision-Language-Action Models},
  author={Bai, Zechen and Gao, Chen and Shou, Mike Zheng},
  journal={arXiv preprint arXiv:2512.14666},
  year={2025}
}