We devise ARMADA, a multi-robot deployment and adaptation system with human-in-the-loop shared control, featuring an autonomous online failure detection method named FLOAT.
Thanks to FLOAT, ARMADA supports parallel policy rollouts and requests human intervention only when necessary, significantly reducing reliance on human supervision. ARMADA thus acquires in-domain data efficiently, leading to more scalable deployment and faster adaptation to new scenarios.
FLOAT substantially improves detection accuracy across multiple real-world tasks compared to previous failure detection approaches. Moreover, over multiple rounds of policy rollout and post-training, ARMADA achieves a markedly larger increase in success rate and a greater reduction in reliance on human intervention than previous human-in-the-loop learning methods.
The FLOAT failure detector performs real-time optimal transport (OT) matching between the policy embeddings of the current rollout and those of all expert demonstrations, and takes the minimum OT cost as the FLOAT index. We calibrate the FLOAT threshold on all successful rollouts.
When the FLOAT index of a rollout trajectory exceeds this threshold, we flag the rollout as a failure and employ adaptive rewinding based on the OT computation, retracing to a timestep before the scene was disturbed.
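The FLOAT index and threshold calibration described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cosine cost matrix, the Sinkhorn solver, the regularization strength `reg`, and the safety `margin` are all our assumptions for the sketch.

```python
import numpy as np

def ot_cost(x, y, reg=0.1, n_iters=200):
    """Entropic-regularized OT (Sinkhorn) cost between embedding
    sequences x (T1, d) and y (T2, d) with uniform marginals.
    Cosine distance is an assumed choice of ground cost."""
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)
    yn = y / np.linalg.norm(y, axis=1, keepdims=True)
    C = 1.0 - xn @ yn.T                      # pairwise cosine distances
    K = np.exp(-C / reg)
    a = np.full(len(x), 1.0 / len(x))        # uniform marginal on rollout steps
    b = np.full(len(y), 1.0 / len(y))        # uniform marginal on demo steps
    u = np.ones_like(a)
    for _ in range(n_iters):                 # Sinkhorn iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]          # transport plan
    return float((P * C).sum()), P

def float_index(rollout_emb, demo_embs):
    """FLOAT index: minimum OT cost between the rollout and any demo."""
    return min(ot_cost(rollout_emb, d)[0] for d in demo_embs)

def calibrate_threshold(successful_rollouts, demo_embs, margin=1.1):
    """Threshold from successful rollouts; the multiplicative margin
    is a hypothetical calibration choice."""
    return margin * max(float_index(r, demo_embs) for r in successful_rollouts)
```

A rollout whose index exceeds the calibrated threshold would then be flagged as a failure.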
Our multi-robot system then allocates an idle human operator to the failed robot for intervention, forming an efficient deployment paradigm. All collected data are then used for post-training, facilitating scalable adaptation to deployment scenarios.
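The operator allocation above can be sketched as a simple pool: robots roll out autonomously, and a flagged failure is paired with the first idle operator or queued until one frees up. The class and method names here are hypothetical, not from the paper.

```python
from collections import deque

class OperatorPool:
    """Illustrative sketch of idle-operator allocation in a
    multi-robot, human-in-the-loop deployment."""
    def __init__(self, operators):
        self.idle = deque(operators)   # operators not currently intervening
        self.waiting = deque()         # failed robots awaiting an operator
        self.active = {}               # robot -> operator currently assigned

    def report_failure(self, robot):
        """Assign an idle operator to a failed robot, or queue the robot."""
        if self.idle:
            op = self.idle.popleft()
            self.active[robot] = op
            return op
        self.waiting.append(robot)
        return None

    def finish_intervention(self, robot):
        """Free the operator, handing them to the next waiting robot."""
        op = self.active.pop(robot)
        if self.waiting:
            self.active[self.waiting.popleft()] = op
        else:
            self.idle.append(op)
```

With more robots than operators, failures queue briefly instead of forcing one operator per robot, which is what reduces supervision cost.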
We design an adaptive rewinding mechanism that lets the robot retrace to a previous timestep while a human operator resets the scene to its earlier state, ensuring an intact and informative demonstration with human corrective behaviour.
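One way to realize OT-based rewinding is to inspect the per-timestep transported cost from the rollout-vs-demonstration OT match and rewind to the last rollout step still well matched to the demonstration. This is a sketch under our own assumptions: it takes a transport plan `P` and cost matrix `C` as given, and `step_thresh` is a hypothetical per-step calibration.

```python
import numpy as np

def rewind_timestep(P, C, step_thresh):
    """Pick a rewind point from an OT match between a rollout (T1 steps)
    and a demo (T2 steps). P is the (T1, T2) transport plan with uniform
    row marginals 1/T1; C is the (T1, T2) ground-cost matrix.
    Returns the last rollout timestep whose transported cost stays
    below step_thresh, i.e. before the scene was disturbed."""
    # Per-step cost, rescaled so each row's mass of 1/T1 becomes 1.
    per_step = (P * C).sum(axis=1) * P.shape[0]
    ok = np.where(per_step <= step_thresh)[0]
    return int(ok[-1]) if len(ok) else 0
```

Steps after the returned index are discarded, and the operator resets the scene to match that timestep before taking over.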
ARMADA exhibits stable progress in success rate, with a more than four-fold increase compared to previous human-in-the-loop learning approaches, thanks to our adaptive rewinding mechanism.
ARMADA achieves a greater than two-fold reduction in human intervention rate compared to Sirius, demonstrating its potential for scalable deployment and adaptation.
We deploy the pretrained Fold towel policy on Scenes A, B, and C (in-domain) for online data collection, and evaluate the post-trained policy on Scene D (out-of-distribution). The baseline method uses only Scene A for data collection and evaluates on Scene D.
ARMADA boosts adaptation to unseen scenarios through parallel policy deployment on multiple robots, compared to a traditional human-in-the-loop paradigm where one human operator attends to only one robot.
ARMADA scales up the collection of human intervention trajectories with more robots running in parallel, raising human operator utilization and yielding correction data more prolifically across diverse deployment scenarios, which helps the policy generalize to unseen settings.
@misc{yu2025armadaautonomousonlinefailure,
  title={ARMADA: Autonomous Online Failure Detection and Human Shared Control Empower Scalable Real-world Deployment and Adaptation},
  author={Wenye Yu and Jun Lv and Zixi Ying and Yang Jin and Chuan Wen and Cewu Lu},
  year={2025},
  eprint={2510.02298},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2510.02298},
}