SPRINT: Scalable Semantic Policy Pretraining via Language Instruction Relabeling
Jesse Zhang, Karl Pertsch, Jiahui Zhang, Joseph J. Lim
ICRA 2024
[Arxiv]
[Website]
[Code]
We propose SPRINT, a scalable approach for pre-training robot policies with a rich repertoire of
skills while minimizing human annotation effort. Given an offline dataset of robot trajectories
annotated with an initial set of task instructions, SPRINT expands the pre-training task set
without additional human effort via language-model-based instruction relabeling and cross-trajectory skill chaining.
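To make the two pre-training ideas concrete, here is a minimal sketch (not the authors' implementation) of instruction relabeling and cross-trajectory chaining: consecutive annotated trajectory segments are merged, and their sub-task instructions are aggregated into a new higher-level instruction. The `summarize` function is a hypothetical stand-in for the language-model call, and `Segment` is an assumed simplified data structure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    states: List[int]      # placeholder for robot observations
    instruction: str       # natural-language task annotation

def summarize(instructions: List[str]) -> str:
    # Hypothetical stand-in for a language-model summary of sub-tasks.
    return " and then ".join(instructions)

def relabel(segments: List[Segment]) -> Segment:
    """Merge consecutive annotated segments into one segment whose
    instruction summarizes the constituent sub-task instructions."""
    states = [s for seg in segments for s in seg.states]
    return Segment(states, summarize([seg.instruction for seg in segments]))

def chain(a: Segment, b: Segment) -> Segment:
    """Cross-trajectory chaining: concatenate segments from two
    trajectories to form a new, longer pre-training task."""
    return relabel([a, b])

demo = [Segment([0, 1], "pick up the mug"),
        Segment([2, 3], "place it in the sink")]
print(relabel(demo).instruction)
# -> "pick up the mug and then place it in the sink"
```

In the full method, the summary would come from an LLM and the relabeled segments would feed an offline RL objective; this sketch only illustrates how new language-annotated tasks can be generated from existing annotations without extra human labeling.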