Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance
Jesse Zhang, Jiahui Zhang, Karl Pertsch, Ziyi Liu, Xiang Ren, Minsuk Chang, Shao-Hua Sun, Joseph J. Lim
CoRL 2023 (Oral)
[OpenReview]
We propose BOSS, an approach that automatically learns to solve new long-horizon, complex,
and meaningful tasks by growing a learned skill library with minimal supervision.
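To make the bootstrapping idea concrete, the loop can be pictured as: start from a base skill, ask an LLM to suggest a plausible follow-up, practice the chained behavior, and add successful chains back into the library. The sketch below only illustrates that loop under toy assumptions; every function name (propose_next_skill, practice_chain, bootstrap) is a hypothetical stand-in for the paper's LLM queries and RL training, not our actual code.

# Minimal sketch of an LLM-guided skill-bootstrapping loop.
import random

def propose_next_skill(library, chain):
    """Stand-in for the LLM that suggests a follow-up skill."""
    return random.choice(sorted(library))  # a real system would query an LLM

def practice_chain(chain):
    """Stand-in for RL fine-tuning on the chained task; returns success."""
    return random.random() > 0.5

def bootstrap(library, rounds=10, max_len=3):
    """Grow the skill library by chaining and practicing skills."""
    for _ in range(rounds):
        chain = [random.choice(sorted(library))]
        while len(chain) < max_len:
            chain.append(propose_next_skill(library, chain))
            if not practice_chain(chain):
                break  # discard chains the agent cannot yet execute
            library.add(" then ".join(chain))  # keep the composite skill
    return library

skills = bootstrap({"pick up the mug", "open the drawer", "place the mug"})
print(sorted(skills))

Over many rounds, the library accumulates progressively longer composite skills, which is the sense in which new long-horizon tasks are learned with minimal supervision.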
SPRINT: Scalable Semantic Policy Pretraining via Language Instruction Relabeling
Jesse Zhang, Karl Pertsch, Jiahui Zhang, Joseph J. Lim
In Submission
[arXiv] [Website] [Code]
We propose SPRINT, a scalable approach for pre-training robot policies with a rich repertoire of
skills while minimizing human annotation effort. Given a dataset of robot trajectories with an
initial set of task instructions for offline pre-training, SPRINT expands the pre-training task set
without additional human effort via language-model-based instruction relabeling and cross-trajectory skill chaining.
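The within-trajectory half of this idea can be sketched in a few lines: merge consecutive annotated sub-tasks and ask a language model to summarize the merged span into one higher-level instruction, yielding new pre-training tasks for free. The code below is only an illustration; summarize_with_lm and the toy data format are hypothetical stand-ins, not our implementation.

# Illustrative sketch of instruction relabeling over one trajectory.
import itertools

def summarize_with_lm(instructions):
    """Stand-in: a real system would prompt an LLM to compress the
    list into a single higher-level instruction."""
    return " and ".join(instructions)

def relabel(trajectory):
    """Yield new (segment, instruction) pre-training tasks by merging
    consecutive annotated sub-tasks within one trajectory."""
    segments, instructions = trajectory
    for i, j in itertools.combinations(range(len(segments) + 1), 2):
        if j - i > 1:  # only multi-step spans need a new label
            merged = sum(segments[i:j], [])
            yield merged, summarize_with_lm(instructions[i:j])

# Toy trajectory: lists of transitions with per-segment instructions.
traj = ([[("s0", "a0")], [("s1", "a1")], [("s2", "a2")]],
        ["open the microwave", "put the mug inside", "close the door"])
for steps, instr in relabel(traj):
    print(len(steps), "steps:", instr)

Cross-trajectory skill chaining extends the same recipe across trajectories: segments drawn from different trajectories are stitched together and the composite is labeled the same way.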
Cross Domain Imitation Learning Through MPC
Jiahui Zhang, Haonan Yu, Jesse Zhang, Joseph J. Lim, Wei Xu
We present a novel approach that leverages source trajectories and target state-action similarity to replicate, in the target domain, the task demonstrated in the source domain.
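One way to picture this is as trajectory tracking with model-predictive control: plan target-domain actions whose predicted states score highest under a cross-domain similarity to the source trajectory. The random-shooting sketch below is a toy illustration under assumed components; dynamics and similarity are hypothetical stand-ins for learned models, not the paper's formulation.

# Hedged sketch: MPC in the target domain tracking a source trajectory.
import numpy as np

def dynamics(state, action):
    """Toy target-domain dynamics model (assumed known or learned)."""
    return state + 0.1 * action

def similarity(target_state, source_state):
    """Toy stand-in for a learned cross-domain state similarity."""
    return -np.linalg.norm(target_state - source_state)

def mpc_step(state, source_ref, horizon=5, samples=64, rng=None):
    """Random-shooting MPC: return the first action of the sampled
    sequence whose rollout best matches the source trajectory."""
    rng = rng or np.random.default_rng(0)
    best_score, best_action = -np.inf, None
    for _ in range(samples):
        actions = rng.normal(size=(horizon, state.shape[0]))
        s, score = state, 0.0
        for t in range(min(horizon, len(source_ref))):
            s = dynamics(s, actions[t])
            score += similarity(s, source_ref[t])
        if score > best_score:
            best_score, best_action = score, actions[0]
    return best_action

state = np.zeros(2)
source_ref = [np.array([0.1 * t, 0.0]) for t in range(1, 6)]
print(mpc_step(state, source_ref))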
Awards
Principal's Scholarship, Beijing University of Technology. 2018
Outstanding Research Achievement Award, Beijing University of Technology, Fan Gongxiu Honors College. 2017