Jiahui Zhang

I'm a visiting student at the University of Southern California (USC), advised by Prof. Erdem Biyik. I received my master's degree in Electronic Engineering from USC. Before that, I completed my undergraduate studies in the Fan Gongxiu Honors College at Beijing University of Technology, majoring in Electronic Information Engineering.

I spent two years as a research student at the Cognitive Learning for Vision and Robotics Lab (CLVR), advised by Prof. Joseph J. Lim. I was also a research intern at Horizon Robotics, working with Haonan Yu and Wei Xu.

My research interests lie in robot learning and reinforcement learning, with a particular focus on developing general-purpose robots that can perform diverse tasks in daily human life. I am also interested in training robots to acquire new skills using pre-trained foundation models and large datasets.

Google Scholar  /  Twitter  /  LinkedIn  /  CV


I am actively seeking a Ph.D. position for Fall 2025.

profile photo
Research

RoboCLIPv2: Learning Robot Policies with a Single Language Instruction
(Ongoing Project)
[Website]

We introduce RoboCLIPv2, an approach for learning reward functions for unseen tasks. RoboCLIPv2 learns a reward model from the outputs of pre-trained vision-language models, which then provides rewards for policy learning.



Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance
Jesse Zhang, Jiahui Zhang, Karl Pertsch, Ziyi Liu, Xiang Ren, Minsuk Chang, Shao-Hua Sun, Joseph J. Lim

Oral presentation (top 6.6%) @ CoRL 2023

Oral presentation @ SoCal Robotics 2023

Spotlight talk @ RSS 2023 Articulate Robots Workshop

[arXiv] [Website] [Code] [OpenReview]

Our approach BOSS (BOotStrapping your own Skills) learns to accomplish new tasks by performing "skill bootstrapping," where an agent with a set of primitive skills interacts with the environment to practice new skills without receiving reward feedback for tasks outside of the initial skill set. This bootstrapping phase is guided by LLMs that inform the agent of meaningful skills to chain together. Through this process, BOSS builds a wide range of complex and useful behaviors from a basic set of primitive skills.



SPRINT: Scalable Semantic Policy Pretraining via Language Instruction Relabeling
Jesse Zhang, Jiahui Zhang, Karl Pertsch, Joseph J. Lim

Poster @ ICRA 2024

Spotlight talk @ CoRL 2022 LangRob Workshop

Spotlight talk @ CoRL 2022 Pre-training Robot Learning Workshop

[arXiv] [Website] [Code]

We propose SPRINT, a scalable approach for pre-training robot policies with a rich repertoire of skills while minimizing human annotation effort. Given a dataset of robot trajectories with an initial set of task instructions for offline pre-training, SPRINT expands the pre-training task set without additional human effort via language-model-based instruction relabeling and cross-trajectory skill chaining.






Cross Domain Imitation Learning via MPC
(Internship Project)
Jiahui Zhang, Haonan Yu, Jesse Zhang, Karl Pertsch, Joseph J. Lim, Wei Xu
[Website]

We introduce CDMPC, an approach for learning new skill combinations from long-horizon skill trajectories. CDMPC chains short-horizon skills extracted from long-horizon demonstrations across diverse source domains and integrates them with a low-level policy in the target domain. The learned policy adapts to tasks from any source domain, enabling the agent to tackle new tasks that require novel skill combinations.

Service
    Reviewer: IROS 2024
Awards
    Presidential Scholarship, Beijing University of Technology, 2018
    Outstanding Research Achievement Award, Fan Gongxiu Honors College, Beijing University of Technology, 2017

Template borrowed from Jon Barron's website