ChatVLA: Unified Multimodal Understanding and Robot Control with Vision-Language-Action Model

Zhongyi Zhou*,1,2 Yichen Zhu*,†,1 Minjie Zhu2 Junjie Wen2 Ning Liu4 Zhiyuan Xu4
Weibin Meng5 Ran Cheng1 Yaxin Peng3 Chaomin Shen2 Feifei Feng1
1. Midea Group 2. East China Normal University 3. Shanghai University
4. Beijing Innovation Center of Humanoid Robotics 5. Tsinghua University
*Equal Contribution.

†Corresponding author.

Abstract

In this work, we propose ChatVLA, the first framework to unify multimodal understanding and embodied control.

Humans possess a unified cognitive ability to perceive, comprehend, and interact with the physical world. Why can't large language models replicate this holistic understanding? Through a systematic analysis of existing training paradigms in vision-language-action models (VLA), we identify two key challenges: spurious forgetting, where robot training overwrites crucial visual-text alignments, and task interference, where competing control and understanding tasks degrade performance when trained jointly.

To overcome these limitations, we propose ChatVLA, a novel framework featuring Phased Alignment Training, which incrementally integrates multimodal data after initial control mastery, and a Mixture-of-Experts architecture to minimize task interference. ChatVLA demonstrates competitive performance on visual question-answering datasets and significantly surpasses state-of-the-art VLA methods on multimodal understanding benchmarks. Notably, it achieves six times higher performance on MMMU and scores 47.2 on MMStar with a more parameter-efficient design than ECoT. Furthermore, ChatVLA demonstrates superior performance on 25 real-world robot manipulation tasks compared to existing VLA methods such as OpenVLA. Our findings highlight the potential of our unified framework for achieving both robust multimodal understanding and effective robot control.
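
To make the architectural idea concrete, here is a minimal, hypothetical PyTorch sketch of a transformer block with shared self-attention and two task-routed feed-forward experts (one for understanding, one for control), which is one way to realize the Mixture-of-Experts separation described above. All class and argument names are illustrative assumptions, not the released ChatVLA code.

    import torch
    import torch.nn as nn


    class DualExpertBlock(nn.Module):
        """Transformer block with shared self-attention and task-routed FFN experts.

        Hypothetical sketch of the mixture-of-experts idea described above,
        not the released ChatVLA implementation.
        """

        def __init__(self, dim: int = 1024, n_heads: int = 16, ffn_mult: int = 4):
            super().__init__()
            self.norm1 = nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
            self.norm2 = nn.LayerNorm(dim)
            # One feed-forward expert per task type; the attention parameters stay
            # shared, so visual-text alignment learned for understanding is reused
            # for control.
            self.experts = nn.ModuleDict({
                "understanding": self._ffn(dim, ffn_mult),
                "control": self._ffn(dim, ffn_mult),
            })

        @staticmethod
        def _ffn(dim: int, mult: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Linear(dim, mult * dim), nn.GELU(), nn.Linear(mult * dim, dim)
            )

        def forward(self, x: torch.Tensor, task: str) -> torch.Tensor:
            h = self.norm1(x)
            attn_out, _ = self.attn(h, h, h, need_weights=False)
            x = x + attn_out
            # Static routing by task flag keeps control and understanding gradients
            # in separate FFN parameters, limiting task interference.
            return x + self.experts[task](self.norm2(x))


    if __name__ == "__main__":
        block = DualExpertBlock()
        tokens = torch.randn(2, 8, 1024)
        print(block(tokens, task="control").shape)        # robot-control batch
        print(block(tokens, task="understanding").shape)  # VQA / dialogue batch

Under this reading, Phased Alignment Training would amount to a two-stage schedule: train first on robot data alone (task="control"), then interleave multimodal understanding data (task="understanding") so that visual-text alignment is restored after initial control mastery.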

Examples for Understanding Task


ChatVLA can handle different types of multimodal understanding tasks easily.

Demos on Real-World Tasks

1. Bathroom Tasks

2. Kitchen Tasks

3. Tabletop Tasks

Demos on Long-Horizon Tasks with Direct Prompting

Below are three examples of long-horizon tasks with direct prompting; a minimal control-loop sketch follows the examples.

User: "Put the spider-man into the drawer."

Subtasks:
  1. "Open the drawer."
  2. "Put the spider-man into it."
  3. "Close the drawer."
User: "Stack the building-blocks."

Subtasks:
  1. "Pick up the orange building-block and stack it onto the yellow one."
  2. "Pick up the pink building-block and stack it onto the orange one."
User: "Clean building blocks to the box. "

Subtasks:
  1. "Put the yellow one to the box."
  2. "Put the pink one to the box."

Demos on Long-Horizon Tasks with a High-Level Policy Model

Below are two examples of long-horizon tasks with a high-level policy model; a sketch of the planner-plus-VLA pipeline follows the examples.

User: "Move the block to the basket then put the toy into the drawer."

Plans:
  1. "Move the orange block to the basket."
  2. "Open the drawer."
  3. "Put the toy into it. "
  4. "Close the drawer."
User: "Prepare the breakfast for me."

Plans:
  1. "Get the plate and place it on the tablecloth. "
  2. "Flip the cup and place it on the tablecloth. "
  3. "Move the bread to the plate. "

Experiments Results for Understanding Task

Evaluation of MLLMs and VLAs on 6 Multimodal Understanding benchmarks and 7 VQA benchmarks.


ChatVLA demonstrates competitive performance relative to existing VLMs across multiple benchmarks. Specifically, it outperforms ECoT and DiVLA with relative improvements of 9.2x and 9.5x, respectively.
On the MMStar benchmark, ChatVLA attains a score of 37.4, a 2.2x and 6.9x improvement over DiVLA and ECoT, respectively.

Experiments Results for Real Robot Tasks

Evaluation of VLAs on 25 real-world robot control tasks across diverse scenes.


We conducted 528 trials on a real robot to evaluate ChatVLA's capabilities. With 3.5x fewer parameters, ChatVLA outperforms OpenVLA and ECoT on 25 real-world robot control tasks across diverse scenes.

BibTeX

    @misc{zhou2025chatvlaunifiedmultimodalunderstanding,
      title={ChatVLA: Unified Multimodal Understanding and Robot Control with Vision-Language-Action Model},
      author={Zhongyi Zhou and Yichen Zhu and Minjie Zhu and Junjie Wen and Ning Liu and Zhiyuan Xu and Weibin Meng and Ran Cheng and Yaxin Peng and Chaomin Shen and Feifei Feng},
      year={2025},
      eprint={2502.14420},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2502.14420},
    }