We introduce HOIGPT, a token-based generative method that unifies 3D hand-object interaction (HOI) perception and generation, offering the first comprehensive solution for captioning and generating high-quality 3D HOI sequences from a diverse range of conditional signals (e.g. text, objects, partial sequences). At its core, HOIGPT utilizes a large language model to predict the bidirectional transformation between HOI sequences and natural language descriptions. Given text inputs, HOIGPT generates a sequence of hand and object meshes; given (partial) HOI sequences, HOIGPT generates text descriptions and completes the sequences. To facilitate HOI understanding with a large language model, this paper introduces two key innovations: (1) a novel physically grounded HOI tokenizer, the hand-object decomposed VQ-VAE, for discretizing HOI sequences, and (2) a motion-aware language model trained to process and generate both text and HOI tokens. Extensive experiments demonstrate that HOIGPT sets a new state of the art on both text generation (+2.01% R Precision) and HOI generation (−2.56 FID) across multiple tasks and benchmarks.
HOIGPT frames hand-object interaction modeling as a unified token-based generative task: HOI sequences are discretized by a hand-object decomposed VQ-VAE, and a motion-aware language model consumes and produces both text tokens and HOI tokens.
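To make the token-based framing concrete, here is a minimal sketch of how a hand-object decomposed tokenizer's quantization step could interleave hand and object tokens into one sequence for a language model. This is illustrative only: the codebook sizes, feature dimensions, and helper names (`nearest_code`, `tokenize_hoi`) are assumptions, not the paper's implementation.

```python
import numpy as np

def nearest_code(features, codebook):
    """Map each feature vector to the index of its nearest codebook entry."""
    # (T, D) features vs (K, D) codebook -> (T, K) squared distances
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def tokenize_hoi(hand_feats, obj_feats, hand_codebook, obj_codebook):
    """Discretize hand and object feature streams with separate codebooks,
    then interleave them into one token sequence. Object token ids are
    offset so the two vocabularies do not collide."""
    hand_ids = nearest_code(hand_feats, hand_codebook)
    obj_ids = nearest_code(obj_feats, obj_codebook) + len(hand_codebook)
    tokens = np.empty(2 * len(hand_ids), dtype=np.int64)
    tokens[0::2] = hand_ids  # even positions: hand tokens
    tokens[1::2] = obj_ids   # odd positions: object tokens
    return tokens

rng = np.random.default_rng(0)
hand_cb = rng.normal(size=(256, 32))  # assumed codebook size / feature dim
obj_cb = rng.normal(size=(256, 32))
hand_f = rng.normal(size=(8, 32))     # 8 frames of encoded hand motion
obj_f = rng.normal(size=(8, 32))
tokens = tokenize_hoi(hand_f, obj_f, hand_cb, obj_cb)
print(tokens.shape)  # one hand token and one object token per frame
```

Decomposing the codebooks this way keeps hand articulation and object motion in separate discrete vocabularies while still yielding a single token stream the language model can attend over jointly.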
@inproceedings{huang_etal_cvpr25,
author = {Mingzhen Huang and Fu-Jen Chu and Bugra Tekin and Kevin J Liang and Haoyu Ma and Weiyao Wang and Xingyu Chen and Pierre Gleize and Hongfei Xue and Siwei Lyu and Kris Kitani and Matt Feiszli and Hao Tang},
title = {HOIGPT: Learning Long Sequence Hand-Object Interaction with Language Models},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
address = {Nashville, USA},
year = {2025}
}