HandAvatar: Embodying Non-Humanoid Virtual Avatars through Hands
Yu Jiang*, Zhipeng Li*, Mufei He, David Lindlbauer, Yukang Yan
Accepted at CHI '23
HandAvatar leverages the high dexterity and coordination of users' hands to control virtual avatars, enabled through our novel approach for automatically generated joint-to-joint mappings. We contribute an observation study to understand users' preferences for hand-to-avatar mappings across eight avatars. Leveraging insights from the study, we present an automated approach that generates mappings between users' hands and arbitrary virtual avatars by jointly optimizing control precision, structural similarity, and comfort. We evaluated HandAvatar on static posing, dynamic animation, and creative exploration tasks. Results indicate that HandAvatar enables more accurate and flexible control and requires less physical effort, while providing a level of embodiment equal to a state-of-the-art body-to-avatar control method. We believe that HandAvatar unlocks new interaction opportunities, especially in Virtual Reality, by letting users become the avatar in applications including virtual social interaction, animation, gaming, and education.
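The joint optimization of control precision, structural similarity, and comfort can be read as a one-to-one assignment problem between hand joints and avatar joints. The following is a minimal illustrative sketch of that idea, not the paper's actual method: all cost values and weights are made-up assumptions, and a real system would use many more joints and a proper solver (e.g. the Hungarian algorithm) instead of brute force.

```python
from itertools import permutations

# Hypothetical per-pair costs (lower is better); rows = hand joints,
# cols = avatar joints. These numbers are purely illustrative.
precision = [[0.2, 0.8, 0.5],
             [0.7, 0.1, 0.6],
             [0.4, 0.9, 0.2]]
similarity = [[0.3, 0.6, 0.7],
              [0.5, 0.2, 0.8],
              [0.6, 0.7, 0.1]]
comfort = [[0.1, 0.5, 0.4],
           [0.6, 0.2, 0.3],
           [0.5, 0.4, 0.2]]

# Assumed weights for the joint objective (not values from the paper).
W_P, W_S, W_C = 0.5, 0.3, 0.2

def mapping_cost(perm):
    """Total weighted cost of mapping hand joint i to avatar joint perm[i]."""
    return sum(W_P * precision[i][j] + W_S * similarity[i][j] + W_C * comfort[i][j]
               for i, j in enumerate(perm))

# Exhaustive search over all one-to-one mappings (fine for a toy example).
best = min(permutations(range(3)), key=mapping_cost)
print(best)  # the lowest-cost hand-to-avatar joint mapping
```

With these toy costs the identity mapping wins, but the point is only the structure: each candidate mapping is scored by a weighted sum of the three criteria, and the lowest-cost assignment is selected.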