In PACMCGIT (Proceedings of the ACM on Computer Graphics and Interactive Techniques), presented at the 20th Annual Symposium on Computer Animation
Abstract
We present a simple and intuitive approach for interactive control of physically simulated characters.
Our work builds upon generative adversarial networks (GANs) and reinforcement learning, and introduces an imitation learning framework in which an ensemble of classifiers and an imitation policy are trained in tandem given pre-processed reference clips.
The classifiers are trained to discriminate the reference motion from the motion generated by the imitation policy, while the policy is rewarded for fooling the discriminators.
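As a minimal sketch of this adversarial training scheme, the PyTorch code below shows one way the classifier ensemble and the imitation reward could be wired up; the network architecture, the motion-window features, and the exact reward form (the mean classifier score across the ensemble) are illustrative assumptions, not our precise formulation.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """One classifier in the ensemble; architecture is illustrative."""
    def __init__(self, feature_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, motion_window: torch.Tensor) -> torch.Tensor:
        return self.net(motion_window)  # raw logit: >0 leans "reference"

def discriminator_loss(ensemble, ref_batch, policy_batch):
    """Binary cross-entropy: reference clips -> 1, policy rollouts -> 0."""
    bce = nn.BCEWithLogitsLoss()
    loss = 0.0
    for d in ensemble:
        ones = torch.ones(ref_batch.size(0), 1, device=ref_batch.device)
        zeros = torch.zeros(policy_batch.size(0), 1, device=policy_batch.device)
        loss += bce(d(ref_batch), ones) + bce(d(policy_batch), zeros)
    return loss / len(ensemble)

def imitation_reward(ensemble, policy_batch):
    """Reward the policy for fooling the ensemble: average the
    probability each classifier assigns to 'reference'."""
    with torch.no_grad():
        scores = torch.stack(
            [torch.sigmoid(d(policy_batch)) for d in ensemble])
    return scores.mean(dim=0).squeeze(-1)  # one reward per sample
```

In this sketch, updates to the classifiers and reinforcement learning updates to the policy alternate, so the two improve in tandem.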
Using our GAN-like approach, we can separately train multiple motor control policies to imitate different behaviors.
At runtime, our system can respond to external control signals provided by the user and interactively switch between different policies.
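A minimal sketch of this runtime loop is shown below; the Gym-style environment interface, the policy `act` method, and the `get_user_command` helper are hypothetical placeholders, not our system's actual API.

```python
def run_episode(env, policies, get_user_command, max_steps=10_000):
    """Drive the character by switching between independently trained
    policies based on a user-provided control signal."""
    obs = env.reset()
    for _ in range(max_steps):
        # The user's control signal selects which trained policy is
        # active; no target reference pose or phase state is tracked.
        active = policies[get_user_command()]
        action = active.act(obs)
        obs, _, done, _ = env.step(action)
        if done:  # e.g. the character lost balance and the episode ended
            obs = env.reset()
```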
Compared to existing methods, our proposed approach has the following attractive properties: 1) it achieves state-of-the-art imitation performance without manually designing and fine-tuning a reward function;
2) directly controls the character without having to track any target reference pose explicitly or implicitly through a phase state; and
3) supports interactive policy switching without requiring any motion generation or motion matching mechanism.
We highlight the applicability of our approach in a range of imitation and interactive control tasks, while also demonstrating its ability to withstand external perturbations as well as to recover balance.
Overall, our approach has low runtime cost and can be easily integrated into interactive applications and games.