PFPN: Continuous Control of Physically Simulated Characters
using Particle Filtering Policy Network
In 2021 ACM SIGGRAPH Conference on Motion, Interaction and Games
Also presented at the NeurIPS 2021 Deep Reinforcement Learning Workshop
Data-driven methods for physics-based character control using reinforcement learning have been successfully applied to generate high-quality motions. However, existing approaches typically rely on Gaussian distributions to represent the action policy, which can prematurely commit to suboptimal actions when solving high-dimensional continuous control problems for highly articulated characters. In this paper, to improve the learning performance of physics-based character controllers, we propose a framework that uses a particle-based action policy as a substitute for Gaussian policies. We exploit particle filtering to dynamically explore and discretize the action space, and to track the posterior policy represented as a mixture distribution. The resulting policy can replace the unimodal Gaussian policy, which has been the staple for character control problems, without changing the underlying model architecture of the reinforcement learning algorithm used to perform policy optimization. We demonstrate the applicability of our approach on various motion capture imitation tasks. Baselines using our particle-based policies achieve better imitation performance and faster convergence than corresponding Gaussian implementations, and are more robust to external perturbations during character control.
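To make the core idea concrete, the sketch below illustrates a particle-based policy for a single action dimension: the action distribution is a mixture of small Gaussians centred on a set of particles, with mixture weights that would normally come from the policy network, plus a particle-filtering-style resampling step that concentrates particles in high-weight regions of the action space. This is a minimal illustration under assumed names and parameters (`ParticleMixturePolicy`, `bandwidth`, the uniform weight reset), not the paper's actual implementation.

```python
import numpy as np

class ParticleMixturePolicy:
    """Minimal sketch of a particle-based action policy (illustrative only).

    The policy over one action dimension is a mixture of Gaussians centred
    on a set of particles. In the framework described above, a policy
    network would output the mixture logits per state; here the logits are
    a plain parameter vector to keep the example self-contained.
    """

    def __init__(self, num_particles=8, action_low=-1.0, action_high=1.0,
                 bandwidth=0.05, seed=0):
        self.rng = np.random.default_rng(seed)
        # Particles start uniformly spread over the action range,
        # giving an initial discretization of the action space.
        self.particles = np.linspace(action_low, action_high, num_particles)
        self.logits = np.zeros(num_particles)   # stand-in for network output
        self.bandwidth = bandwidth              # std of each mixture component

    def probs(self):
        # Softmax over particle logits -> mixture weights.
        z = np.exp(self.logits - self.logits.max())
        return z / z.sum()

    def sample(self):
        """Draw an action: pick a particle, then add component noise."""
        idx = self.rng.choice(len(self.particles), p=self.probs())
        return self.particles[idx] + self.bandwidth * self.rng.standard_normal()

    def resample(self):
        """Particle-filtering-style resampling: duplicate high-weight
        particles, then jitter them so duplicates can diverge again,
        dynamically refining the discretization of the action space."""
        p = self.probs()
        idx = self.rng.choice(len(self.particles),
                              size=len(self.particles), p=p)
        self.particles = (self.particles[idx]
                          + self.bandwidth
                          * self.rng.standard_normal(len(self.particles)))
        # Reset weights to uniform after resampling.
        self.logits = np.zeros(len(self.particles))
```

Because sampling reduces to a categorical choice over particles followed by component noise, such a policy can slot into a standard policy-gradient pipeline in place of a Gaussian head without changing the rest of the model.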