CMU-CS-09-148
Computer Science Department
School of Computer Science, Carnegie Mellon University




Modeling Behavior and Variation
for Crowd Animation

Manfred Chung Man Lau

August 2009

Ph.D. Thesis

CMU-CS-09-148.pdf


Keywords:

The simulation of crowds of virtual characters is needed for applications such as films, games, and virtual reality environments. These simulations are difficult because of the large number of characters that must be animated and the need to synthesize realistic, human-like motion efficiently. This thesis focuses on two problems: how to search through and select motion clips of behaviors so that human-like motion can be generated interactively for multiple characters, and how to model and synthesize variation in motion data.

Given a collection of blendable, segmented motion clips derived from motion capture or keyframed animation, this thesis explores novel ways of applying heuristic search algorithms to generate goal-driven navigation motion for virtual human-like characters. The motion clips are organized and interconnected in a behavior graph that encodes the possible actions of a character. A planning approach searches over these possible actions to generate motion efficiently. This technique works well for synthesizing animations of multiple characters navigating autonomously in large, dynamic environments.
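The clip-level planning described above can be pictured as a best-first search over a behavior graph. The following is a minimal sketch of that idea in Python, using A* over a hypothetical BEHAVIOR_GRAPH whose clip displacements and costs are made up for illustration; it shows the structure of the search, not the thesis's implementation.

    import heapq
    import math

    # Hypothetical behavior graph: each node is a motion clip, "next" lists the
    # clips that may follow it (i.e., blendable transitions). Displacements and
    # costs are illustrative values, not data from the thesis.
    BEHAVIOR_GRAPH = {
        "walk_fwd":   {"next": ["walk_fwd", "turn_left", "turn_right"], "dx": 1.0, "dy": 0.0,  "cost": 1.0},
        "turn_left":  {"next": ["walk_fwd"],                            "dx": 0.5, "dy": 0.5,  "cost": 1.2},
        "turn_right": {"next": ["walk_fwd"],                            "dx": 0.5, "dy": -0.5, "cost": 1.2},
    }

    def heuristic(pos, goal):
        # Admissible straight-line distance to the goal.
        return math.hypot(goal[0] - pos[0], goal[1] - pos[1])

    def plan_clips(start, goal, start_clip="walk_fwd", tol=0.75):
        """A*-style search over (position, clip) states; returns a clip sequence."""
        frontier = [(heuristic(start, goal), 0.0, start, start_clip, [])]
        visited = set()
        while frontier:
            _, g, pos, clip, path = heapq.heappop(frontier)
            if heuristic(pos, goal) <= tol:
                return path
            key = (round(pos[0], 1), round(pos[1], 1), clip)
            if key in visited:
                continue
            visited.add(key)
            for nxt in BEHAVIOR_GRAPH[clip]["next"]:
                info = BEHAVIOR_GRAPH[nxt]
                npos = (pos[0] + info["dx"], pos[1] + info["dy"])
                ng = g + info["cost"]
                heapq.heappush(frontier, (ng + heuristic(npos, goal), ng, npos, nxt, path + [nxt]))
        return None  # no clip sequence reaches the goal within this graph

    if __name__ == "__main__":
        print(plan_clips(start=(0.0, 0.0), goal=(5.0, 2.0)))

In the real system the state would include orientation and timing and the search would respect dynamic obstacles; the sketch keeps only the graph-plus-heuristic-search skeleton.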

In addition, this thesis introduces a novel planning approach based on precomputation that is more efficient than traditional forward search methods. We present a technique for precomputing large and diverse search trees, and describe a backward search method used at runtime to solve planning queries. This new approach allows us to build an interactive animation system that supports a large number of characters simultaneously.
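A rough illustration of the precompute-then-search-backward idea, again in Python: offline, a tree of reachable states is grown from a canonical start by expanding a behavior graph; at runtime, a query is answered by finding a precomputed node near the goal and walking parent links back to the root. The uniform random expansion used here is a placeholder, not the tree-construction strategy of the thesis.

    import math
    import random

    class Node:
        def __init__(self, pos, clip, parent):
            self.pos, self.clip, self.parent = pos, clip, parent

    def precompute_tree(graph, root_clip, max_nodes=5000, seed=0):
        """Offline step: grow a tree of states reachable from a canonical start."""
        rng = random.Random(seed)
        root = Node((0.0, 0.0), root_clip, None)
        nodes = [root]
        while len(nodes) < max_nodes:
            parent = rng.choice(nodes)                      # naive expansion; the thesis
            clip = rng.choice(graph[parent.clip]["next"])   # grows the tree more carefully
            info = graph[clip]
            child = Node((parent.pos[0] + info["dx"], parent.pos[1] + info["dy"]), clip, parent)
            nodes.append(child)
        return nodes

    def answer_query(nodes, goal, tol=0.75):
        """Runtime step: pick the closest precomputed node and unwind to the root."""
        best = min(nodes, key=lambda n: math.hypot(n.pos[0] - goal[0], n.pos[1] - goal[1]))
        if math.hypot(best.pos[0] - goal[0], best.pos[1] - goal[1]) > tol:
            return None
        path = []
        while best.parent is not None:
            path.append(best.clip)
            best = best.parent
        return list(reversed(path))

    if __name__ == "__main__":
        # Tiny illustrative graph in the same format as the previous sketch.
        graph = {
            "walk": {"next": ["walk", "veer"], "dx": 1.0, "dy": 0.0},
            "veer": {"next": ["walk"],         "dx": 0.7, "dy": 0.7},
        }
        tree = precompute_tree(graph, root_clip="walk")
        print(answer_query(tree, goal=(3.0, 1.4)))

The point of the split is that the expensive tree construction happens once, while each runtime query reduces to a nearest-node lookup and a walk up the parent links.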

Finally, this thesis addresses the issue of motion variation. Current state-of-the-art crowd simulations often animate many characters continuously with a few specific motion clips or repeated cycles of a particular motion. Synthesizing the subtle variations present in motion data has been largely unexplored, since previous work treats variation as an additive noise component. This thesis instead takes a data-driven approach and applies learning techniques to the problem. Given a small number of input motions, we model the data with a Dynamic Bayesian Network and synthesize new spatial and temporal variants that are statistically similar to the inputs.
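As a sketch of the data-driven variation idea, the snippet below fits a first-order linear-Gaussian model (one of the simplest instances of a Dynamic Bayesian Network) to a set of input motion arrays and samples new trajectories from it. The model structure, the synthetic input clips, and the function names are illustrative assumptions; the thesis's network and its treatment of temporal variation are richer than this.

    import numpy as np

    def fit_linear_gaussian(clips):
        """Fit x_{t+1} ~ x_t @ A + noise from a list of (frames x dof) motion arrays.
        This linear-Gaussian dynamics model is a stand-in for the thesis's DBN."""
        X = np.vstack([c[:-1] for c in clips])   # states at time t
        Y = np.vstack([c[1:] for c in clips])    # states at time t+1
        A, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ A
        cov = np.cov(resid.T) + 1e-6 * np.eye(resid.shape[1])
        return A, cov

    def sample_variant(A, cov, x0, n_frames, seed=None):
        """Roll the learned dynamics forward, injecting the learned noise so each
        sample is a variant that stays statistically close to the inputs."""
        rng = np.random.default_rng(seed)
        frames = [np.asarray(x0, dtype=float)]
        for _ in range(n_frames - 1):
            mean = frames[-1] @ A
            frames.append(rng.multivariate_normal(mean, cov))
        return np.stack(frames)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Hypothetical input: three short clips of a 4-DOF pose trajectory.
        clips = [np.cumsum(rng.normal(size=(60, 4)), axis=0) * 0.05 for _ in range(3)]
        A, cov = fit_linear_gaussian(clips)
        variant = sample_variant(A, cov, clips[0][0], n_frames=60, seed=1)
        print(variant.shape)  # (60, 4)

Temporal variation (changes in timing rather than pose) would additionally require resampling or time-warping the clips, which this simplified sketch does not attempt.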

121 pages

