A team of UCLA
researchers is exploring the nexus between motion
capture and realistic simulation in a project aiming to
develop practical CGI stunt people.
The researchers are
designing a modular standalone package that could
eventually be incorporated into commercial packages. In
keeping with the project’s goal of developing
intelligent, autonomous virtual actors, it combines the
physical responses of the virtual stuntman with the
control of motion capture data.
UCLA’s Ari Shapiro
said the team is currently working on plug-ins for the
major modelling packages, such as 3ds Max and Maya.
"'Complex Character
Animation' project is an extension of our virtual
stuntmen project where we are attempting to create
digital stunt people that can be placed in movies and
simulations under situations that would be too dangerous
for humans,” said Shapiro.
"Thus, we use a
physically simulated environment that allows us to throw
our characters off of buildings and impact them with
heavy objects. However, unlike simple 'rag doll'
environments where characters passively respond to
external forces, our characters have motor control and
an ability to react to their environments.”
For example,
characters that are falling to the ground will twist
their bodies and place their hands down in front of them
in order to absorb the shock of impact. These characters
are also given other basic motor skills, such as the
ability to stand up from a prone or supine position, and
they will respond autonomously to their environment with
whatever skills they are given.
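One way to picture these basic motor skills is as a small selection policy that maps the character’s current state to a skill. The sketch below is purely illustrative; the names (MotorSkillController, push_up_from_prone and so on) are invented for this example and are not taken from the UCLA system.

```python
# Illustrative sketch only: a tiny state machine that picks a protective or
# recovery skill from the character's current body state. Names are invented.

from enum import Enum, auto


class BodyState(Enum):
    UPRIGHT = auto()
    FALLING = auto()
    PRONE = auto()    # lying face down
    SUPINE = auto()   # lying face up


class MotorSkillController:
    """Chooses a basic motor skill from the character's current state."""

    def select_skill(self, state: BodyState, falling_forward: bool) -> str:
        if state is BodyState.FALLING:
            # Brace for impact: twist and put the hands out in the fall direction.
            return "brace_arms_forward" if falling_forward else "brace_arms_backward"
        if state is BodyState.PRONE:
            return "push_up_from_prone"
        if state is BodyState.SUPINE:
            return "roll_and_rise_from_supine"
        return "balance_in_place"


if __name__ == "__main__":
    controller = MotorSkillController()
    print(controller.select_skill(BodyState.FALLING, falling_forward=True))
    print(controller.select_skill(BodyState.SUPINE, falling_forward=False))
```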
“These motor skills
allow a richer interaction between the character and the
environment,” said Shapiro. “The idea is that motion
capture gives you fine-grained control over the motion
of an animated character, but doesn't allow the
character to interact with its environment, since the
motions are completely scripted and don't include a
method of modifying the motion according to the objects
and other characters that exist alongside the character.”
“By providing a
mechanism that (1) perturbs the original motion capture
data and (2) determines if the characters should
continue the original motion or switch into physical
simulation and thus be controlled dynamically (like our
virtual stuntman), we now have characters that are
‘truly’ interactive.
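In outline, that mechanism can be thought of as two small steps: blend the external disturbance into the captured pose, then measure how far the pose has drifted from the clip and compare it with a threshold to decide whether to keep playing the motion capture or hand the character over to the physics-based controller. The following is a loose sketch under assumed data structures; names such as perturbation_energy and the 0.5 threshold are invented for illustration and are not drawn from the UCLA system.

```python
# Illustrative sketch of the two-part mechanism: (1) perturb the captured pose,
# (2) decide between continued mocap playback and full physical simulation.
# Data structures, names and thresholds are assumptions, not the UCLA code.

from dataclasses import dataclass
from typing import List


@dataclass
class Pose:
    joint_angles: List[float]


def perturb(pose: Pose, disturbance: List[float], gain: float = 0.1) -> Pose:
    """Blend a small, externally caused offset into the captured pose."""
    return Pose([a + gain * d for a, d in zip(pose.joint_angles, disturbance)])


def perturbation_energy(original: Pose, perturbed: Pose) -> float:
    """A simple measure of how far the pose has drifted from the clip."""
    return sum((a - b) ** 2 for a, b in zip(original.joint_angles, perturbed.joint_angles))


def choose_controller(original: Pose, perturbed: Pose, threshold: float = 0.5) -> str:
    """Continue the mocap clip for small disturbances; otherwise switch to simulation."""
    if perturbation_energy(original, perturbed) < threshold:
        return "motion_capture_playback"
    return "physical_simulation"


if __name__ == "__main__":
    clip_pose = Pose([0.0, 0.2, -0.1])
    nudged = perturb(clip_pose, disturbance=[0.5, 0.0, 0.0])    # a light push
    shoved = perturb(clip_pose, disturbance=[8.0, 5.0, -6.0])   # a heavy blow
    print(choose_controller(clip_pose, nudged))   # motion_capture_playback
    print(choose_controller(clip_pose, shoved))   # physical_simulation
```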
“They can now react
intelligently to the things in their environment. They
can be pushed, poked, hit, placed in a crowd, or fight
with each other, and will act and react accordingly
without the need to script their movements. Thus, unlike
a video game, where characters have pre-scripted motions
and can only interact with the environment in ways that
the author originally intended, our characters can behave
in previously unseen ways with high-quality motion.”
Like the Massive tool
utilised in the production of the Lord of the Rings
trilogy, the UCLA system will be able to create
large-scale battle simulations, in which characters are
aware of the allies and enemies on the battlefield and
able to react realistically to blows that strike various
parts of their bodies. However, it is designed to
concentrate to a greater extent on the subtleties of
motion and interaction.
“My understanding is
that 'Massive' utilises a database of motion capture to
handle the movements and reactions of each character,”
said Shapiro.
“Since we use dynamic control, our
characters can respond to unique environments and react
in novel, unscripted ways. While 'Massive' relies on the
AI to develop emergent behaviour among characters, we
are using our AI to develop fine-grained movements that
accurately control the motion among interacting
characters.”
The difference is
that every character hit in the head with a weapon will
react differently according to the angle of attack, the
mass properties of the weapon, the relative velocities of
the interacting characters and so forth.
So glancing
blows will only briefly perturb the motion of digital
warriors, while direct hits might knock them down
completely.
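A simple way to illustrate that idea is to estimate the impulse a blow transfers, scaled by the weapon’s mass, the relative speed and the angle of attack, and map it to a coarse reaction. The numbers and thresholds below are invented for illustration and do not come from the UCLA project.

```python
# Illustrative sketch: the impulse transferred along the body's surface normal
# decides whether a hit is a brief perturbation or a knock-down. All values
# and thresholds here are assumptions, not the UCLA system's.

import math


def impact_impulse(weapon_mass: float, relative_speed: float, attack_angle_deg: float) -> float:
    """Approximate impulse along the surface normal.

    A direct hit (0 degrees) transfers the full momentum; a glancing blow
    (near 90 degrees) transfers almost none.
    """
    return weapon_mass * relative_speed * math.cos(math.radians(attack_angle_deg))


def reaction(impulse: float) -> str:
    if impulse < 5.0:
        return "brief stumble, resume scripted motion"
    if impulse < 20.0:
        return "staggered, arms brace, recover balance"
    return "knocked down, switch to simulated fall"


if __name__ == "__main__":
    glancing = impact_impulse(weapon_mass=2.0, relative_speed=6.0, attack_angle_deg=80.0)
    direct = impact_impulse(weapon_mass=2.0, relative_speed=12.0, attack_angle_deg=5.0)
    print(reaction(glancing))  # brief stumble, resume scripted motion
    print(reaction(direct))    # knocked down, switch to simulated fall
```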