Behind the scenes at Weta Digital, where creativity meets cutting-edge science
James Cameron has spent more than a decade developing the technology used to create Avatar.
Back in 1996, James Cameron announced that he would be creating a film called Avatar, a science-fiction epic that would feature photo-realistic, computer-generated characters.
He had a treatment for the film, which already defined many things, including the Na’vi – a primitive alien race standing ten feet tall with shining blue skin, living in harmony with their jungle-covered planet Pandora.
Soon after, though, Avatar had to be shelved as the technology of the time could not satisfy the creative desires of the director.
Fast-forward to October 2009: Dan Lemmon, FX supervisor, and Andy Jones, animation director, at Weta Digital have about two weeks of visual effects production left on Avatar. The nearly 900-strong crew, spread across six locations, is working practically around the clock to achieve what was deemed impossible a decade earlier.
Weta Digital, the New Zealand studio responsible for the groundbreaking visual effects in The Lord of the Rings trilogy and King Kong, is taking VFX to a new level of creative and technological excellence.
For Avatar, the studio has created over 1,800 stereoscopic, photo-realistic visual effects shots, many featuring the Na’vi as ‘hero’ characters. Alongside the digital characters and environments are the machines, vehicles, equipment and everything else that helps blur the line between imagination and reality.
“We’re not just talking about the environment, but the creatures, the machines and the vehicles that people use to get around. The whole world is unique and because of the way James Cameron approaches things, everything seems functional and believable. Compared to other sci-fi fantasy genre films there’s a certain level of realism just in the design that makes it very believable,” says VFX supervisor Dan Lemmon.
Realising the Na’vi
Over a decade ago, Cameron had already figured out what he wanted the Na’vi to look like. “Back then, it was clear that they were going to be blue, tall, have tails and be somewhat feline-like,” says Lemmon.
“We set out to make the Na’vi as realistic as possible. To do that we needed key departments to be firing on all cylinders,” says animation director Andy Jones. “From facial and body rigging, motion capture, to animation, and shading and rendering – all these departments reached a synergy to bring these performances to the screen.”
Lemmon adds, “We used a lot of photographs and scans of the actors and tried to incorporate the details of the physical actors into the digital characters – for both the Na’vi and humans. There are some characters like Jake, who’s played by Sam Worthington, where there’s both an Avatar double and a digital double.
“There’s a lot of data that we captured through digital scanning and Lightstage capture. In addition, we did a lot of extra texture and shader work to make sure all that detail went into the final renders.”
A new muscle system
For animating the digital characters in Avatar, Weta Digital had to develop some key technologies to simulate its creatures as realistically as possible. Previously, Weta had used relatively simplified muscle-simulation systems that generalised how muscles deformed a character’s skin.
With Avatar, CG supervisor Simon Clutterbuck led the team to create a more accurate skeletal and muscle-simulation system. “It’s quite cool now. Muscles intercollide, preserve their volume and are anatomically correct,” says Lemmon.
“There are tissue layers, tendon sheets and all the critical parts of how a muscle system works. It gives a much more realistic starting point for creating believable creature deformations such as all the sliding under the skin and the dynamics of flesh as it moves.”
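The volume preservation Lemmon mentions can be caricatured in a few lines. This is a purely illustrative toy, not Weta's proprietary tissue solver: a muscle is idealised as a cylinder, and as animation shortens it, its radius grows so that volume stays constant, producing the bulge that slides under the skin.

```python
import math

def bulge_radius(rest_length, rest_radius, current_length):
    """Return the radius that preserves the muscle's rest volume.

    Idealises the muscle as a cylinder: volume = pi * r^2 * length,
    so shortening the muscle forces the cross-section to grow.
    """
    rest_volume = math.pi * rest_radius ** 2 * rest_length
    return math.sqrt(rest_volume / (math.pi * current_length))

# A muscle contracting to 80% of its rest length thickens by ~11.8%.
r = bulge_radius(rest_length=10.0, rest_radius=1.0, current_length=8.0)
```

A production system solves this as a constraint across thousands of tetrahedra, with intercollision between neighbouring muscles, but the governing idea is the same: flesh does not compress, it redistributes.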
For the Na’vi to be believable, realistic facial animation was crucial. The Na’vi experience a wide range of emotions, and the facial animation had to convey them convincingly or risk falling into the ‘Uncanny Valley.’
Weta used a variety of techniques to bring the facial animation to a realistic state. First was facial motion capture: using a head-mounted high-definition video camera trained on the actor’s face, together with facial markers, Weta’s in-house software was able to map out which muscles in the face were firing.
The underlying technology is based on Paul Ekman and Wallace Friesen’s Facial Action Coding System (FACS). By creating a map of muscle firings, Weta was able to retarget the motion data onto faces that don’t directly match the actor’s – in this instance, the Na’vi.
“We started doing this when we were working on King Kong,” says Lemmon. “Andy Serkis was playing Kong and his facial anatomy is fairly different from a gorilla’s. By capturing the muscle firings, we were able to retarget the motions back onto an animal with different anatomy and topology. We were looking to do essentially the same thing with the Na’vi but in a more sophisticated way.”
“This system allowed us to generate a lot of detail in the motion of the faces,” Jones adds. “Jim shot a ton of HD reference of his actors and that ended up being the saving grace for the animation process. Once the facial solve came out of motion capture, we would submit side-by-side renders of the real actor and his avatar/Na’vi counterpart, and tweak and adjust the facial animation to get every last nuance into the performance.”
Advanced facial rigs
In order to create and retain the detail in the faces, Weta upped the ante in facial rig complexity and mesh resolution. “The facial rigs are by far the most advanced I have ever worked with,” proclaims Jones.
LIFELIKE: Facial performance capture was used to recreate the actor’s every nuance
“Jeff Unay and his team really pushed the envelope on these characters, working with extremely high-resolution meshes to sculpt in details and wrinkles that would have normally been placed in displacement maps.
“With the wrinkles in the model, he could control the motion of them so that the skin actually squashes together and then forms the wrinkle, instead of it just dissolving on and off like a displacement.”
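The distinction Jones draws can be sketched in one dimension. This is a hypothetical toy, not Weta's rig: a displacement-map wrinkle simply fades its amplitude in and out, whereas a modeled wrinkle lets the skin squash together first, with the out-of-plane buckle appearing only once compression passes a threshold.

```python
import numpy as np

def displacement_wrinkle(x, weight):
    """Displacement-style: the wrinkle just fades on and off uniformly."""
    return weight * np.sin(8 * np.pi * x)

def modeled_wrinkle(x, compression):
    """Modeled-style: the surface squashes together first, and the
    buckle only forms once compression passes 50%."""
    squashed = x * (1.0 - 0.3 * compression)     # skin bunches up
    buckle = max(0.0, compression - 0.5) * 2.0   # no wrinkle until threshold
    return buckle * np.sin(8 * np.pi * squashed)

x = np.linspace(0.0, 1.0, 5)
flat = modeled_wrinkle(x, 0.3)  # below threshold: surface stays smooth
bent = modeled_wrinkle(x, 1.0)  # fully compressed: the wrinkle has formed
```

The payoff is control: the animator decides when the fold snaps into existence, instead of watching it dissolve in linearly with a blend weight.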
Jones also gives credit to the advances in hardware for making this possible. “In terms of motion, the technology that has helped us the most was the computer processing and graphics card speeds. A facial rig with this many polys could not have been attempted five years ago. The slow speeds would have made it impossible to animate,” he says.