Conceived over a decade ago when the means of realisation didn’t yet exist, James Cameron’s Avatar was finally released in the winter of 2009.
At the time, 3D World went behind the scenes at Weta Digital, where creativity met cutting-edge science. With epic results.
It all started back in 1996 when James Cameron announced that he would be creating a film called Avatar: a science-fiction epic that would feature photo-realistic, computer-generated characters.
He had a treatment for the film, which already defined many things, including the Na’vi – a primitive alien race standing ten feet tall with shining blue skin, living in harmony with their jungle-covered planet Pandora – until people turn up.
Using the ‘Avatar programme’, human beings infiltrate the Na’vi in order to exploit Pandora’s natural resources.
Soon after the story was outlined, though, Avatar had to be shelved as the technology of the time could not satisfy the creative desires of the director.
Fast-forward to October 2009: Dan Lemmon, FX supervisor, and Andy Jones, animation director at Weta Digital, had just two weeks of visual effects production left on Avatar.
The near-900-strong crew, spread across six locations, was working around the clock to achieve what had been deemed impossible a decade earlier.
“What is unique and special about Avatar is that it takes you to a world that you’ve never been to before,” says Lemmon. “It’s fleshed out in such detail and scope that you really feel like you’re in a place that exists.”
Weta Digital, the New Zealand studio responsible for the groundbreaking visual effects in The Lord of the Rings trilogy and King Kong, took VFX to a new level of creative and technological excellence.
Character pipeline
For Avatar, the studio created over 1,800 stereoscopic, photo-realistic visual effects shots, many of them featuring the Na’vi as ‘hero’ characters.
In addition to the digital characters and environments, there are the machines, vehicles, equipment and everything else that helps blur the line between imagination and reality.
“We’re not just talking about the environment, but the creatures, the machines and the vehicles that people use to get around,” Lemmon continues. “The whole world is unique and, because of the way James Cameron approaches things, everything seems functional and believable.
“Compared to other sci-fi fantasy genre films there’s a certain level of realism just in the design that makes it very believable.”
The Na’vi
Over a decade ago, Cameron had already figured out what he wanted the Na’vi to look like. “Back then, it was clear that they were going to be blue, tall, have tails and be somewhat feline-like,” says Lemmon.
“We set out to make the Na’vi as realistic as possible. To do that we needed key departments to be firing on all cylinders,” says animation director Andy Jones.
“From facial and body rigging, motion capture, to animation, and shading and rendering – all these departments reached a synergy to bring these performances to the screen.”
The 10-foot-tall CG Na’vi were created using the following tools and techniques:
- Maya as the main animation/3D package
- Mudbox for digital sculpting and detailing
- Lightstage capture and digital scanning
- Multiple custom-developed in-house tools
- Muscle simulation
- Facial animation
Lemmon adds, “We used a lot of photographs and scans of the actors and tried to incorporate the details of the physical actors into the digital characters – for both the Na’vi and humans.
“There are some characters like Jake, who’s played by Sam Worthington, where there’s both an Avatar double and a digital double. There’s a lot of data that we captured through digital scanning and Lightstage capture.
“In addition, we did a lot of extra texture and shader work to make sure all that detail went into the final renders.”
For animating the digital characters in Avatar, Weta Digital had to develop key technologies to simulate reality as accurately as possible.
New processes and technologies were created to realise the Na’vi:
- 80:20 split between full body motion capture and keyframe animation
- 60:40 split between facial motion capture and keyframe facial animation
- Custom muscle simulation software developed for high fidelity, realistic skin deformation
- Custom facial animation software developed based on Facial Action Coding System (FACS)
Previously, Weta used relatively simplified muscle-simulation systems to generalise how muscles deformed a character’s skin. With Avatar, CG supervisor Simon Clutterbuck led the team to create a more accurate skeletal and muscle-simulation system.
“It’s quite cool now. Muscles inter-collide, preserve their volume and are anatomically correct,” says Lemmon. “There are tissue layers, tendon sheets and all the critical parts of how a muscle system works.
“It gives a much more realistic starting point for creating believable creature deformations, such as all the sliding under the skin and the dynamics of flesh as it moves.”
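The principle Lemmon describes can be pictured with a toy calculation. The sketch below is not Weta’s software, just a minimal illustration of the volume-preservation idea: if a muscle is treated as an idealised cylinder of constant volume, shortening the fibre forces its radius to grow, producing the bulge that slides under the skin.

```python
import math

def bulge_radius(rest_length, rest_radius, current_length):
    """Radius of an idealised cylindrical muscle at a new length,
    holding its volume (pi * r^2 * L) constant as it contracts."""
    volume = math.pi * rest_radius ** 2 * rest_length
    return math.sqrt(volume / (math.pi * current_length))

# A muscle contracted to 80% of its rest length thickens by ~12%:
print(bulge_radius(10.0, 1.0, 8.0))  # ~1.118
```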
Facial animation
For the Na’vi to be believable, realistic facial animation was crucial.
The Na’vi experience a wide range of emotions and the facial animation had to convey these in a realistic way, or potentially fall into the ‘Uncanny Valley.’
“The Uncanny Valley mostly comes from the lack of detail in the face making it look a bit dead or zombie-like,” says Jones.
“When your animated character is grotesque-looking, it is actually acceptable. But when you have characters that are supposed to smile, laugh, and cheer in a realistic way, you really have to nail all the details to get there.”
Weta used a variety of techniques to get the facial animation to a realistic state.
First of all was facial motion capture. Using a head-mounted high-definition video camera and markers on the actor’s face, Weta’s in-house software was able to map out which muscles in the face were firing.
The underlying technology is based on Paul Ekman and Wallace Friesen’s Facial Action Coding System (FACS).
By creating a map of muscle firings, Weta was able to retarget the motion data onto faces that don’t match directly – in this instance, the Na’vi.
“We started doing this when we were working on King Kong,” says Lemmon. “Andy Serkis was playing Kong and his facial anatomy is fairly different from a gorilla’s. By capturing the muscle firings, we were able to retarget the motions back onto an animal with different anatomy and topology.
“We were looking to do essentially the same thing with the Na’vi but in a more sophisticated way.”
“This system allowed us to generate a lot of detail in the motion of the faces,” Jones adds. “Jim shot a ton of HD reference of his actors and that ended up being the saving grace for the animation process.
“Once the facial solve came out of motion capture, we would submit side-by-side renders of the real actor and his avatar/Na’vi counterpart, and tweak and adjust the facial animation to get every last nuance into the performance.”
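The retargeting idea at the heart of this process can be sketched in a few lines. The code below is a schematic stand-in for Weta’s proprietary solver, with invented array shapes: it fits FACS action-unit weights to the actor’s tracked marker offsets, then drives a face with different anatomy by applying the same weights to blendshapes sculpted for the target character.

```python
import numpy as np

def solve_action_units(marker_offsets, actor_au_basis):
    """Least-squares fit of FACS action-unit weights to tracked markers.
    marker_offsets: (num_markers * 3,) flattened marker displacements.
    actor_au_basis: (num_markers * 3, num_aus) marker displacement per
    action unit at full activation on the actor's face."""
    weights, *_ = np.linalg.lstsq(actor_au_basis, marker_offsets, rcond=None)
    return np.clip(weights, 0.0, 1.0)  # activations stay in [0, 1]

def apply_to_target(weights, target_neutral, target_au_shapes):
    """Apply the solved activations to a different face.
    target_neutral: (num_verts, 3) rest mesh of the target character.
    target_au_shapes: (num_aus, num_verts, 3) per-AU vertex deltas
    sculpted for the target anatomy. Returns the deformed mesh."""
    return target_neutral + np.tensordot(weights, target_au_shapes, axes=1)
```

Because the solved weights describe muscle activations rather than surface positions, the same activations can drive any face with a matching set of sculpted action units, which is what allows a performance captured on a human to land on a character with different anatomy and topology.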
In order to create and retain the detail in the faces, Weta upped the ante in facial rig complexity and mesh resolution.
“The facial rigs are by far the most advanced I have ever worked with,” proclaims Jones.
“Jeff Unay and his team really pushed the envelope on these characters, working with extremely high-resolution meshes to sculpt in details and wrinkles that would have normally been placed in displacement maps.
“With the wrinkles in the model, he could control the motion of them so that the skin actually squashes together and then forms the wrinkle, instead of it just dissolving on and off like a displacement.”
Jones also gives credit to the advances in hardware for making this possible.
“In terms of motion, the technology that has helped us the most was the computer processing and graphics card speeds.
“A facial rig with this many polys could not have been attempted five years ago. The slow speeds would have made it impossible to animate,” he says.
The Hometree sequence
The Na’vi’s communal home is central to the movie’s story arc, and its on-screen destruction brought in a wider range of VFX elements, including smoke and fire.
It was CG supervisor Kevin Smith and Weta veteran Eric Saindon who were tasked with tackling around 200 shots for the attack sequence:
“We actually watched a cut of the Hometree destruction about two years before the movie was due in theatres,” recalls Saindon.
“All I could think was: ‘Wow, I wouldn’t want to be the poor bastard that has to work on that sequence.’ A week later, I volunteered! In the end, it was a blast to work on – I had some great CG supervisors and a really good production team.”
Using art books provided by Cameron’s Lightstorm Entertainment as reference, Weta essentially created the Hometree as a single asset.
“We did use a few higher-resolution sections in certain shots, but for the most part it’s just one model, with around 20 million polygons and approximately 1.2 million leaves,” says Saindon.
While the dust, fire and other effects work posed a substantial challenge, Saindon says that the fast-moving destruction sequences allowed for more freedom in other ways.
“Smoke, fire and other effects work give the eye lots to look at without looking at the minor detail work,” he explains.
“In the initial shots, where the camera was slow and the air was much clearer, we required an extra layer of detail to bring the shots to life. Adding such things as dust, decay and variation to all the vegetation and higher-resolution textures for the Hometree helped us go the extra step.”
Because of the simulations required for the fire and destruction, the Hometree shots were invariably among the most computationally demanding in the film.
“We used a lot of Z-buffer compositing, which made integrating multiple elements in the composite far easier than the traditional method of rendering out mattes,” says CG supervisor Kevin Smith.
“It gave the lighters more freedom to render their shots in a piecewise fashion.”
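A depth merge of that kind reduces, per pixel, to keeping whichever element lies closest to camera. The snippet below is a bare-bones illustration with assumed inputs (an RGB image plus a Z channel for each element); production compositing also has to cope with transparency and filtered edges, which this ignores.

```python
import numpy as np

def z_merge(rgb_a, z_a, rgb_b, z_b):
    """Composite two render elements by depth: per pixel, keep the
    sample nearer the camera (smaller Z value).
    rgb_*: (H, W, 3) colour images; z_*: (H, W) depth channels."""
    nearer_a = (z_a <= z_b)[..., None]   # (H, W, 1) mask, broadcast over RGB
    rgb_out = np.where(nearer_a, rgb_a, rgb_b)
    z_out = np.minimum(z_a, z_b)
    return rgb_out, z_out
```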
While stereoscopic 3D and fast-moving action sequences don’t typically make the best of bedfellows, Saindon says the use of stereo depth in the Hometree sequence proved quite straightforward.
“Making sure we used the correct interocular and convergence for each type of shot was something we developed over the course of the movie.
“Early on, we reviewed the 3D with Jim in every conference call, but by the end we would only review 2D. Jim would have a look at the 3D himself and send us any notes.
“It’s always tricky to direct elements such as fire, smoke and explosions in VFX.
“Jim is a master with 3D cameras. He knows how to focus the eye on the action, which makes for a much better stereo experience.”
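The two parameters Saindon mentions, interocular and convergence, decide where objects sit relative to the screen plane. As a rough illustration, with invented figures rather than the Fusion rig’s actual settings, the on-sensor parallax of a point in an idealised shifted-sensor stereo pair follows a simple relationship:

```python
def sensor_parallax_mm(interocular_mm, focal_mm, convergence_mm, point_mm):
    """Parallax on the sensor for an idealised shifted-sensor stereo
    pair. Points at the convergence distance land on the screen plane
    (zero parallax); nearer points read negative (in front of screen)."""
    return interocular_mm * focal_mm * (1.0 / convergence_mm - 1.0 / point_mm)

# 65mm interocular, 35mm lens, converged at 3m, subject at 6m:
print(sensor_parallax_mm(65.0, 35.0, 3000.0, 6000.0))  # ~0.38mm, behind screen
```

Widening the interocular exaggerates depth, while moving the convergence plane pushes the scene forwards or backwards relative to the screen, which is why the right pairing had to be found for each type of shot.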
Realistic environments
The world of Pandora is covered in jungle, and many visual effects shots feature it in some way, whether the camera is among the trees or above them in the air.
“James Cameron and his team spent a lot of time designing the horticulture of the environment,” says Lemmon. “There are very detailed and very exotic plants.
“We were able to build some of the plants through procedural foliage software that we developed, but many plants were modelled by hand.
“Most of them were executed at fairly high detail. Larger trees had up to 1.2 million polygons.”
As any given frame might have hundreds of thousands of plants rendered, efficiency was crucial.
“We had to use proxy versions that would act as stand-ins in Maya and render procedurally in PRMan.
“We put together level-of-detail strategies so that we could have more detail up close and less geometry and detail as the camera gets further away from the objects.”
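A minimal version of such a level-of-detail switch might look like the sketch below; the thresholds and asset names are hypothetical, and the real pipeline handled the swap between Maya stand-ins and procedural generation at render time in PRMan.

```python
import math

# Hypothetical LOD table: full-detail hero mesh up close, progressively
# lighter stand-ins as the camera moves away.
LOD_TABLE = [
    (10.0, "plant_hero"),    # full-resolution model, up to ~1.2M polygons
    (50.0, "plant_mid"),     # reduced mesh
    (200.0, "plant_proxy"),  # coarse stand-in geometry
]

def pick_lod(camera_pos, plant_pos):
    """Return the mesh variant for a plant given its camera distance."""
    dist = math.dist(camera_pos, plant_pos)
    for max_dist, variant in LOD_TABLE:
        if dist <= max_dist:
            return variant
    return "plant_card"  # flattest representation beyond all thresholds

# A plant 75 units from camera gets the proxy mesh:
assert pick_lod((0.0, 0.0, 0.0), (75.0, 0.0, 0.0)) == "plant_proxy"
```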
Stereo 3D
These environments are further enhanced by one of the other major features of Avatar: the fact that it was conceived from the ground up to be shown in stereoscopic 3D.
The live action for Avatar was filmed using the Fusion 3D camera system developed by Cameron and Vince Pace, and the final stereo shots were composited with a custom-developed version of Shake.
“On the 3D side in particular, there were a lot more things that we would get away with in a non-stereo environment that suddenly don’t work anymore,” says Lemmon.
“Our ability to use 2D elements is compromised. At a certain distance, you can still get away with it, but up close you need to be able to see the stereo effect.
“Examples of this are water and dust-hits: you’re locked into doing full 3D simulations and these increase the complexity of things quite a bit.
“Matte paintings were a subject of a lot of internal discussion on what was going to be the most efficient and most realistic way to create the environments,” he says. “If you look at the film, there are two major environments. One is the jungle as you are amongst the trees; the other is the aerial environment.
“For both, we leaned primarily on 3D environments for anything that was close to the camera. In some cases we were able to fill up the frame completely with 3D.”
“Another issue is the lighting and times of the day. In a lot of cases we had to change the lighting over the course of a scene. That sort of thing you traditionally try to avoid with matte paintings; you want to lock your lighting and re-use as much of that as you’re able.
“Because we knew that lighting was changing all over the place, we wanted to keep things flexible and be able to relight things as we needed to.
“That was another reason to lean on a full 3D solution for anything that was closer than mid-background.”
On working with live-action stereoscopic footage, Lemmon notes, “Before, we just worried about mattes, green spills, that sort of thing.
“Those issues become more complicated once you get into stereo.
“For example, when you photograph the plates, you get a slight discrepancy between the left and right eye in terms of colour and the way highlights behave off objects. Things will look slightly shinier in one eye than the other, and that creates a bit of discrepancy that you can subtly pick up.
“There are some alignment mismatches in the footage, and if the left and right eyes aren’t aligned together perfectly vertically, your brain is going to have difficulty putting the image back together.”
Matchmoving was also an issue: “You have to make sure that both eyes are bang-on – not just matched to the images, but also to each eye distance-wise so that they’re not fluctuating.
“That’s a critical component. Because you see things in stereo, you have a much clearer picture of things such as foot contact, which conveys that whole spatial reality.
“It’s a lot more obvious and you have to be a lot more precise. I wouldn’t say it’s impossible, but you get away with a lot less in 2D.”
Leading technology
Technical facts:
- 1,852 total shot count
- 1,818 animated shots
- Close to 900 crew working on Avatar at peak
- Some of the bigger shots include more than 1,000 digital assets, excluding characters
- More than 1,900 digital assets were officially approved
- Some of the larger shots use between five and 50 billion polygons
- Roughly 10TB of data was generated per day
- There are 483 hero plant assets – with geometry modifications and different textures there were 3,000 possible variations to use
- 53 different hero Na’vi models with over a hundred possible variations
- 25 hero vehicle assets
- 21 hero creatures – when combined with texture variations gave a total of 68 unique creatures
- Approximately 90 unique environments and 1,500 shot-specific terrain pieces or elements that make up these environments
- 4,352 render machines
- 34,816 CPU cores
- 104TB of RAM
In addition to all these elements, an innovative new performance-capture technique was used to film Avatar.
Traditionally, a character’s motions are captured and the digital environment is added later in post-production.
The new technique displays a visualisation of the characters, 3D environments and objects to the camera operator in real-time.
“Basically, you’re shooting the film on a motion-capture stage in a way that’s similar to a live-action film,” says Lemmon.
“You’re filming things, capturing the scene as you’re seeing it through a monitor in real-time. In this case, Jim was able to visualise where everybody was in relation to the rest of the environment.
“The system enabled him to create framing and move people and objects around, just like a real stage with real objects, except on a motion-capture stage with virtual objects.”
The team hopes that the production of Avatar has set new standards in how creativity and technology can work together to create highly believable experiences.
Lemmon adds, “What I’m most proud of is how immersive the experience is. It takes you someplace you’d never be able to go in ordinary life.”