by Oliver Winwood, Rob Hopper, Kai Wolter
Abstract
The rich environments and detailed characters in The Jungle Book presented varied challenges for the FX department, not just in the complexity of the work but in the sheer number of shots we had to complete quickly and efficiently. We were required to create physically believable simulations that would blend seamlessly with their surroundings, while interfacing with large data sets without reducing our ability to generate new iterations in a timely manner.
The effects we were tasked with were wide ranging, as were our corresponding approaches to solving them. With over nine hundred shots and anything from two to ten effects tasks per shot, The Jungle Book exceeded the scale of any of our previous film projects, with a peak FX crew of over fifty artists. We had to ensure we could automate and repeat our effects multiple times, so we could focus our efforts on the shots that required complex simulations.
The following is a sample of the challenges we faced on The Jungle Book.
Rain and Interaction
We have taken a number of approaches to digital rain in previous shows, from simulating all the rain in 3D space to using an entirely procedural, render-time approach. For The Jungle Book we looked at emulating the practical techniques traditionally used on set by combining thin layers of rain. This simple approach reduced our simulation time and data while retaining flexibility and control, both when simulating and when combining the final elements in comp. Typically we would use four to five layers plus a sprite pass to add texture in the background.
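As a minimal illustration of the layering idea, the sketch below (Python with NumPy; the function name, layer count and depth range are assumptions rather than our production tools) buckets rain particles into a handful of camera-depth slabs so each thin layer can be simulated, rendered and graded separately in comp.

    import numpy as np

    def split_into_layers(depths, num_layers=5, near=1.0, far=80.0):
        """depths: (N,) camera-space depth per rain particle.
        Returns a layer index per particle; slabs are thinner near camera."""
        edges = np.geomspace(near, far, num_layers + 1)
        return np.clip(np.searchsorted(edges, depths) - 1, 0, num_layers - 1)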
An important component in completing the illusion was the rain interaction. We found that as long as the viewer could read or feel splashes across the entire environment, they would not perceive any gaps in the layers of rain. However, with environments composed of hundreds of trees each with thousands of leaves, detailed displacement across the terrain, and furry characters, we could not simply use this geometry as the source of the splashes. Instead, we decided to leverage the information that the renderer could give us. We would create an orthographic camera oriented to our rain direction and then render a position pass, a normal pass and, if needed, a velocity pass. From these we would generate emission points in space based on the final render data, the trajectory of the rain and the angle of the surface it collides with. The amount of data needed was relatively lightweight, as we could use a single high-res image for static geometry and a sequence for anything that was moving. It also had the benefit that the maps provided a natural ‘shadowing’ of areas that the rain would not reach.
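A minimal sketch of this emission step follows, assuming the position and normal passes have already been loaded as NumPy arrays; the function and parameter names are illustrative rather than our actual tools.

    import numpy as np

    def splash_emission_points(position, normal, rain_dir, density=0.05, seed=0):
        """position, normal: (H, W, 3) passes rendered from the orthographic
        rain camera; rain_dir: unit vector of rain travel in world space."""
        rng = np.random.default_rng(seed)
        rain_dir = np.asarray(rain_dir, dtype=float)
        rain_dir /= np.linalg.norm(rain_dir)

        # Surfaces angled towards the incoming rain receive more splashes.
        facing = np.clip(-(normal @ rain_dir), 0.0, 1.0)

        # Stochastically keep pixels, weighted by facing ratio and a global density.
        keep = rng.random(facing.shape) < facing * density
        return position[keep], facing[keep]   # emission points and splash weights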
Using the Best Current Fluid Solvers
Water features heavily throughout The Jungle Book, at a number of different scales and levels of complexity. One of our biggest challenges was recreating the lazy river sequence, famous and well loved from the 1967 animated film. Here, where the demands on the fluid simulations were most acute, our work was divided into two main stages.
First we had to define a flow for the river: simulating a large body of water can be time consuming, and directing an artistically pleasing current is painstaking when relying on the geometry of the banks and physical objects in the scene. To create the details we needed in chosen areas, we had great success with localised Maya fluid simulations used as a guiding force. These were relatively low resolution but captured the characteristics of the desired behaviour. They were placed along the river to generate the details we needed and used to drive the flow of water in our primary river simulation, which was created using a refine-sheet based approach. A refine sheet is a fluid sim a few voxels high that sits on top of a base animation or height field.
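The sketch below illustrates the guiding idea under simple assumptions: a low-res guide velocity field (from, say, a localised Maya fluid sim) is sampled at the primary simulation’s particle positions and blended in as a steering force. The field layout, names and blend strength are hypothetical.

    import numpy as np

    def apply_guide_force(vel, positions, guide, guide_origin, voxel_size, strength=0.5):
        """vel, positions: (N, 3) particle velocities and positions.
        guide: (X, Y, Z, 3) low-res guide velocity field placed along the river."""
        # Convert world positions to guide-voxel indices (nearest voxel).
        idx = np.floor((positions - guide_origin) / voxel_size).astype(int)
        inside = np.all((idx >= 0) & (idx < np.array(guide.shape[:3])), axis=1)

        i, j, k = idx[inside].T
        target = guide[i, j, k]                     # guide velocity at each particle
        # Steer towards the guide flow without fully overriding the solver.
        vel[inside] += strength * (target - vel[inside])
        return vel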
For the close-up shots of Baloo and Mowgli, this refine sheet method didn’t give us the detail we needed for the water interaction. To solve this we ran a localised FLIP solver around the characters. The underlying water flow was driven by a velocity field extracted from the previous simulation, collisions were added for the character models and a foam pass was simulated from areas of high vorticity. The two resulting surfaces were then seamlessly combined into a single mesh for lighting. To maintain a calm, clear look for the river, we meshed the whitewater simulation and applied a slightly different shader, rather than relying purely on particles, which can sometimes give a harsher look. Floating twigs, leaves and pollen elements were added into the flow of the water to give the finishing touches.
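As an illustration of the foam pass, the hypothetical sketch below seeds foam particles where the vorticity sampled from the FLIP simulation exceeds a threshold; the threshold, limits and names are assumptions for the example.

    import numpy as np

    def emit_foam(positions, vorticity, threshold=4.0, max_per_frame=50000, seed=0):
        """positions: (N, 3) FLIP particle positions; vorticity: (N, 3) curl samples."""
        rng = np.random.default_rng(seed)
        mag = np.linalg.norm(vorticity, axis=1)

        # Spawn probability rises with vorticity above the threshold.
        prob = np.clip((mag - threshold) / threshold, 0.0, 1.0)
        spawn = rng.random(mag.shape) < prob

        foam = positions[spawn]
        if len(foam) > max_per_frame:               # keep per-frame counts predictable
            foam = foam[rng.choice(len(foam), max_per_frame, replace=False)]
        return foam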
Bare Necessities: Simulating Only What You Need
In the past we have had a tendency in FX to plan on simulating everything on a per-shot basis. On Guardians of the Galaxy we had great success creating a library of pre-cached explosions for an aerial battle [Pieké et al. 2014]. We used the library as placeholders for hero explosions, finding that many of them held up far better than we had anticipated, greatly reducing the amount of hero work required. With this in mind for The Jungle Book, we aimed to populate shots quickly and allow downstream departments such as Lighting and Comp to assemble shots at an earlier stage.
The first example of this workflow was the Buffalo Stampede sequence. Our first step was to develop the look of the effects outside of a shot context to get buy-off on the mud kicks and splashes. This included caches for mud and debris as well as water splashes, matched to the environment of the sequence. This look was then distilled into individual cache events for each footfall so that they could be applied based on the action of a given shot. For the ground deformation and ripples in puddles we procedurally generated displacement maps based on foot contacts, allowing us to quickly cover a far larger area than a simulation would have allowed. As animation and crowd progressed, we were able to assemble the effects procedurally from the foot data of the buffalo in a relatively short time. We had already identified likely candidates for hero simulations, but by progressing every shot with the same procedural approach first, we were able to quickly turn our focus to the shots that needed bespoke work. Ultimately, we identified just four shots in the sequence that needed hero simulations, the first two being when the buffalo first enter the ravine. Promoting these shots to hero allowed us to really sell the foot interaction early in the sequence, making any discrepancies in the following shots far more forgiving.
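A simplified sketch of how per-footfall cache events might be assembled from foot contact data follows; the cache names, fields and scaling rule are illustrative only, not our pipeline’s actual interface.

    import random
    from dataclasses import dataclass

    @dataclass
    class CacheEvent:
        cache: str        # path to a pre-simulated mud/splash cache
        position: tuple   # world-space placement
        start_frame: int  # retimed so the impact lands on the contact frame
        scale: float

    MUD_CACHES = ["mud_kick_A", "mud_kick_B", "splash_A", "splash_B"]  # example library

    def events_from_contacts(contacts, seed=7):
        """contacts: iterable of dicts like {'pos': (x, y, z), 'frame': f, 'speed': s}."""
        rng = random.Random(seed)
        events = []
        for c in contacts:
            scale = 0.8 + 0.4 * min(c["speed"] / 10.0, 1.0)   # faster feet, bigger kicks
            events.append(CacheEvent(rng.choice(MUD_CACHES), c["pos"], c["frame"], scale))
        return events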
Another sequence that predominantly used this technique was the burning jungle, featured in the end battle. We designed a library of fire caches that accounted for three sizes of emission area, three heights and emission from either the ground or tree geometry. Scaling these up or down within a fifteen to twenty percent limit gave us a good range of sizes. Individual elements in the library comprised fire, smoke and embers, made up of volume and particle caches. As well as being placed in 3D, these were also rendered out in a template scene as elements that could be used by the compositing department.
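The sketch below shows one way the variant selection and the scale limit could be expressed; the library contents, sizes and function names are invented for illustration.

    # (emission_size, height) in metres for each pre-cached variant, per source type.
    FIRE_LIBRARY = {
        "ground": [(1.0, 2.0), (2.0, 4.0), (4.0, 8.0)],
        "tree":   [(1.0, 3.0), (2.0, 6.0), (4.0, 10.0)],
    }

    def pick_variant(source, target_size, max_scale=0.2):
        """Return the index of the closest library variant and a clamped scale."""
        variants = FIRE_LIBRARY[source]
        # Choose the variant whose emission size is closest to the requested size.
        idx, (size, height) = min(enumerate(variants),
                                  key=lambda v: abs(v[1][0] - target_size))
        # Scale towards the target, clamped to +/- max_scale of the cached size.
        scale = max(1.0 - max_scale, min(1.0 + max_scale, target_size / size))
        return idx, scale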
To aid placement we generated a single-frame, low-res geometry representation of each fire. A fire library placement tool was created to allow us to quickly lay out the fire and switch from the proxy geometry to the heavier, renderable caches. This layout could be approved from a simple OpenGL viewport render of the geometry before submitting a more resource-intensive render. The smoke elements from the fire were kept quite short to reduce cache size, giving enough detailed smoke around the fire to create a connection to larger ambient smoke caches and complete the look of the burning jungle. This successful implementation of the fire library meant we were able to reduce the number of shots requiring bespoke simulations to just four in a sequence of around 200 shots.
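The proxy-to-renderable switch could look something like the hypothetical sketch below, where each placement stores both representations and the export step decides which to emit; the paths and fields are illustrative.

    from dataclasses import dataclass

    @dataclass
    class FirePlacement:
        proxy_geo: str       # single-frame, low-res stand-in for OpenGL review
        render_cache: str    # heavier volume/particle caches for final renders
        transform: tuple     # translate / rotate / scale of the instance

    def export(placements, for_render=False):
        """Return (path, transform) pairs: proxies for layout review, caches for renders."""
        return [(p.render_cache if for_render else p.proxy_geo, p.transform)
                for p in placements]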
References
PIEKÉ, R., BAILEY, L., WOLTER, K., AND PLAETE, J. 2014. Creating the flying armadas in Guardians of the Galaxy. In ACM SIGGRAPH 2014 Talks, ACM, New York, NY, USA, SIGGRAPH ’14, 7:1–7:1.