The changes in the CG production pipeline

Larry Gritz, who works at Sony Imageworks, presented a talk at High Performance Graphics in New Orleans entitled "Production Perspectives in High Performance Graphics". The talk described how a CG production is made and the problems involved in it, with an eye to how the production process can be improved and accelerated with the aid of the new generation of graphics hardware.

With impressive clips from the latest productions he has been involved in (Cloudy with a Chance of Meatballs, 2012), he explained how things have changed in recent years.

A typical movie can have between 100 and 2000 CG shots, and every shot consists of about 20 to 200 frames that have to be rendered at high quality, at high resolution and without artifacts. A typical scene can use about 2 to 20 GB of memory and deal with the loading of more than 100 GB of textures. One frame is expected to take between 4 and 10 hours to render, but for complex scenes 10 to 20 hours can be acceptable. To do this, loads of 8-core machines with 16 to 32 GB of RAM each work 24/7; the total number of cores in the render farm is estimated at 5000.
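To put these figures in perspective, here is a rough back-of-the-envelope calculation using mid-range values from the numbers above (assuming the 4 to 10 hours are per frame on a single 8-core machine; the exact values are only illustrative, not production data):

```python
# Rough render-farm estimate for one movie, using mid-range values
# from the figures quoted above (illustrative only).
shots_per_movie   = 1000    # 100 to 2000 CG shots
frames_per_shot   = 100     # 20 to 200 frames per shot
hours_per_frame   = 7       # 4 to 10 hours per frame on one machine
cores_per_machine = 8
farm_cores        = 5000

frames        = shots_per_movie * frames_per_shot      # 100,000 frames
machine_hours = frames * hours_per_frame               # 700,000 machine-hours

# With the whole 5000-core farm running 24/7:
farm_machines = farm_cores // cores_per_machine        # 625 machines
days = machine_hours / farm_machines / 24
print(f"{frames:,} frames, {machine_hours:,} machine-hours, "
      f"about {days:.0f} days of wall-clock rendering")
```

Even with generous assumptions, rendering a single movie keeps the whole farm busy for weeks of wall-clock time.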

Given all these huge numbers, the real cost of making a movie unexpectedly does not come from the huge rendering time, from the office space needed for these machines or from the energy to run them, but from human time: the artist is the bottleneck. Any tool that helps the artist be more efficient at his job is a great money saver. Frame time is less important than pipeline time.

The rendering paradigm is also changing. Previously, the REYES technique in RenderMan was used to compute several different passes for each frame (beauty, reflections, refractions, highlights, ambient occlusion, fog, etc.) for later 2D composition. Now the use of ray tracing and global illumination has changed that, and Sony has moved its rendering engine from RenderMan to Arnold by Marcos Fajardo (who was also at the conference, and I had the opportunity to chat with him). Fewer passes are now required for final compositing.
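As a minimal illustration of the old multi-pass workflow, the sketch below composites a few hypothetical passes in 2D with NumPy; the additive combine and the pass set are assumptions for illustration, as real compositing graphs are far more elaborate:

```python
import numpy as np

# Toy sketch of REYES-era 2D pass compositing: each pass is rendered
# separately and merged afterwards, so some look changes can be made
# by re-weighting passes instead of re-rendering the frame.
H, W = 1080, 1920
passes = {
    "beauty":            np.random.rand(H, W, 3).astype(np.float32),
    "reflections":       np.random.rand(H, W, 3).astype(np.float32),
    "ambient_occlusion": np.random.rand(H, W, 1).astype(np.float32),
}

def composite(passes, reflection_gain=0.5):
    """Toy combine: beauty attenuated by AO, plus scaled reflections."""
    rgb = passes["beauty"] * passes["ambient_occlusion"]
    rgb = rgb + reflection_gain * passes["reflections"]
    return np.clip(rgb, 0.0, 1.0)

# Turning the reflections up or down needs no re-render, only a re-composite.
frame = composite(passes, reflection_gain=0.3)
```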

Among the tools that let artists do their job more efficiently are relighting tools, which help the lighting artist tweak and adjust the illumination of a scene at high quality and at interactive rates. Before these tools, relighting was brute force: if the final image was not satisfactory, the frame, or even the whole sequence, had to be recomputed. Part of the relighting could also be done in compositing, with the different lights rendered in separate passes and their intensities adjusted. The deep buffer technique, which keeps scene data at the pixel level and allows real-time adjustment of some kinds of lights, also helped with that. Then Sorbetto, based on CUDA from NVIDIA, appeared, but it was not interactive enough for complex scenes, as it had complexity limits. The next version of Sorbetto, named Mocha, was promising but was killed before shipping (and I ask myself why). Arnold has a basic relighting tool in which the ray tracing is restarted for every change in the lights, but it still allows interactive feedback.
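To illustrate the deep buffer idea, here is a minimal relighting sketch: per-pixel scene data (position, normal, albedo) is stored once by the renderer, and new light settings are applied by re-shading those buffers, which is far cheaper than re-rendering the frame. The single Lambertian point light and the buffer contents are assumptions for illustration; production deep buffers hold much richer data:

```python
import numpy as np

# Deep-buffer-style relighting sketch: the expensive render stores
# per-pixel data once; moving or recoloring a light only re-shades
# these buffers, giving the lighting artist interactive feedback.
H, W = 540, 960
position = np.random.rand(H, W, 3).astype(np.float32) * 10.0      # world-space points
normal   = np.random.rand(H, W, 3).astype(np.float32) - 0.5
normal  /= np.linalg.norm(normal, axis=-1, keepdims=True)         # unit normals
albedo   = np.random.rand(H, W, 3).astype(np.float32)             # surface color

def relight(light_pos, light_color, intensity):
    """Re-shade the stored buffers with a single Lambertian point light."""
    to_light = np.asarray(light_pos, np.float32) - position
    dist2 = np.sum(to_light ** 2, axis=-1, keepdims=True)
    direction = to_light / np.sqrt(dist2)
    ndotl = np.clip(np.sum(normal * direction, axis=-1, keepdims=True), 0.0, None)
    return albedo * np.asarray(light_color, np.float32) * intensity * ndotl / dist2

# The artist can drag the light around and see the result immediately.
frame = relight(light_pos=(5.0, 8.0, 2.0), light_color=(1.0, 0.9, 0.8), intensity=50.0)
```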

In conclusion, it is not computation power that is needed but tools that help the artist work more efficiently. These tools should be multi-platform and multi-OS, and some effort from the industry is needed to define standards. Will Shaderlight do the job?
