Real-time ray-tracing in hardware

There are two main strategies for rendering a computer-generated image: rasterization, in which polygons are projected onto the screen and shaded according to some rules, and ray-tracing, in which rays are traced from the eye through the pixels to intersect the scene and compute the colour. The first is the one commonly used for video games, as it allows, for a limited (though currently very high, on the order of millions) number of polygons, updating the image on the screen in real time (at least 30 frames per second) while changing the camera position and/or moving a character. Ray-tracing is used to compute high-quality, realistic images that typically take from minutes to several hours to compute and are later used for advertisements, commercials or movies. Since the development of high-performance graphics cards specialised in rasterization (video games), and especially since the appearance of programmable shaders and GPUs, many attempts have been made to accelerate the slower of the two techniques: ray-tracing.
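
To make the contrast concrete, here is a minimal sketch in C++ of the loop at the heart of every ray-tracer: one primary ray per pixel, intersected against the scene. The pinhole camera and the flat-colour trace() stub are my own placeholders, not code from any of the products discussed below.

    #include <vector>

    struct Vec3 { float x = 0, y = 0, z = 0; };
    struct Ray  { Vec3 origin, dir; };

    // Hypothetical pinhole camera: maps pixel (x, y) to a direction
    // towards an image plane at z = -1.
    Ray cameraRay(int x, int y, int width, int height)
    {
        Vec3 dir{ (x + 0.5f) / width * 2 - 1,
                  1 - (y + 0.5f) / height * 2,
                  -1.0f };
        return Ray{ Vec3{}, dir };
    }

    // Stub: a real tracer would intersect the scene and shade the hit here.
    Vec3 trace(const Ray&) { return Vec3{ 0.2f, 0.3f, 0.8f }; }

    // The core loop: trace one ray through each pixel.
    std::vector<Vec3> render(int width, int height)
    {
        std::vector<Vec3> image(width * height);
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                image[y * width + x] = trace(cameraRay(x, y, width, height));
        return image;
    }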

After a few years of research attempts, it looks like a few companies have finally put their money into the commercial development of very fast ray-tracing on the GPU. But their strategies follow slightly different paths, as we will see. The following lines come from the thoughts and ideas I wrote down in my notebook during the Hot3D session at the High Performance Graphics conference, where Caustic presented CausticRT, Intel presented Larrabee as a possible platform for really fast ray-tracing, and nVidia presented OptiX. After the three talks there was a short panel and discussion.

CausticRT by Caustic

CausticRT consists basically of two somewhat interdependent subproducts. On the one hand, there is a new API based on OpenGL, called CausticGL. It is used to set up, configure and specify the code for programmable rays that are traced into the scene, and it can be used for any computation that involves huge numbers of rays traced in parallel. This API is quite clever and aims to become a standard. On the other hand, there is CausticOne, a graphics card that acts as an interface between the CausticGL API and the rest of the hardware (CPUs and GPUs) and allows for very efficient ray-tracing. Of course, their business model relies on selling this hardware.
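
I did not write down actual CausticGL code, so the sketch below only illustrates the general shape of a programmable-ray API of this kind. Every name in it is invented for illustration; none of it is the real CausticGL interface.

    #include <cstddef>

    // Invented types in the spirit of a programmable-ray API;
    // none of these names come from CausticGL.
    struct Ray    { float origin[3], dir[3]; };
    struct Hit    { bool found; float distance; };
    struct Colour { float r, g, b; };

    // The user supplies a "ray program", run once per traced ray:
    // conceptually a shader attached to rays instead of fragments.
    using RayProgram = Colour (*)(const Ray&, const Hit&);

    // Stand-in for the driver/board: intersects a ray with the scene.
    Hit intersectScene(const Ray&) { return Hit{ false, 0.0f }; }

    // The API's job: take a batch of rays and apply the ray program to
    // each one, in parallel on the accelerator (serially in this sketch).
    void traceAll(const Ray* rays, std::size_t n, RayProgram program, Colour* out)
    {
        for (std::size_t i = 0; i < n; ++i)
            out[i] = program(rays[i], intersectScene(rays[i]));
    }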

One of their selling points is that the API provides an abstraction for the ray-tracing layer, so the developer no longer has to worry about the actual code for the scene acceleration structures or the ray-polygon intersection algorithms. For me, as part of a team that develops a ray-tracer, this is annoying, because what we developers want is access to that code and those algorithms, just in case something is wrong or can simply be improved. Otherwise we have to take it on faith that everything will work fast and efficiently in all cases.
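
As an example of the kind of code this abstraction hides, here is the well-known Möller-Trumbore ray-triangle test, sketched in plain C++. This is exactly the sort of routine we would want to be able to inspect or tune, and the API keeps it behind the curtain.

    #include <cmath>

    struct Vec3 { float x, y, z; };
    Vec3  operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3  cross(Vec3 a, Vec3 b) {
        return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
    }

    // Möller-Trumbore ray/triangle test. Returns true on a hit and writes
    // the distance along the ray to *t. orig/dir define the ray; v0, v1, v2
    // are the triangle vertices.
    bool intersectTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float* t)
    {
        const float kEpsilon = 1e-7f;
        Vec3 e1 = v1 - v0, e2 = v2 - v0;
        Vec3 p = cross(dir, e2);
        float det = dot(e1, p);
        if (std::fabs(det) < kEpsilon) return false;   // ray parallel to triangle
        float invDet = 1.0f / det;
        Vec3 s = orig - v0;
        float u = dot(s, p) * invDet;                  // first barycentric coordinate
        if (u < 0.0f || u > 1.0f) return false;
        Vec3 q = cross(s, e1);
        float v = dot(dir, q) * invDet;                // second barycentric coordinate
        if (v < 0.0f || u + v > 1.0f) return false;
        *t = dot(e2, q) * invDet;                      // distance along the ray
        return *t > kEpsilon;
    }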

They have an agreement with the makers of the Brazil renderer, which has been ported to this platform, and some of their amazing demos are made using Brazil.

Larrabee by Intel

Larrabee is, above all, a bunch of important promises from Intel. It will be, hopefully at some point next year (2010), a revolutionary massively parallel processing card that aims to beat the ATI and nVidia graphics cards. Besides running video games like any other GPU, it is designed to overcome the problems that general-purpose programs (the field commonly called GPGPU) have on GPU cards and, most importantly for a programmer like me, to spare us from learning a new, complicated shading language, since familiar C++ code will compile directly for the card.

Ray-tracing is one of the problems that is not directly solvable by a GPU program and falls into the world of GPGPU (even though we are talking about graphics too, we saw above that GPUs are built for rasterization, not for ray-tracing), and if Intel gives us, via Larrabee, an easy way to program it efficiently in parallel, for me this is a winner.
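
The appeal, in a sketch: if standard C++ really compiles for the card, parallelizing the ray loop looks like ordinary multithreaded code instead of a shader rewrite. The snippet below is plain C++ with std::thread, written by me as an illustration; it is not actual Larrabee code (the toolchain is not public yet), and trace() stands in for a full per-pixel tracer.

    #include <thread>
    #include <vector>

    struct Colour { float r = 0, g = 0, b = 0; };

    // Stand-in for a full per-pixel ray-tracer.
    Colour trace(int x, int y) { return Colour{}; }

    // Plain C++ parallelism over image rows: the kind of code Intel promises
    // will compile for Larrabee as-is, no shading language involved.
    // 'image' must already hold width * height elements.
    void render(std::vector<Colour>& image, int width, int height, int numThreads)
    {
        std::vector<std::thread> workers;
        for (int t = 0; t < numThreads; ++t)
            workers.emplace_back([&, t] {
                for (int y = t; y < height; y += numThreads)   // interleaved rows
                    for (int x = 0; x < width; ++x)
                        image[y * width + x] = trace(x, y);
            });
        for (auto& w : workers) w.join();
    }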

OptiX by nVidia

OptiX is, like CausticGL, an API for programming ray-tracing on the GPU. The actual code looks very similar in the way rays are set up and programmed, although at first sight nVidia's looks a bit uglier and less clean. In any case, the fact that this API works directly with an nVidia card, instead of needing a hardware interface like CausticOne to accelerate the ray-tracing, makes the need for that new hardware a priori quite void. Mental Images has also been presenting its own hardware-accelerated ray-traced renderer, called iray. Since we know that Mental Images is owned by nVidia, it is quite clear that iray is built on the OptiX technology, and the demos are quite impressive.
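
For flavour, here is roughly the shape of an OptiX ray-generation program, reconstructed from memory of the samples shown. The variable names and the simplistic camera are my own; treat this as the general shape, not verbatim OptiX sample code.

    #include <optix.h>
    #include <optixu/optixu_math_namespace.h>

    using namespace optix;

    // Declared here and bound by the OptiX runtime from the host side.
    rtDeclareVariable(uint2,    launch_index, rtLaunchIndex, );
    rtDeclareVariable(rtObject, top_object,   , );
    rtDeclareVariable(float3,   eye,          , );   // camera position (our naming)
    rtBuffer<float4, 2>         output_buffer;

    struct PerRayData { float3 result; };

    // Entry point: OptiX runs this once per pixel of the launch grid.
    RT_PROGRAM void ray_gen()
    {
        // Hypothetical camera: a real program derives the direction from
        // camera basis vectors bound as variables, like 'eye' above.
        float3 direction = normalize(make_float3(0.0f, 0.0f, -1.0f));

        Ray ray = make_Ray(eye, direction, /*ray type*/ 0,
                           /*tmin*/ 1e-3f, RT_DEFAULT_MAX);
        PerRayData prd;
        rtTrace(top_object, ray, prd);   // hit/miss programs fill prd.result
        output_buffer[launch_index] = make_float4(prd.result, 1.0f);
    }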

In conclusion

As a developer, I have all my hopes set on Larrabee. Developing a ray-tracer for Larrabee should be several times easier than doing it for CausticOne or for an nVidia card, although the existence of their respective APIs should make life easier for those who do not need to know anything about scene acceleration structures or the way rays are intersected.

The clear advantage at this moment belongs to nVidia. Their cards dominate the GPU market and do not need to fight to get introduced, unlike a ray-tracing-specific accelerator (such as CausticOne, which, as we have said, can be made redundant by OptiX) or a product with a more or less uncertain future such as Larrabee.

One year from now we may have a clearer idea of how things are going to be in the world of ray-tracing in hardware. Meanwhile, we will still be using software to ray-trace our images.

4 comments on “Real-time ray-tracing in hardware”

  1. Hi Txap. Your reports are really amazing, as they come directly from SIGGRAPH 09. Now I have a question about the current report. Can you be clearer about Larrabee? I can imagine how the other two sound, but your explanation of this one is not enough for me. I'm working in GPGPU now, and I don't see where the advantage is. In other words, what are the new features? What is this new technology? On the other hand, some other day we can discuss ray-tracing implementations in a GPGPU context… 😉

  2. The main promise of Larrabee is that you will not need to use a specific shading language to program it. The standard C++ code you use will be compiled directly for Larrabee, and you will just have to specify which parts of the code run on Larrabee and which parts on the main CPU. There are some calls to control communication and synchronization, but you will not have to deal with tedious HLSL or GL.

  3. OK, it seems amazing. But usually, hard APIs for hard systems, such as the GPU, are convenient because they ensure that the developer makes the maximum effort to use exactly the resources he needs. I suspect that with this system we may repeat the situation of virtually unlimited resources on the CPU, which a lot of developers assume as their main working religion. Of course, I always see new things as bad things, you know me. 😉

  4. Developers that assume unlimited resources are not good programmers. Eventually you bump into the limits and need to optimize. Complicated GPU-specific languages are a waste of time for the programmer (in learning, double programming and, especially, maintenance and bug fixing) that could be avoided with better compilers.
