Rendering Terminology Explained

Biased vs. Unbiased, Reyes, and GPU-Acceleration

If you've spent any time looking into the various render engines on the market, or read about stand-alone rendering solutions, chances are you've come across terms like biased & unbiased, GPU-acceleration, Reyes, and Monte-Carlo.

The latest wave of next-generation renderers has generated a tremendous amount of hype, but it can sometimes be tough to tell the difference between a marketing buzzword and an honest-to-god feature.

Let's take a look at some of the terminology so that you can approach things from a clearer perspective:

What Is the Difference Between Biased and Unbiased Rendering?

The discussion of what constitutes unbiased rendering versus biased rendering can get technical pretty quickly, so let's keep things as basic as possible.

  • Unbiased - Unbiased renderers like Maxwell, Indigo, and Luxrender are typically hailed as "physically accurate" render engines. Although "physically accurate" is something of a misnomer (nothing in CG is truly physically accurate), the term is meant to imply that an unbiased renderer calculates the path of light as accurately as is statistically possible within the confines of current-gen rendering algorithms.

    In other words, no systematic error or "bias" is willfully introduced. Any variance will manifest as noise, but given enough time, an unbiased renderer will eventually converge on a mathematically "correct" result.
  • Biased - Biased renderers, on the other hand, make certain concessions in the interest of efficiency. Instead of chugging away until a sound result has been reached, biased renderers will introduce sample bias and use subtle interpolation or blurring to reduce render time. Biased renderers can typically be fine-tuned more than their unbiased counterparts, and in the right hands, a biased renderer can potentially produce a thoroughly accurate result with significantly less CPU time (the short sketch after this list illustrates the statistical trade-off).
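
To make that trade-off concrete, here is a minimal Python/NumPy sketch of the underlying statistics. The "pixel brightness" model is invented purely for illustration and is not taken from any actual render engine: the unbiased estimator simply averages raw samples (noisy, but it converges on the true value), while the biased estimator clamps rare bright samples, which suppresses the noise but locks in a small error that no amount of extra sampling will remove.

# A toy model of per-sample pixel brightness: mostly dim, occasionally very
# bright (the "fireflies" that make Monte-Carlo renders look noisy).
import numpy as np

rng = np.random.default_rng(0)

def sample_radiance(n):
    base = rng.uniform(0.0, 1.0, n)
    fireflies = (rng.uniform(0.0, 1.0, n) < 0.01) * 40.0
    return base + fireflies

true_value = 0.5 + 0.01 * 40.0  # analytic expectation of the toy model: 0.9

for n in (16, 256, 4096, 65536):
    s = sample_radiance(n)
    unbiased = s.mean()                 # noisy, but converges toward 0.9
    biased = np.minimum(s, 2.0).mean()  # clamped: much smoother, but settles near 0.52
    print(f"{n:6d} samples   unbiased={unbiased:.3f}   biased(clamped)={biased:.3f}   true={true_value}")

With more samples, the unbiased column wanders but homes in on the true value of 0.9, while the clamped column settles almost immediately, just not on 0.9. Production renderers trade bias for noise with far more sophisticated machinery (irradiance caching, interpolation, sample clamping), but the statistical idea is the same.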

So ultimately, the choice is between an unbiased engine, which requires more CPU time but fewer artist-hours to operate, and a biased renderer, which gives the artist quite a bit more control but requires a larger time investment from the render technician.

Although there are always exceptions to the rule, unbiased renderers work quite well for still images, especially in the architectural visualization sector. In motion graphics, film, and animation, however, the efficiency of a biased renderer is usually preferable.

How Does GPU Acceleration Factor in?

GPU acceleration is a relatively new development in rendering technology. Game engines have depended on GPU-based graphics for years and years, but it's only fairly recently that GPU integration has been explored for use in non-real-time rendering applications, where the CPU has always been king.

However, with the widespread proliferation of NVIDIA's CUDA platform, it became possible to use the GPU in tandem with the CPU for offline rendering tasks, giving rise to an exciting new wave of rendering applications.
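
To give a rough sense of what "using the GPU in tandem with the CPU" looks like in practice, here is a toy Python sketch built on Numba's CUDA support. It is an illustrative assumption, not code from Octane, Redshift, or any shipping renderer, and it needs an NVIDIA GPU plus the numba package to run: the CPU allocates a frame buffer and schedules the work, while the GPU evaluates a stand-in "shading" function for every pixel in parallel.

import math
import numpy as np
from numba import cuda

@cuda.jit
def shade_kernel(image, width, height):
    # One GPU thread per pixel; thousands of these run concurrently.
    i = cuda.grid(1)
    if i < image.shape[0]:
        x = (i % width) / width
        y = (i // width) / height
        # Stand-in for a real shading calculation.
        image[i] = 0.5 + 0.5 * math.sin(20.0 * x) * math.cos(20.0 * y)

def render(width, height):
    # CPU side: allocate the frame buffer and copy it to the GPU.
    framebuffer = cuda.to_device(np.zeros(width * height, dtype=np.float32))
    threads = 256
    blocks = (width * height + threads - 1) // threads
    # GPU side: the embarrassingly parallel per-pixel work.
    shade_kernel[blocks, threads](framebuffer, width, height)
    # Back to the CPU for compositing, saving to disk, and so on.
    return framebuffer.copy_to_host().reshape(height, width)

if __name__ == "__main__":
    img = render(640, 360)
    print(img.shape, float(img.min()), float(img.max()))

The toy math is beside the point; what matters is the division of labor. Per-pixel work maps naturally onto thousands of GPU threads, which is exactly the property this new wave of GPU-accelerated renderers exploits.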

GPU-accelerated renderers can be unbiased, like Indigo or Octane, or biased, like Redshift.

Where Does Renderman (Reyes) Fit Into the Picture?

Renderman stands somewhat apart from the current discussion. It is a biased rendering architecture based on the Reyes algorithm, developed more than 20 years ago at Pixar Animation Studios.

Renderman is deeply ingrained in the computer graphics industry, and despite growing competition from Solid Angle's Arnold, it will most likely remain one of the top rendering solutions at high-end animation and effects studios for many years to come.

So if Renderman is so popular, why, aside from isolated pockets at places like CGTalk, don't you hear about it more often?

Because it simply wasn't designed for the independent end-user. Look around the online CG community and you'll see thousands of images from biased raytracers like Vray and Mental Ray, or unbiased packages like Maxwell and Indigo, but it's very rare to come across something built in Renderman.

It really just comes down to the fact that Renderman (like Arnold) was never intended to be widely used by independent artists. While Vray or Maxwell can be used quite competently by a single independent artist, it takes a team to use Renderman the way it was intended. Renderman was designed for large-scale production pipelines, and that's where it thrives.

What Does It All Mean for the End-User?

First of all, it means there are more options than ever. Not so long ago, rendering was a bit of a black art in the CG world, and only the most technically minded artists held the keys. Over the course of the past decade, the playing field has leveled a great deal, and photo-realism has become perfectly attainable for a one-person team (in a still image, at least).

Check out our recently published list of render engines to get a feel for how many new solutions have emerged. Rendering technology has jumped way out of the box, and newer solutions like Octane or Redshift are so different from old standbys like Renderman that it almost doesn't even make sense to compare them.