What Is 3D Rendering in the CG Pipeline?

by Justin Slick, former Lifewire writer. Justin Slick has been creating 3D computer graphics for more than 10 years, specializing in character and environment creation.

Updated on November 01, 2019

The rendering process plays a crucial role in the computer graphics development cycle. Rendering is the most technically complex aspect of 3D production, but it can be understood quite easily through an analogy: much as a film photographer must develop and print his photos before they can be displayed, computer graphics professionals are burdened with a similar necessity.

When an artist works on a 3D scene, the models he manipulates are actually a mathematical representation of points and surfaces (more specifically, vertices and polygons) in three-dimensional space. The term rendering refers to the calculations performed by a 3D software package's render engine to translate the scene from that mathematical representation into a finalized 2D image. During the process, the entire scene's spatial, textural, and lighting information is combined to determine the color value of each pixel in the flattened image.

Two Types of Rendering

There are two major types of rendering, their chief difference being the speed at which images are computed and finalized.

Real-Time Rendering: Real-time rendering is used most prominently in gaming and interactive graphics, where images must be computed from 3D information at an incredibly rapid pace. Because it is impossible to predict exactly how a player will interact with the game environment, images must be rendered in "real time" as the action unfolds.

Speed Matters: For motion to appear fluid, a minimum of 18 to 20 frames per second must be rendered to the screen.
Anything less than this and the action will appear choppy.

The Methods: Real-time rendering is drastically improved by dedicated graphics hardware and by pre-compiling as much information as possible. A great deal of a game environment's lighting information is pre-computed and "baked" directly into the environment's texture files to improve render speed.

Offline or Pre-Rendering: Offline rendering is used in situations where speed is less of an issue, with calculations typically performed using multi-core CPUs rather than dedicated graphics hardware. Offline rendering is seen most frequently in animation and effects work, where visual complexity and photorealism are held to a much higher standard. Since there is no unpredictability as to what will appear in each frame, large studios have been known to dedicate up to 90 hours of render time to a single frame.

Photorealism: Because offline rendering occurs within an open-ended time frame, higher levels of photorealism can be achieved than with real-time rendering. Characters, environments, and their associated textures and lights are typically allowed higher polygon counts and 4K (or higher) resolution texture files.

Rendering Techniques

There are three major computational techniques used for most rendering. Each has its own set of advantages and disadvantages, making all three viable options in certain situations.

Scanline (or Rasterization): Scanline rendering is used when speed is a necessity, which makes it the technique of choice for real-time rendering and interactive graphics. Instead of rendering an image pixel by pixel, scanline renderers compute on a polygon-by-polygon basis. Scanline techniques used in conjunction with precomputed ("baked") lighting can achieve speeds of 60 frames per second or better on a high-end graphics card.

Raytracing: In raytracing, one or more rays of light are traced for every pixel in the image, from the camera to the nearest 3D object.
The light ray is then passed through a set number of "bounces," which can include reflection or refraction depending on the materials in the 3D scene. The color of each pixel is computed algorithmically based on the light ray's interaction with the objects in its traced path. Raytracing is capable of greater photorealism than scanline rendering but is dramatically slower.

Radiosity: Unlike raytracing, radiosity is calculated independently of the camera and is surface-oriented rather than pixel-by-pixel. The primary function of radiosity is to more accurately simulate surface color by accounting for indirect illumination (bounced diffuse light). Radiosity is typically characterized by soft, graduated shadows and color bleeding, where light from brightly colored objects "bleeds" onto nearby surfaces.

In practice, radiosity and raytracing are often used in conjunction with one another, combining the advantages of each system to achieve impressive levels of photorealism.

Rendering Software

Although rendering relies on incredibly sophisticated calculations, today's software provides easy-to-understand parameters that mean an artist never needs to deal with the underlying mathematics. A render engine is included with every major 3D software suite, and most of them include material and lighting packages that make it possible to achieve stunning levels of photorealism.

The Two Most Common Render Engines

Mental Ray: Packaged with Autodesk Maya. Mental Ray is incredibly versatile, relatively fast, and probably the most competent renderer for character images that need subsurface scattering. Mental Ray uses a combination of raytracing and "global illumination" (radiosity).

V-Ray: You typically see V-Ray used in conjunction with 3ds Max; together the pair is unrivaled for architectural visualization and environment rendering. The chief advantages of V-Ray over its competitor are its lighting tools and its extensive materials library for arch-viz.
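To make the raytracing idea described above concrete, here is a minimal, illustrative sketch in Python: one ray per pixel, a single hard-coded sphere, and simple diffuse (Lambertian) shading at the hit point. Every scene value here (sphere position, light direction, image size) is invented for the example; a real raytracer adds recursive bounces, materials, and many performance optimizations.

```python
import math

# Hypothetical scene: one sphere, one directional light, an 8x8 image.
WIDTH, HEIGHT = 8, 8
SPHERE_CENTER = (0.0, 0.0, -3.0)
SPHERE_RADIUS = 1.0
LIGHT_DIR = (0.577, 0.577, 0.577)  # unit vector pointing toward the light

def intersect_sphere(origin, direction):
    """Return the distance t to the nearest sphere hit, or None on a miss."""
    ox, oy, oz = (origin[i] - SPHERE_CENTER[i] for i in range(3))
    dx, dy, dz = direction
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - SPHERE_RADIUS ** 2
    disc = b * b - 4 * c  # quadratic 'a' is 1: direction is normalized
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def render():
    """Trace one ray per pixel; return a 2D list of grayscale values 0-255."""
    image = []
    for py in range(HEIGHT):
        row = []
        for px in range(WIDTH):
            # Map the pixel center onto an image plane at z = -1.
            x = (px + 0.5) / WIDTH * 2 - 1
            y = 1 - (py + 0.5) / HEIGHT * 2
            length = math.sqrt(x * x + y * y + 1)
            direction = (x / length, y / length, -1 / length)
            t = intersect_sphere((0.0, 0.0, 0.0), direction)
            if t is None:
                row.append(0)  # ray missed everything: background
            else:
                # Shade the hit point: surface normal dotted with light direction.
                hit = tuple(t * d for d in direction)
                n = tuple((hit[i] - SPHERE_CENTER[i]) / SPHERE_RADIUS
                          for i in range(3))
                diffuse = max(0.0, sum(n[i] * LIGHT_DIR[i] for i in range(3)))
                row.append(int(255 * diffuse))
        image.append(row)
    return image

img = render()
```

Even this toy version shows the defining cost of raytracing: the intersection test runs once per pixel per object, which is why real scenes need acceleration structures and long render times.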
Rendering is a technical subject, but it can be quite interesting when you start to take a deeper look at some of the common techniques.
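As a taste of that deeper look, the polygon-by-polygon approach of scanline/rasterization rendering can be sketched with edge functions, a common coverage test for deciding which pixels a triangle touches. The triangle coordinates and grid size below are arbitrary illustrative values, not drawn from any particular renderer.

```python
def edge(ax, ay, bx, by, px, py):
    """Signed-area test: positive when point (px, py) lies to the left of edge A->B."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    """Return the set of (x, y) pixels whose centers fall inside the triangle."""
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            # Inside when all three edge tests agree in sign
            # (either winding order is accepted).
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.add((x, y))
    return covered

# One made-up triangle on a 10x10 grid.
pixels = rasterize_triangle((1, 1), (8, 2), (4, 8), 10, 10)
```

Note the contrast with raytracing: here the outer loop walks polygons and asks "which pixels do I cover?", while a raytracer walks pixels and asks "which object do I hit?" — that inversion is the main reason rasterization is fast enough for real time.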