Abstract
The purpose of this project is to become familiar with a different rendering technique. Deferred rendering is widely used in real-time rendering, where performance is critical. Games are not the only application of deferred rendering, but they are great examples of its advantages. The main focus of this project is implementing a deferred shader, with the addition of an extension such as reflectance fields or light volumes.
Shading a scene with many lights is complicated and normally costly to compute. Deferred shading renders multiple lights in real time without sacrificing performance; games use it to defer lighting computation to screen-space operations. The technique separates the typical shading process into two discrete steps: the first renders the geometry and determines the characteristics of the surface material, and the second computes the lights and evaluates the interaction between surface and light.
A deferred shader stores surface information such as position, normals, etc. in a G-buffer during an initial pass. Then, for each light with a non-zero contribution, the light shader reads the G-buffer, evaluates that light's contribution, and accumulates it.
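To make the two passes concrete, here is a minimal GLSL sketch of what I have in mind; the variable names and attachment layout are placeholders, not final design decisions. The geometry pass writes the surface attributes to multiple render targets:

```glsl
#version 330 core
// Geometry pass fragment shader: fills the G-buffer.

in vec3 vWorldPos;   // interpolated from the vertex shader
in vec3 vNormal;
in vec2 vTexCoord;

uniform sampler2D uDiffuseTex;

// Position and normal attachments need a floating-point format
// (e.g. GL_RGBA16F), since their values fall outside [0, 1].
layout (location = 0) out vec4 gPosition;
layout (location = 1) out vec4 gNormal;
layout (location = 2) out vec4 gAlbedo;

void main() {
    gPosition = vec4(vWorldPos, 1.0);
    gNormal   = vec4(normalize(vNormal), 0.0);
    gAlbedo   = texture(uDiffuseTex, vTexCoord);
}
```

The lighting pass then runs once per light over a full-screen quad (or a light volume), with additive blending so the contributions accumulate:

```glsl
#version 330 core
// Lighting pass fragment shader: evaluates one light's contribution.

in vec2 vTexCoord;   // full-screen quad UVs

uniform sampler2D gPosition;
uniform sampler2D gNormal;
uniform sampler2D gAlbedo;
uniform vec3 uLightPos;
uniform vec3 uLightColor;

out vec4 fragColor;

void main() {
    vec3 pos    = texture(gPosition, vTexCoord).xyz;
    vec3 normal = normalize(texture(gNormal, vTexCoord).xyz);
    vec3 albedo = texture(gAlbedo, vTexCoord).rgb;

    vec3 toLight = uLightPos - pos;
    float ndotl  = max(dot(normal, normalize(toLight)), 0.0);
    float atten  = 1.0 / (1.0 + dot(toLight, toLight)); // simple falloff

    // Added to the framebuffer with additive blending (GL_ONE, GL_ONE).
    fragColor = vec4(albedo * uLightColor * ndotl * atten, 1.0);
}
```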
I plan to implement an extension to my renderer that takes advantage of reflectance fields or light volumes. These illumination basis functions re-light an object from a new set of light sources and will hopefully enhance the deferred renderer.
Plan for week of Jan. 24th
- Doing background reading
- Setting up the framework
- Integrating an OBJ Loader with my project
Looks like a good start! Which shader language are you using? GLSL has slightly more documentation on render targets and is better integrated with OpenGL, but it's not too hard to get things working in Cg either (you do have to figure out which profile to use, though).
One thing you should think about in advance is what your G-buffer is going to consist of. You need world position, normals, and textures; however, there are a lot of choices in how you store them. World position can be stored as xyz -> red, green, blue (what I did, and the easiest), but you'll probably get better precision if you reconstruct position from the depth buffer instead (see the sketch after this comment). There is some debate as to whether you only need two values for normals, but you probably wouldn't notice a difference either way. Also, these precision differences will only come up if you decide to render large scenes - you should think about whether that's a goal or whether you'd prefer to showcase your special effects. Finally, decide whether you are going to implement multiple textures, because it's not trivial (unless you decide to do only deferred lighting).
Feel free to email me with any questions.
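For reference, here is a rough GLSL sketch of the depth-reconstruction idea Ian mentions above. It recovers view-space position in the lighting pass instead of storing xyz in the G-buffer; uDepthTex and uInvProj (the inverse of the camera's projection matrix) are assumed names, not anything from the post:

```glsl
#version 330 core
// Reconstruct view-space position from the depth buffer.

in vec2 vTexCoord;   // full-screen quad UVs in [0, 1]

uniform sampler2D uDepthTex;   // depth attachment from the G-buffer pass
uniform mat4 uInvProj;         // inverse of the projection matrix

out vec4 fragColor;

void main() {
    float depth = texture(uDepthTex, vTexCoord).r;    // in [0, 1]
    vec3 ndc = vec3(vTexCoord, depth) * 2.0 - 1.0;    // back to [-1, 1]
    vec4 viewPos = uInvProj * vec4(ndc, 1.0);
    viewPos /= viewPos.w;                             // undo the perspective divide
    fragColor = vec4(viewPos.xyz, 1.0);               // position recovered
}
```

The saving is that depth is there anyway, so the three position channels can be dropped from the G-buffer.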
Make sure you know whether a texture format is normalized or unnormalized. A normalized texture format only stores components in the range [0, 1]. So if your first pass writes 7, for example, and it turns out to be 1 in the second pass, you know what the problem is. See the OpenGL spec for more information on texture formats.
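To make the pitfall concrete, here is a minimal GLSL sketch; the formats in the comments are just examples of one normalized and one unnormalized choice:

```glsl
#version 330 core
// Writing out-of-range values to a color attachment.

out vec4 gPosition;

void main() {
    // Writing a world position of (7.0, -3.0, 42.0):
    //   GL_RGBA8   (normalized)   -> stored clamped as (1.0, 0.0, 1.0, ...)
    //   GL_RGBA32F (unnormalized) -> stored exactly as (7.0, -3.0, 42.0, ...)
    gPosition = vec4(7.0, -3.0, 42.0, 1.0);
}
```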
Also, to follow up on Ian's comments on storing normals, I recommend a quick read of Aras's Compact Normal Storage for small G-Buffers.
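For example, one of the simpler schemes from that article stores only the normal's x and y and reconstructs z. A rough GLSL sketch (assuming view-space normals, so z can be taken as non-negative; see the article for the caveats and better encodings):

```glsl
// Pack a normalized view-space normal into two channels of a
// normalized texture, and unpack it in the lighting pass.
vec2 encodeNormal(vec3 n) {
    return n.xy * 0.5 + 0.5;                      // [-1, 1] -> [0, 1]
}

vec3 decodeNormal(vec2 enc) {
    vec3 n;
    n.xy = enc * 2.0 - 1.0;                       // [0, 1] -> [-1, 1]
    n.z  = sqrt(max(1.0 - dot(n.xy, n.xy), 0.0)); // reconstruct z
    return n;
}
```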