**Renderer Design** [A Graphics Codex Programming Project](../projects/index.html) ![Figure [teaser]: Crepuscular rays in a participating medium rendered by ray marching. San Miguel scene by Guillermo M. Leal Llaguno.](godray.jpg border=1) Introduction ============================================================================================ There are many possible rendering algorithms. The key properties are run time, image quality, and implementation elegance. You can usually achieve good values for only _two_ of those. The importance-sampling path tracer from the [Paths](../paths/index.html) project achieved very good image quality, but increased its convergence rate by making the implementation more complex, to about 100 lines of code and some tricky probability math. A pure path tracer would produce the same images from about 15 lines of code, but would also run much more slowly. Part of the beauty of path tracing is that once you have a path tracer, it only takes a few more lines of code to add most effects. Those lines are often tricky to design, however! In this design project, you'll extend the path tracer to make it faster, have higher image quality, or to replace a library routine with your own code. The result will be your increased mastery and confidence at rendering. Prerequisites -------------------------------------------------------------------------------------------- Complete the [Paths](../paths/index.html) project before this one. 
Educational Goals -------------------------------------------------------------------------------------------- - Design a photorealistic-rendering program yourself - Increase your skill at writing a specification - Create objectively-verifiable feature milestones - Limit the scope of the project - Revise/renegotiate as you learn more about your problem domain - Master working with path tracing variations - Develop intuition balancing convergence time against image quality - Build a mental toolchest of algorithmic building blocks (e.g., importance sampling and interpolation) - Work from minimal guidance to documentation and structure - Manage time and workflow on a multi-week project - Learn to work with resources beyond the textbook, including research papers and technical reports Meta-Specification ============================================================================================ See the [Geometry Design Project](../geo-design/index.html?#meta-specification) for the meta-specification. It is the same for all Graphics Codex Design projects. Features ============================================================================================ Choose _one_ of these features to implement for your project. The difficulty increases farther down the list. The linked images generally show the phenomena, but not algorithms that I recommend for generating those phenomena for this specific project. The primary sources cited are valuable but should be considered along with textbooks, course notes, and blog posts. - ![©Vicarious Visions/Activision](skylanders.jpg width=180px) *Post-processed depth of field and motion blur*. The [`G3D::DepthOfField`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_depth_of_field.html) and [`G3D::MotionBlur`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_motion_blur.html) classes can take an instantaneous, pin-hole image augmented with depth and motion buffers and simulate a lens camera and exposure.
Look at the `starter` project for examples of invoking those classes and the end of the GLSL Ray March [shader.pix](http://g3d.cs.williams.edu/websvn/filedetails.php?repname=g3d&path=%2FG3D10%2Fsamples%2FrayMarch%2Fshader.pix) shader for examples of producing the depth buffer values. The tricky part is producing the screen-space motion vector buffer. [`G3D::TriTree`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_tri_tree_base.html) can help you, and the [G-Buffer shader](http://g3d.cs.williams.edu/websvn/filedetails.php?repname=g3d&path=%2FG3D10%2Fdata-files%2Fshader%2FUniversalSurface%2FUniversalSurface_gbuffer.pix) contains some code for projecting 3D motion to screen space. The result quality has some limitations in the presence of specular reflection and refraction, compared to true depth of field and motion blur, but it will render extremely fast. These classes use GPU post-processing methods, e.g., [#Bukowski2013] [#Guertin2014] [#Jimenez2014]. Implementing those methods themselves would be a good final project, but is beyond the scope of this design lab. - ![© [RandomControl](http://www.randomcontrol.com/)](bsdf.jpg width=180px) *Scattering function* implementation: write your own `scatter`, `getImpulses`, and `finiteScatteringDensity` functions that extract the coefficients from a [`G3D::UniversalSurfel`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_universal_surfel.html) for your own BSDF. The [Disney BRDF](https://github.com/wdas/brdf/blob/master/src/brdfs/disney.brdf) [#Burley2012] is one good choice, combining ideas from the Blinn-Phong [#Blinn1977], Cook-Torrance [#Cook1981], Trowbridge-Reitz [#Trowbridge1975], and GGX [#Walter2007] BSDFs. This is a case where modern SIGGRAPH course notes may be more helpful than primary sources. Do not attempt to make your own `G3D::Surfel` subclass; just replace the calls to `surfel->scatter(...)` in your code with `myScatter(surfel, ...)` and so on.
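As a concrete starting point for a hand-written scatter function, here is a minimal sketch of cosine-weighted hemisphere sampling, the core of the diffuse lobe in a Lambertian `myScatter`. This is illustrative only: the `Vec3` struct and `uniformRandom` helper are simplified stand-ins for G3D's `Vector3` and `Random` classes, not the real API.

```cpp
#include <cmath>
#include <cstdlib>

// Minimal stand-in for a 3D vector; G3D's Vector3 provides the real thing.
struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& v) const { return {x + v.x, y + v.y, z + v.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    float dot(const Vec3& v) const { return x * v.x + y * v.y + z * v.z; }
};

static float uniformRandom() { return rand() / (RAND_MAX + 1.0f); }

// Sample a direction about unit normal n with pdf cos(theta)/pi.
// Because that pdf cancels the cosine factor in the reflection integral,
// the path weight for this lobe is just the Lambertian reflectivity.
Vec3 cosHemisphereSample(const Vec3& n) {
    const float u1 = uniformRandom(), u2 = uniformRandom();
    const float r = std::sqrt(u1), phi = 2.0f * 3.14159265f * u2;

    // Build an orthonormal basis (tangent, bitangent, n)
    const Vec3 a = (std::fabs(n.x) > 0.9f) ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 tangent = {a.y * n.z - a.z * n.y, a.z * n.x - a.x * n.z, a.x * n.y - a.y * n.x};
    tangent = tangent * (1.0f / std::sqrt(tangent.dot(tangent)));
    const Vec3 bitangent = {n.y * tangent.z - n.z * tangent.y,
                            n.z * tangent.x - n.x * tangent.z,
                            n.x * tangent.y - n.y * tangent.x};

    // Map a uniform disk sample up onto the hemisphere (Malley's method)
    const float x = r * std::cos(phi), y = r * std::sin(phi);
    const float z = std::sqrt(1.0f - u1);
    return tangent * x + bitangent * y + n * z;
}
```

The same skeleton extends to glossy lobes by replacing the disk mapping with the chosen microfacet distribution's sampling routine.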
- ![© [Hou et al.](http://kunzhou.net/2010/mptracing.pdf)](depthoffield.jpg width=180px) *In-camera depth of field and motion blur* take a while to converge, but are very robust and elegant. For depth of field, create primary rays that pass through an aperture at random points instead of through a pinhole. Motion blur is trickier, because it changes how you work with the spatial data structure. Two options are simply tracing 128 or so images at slightly different times and combining them, or building a spatial data structure over motion-extruded triangles and varying the intersection time per ray. - [*Striated sampling*](http://cg.informatik.uni-freiburg.de/course_notes/graphics2_04_sampling.pdf) strategies for primary rays decrease aliasing at low sample counts. Combined with exhaustive direct illumination, these increase convergence for primary rays and shadows. - *Post-processed denoising* via the [Nonlocal Means](http://www.ipol.im/pub/art/2011/bcm_nlm/) [#Buades2005] algorithm. Run this on the radiance image before other post-processing such as gamma encoding and screen-space antialiasing. The state of the art methods are both slower and more sophisticated [#Bitterli2016], so don't attempt them for this project. - *2x speedup for fixed image quality* for your path tracer by implementing at least all of the following optimizations: compact arrays to remove null surfels and those with very small modulation values; use `G3D::Image::bilinearIncrement` to blend each path's contribution into adjacent pixels based on sub-pixel position (requires tracking the total weight at each pixel for normalization); use full scattered radiance instead of biradiance for light importance sampling; and mark shadow rays to lights with zero contribution as degenerate, then cull and compact them before tracing.
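The in-camera depth of field idea above (primary rays through random aperture points) can be sketched with a thin-lens model. This is a minimal illustration under simplifying assumptions: the camera sits at the origin looking down -z, `pinholeDir` is the normalized pinhole ray direction for the pixel, and `focusDistance` and `lensRadius` are hypothetical parameters you would read from your own camera model.

```cpp
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };

static float uniformRandom() { return rand() / (RAND_MAX + 1.0f); }

// Replace the pinhole origin with a random point on a disk-shaped aperture,
// and aim the ray at the point where the original pinhole ray crosses the
// plane of focus. Points on that plane stay sharp; everything else blurs.
void thinLensRay(const Vec3& pinholeDir, float focusDistance, float lensRadius,
                 Vec3& origin, Vec3& direction) {
    // Rejection-sample the unit disk, then scale by the aperture radius
    float dx, dy;
    do {
        dx = 2.0f * uniformRandom() - 1.0f;
        dy = 2.0f * uniformRandom() - 1.0f;
    } while (dx * dx + dy * dy > 1.0f);
    origin = {dx * lensRadius, dy * lensRadius, 0.0f};

    // Where the pinhole ray pierces the plane of focus (camera looks down -z)
    const float t = focusDistance / -pinholeDir.z;
    const Vec3 focus = {pinholeDir.x * t, pinholeDir.y * t, pinholeDir.z * t};

    Vec3 d = {focus.x - origin.x, focus.y - origin.y, focus.z - origin.z};
    const float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    direction = {d.x / len, d.y / len, d.z / len};
}
```

Setting `lensRadius` to zero recovers the pinhole camera exactly, which makes a convenient regression test for the feature.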
- ![© [Jarosz et al.](http://www.ppsloan.org/publications/ppb.pdf)](photonbeams.jpg width=180px) A *participating medium* such as fog can be rendered by ray marching: instead of instantaneously transporting rays through space to surface intersections, regularly sample points along the ray and compute some direct illumination and scattering there. For simplicity, let your medium be uniform density and fill all of space. - *Optimize parallel path tracing* by eliminating degenerate rays from recursive tracing. To do this, you have to track which array position corresponds to which pixel rather than assuming a full screen of rays, and compact the ray buffers. Be careful to compact on a single thread, rather than appending from multiple threads and creating either a slow atomic synchronization point or a race condition. This should substantially increase performance on scenes that are open to the sky, and on most scenes if you also cull rays with low modulation values, because as you progress deeper into the ray tree you will have fewer rays. - ![© Benedikt Bitterli](spectrum.jpg width=180px) *Frequency-varying refraction*. Show a prism, a glass ball's caustics and refraction, and complicated situations of overlapping lights. There are three main ways to approach this. 1. Simulate many (e.g., 12) independent color channels simultaneously instead of only three. 2. Maintain only RGB, but make refraction sample a random frequency, compute the correct index of refraction, and then scatter a ray with the appropriate color at that frequency. 3. Maintain RGB in the framebuffer, but shoot monochromatic rays at random frequencies. Whichever approach you choose, you should be able to implement it solely by changing a few lines in the `UniversalSurfel::scatter` and `getImpulses` methods. Figuring those out and doing the requisite testing is not trivial, though!
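For the participating-medium feature above, the ray-marching loop over a uniform medium reduces to accumulating transmittance and in-scattering step by step. The following is a sketch under stated assumptions: `sigmaT` is a constant extinction coefficient, and `inscatterPerUnit` is a hypothetical constant standing in for the direct-illumination and phase-function evaluation a real implementation would perform at each sample point.

```cpp
#include <cmath>

// March a ray of length pathLength through a homogeneous medium that fills
// all of space. Returns the transmittance to the far end (multiply the
// surface radiance at the hit point by this) and accumulates the radiance
// scattered toward the eye along the way into `inscattered`.
float rayMarchTransmittance(float pathLength, float sigmaT, int steps,
                            float inscatterPerUnit, float& inscattered) {
    const float dt = pathLength / steps;
    float T = 1.0f;      // transmittance accumulated so far
    inscattered = 0.0f;
    for (int i = 0; i < steps; ++i) {
        // Add this segment's in-scattering, attenuated by the transmittance
        // back to the eye, then attenuate through the segment itself
        inscattered += T * inscatterPerUnit * dt;
        T *= std::exp(-sigmaT * dt);
    }
    return T;
}
```

A useful sanity check: the returned transmittance should match the analytic Beer-Lambert value exp(-sigmaT * pathLength) regardless of step count, since the per-step factors multiply out exactly.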
- *Real-time ray tracing* by switching the Paths structure to just do Whitted tracing [#Whitted1980] with a constant-time ambient approximation. This will be so fast that you should make it display on screen with an interactive camera. Look at the G3D [CPU Real Time Ray Trace](http://g3d.cs.williams.edu/websvn/listing.php?repname=g3d&path=%2FG3D10%2Fsamples%2FcpuRealTimeRayTrace%2F&#af1d38f35c75b813d8936fde8fec53c42) sample program (yours should run about 10x faster but produce identical images; that sample was written to show the _easy_ way to write a Whitted tracer, not the fast way). Ensure that you don't rebuild the scene each frame, because that would severely limit performance. Perform direct illumination against all sources and recursively cast all impulses, not just one. Only cast one ray per pixel. - ![© Benedikt Bitterli](2d.jpg width=180px) A *2D path tracer* that visualizes paths (like a participating medium, but losing no energy) is a great way to demonstrate interesting and complex optical phenomena. Mimic the excellent [Tantalum](https://benedikt-bitterli.me/tantalum/tantalum.html). - *Proper sampling of area lights*: find all emissive triangles, and sample random points on them based on their total emitted power for direct illumination. This is just like your importance-sampling implementation of point lights, except that now you have area lights as well (or instead). You should see nice soft shadows. If you put an emissive sky dome over the entire scene, really nice ambient occlusion effects will emerge. - ![© [Darren Engwirda](https://github.com/dengwirda/aabb-tree)](bvh.jpg width=180px) A *bounding volume hierarchy*, hash grid, or other spatial data structure implementation presents several interesting design challenges. Create your own [`G3D::TriTreeBase`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_tri_tree_base.html) subclass.
Expect that yours will build and detect triangle intersections about 100x slower than the ones provided by G3D. Don't worry about that constant performance overhead; just demonstrate that your implementation provides approximately logarithmic run time in the number of triangles. - *Shadow maps* [#Williams1978] for ray tracing are an optimization in which you precompute a latitude-longitude map of the distance from each light to the first intersection in each direction. Then, when rendering direct illumination, do not cast visibility test rays. Instead, just look up the correct distance from the light source and see if it is approximately equal to the distance to the surface being shaded. If it is, apply direct lighting. Otherwise, the surface is shadowed. For best results, slightly jitter shadow ray directions when reading back to avoid shadow aliasing, and bias the test to avoid self-shadow acne. Gallery ============================================================================================== For more inspiration and some impressive extensions to the base specification, look at some examples of work created by my students for this project: - [Williams College CS371 2016](https://www.cs.williams.edu/~morgan/cs371-f16/gallery/4-midterm/realtime/report.md.html) - [Williams College CS371 2014](https://www.cs.williams.edu/~morgan/cs371-f14/gallery/5-Midterm/index.html) - [Williams College CS371 2012](https://www.cs.williams.edu/~morgan/cs371-f12/gallery/5-Midterm/index.html) - [Williams College CS371 2010](https://www.cs.williams.edu/~morgan/cs371-f10/gallery/5-Midterm/index.html) (#) Bibliography [#Blinn1977]: James F. Blinn, [Models of light reflection for computer synthesized pictures](https://design.osu.edu/carlson/history/PDFs/blinn-light.pdf), SIGGRAPH 1977 [#Bitterli2016]: Benedikt Bitterli, Fabrice Rousselle, Bochang Moon, José A.
Iglesias-Guitián, David Adler, Kenny Mitchell, Wojciech Jarosz, and Jan Novák, [Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings](https://benedikt-bitterli.me/nfor/), EGSR 2016 [#Buades2005]: Antoni Buades, Bartomeu Coll, and Jean-Michel Morel, [A non-local algorithm for image denoising](http://www.iro.umontreal.ca/~mignotte/IFT6150/Articles/Buades-NonLocal.pdf), CVPR 2(7):60-65, 2005 [#Bukowski2013]: Mike Bukowski, Padraic Hennessy, Brian Osman, and Morgan McGuire, [The Skylanders SWAP Force Depth-of-Field Shader](http://graphics.cs.williams.edu/papers/DepthOfFieldGPUPro2013/), GPU Pro 4, 2013 [#Burley2012]: Brent Burley, [Physically-Based Shading at Disney](https://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf), SIGGRAPH 2012 Presentation [#Cook1981]: Robert L. Cook and Kenneth E. Torrance, [A Reflectance Model for Computer Graphics](http://www.cs.columbia.edu/~belhumeur/courses/appearance/cook-torrance.pdf), SIGGRAPH 1981 [#Guertin2014]: Jean-Philippe Guertin, Morgan McGuire, and Derek Nowrouzezahrai, [A Fast and Stable Feature-Aware Motion Blur Filter](http://graphics.cs.williams.edu/papers/MotionBlurHPG14/), HPG 2014 [#Jimenez2014]: Jorge Jimenez, [Next Generation Post Processing in Call of Duty: Advanced Warfare](http://www.iryoku.com/downloads/Next-Generation-Post-Processing-in-Call-of-Duty-Advanced-Warfare-v18.pptx), SIGGRAPH 2014 Course slides [#Trowbridge1975]: T. S. Trowbridge and K. P. Reitz, [Average Irregularity Representation of a Rough Surface for Ray Reflection](https://www.osapublishing.org/josa/abstract.cfm?uri=josa-65-5-531), Journal of the Optical Society of America 65(5):531-536, OSA, May 1975 [#Walter2007]: Bruce Walter, Stephen R. Marschner, Hongsong Li, and Kenneth E.
Torrance, [Microfacet Models for Refraction through Rough Surfaces](https://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.pdf), EGSR 2007 [#Williams1978]: Lance Williams, [Casting curved shadows on curved surfaces](https://people.eecs.berkeley.edu/~ravir/6160/papers/p270-williams.pdf), SIGGRAPH 1978 [#Whitted1980]: Turner Whitted, [An improved illumination model for shaded display](http://dl.acm.org/citation.cfm?id=358882), pages 343-349, _Communications of the ACM_, June 1980