**Geometry Design** [A Graphics Codex Programming Project](../projects/index.html) ![Figure [teaser]: A voxel world (created in Minecraft) converted to polygons for rendering as a mesh in G3D](teaser.jpg border=1) Introduction ============================================================================================ Procedural synthesis and modification of mesh geometry can create efficient and detailed 3D models without requiring an artist to hand-tune each vertex. In this project, you'll extend the ideas from the [Meshes](../meshes/index.html) and [Cubes](../cubes/index.html) projects to build a new geometry modeling feature of your own choice. This is a Design project. You will write the specification, design the program, and then create a report explaining your work and demonstrating its correctness and features. Prerequisites -------------------------------------------------------------------------------------------- - Complete the [Meshes](../meshes/index.html) project - 15 working hours or three calendar weeks of experience in C++ and Visual Studio - Read in the [Graphics Codex](http://graphicscodex.com): - Materials If the Graphics Codex projects are your first experience with C++ and Visual Studio, then I recommend completing the [Rays](../rays/index.html) project before this one. That way, you'll be comfortable with the tools when you begin to design your program.
Educational Goals -------------------------------------------------------------------------------------------- - Design a geometry processing program yourself - Learn to write a specification - Create objectively-verifiable feature milestones - Limit the scope of the project - Revise/renegotiate as you learn more about your problem domain - Master working with mesh data structures - Develop intuition for mesh algorithms - Build a mental toolchest of algorithmic building blocks (e.g., modular arithmetic, hash tables, reverse iteration) - Work from minimal guidance to documentation and structure - Manage time and workflow on a multi-week project - Learn to work with resources beyond the textbook, including research papers and technical reports Rules -------------------------------------------------------------------------------------------- You may use any code that you wish, in source or library form. Cite any resources used outside of G3D. If your project duplicates a feature from G3D or some other program that you looked at, you may still look at that implementation code but must clearly report that you looked at it and to what extent your code is original vs. copied. Features ============================================================================================ Choose _one_ of these features to implement for your project: - ![Minecraft image © Mojang/Microsoft](minecraft.jpg width=180px) *Voxel* to boundary mesh exporter. This is what Minecraft does when the mesh changes. Construct a [naively-culled polygonal mesh](https://0fps.net/2012/06/30/meshing-in-a-minecraft-game/) from voxels by placing a quadrilateral face wherever a solid voxel is adjacent to air. Stretch goal: produce texture coordinates so that you can apply materials. See [Mineways](http://www.realtimerendering.com/erich/minecraft/public/mineways/) for examples of doing this on Minecraft meshes (including source code).
Take input as MagicaVoxel, Binvox, or Schematic format and output to OBJ format. Don't attempt any optimizations to reduce polygon count. - *Quad conversion* of an existing triangle mesh into a quadrilateral mesh. Try identifying pairs of triangles that share topological vertices and together form a planar, convex quadrilateral. Output a mesh containing all quadrilaterals, some of which are degenerate. Process an array of [`G3D::Tri`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_tri.html) into an array of your own `Quad` data structure. You can use [`G3D::Surface::getTris`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_surface.html#ac7ede58156ec2b8886751b8d058e9394) to extract the tris from [`G3D::Surface`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_surface.html)s. Obviously, your program's running time should be much better than $O(n^2)$ in the number of triangles or it will take too long to run on large models. I'd approach this with a greedy algorithm that iterates over edges; section 6 of Tarini et al.'s paper [#Tarini2010] describes a more sophisticated algorithm, and section 3.1 of Bommes et al.'s paper surveys a few more [#Bommes2012]. - ![© [Mitasova et al.](http://www4.ncsu.edu/~hmitaso/gmslab/irwin/irwin1.html)](erosion.jpg width=180px) *Erosion* simulation in a heightfield image using a simple cellular automata model [#Mei2007] for transporting soil between locations. Use your solution to the [Meshes](../meshes/index.html) project to generate "before" and "after" images of the 3D mesh. For a stretch goal, try generating the initial image by the recursive midpoint-subdivision algorithm instead of painting it by hand. - *File loader* for a subset of some geometry format such as [Mitsuba](http://www.mitsuba-renderer.org/releases/current/documentation_lowres.pdf) or [OBJ](http://paulbourke.net/dataformats/obj/) that includes texture coordinates, materials, and normals.
Load the format and construct a `G3D::ArticulatedModel` in memory from it. (It is OK to implement a format that G3D already supports...writing the parser will teach you a lot.) - ![Panda3D © CMU](panda3d.jpg width=180px) *Instancing* support for a ray tracer (requires the [Rays](../rays/index.html) project). The Rays project turned each [`G3D::Entity`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_entity.html) into its own set of triangles, even though you can have a scene in which many entities share a single model's geometry. Extend it to instead create one [`G3D::TriTree`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/namespace_g3_d.html#a31cf5746e822e5265b8f40c5de0847d4) for each unique (non-animated) [`G3D::Model`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_model.html) and store a coordinate frame for that `TriTree`. Trace rays against bounding boxes for these trees, and then transform the ray into the appropriate reference frame and trace the actual `TriTree` when the ray is in the bounding box. This will allow you to render scenes with billions of triangles that would never fit in memory. (The inset image shows pine trees that are just separate instances of a single model.) - *Terrain coloring* for heightfield terrain based on elevation and slope, with added detail features such as rocks and trees. You can assign colors via vertex colors or texture coordinates and a texture. Features can be added by generating a scene file that places additional entities. I recommend doing this as an inefficient, flat-shaded mesh because it is easier to make it look good and consistent with low-polygon trees from [TurboSquid](http://turbosquid.com) or [OpenGameArt](https://opengameart.org) than it is to make a photorealistic texture. - ![© ACM](kobbelt.jpg width=180px) *Subdivision* on an existing triangle mesh for better silhouettes. Create additional triangles and then push them out along the curve implied by the vertex normals.
The simplest approach is to replace every triangle with four. Once you get that working, try Kobbelt's more sophisticated √3-Subdivision method [#Kobbelt2000]. - *Level of detail* reduction by the [half-edge collapse](http://www.utdallas.edu/~praba/6v81/3dModeling.pdf) operator can reduce the polygon count for a mesh while preserving its overall shape. This makes rendering faster and can reduce storage space as well. Be sure to avoid ever flipping a triangle in the process. Experiment with different strategies for choosing which edge to greedily collapse at each iteration. - ![© Microsoft](pipes.jpg width=180px) *Pipes*: implement a random 3D labyrinth- or maze-generator that produces results that look like the classic [3D pipes screensaver](https://www.youtube.com/watch?v=MKqrLGFoK9E) when viewed from the outside. This is a depth-first random walk; a minor challenge is avoiding collisions and a major challenge is wrapping your cylinder-generating code around corners smoothly. Don't attempt to animate the pipes--just grow them and then save them to a file for viewing. If you'd like more symmetry in your shapes, you could also generate a 3D [space filling curve](https://www.youtube.com/watch?v=x-DgL49CFlM), which is not particularly more difficult. - *Realistic trees* are just recursive structures made of cylinders that shrink a bit with each branching. The formal name for this is a Lindenmayer System ("L-system"), but "recursive cylinder" is everything that you need to know, except for the leaves, which are best modeled by quadrilaterals with alpha-masked textures (ideally of a small branch with a few leaves) on them. For best results, apply both bump maps and texture maps to the tree to simulate bark, and try to flare the branches where they meet the trunk to avoid a sharp, angular joint. - ![© [Andy Gainey](https://experilous.com/1/blog/post/procedural-planet-generation)](hexplanet.jpg width=180px) *Hex-grid planet* generation.
Tessellate the sphere using mostly hexagons (you will need a few pentagons, which can be hidden in oceans), and then color them based on a latitude-longitude image of the Earth. Output an OBJ file. You can either use vertex colors or group all hexes of each category (rock, grass, sand, water, snow, etc.) into a single "geom" within the OBJ file and assign a material to it. Using a material makes it easier to create a thin black rim around each hexagon to emphasize the grid. As a stretch goal, set the elevation relative to the minimum by loading elevations (radial heights) from a latitude-longitude heightmap image and then extruding each polygon into a prism. [This](https://www.blenderguru.com/tutorials/create-a-realistic-earth/) Blender tutorial has links at the bottom to NASA images for the Earth. The [Earthgen](https://github.com/vraid/earthgen-old) project has some inspiring images of the final result. - *Voxelize* a 3D mesh, save it in [MagicaVoxel format](https://ephtracy.github.io/index.html?page=mv_vox_format), and then render it with that program. There are many voxelization methods, including ray stabbing [#Nooruddin2003], z-buffer carving [#Karabassi1999], thin voxel rasterization [#Crassin2012], and thick voxel conversion [#Eisemann2008]. - ![Courtesy [Emily Whiting](http://web.cs.dartmouth.edu/people/emily-whiting)](dragon.jpg width=180px) *2D Contour Slicing*: Intersect a closed mesh with a series of parallel planes to produce 2D contours that can be cut from cardboard or wood by a laser cutter [to reconstruct the 3D model](https://www.google.com/search?q=laser+cutter+2d+slicing&espv=2&tbm=isch&tbo=u&source=univ&sa=X&ved=0ahUKEwiZlbyi2cXPAhVI7iYKHb7bAd0QsAQIKw&biw=1003&bih=776). Pack the slices to reduce material usage and augment them with small internal holes or external tabs to aid in alignment during assembly.
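To give a sense of the scale involved, the 1-to-4 split at the heart of the *Subdivision* option can be sketched directly on an indexed triangle list. This sketch is a minimal illustration, not a G3D entry point: the `Vec3` and `Tri` structs are hypothetical stand-ins for whatever mesh representation you use, and it only splits triangles, leaving the push along the normal-implied curve to you.

```cpp
#include <algorithm>
#include <map>
#include <utility>
#include <vector>

// Hypothetical stand-ins for your mesh types (G3D has its own).
struct Vec3 { float x, y, z; };
struct Tri  { int i[3]; };

// One 1-to-4 subdivision pass: replace each triangle with four by splitting
// every edge at its midpoint. A map keyed on the sorted endpoint index pair
// ensures an edge shared by two triangles gets one shared midpoint vertex.
void subdivide(std::vector<Vec3>& verts, std::vector<Tri>& tris) {
    std::map<std::pair<int, int>, int> midpointOf;

    auto midpoint = [&](int a, int b) -> int {
        const std::pair<int, int> key(std::min(a, b), std::max(a, b));
        const auto it = midpointOf.find(key);
        if (it != midpointOf.end()) { return it->second; }
        // Copy by value: push_back below may reallocate the vertex array
        const Vec3 p = verts[a];
        const Vec3 q = verts[b];
        verts.push_back({(p.x + q.x) * 0.5f, (p.y + q.y) * 0.5f, (p.z + q.z) * 0.5f});
        const int index = int(verts.size()) - 1;
        midpointOf[key] = index;
        return index;
    };

    std::vector<Tri> result;
    result.reserve(tris.size() * 4);
    for (const Tri& t : tris) {
        const int m01 = midpoint(t.i[0], t.i[1]);
        const int m12 = midpoint(t.i[1], t.i[2]);
        const int m20 = midpoint(t.i[2], t.i[0]);
        // Three corner triangles plus the central one, all with the
        // same winding as the original
        result.push_back({{t.i[0], m01, m20}});
        result.push_back({{t.i[1], m12, m01}});
        result.push_back({{t.i[2], m20, m12}});
        result.push_back({{m01, m12, m20}});
    }
    tris.swap(result);
}
```

A real implementation would follow this pass by displacing the new vertices along the curve implied by the vertex normals; without that step, the surface stays flat and only the triangle count changes.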
Meta-Specification ============================================================================================ Part of the exercise is writing your own specification. This section describes the properties that your specification must have and the reporting requirements. The structure of your report should resemble the concatenation of a Graphics Codex project description and a typical report for one of those projects. The report must contain the following: 1. *Title* describing the feature for your project, _not_ "Geometry Design". 2. Evocative *teaser image* with explanatory one-sentence caption. 3. *Introduction* to your core feature in fewer than 200 words. 4. *Specification* as a numbered list of objectively-verifiable elements of your program, report, and files. This should be less than one page. Write the specification first and then revise it frequently. Model this after the other Graphics Codex project specifications, with: - Features of the program - Specific entry points (e.g., "method `App::makeCylinder(float radius, float height)`") - Output files - Program evaluation metrics (e.g., "a graph of time vs. triangle count") - Correctness images - Quality images - Evocative result 5. *Topic overview*/tutorial on your topic, about one printed page long. - Include many images, diagrams, and equations - Include citations to external works (hyperlinks are good, but put them in the citations, so that your document assigns proper authorial credit) 6. *Design* of your program (1-2 pages). - Call/control or data flow graph for major pieces of your software architecture (prefer Markdeep diagrams, but embed graphviz or an image produced in an external tool if necessary). - Table listing the major classes or functions (hyperlink to Doxygen documentation) with one-sentence descriptions. - Prose description of the design, and _why_ it is good. 
Don't tell a story of how you arrived at the design (that's what the journal and change log are for), but do motivate why these choices are preferable to obvious alternatives. 7. *Correctness Results* demonstrating that the program is working as intended (or not), as dictated by your specification. These will probably be on very simple cases that can be hand-verified or are self-evidently correct. Explain why these are good and nearly-comprehensive tests. The results are frequently images in graphics projects, but may also be other forms of data. 8. *Quality Results* demonstrating your program applied to complex or challenging inputs. If images, these can be retouched or composited so long as the modifications are described. Evaluate each result in text, explaining what it demonstrates about your program's correctness and features (or, incorrectness and limitations for a negative result). 9. *Evocative Result*: one of your quality results should serve as the teaser image and provide a visual statement of why your project is interesting. 10. *Self Evaluation*. Give yourself a letter grade (A through F) for the project and a text evaluation of each of the following. In doing so, you may only consider the project _as you presented it_ in the report, journal, and code. The entire evaluation must contain fewer than 200 words. - How good is your specification? Is it scoped appropriately to the intended project scale? Is it relatively unambiguous? Is it concise and understandable? - What did you actually learn (vs. the educational goals of the project)? Keep in mind that the development skills are as important a part of your education as the individual graphics algorithms. - How well did you manage your workflow? Don't tell the story of your project (again, that's what the journal is for). Respond to that story: what was good about it, what do you need to change for your next project, and how will you change it?
Consider how you adapted the specification and schedule over time. - Evaluate your code (design and implementation) quality. Consider overall structure, documentation, and code clarity and performance. - Evaluate your report quality. How easy is it to understand the project from the report? How professional does it look? Is it too brief or too verbose? 11. *Schedule* of dates at which a specific piece of functionality or the report will be complete, and who is responsible for completing it if working on a team. Write this first even though it appears at the end of the report. 12. *Change log* recording changes to the specification or program design. - As a numbered list in chronological order. - Each entry should be a single sentence, ending with a team member name and date. - Boldface any key word that should draw the eye when skimming. All results should fit on one page. Where there are many images, save space using tables of thumbnails with no more than five to a row. (Markdeep automatically creates a table when you put multiple images on a single line in a file and specify their `width`s.) The journal and change log should demonstrate not only that changes occurred but also motivate why they were made. Do not report time spent on this project. Time reporting on the regular projects is one way of measuring workflow efficiency. For a design project, the time will depend heavily on the feature that you've chosen to implement and the number of people you're working with (if any!). So, time is not a good workflow metric for a design project. Advice ============================================================================================ Workflow -------------------------------------------------------------------------------------------- Your report and code will be a little more than twice as long as for [Meshes](../meshes/index.html). Expect to spend about _four_ times longer producing them.
The additional time will be (well) spent discovering a good specification, design, and methodology on your own instead of working from my specifications and advice, which guide you toward a pre-designed program structure on the regular projects. Expect to revise your specification and program design frequently. If you do not revise them, that is a sign that you are not responding to what you learn about the problem as you work on it. Most changes to the specification are for: - Narrowing the feature set (everyone takes on too much at first!) - Reducing ambiguity in the description of features. - Increasing rigor of testing by adding new correctness result requirements. Follow the same iterative design and implementation planning process from other projects for this one. On a long-term project, attempting to draft everything first is essential. It will indicate what aspects are too hard and need to be respecified or eliminated. Flat Shading -------------------------------------------------------------------------------------------- For several of these projects, you'll want the output to be "flat shaded" by creating a mesh that reveals its facets instead of concealing them by vertex normal interpolation. The normal, texture coordinate, color, and position of a vertex are all locked together in most rendering frameworks. This means that if two triangles meet at a single topological vertex, they must have the same normal at that vertex. The only way to create a separate normal for adjacent faces to reveal their different orientations, or a different texture coordinate in order to apply different materials, is to place two _different_ topological vertices at the same geometric position. Grids -------------------------------------------------------------------------------------------- When working with structures such as voxels or the pipes, where your program will need to query "is anything near this location?", the obvious data structure is a 3D array.
However, 3D arrays take up prohibitive amounts of space, and those models don't actually fill most of the volume that contains them. A common spatial data structure for solving this problem is the *hash grid*. A hash grid is a hash table ([`G3D::Table`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_table.html)) in which the keys are *integer* coordinates such as [`G3D::Point3int32`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/namespace_g3_d.html#a4e0f113c5abbd975bc1a39ba8510ad01), [`G3D::Point3int16`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/namespace_g3_d.html#af1cd0d6af844ae8339b9feee2d1dce53), or [`G3D::Point3uint8`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/namespace_g3_d.html#acc260e971beecc9cd047b6575eaed134). We have to use integer points for this data structure because the tiniest difference in the floating-point representation of a `Point3` will cause it to hash to a completely different value. For the pipes, you actually only need a *hash set* ([`G3D::Set< Point3int16 >`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_set.html)) to determine if anything is present in the volume that the pipe wants to grow into. You can also use a hash grid to quickly locate floating-point vertices that lie within integer-indexed grid cells (which need not be meter-scale; apply any scaling factor that you wish when converting between meters and indices). [`G3D::PointHashGrid`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_point_hash_grid.html) and [`G3D::FastPointHashGrid`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_fast_point_hash_grid.html) are designed for this application. The [`G3D::Welder`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_welder.html) internally uses a point hash grid to efficiently merge colocated vertices that have compatible normals and texture coordinates.
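For the pipes case, the hash-set occupancy test described above is only a few lines. This sketch uses `std::unordered_set` with a hand-packed 64-bit key in place of `G3D::Set< Point3int16 >`; the `cellKey` helper and `OccupancyGrid` class are hypothetical names, not G3D entry points, and the packing assumes each coordinate fits in 16 bits, the same constraint as `Point3int16`.

```cpp
#include <cstdint>
#include <unordered_set>

// Pack a signed 16-bit grid coordinate triple into one hashable key.
// Negative coordinates wrap through uint16_t, which is fine because the
// packing is a bijection over the 16-bit range.
inline uint64_t cellKey(int16_t x, int16_t y, int16_t z) {
    return (uint64_t(uint16_t(x)) << 32) |
           (uint64_t(uint16_t(y)) << 16) |
            uint64_t(uint16_t(z));
}

// Occupancy set for the pipe walk: claim a cell when a pipe grows into it,
// and query a neighbor cell before growing there.
class OccupancyGrid {
public:
    // Returns true if the cell was free and is now claimed.
    bool claim(int16_t x, int16_t y, int16_t z) {
        return m_cells.insert(cellKey(x, y, z)).second;
    }

    bool occupied(int16_t x, int16_t y, int16_t z) const {
        return m_cells.count(cellKey(x, y, z)) != 0;
    }

private:
    std::unordered_set<uint64_t> m_cells;
};
```

The depth-first random walk then reduces to "pick a random free neighbor, `claim` it, recurse"; memory use scales with the number of occupied cells rather than the volume of the bounding box.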
You can generate topologically-disconnected meshes and use the welder to produce optimized meshes. For some algorithms, that is actually the only reasonable way to produce a proper indexed triangle mesh. For other applications, it is relatively straightforward to generate the mesh directly in indexed triangle form and the run time cost of the welder is not justified. Runtime Model Generation --------------------------------------------------------------------------------------------- ![ ](g3d-proc.jpg width=180px) The G3D [proceduralGeometry](http://g3d.cs.williams.edu/websvn/listing.php?repname=g3d&path=%2FG3D10%2Fsamples%2FproceduralGeometry%2F&#abe8db5f4efedc225489d13f7edf22fff) sample program demonstrates how to create a model at runtime vs. saving it to a file and loading with [`G3D::ArticulatedModel`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_articulated_model.html). If you're working on a file loader, this is a good program to study. File Formats --------------------------------------------------------------------------------------------- In the [Meshes](../meshes/index.html) project, we used the OFF file format. It is easy to use...and that's about its only feature. To work with models that you find online, in G3D's data directory, and those created by tools like Maya and Blender, you need to upgrade to other formats. OBJ files are a very old format. While they lack many of the features of newer formats like FBX, they have the advantage that hundreds of tools can work with them and there are millions of OBJ files available. For most projects, you'll want to output to OBJ, and you'll be accepting other OBJ files as inputs. G3D can load a lot of 3D file formats. You can load many through ArticulatedModel, which converts them to a standardized format in memory and then performs several optimizations on the meshes. 
You can also use [`G3D::ParseOBJ`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_parse_o_b_j.html), [`G3D::ParsePLY`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_parse_p_l_y.html), and [`G3D::ParseVOX`](http://g3d.cs.williams.edu/g3d/G3D10/build/manual/class_g3_d_1_1_parse_v_o_x.html) to parse specific formats into an in-memory data structure similar to their original file structure. This preserves polygon meshes, for example. The [Open Asset Import Library](http://www.assimp.org/) (Assimp) distributed with G3D allows loading and saving of many other formats. If you export your meshes to STL format, then you can 3D print them! Gallery ============================================================================================== For more inspiration, look at some examples of work created by my students for this project: - [Williams College CS371 2016](https://www.cs.williams.edu/~morgan/cs371-f16/gallery/4-midterm/index.html) - [Williams College CS371 2014](https://www.cs.williams.edu/~morgan/cs371-f14/gallery/5-Midterm/index.html) - [Williams College CS371 2012](https://www.cs.williams.edu/~morgan/cs371-f12/gallery/5-Midterm/index.html) - [Williams College CS371 2010](https://www.cs.williams.edu/~morgan/cs371-f10/gallery/5-Midterm/index.html) (#) Bibliography [#Bommes2012]: David Bommes, Bruno Lévy, Nico Pietroni, Enrico Puppo, Claudio Silva, Marco Tarini, and Denis Zorin, [State of the Art in Quad Meshing](http://alice.loria.fr/publications/papers/2012/QuadStar_EG/STAR_quad_meshing.pdf), Eurographics STARS 2012 [#Crassin2012]: Cyril Crassin and Simon Green, [Octree-Based Sparse Voxelization Using The GPU Hardware Rasterizer](https://research.nvidia.com/publication/octree-based-sparse-voxelization-using-gpu-hardware-rasterizer), OpenGL Insights, 2012 [#Eisemann2008]: Elmar Eisemann and Xavier Décoret, [Single-pass GPU solid voxelization for real-time applications](http://dl.acm.org/citation.cfm?id=1375728), GI 2008
[#Karabassi1999]: Evaggelia-Aggeliki Karabassi, Georgios Papaioannou, and Theoharis Theoharis, [A Fast Depth-Buffer-Based Voxelization Algorithm](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.103.9268&rep=rep1&type=pdf), JGT 1999 [#Kobbelt2000]: Leif Kobbelt, [√3-Subdivision](https://www.graphics.rwth-aachen.de/media/papers/sqrt31.pdf), SIGGRAPH 2000 [#Mei2007]: Xing Mei, Philippe Decaudin, and Bao-Gang Hu, [Fast Hydraulic Erosion Simulation and Visualization on GPU](http://www-ljk.imag.fr/Publications/Basilic/com.lmc.publi.PUBLI_Inproceedings@117681e94b6_fff75c/FastErosion_PG07.pdf), Pacific Graphics 2007 [#Nooruddin2003]: Fakir S. Nooruddin and Greg Turk, [Simplification and Repair of Polygonal Models Using Volumetric Techniques](https://smartech.gatech.edu/handle/1853/3404), TVCG 2003 [#Tarini2010]: Marco Tarini, Nico Pietroni, Paolo Cignoni, Daniele Panozzo, and Enrico Puppo, [Practical quad mesh simplification](http://vcg.isti.cnr.it/Publications/2010/TPCPP10/Tarini%20Pietroni%20Cignoni%20Panozzo%20Puppo%20-%20Practical%20Quad%20Semplification%20-%20EG%202010.pdf), Eurographics 2010