Final Project

Proposal Due: May 22, 2003
Project Due: June 10, 2003

Your final project is to produce a realistic image of a real object or scene. The scene or object should be challenging enough to require you to design and implement an advanced rendering algorithm. The final project is your chance to investigate an area that interests you in more depth, and to showcase your creativity. To get an idea of our expectations, check out the images produced by past participants. As extra incentive, we are offering a grand prize that includes a free trip to SIGGRAPH in San Diego in July for the best image produced. Think about the following when choosing a project:

  • What are your goals? Try to phrase this as specific questions that you would like to know the answers to, e.g. "How do I model reflection from surfaces with fine geometric structure, such as fur?"
  • What unique imagery would convincingly demonstrate that you have accomplished your goals? Try to keep this in mind throughout your project, since in computer graphics our work is often judged by the images we make.
  • What has already been done in this area? You probably won't have time to completely investigate this, but you should definitely spend some time reading research papers. We can help you with finding appropriate references. When you read a paper, look for what has not been done as well as what is already understood; think about new things that you could try.
  • Depending on the scope of your goals, you may want to work in a group. We encourage two-person groups; larger groups will be allowed only for very, very challenging projects. Does your project split naturally into several pieces? Look for projects where each person's work is separable, and yet everyone contributes toward a shared goal that could not be accomplished individually.

Possible Projects

Here are some examples of challenging projects:

  • Fancy primitives. Implement a class of more complicated primitives from Hanrahan's chapter in Glassner's book. Choose wisely. Quadrics are too simple; deformed surfaces are much more challenging. Recommended are bicubic patches (displayed directly or by meshing), CSG models, or fractals. Fractals are relatively easy to implement and fun to use. For extra fun, map textures onto your patches or fractals. For lots of fun, try fur modeled as geometry (as opposed to a volume).
  • Exotic wavelength-dependent effects such as dispersion and thin film effects. We can give you some references.
  • Adaptive stochastic supersampling. Use any sample distribution, subdivision criterion, and reconstruction method you like. Allow interactive control over key parameters of your sampling scheme. In a separate window alongside your rendered image, display a visualization of how many rays were cast per pixel. (A sketch of one possible contrast-driven scheme appears after this list.)
  • Subsurface scattering. Look at Hanrahan and Krueger's SIGGRAPH '93 paper for examples of applying subsurface scattering to plants and faces. For the more ambitious, model the microgeometry of the surface. For example, consider an explicit geometric model of the warp and weft of cloth, the pits in plaster, the scratches in metal, or the structure of velvet or satin. Ray trace the microgeometry in order to compute the BRDF. Look at Westin et al. in SIGGRAPH '92; they describe methods for modeling carpet and velvet.
  • Shading language. Develop a language for programmable shading formulas akin to (but simpler than) RenderMan's language (Hanrahan and Lawson, SIGGRAPH '90). At a minimum, your language should allow the specification of a shade tree that includes mix nodes driven by textures, as in Cook's SIGGRAPH '84 paper on shade trees (a small shade-tree sketch appears after this list). Don't spend a lot of time on the interpreter - a simple syntax will do. For extra fun, implement (in conjunction with texture mapping) a nontrivial 2D or 3D texture synthesis method. Examples are spot noise and reaction-diffusion equations (see the two papers on this subject in SIGGRAPH '91).
  • Volume rendering. Start by implementing spatially inhomogeneous atmospheric attenuation. Divide each ray into intervals. For each interval, interpolate between the ray's color and some constant fog color based on a procedurally computed opacity for that location in space (a sketch of this fog march appears after this list). Experiment with opacity functions. Once you get this working, try defining a solid texture (probably procedural) that gives a color and opacity for each interval. See Perlin and Hoffert's SIGGRAPH '89 paper on solid texture synthesis and Kajiya and Kay's teddy bear paper for ideas. If you want to make your volume renderer fast, use hierarchical spatial subdivision (e.g. an octree).
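
For the adaptive supersampling project, here is a small C++ sketch of one possible contrast-driven scheme. TraceRay(), the Color type, and the threshold constants are placeholders we made up for illustration; they are not part of lrt, so adapt the idea to whatever interfaces your renderer actually provides.

    // A minimal sketch of adaptive stochastic supersampling for one pixel.
    // TraceRay(), Color, and the constants are assumptions, not lrt interfaces.
    #include <algorithm>
    #include <cstdlib>

    struct Color { float r, g, b; };

    Color TraceRay(float x, float y);   // assumed: casts a camera ray through image point (x, y)

    inline float Luminance(const Color &c) {
        return 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;
    }
    inline float Jitter() { return std::rand() / (RAND_MAX + 1.0f); }

    // Sample the square cell with lower-left corner (x, y) and side `size`.
    // If the jittered samples disagree too much, split into four sub-cells and
    // recurse. `raysCast` accumulates the per-pixel ray count so you can drive
    // the rays-per-pixel visualization window from it.
    Color SampleCell(float x, float y, float size, int depth, int &raysCast) {
        const int   kSamples  = 4;      // initial samples per cell
        const float kContrast = 0.1f;   // tunable subdivision threshold

        Color sum = { 0, 0, 0 };
        float lo = 1e30f, hi = -1e30f;
        for (int i = 0; i < kSamples; ++i) {
            Color s = TraceRay(x + Jitter() * size, y + Jitter() * size);
            ++raysCast;
            float l = Luminance(s);
            lo = std::min(lo, l);
            hi = std::max(hi, l);
            sum.r += s.r; sum.g += s.g; sum.b += s.b;
        }

        if (depth > 0 && hi - lo > kContrast) {
            float h = size * 0.5f;
            Color c00 = SampleCell(x,     y,     h, depth - 1, raysCast);
            Color c10 = SampleCell(x + h, y,     h, depth - 1, raysCast);
            Color c01 = SampleCell(x,     y + h, h, depth - 1, raysCast);
            Color c11 = SampleCell(x + h, y + h, h, depth - 1, raysCast);
            return { (c00.r + c10.r + c01.r + c11.r) * 0.25f,
                     (c00.g + c10.g + c01.g + c11.g) * 0.25f,
                     (c00.b + c10.b + c01.b + c11.b) * 0.25f };
        }
        return { sum.r / kSamples, sum.g / kSamples, sum.b / kSamples };
    }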
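
For the shading-language project, the core data structure is the shade tree itself. Below is a small C++ sketch of what the evaluated tree might look like; the node classes, the ShadeContext fields, and the checkerboard texture are invented for illustration and are not part of lrt or RenderMan.

    // A minimal shade-tree sketch in the spirit of Cook's shade trees.
    // ShadeContext, the node classes, and the checker texture are made up here;
    // a real implementation would hook into your renderer's shading data.
    #include <cmath>
    #include <memory>

    struct Color { float r, g, b; };
    struct ShadeContext { float u, v; };   // surface parameters at the hit point

    // Every node in the tree evaluates to a color at the shaded point.
    struct ShadeNode {
        virtual ~ShadeNode() {}
        virtual Color Eval(const ShadeContext &ctx) const = 0;
    };

    struct ConstantNode : ShadeNode {
        Color c;
        explicit ConstantNode(Color col) : c(col) {}
        Color Eval(const ShadeContext &) const override { return c; }
    };

    // A procedural texture used to drive the mix; here a simple checkerboard.
    struct CheckerNode : ShadeNode {
        Color Eval(const ShadeContext &ctx) const override {
            int parity = (int(std::floor(ctx.u * 8)) + int(std::floor(ctx.v * 8))) & 1;
            float t = float(parity);
            return { t, t, t };
        }
    };

    // Mix node: blends two subtrees using a third subtree as the blend factor.
    struct MixNode : ShadeNode {
        std::unique_ptr<ShadeNode> a, b, factor;
        MixNode(ShadeNode *a_, ShadeNode *b_, ShadeNode *f) : a(a_), b(b_), factor(f) {}
        Color Eval(const ShadeContext &ctx) const override {
            Color ca = a->Eval(ctx), cb = b->Eval(ctx);
            float t = factor->Eval(ctx).r;   // use the red channel as the mix weight
            return { ca.r + t * (cb.r - ca.r),
                     ca.g + t * (cb.g - ca.g),
                     ca.b + t * (cb.b - ca.b) };
        }
    };

    // Usage: a red/green checkerboard built as a tiny tree; your shading language
    // would parse text into trees like this instead of building them in C++.
    // MixNode tree(new ConstantNode({1, 0, 0}), new ConstantNode({0, 1, 0}), new CheckerNode);
    // Color c = tree.Eval({0.3f, 0.7f});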
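
For the volume-rendering project, the first milestone (inhomogeneous fog) boils down to a ray march. The sketch below shows one way to structure it in C++; Opacity(), the Point/Ray/Color types, and the fixed step size are assumptions for illustration, not lrt code.

    // A minimal sketch of the inhomogeneous-fog step: march the ray in fixed
    // intervals and blend toward a constant fog color using a procedural opacity.
    // Opacity(), Point, Ray, and the compositing order are assumptions here.
    #include <algorithm>

    struct Point { float x, y, z; };
    struct Color { float r, g, b; };
    struct Ray   { Point o, d; };            // origin and (unit-length) direction

    float Opacity(const Point &p);           // assumed: procedural opacity in [0, 1]

    Point PointAt(const Ray &ray, float t) {
        return { ray.o.x + t * ray.d.x, ray.o.y + t * ray.d.y, ray.o.z + t * ray.d.z };
    }

    // surfaceColor is the ray's color without fog; tHit is the distance to the
    // surface (or a far clipping distance if nothing was hit).
    Color AttenuateWithFog(const Ray &ray, Color surfaceColor, float tHit,
                           Color fogColor, float stepSize) {
        Color c = surfaceColor;
        // March from the surface back toward the eye so nearer fog composites
        // over farther fog (back-to-front compositing).
        for (float t = tHit; t > 0; t -= stepSize) {
            float a = std::max(0.0f, std::min(1.0f, Opacity(PointAt(ray, t))));
            c.r = (1 - a) * c.r + a * fogColor.r;
            c.g = (1 - a) * c.g + a * fogColor.g;
            c.b = (1 - a) * c.b + a * fogColor.b;
        }
        return c;
    }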

Resources

Your main task should be to implement cool rendering algorithms, rather than spending all of your time modeling a complex scene.  However, you do need to provide lrt with a scene to render!  You may find that some of the links below provide useful resources related to the scene description file format that lrt parses.

  • http://www.exluna.com/products/links.html
  • http://sourceforge.net/projects/liquidmaya/
  • http://www.ayam.org

Project Proposal

As a first step, you should write a one-page project proposal. The project proposal should be in the form of a web page. To submit the project proposal, send the URL to cs348b-spr0203-staff@lists.stanford.edu. The proposal is due Thursday, May 22.

The proposal should contain a picture of a real object or scene that you intend to reproduce. We suggest that you first pick something that you would like to simulate, and then investigate what techniques need to be used. A real object that you can carry around with you is best, but a good photograph or painting is almost as good.

This proposal should state the goal of your project, motivate why it is interesting, identify the key technical challenges you will face, and briefly outline your approach. If you are implementing an algorithm described in a particular paper, provide the reference to the paper. If you plan on collaborating with others, briefly describe how each person's piece relates to the others.

We will provide feedback as to whether we think your idea is reasonable, and also try to offer some technical guidance, e.g. papers you might be interested in reading.

Demo/Judgement Day

The project is due on the afternoon of Tuesday, June 10th. At that time, each group will be given 15 minutes to demonstrate its system and show some of the images it produced. All demos will be in the Sweet Hall Graphics Lab. Remember to bring the object or images that you are modeling and reproducing. The goals and technology you developed should be obvious from the image itself; after all, this is graphics.

Grading

The final project counts for 1/2 of your final grade in the course (or more, if in our judgement the project is truly outstanding). We will weigh heavily the novelty of the idea (if it's never been done before, you get lots of credit), your technical skill in implementing the idea, and the quality of the pictures you produce. Mega-lines of code do not make a project good.

When you are finished with your project, you should submit the source for your system along with any test scenes and images that you have created. You should also submit your original project proposal and an updated version that reads as a two-to-three-page project summary: roughly the same format as the proposal, but with a brief results section and any conclusions or comments you have based on your experience.

You are permitted to work in small groups, but each person will be graded individually. A good group project is a system consisting of a collection of well-defined subsystems. Each subsystem should be the responsibility of one person and be clearly identified as that person's project. A good criterion for whether you should work in a group is whether the system as a whole is greater than the sum of its parts!

Rendering Prize

To provide additional incentive, we are offering several prizes for the best images produced as part of the final project.

  • Grand Prize. An all-expense-paid trip to SIGGRAPH 2003 for one (worth about $1000).
  • First Prize. To be determined.
  • Honorable Mentions. A book on rendering.

The jury will be Dan Goldman, a Stanford graphics lab alumnus and computer graphics supervisor at ILM; Eric Veach, also a Stanford graphics lab alumnus and the inventor of a number of key Monte Carlo rendering algorithms; and Ian Buck, the cs348b TA. The judges will weigh both technical and artistic merit.