While the applications of computer graphics are by no means limited to entertainment, the most familiar examples of computer-generated imagery (CGI) probably come from film, television, and video games. CGI has progressed beyond the realm of the occasional special effect to become a supporting character (and sometimes even the lead); advances in hardware have made ever more complex and photorealistic scenes possible.
The goal of this course is to gain an understanding of the fundamental issues at the heart of 3D computer graphics, including scan conversion, 3D viewing and projection, visible surface detection, illumination and color, raytracing, 3D object and scene representation, and animation.
A series of programming assignments will provide experience in these topics, and will culminate in a project allowing students to explore a topic of interest in more depth.
Class Hours and Meeting Place
M 1-2pm, W 4-5:30pm, Th 9:30-11am, F 1:30-2:30pm
(general information about the course, including assessment)
(course policies on attendance, collaboration, late/makeup work, and other things)
(syllabus, including links to handouts, assignments, reading material...pretty much everything you want on a daily basis is here)
(lots of useful information about the Linux systems at HWS)
(review of basic Linux commands e.g. mkdir, ls, cd, cp, ...)
[4/3] For part I of the raytracer: suppose the scenes that sit inside large spheres (such as sphereflake and whitted) appear OK except that the enclosing sphere is not lit correctly (and any other lighting issues can be explained by the scene objects reflecting the incorrectly-lit sphere), while all scenes not inside large spheres look fine. In that case, the problem is probably that the outward-pointing normal is being used instead of the inward-pointing normal when lighting the large sphere, because the ray-sphere intersection point is really an exit point. You don't need to handle this for part I, but you will need to do so for part II.
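As a sketch of the eventual part II fix (using a minimal stand-in vector type; the support code's own vector class and method names will differ), the shading normal can be flipped whenever the ray direction and the outward normal point the same way:

```cpp
#include <cassert>

// Minimal stand-in vector type for illustration only.
struct Vec3 { double x, y, z; };

double dot(const Vec3 &a, const Vec3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// If the ray direction and the outward-pointing normal point the same
// way, the intersection is an exit point (the ray started inside the
// sphere), so the inward-pointing normal should be used for lighting.
Vec3 shadingNormal(const Vec3 &rayDir, const Vec3 &outwardNormal) {
    if (dot(rayDir, outwardNormal) > 0.0)
        return Vec3{-outwardNormal.x, -outwardNormal.y, -outwardNormal.z};
    return outwardNormal;
}
```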
[3/31] If you are working with the provided lighting model, then you don't need to worry about the N.L > 0 test for dealing with shadows - the shadow method will not be called if it doesn't need to be, so just compute the shadow ray and carry out the steps in the 'if' part of the 'if ( N.L > 0 )'.
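For reference, the shadow ray itself is just a ray from the intersection point toward the light. A minimal sketch follows, with hypothetical Ray and vector types (the support code's classes differ); in practice the origin is often offset slightly along the surface normal to avoid the ray re-intersecting its own surface:

```cpp
#include <cassert>

// Hypothetical stand-in types for illustration only.
struct Vec3 { double x, y, z; };
struct Ray { Vec3 origin, dir; };

// Shadow ray: origin at the surface hit point, direction toward the light.
Ray shadowRay(const Vec3 &hitPoint, const Vec3 &lightPos) {
    Vec3 dir{lightPos.x - hitPoint.x,
             lightPos.y - hitPoint.y,
             lightPos.z - hitPoint.z};
    return Ray{hitPoint, dir};
}
```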
[3/31] Project 3 note: If you click in the scene window to set the debugging pixel, the "setting debugging pixel" message printed in the terminal window displays an incorrect y coordinate - to get the actual y coordinate, subtract the value printed from the height of the drawing window. For example, if it says the debugging pixel was set to (100,230) and the drawing window is 400x400 (the default), then the debugging pixel is actually (100,170). This doesn't affect functionality in any way (the correct pixel is selected as the debugging pixel), but might be confusing if you are tracing code and looking at the current pixel's x and y coordinates.
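The conversion described above is just a flip in the window's vertical axis; as a one-function sketch:

```cpp
#include <cassert>

// The printed y coordinate is measured from the opposite edge of the
// drawing window; subtract it from the window height to recover the
// actual debugging-pixel y coordinate.
int actualDebugY(int printedY, int windowHeight) {
    return windowHeight - printedY;
}
```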
[3/29] Read-the-documentation note for project #3: Pay attention to the comments for the camera's getProjector method - it is supposed to return a projector with a positive z coordinate (i.e. pointing along the positive z axis). Yet, we want a ray which points into the scene (along the negative z axis in VC). You will need to take this into account, or you won't see much other than background in your rendered scene.
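One way to take this into account (a sketch with a stand-in vector type; the support code's classes differ) is simply to negate the projector when forming the primary ray's direction:

```cpp
#include <cassert>

// Stand-in vector type for illustration only.
struct Vec3 { double x, y, z; };

// getProjector returns a vector with positive z (pointing toward the
// viewer); the primary ray must point into the scene, i.e. along the
// negative z axis in VC, so negate the projector.
Vec3 rayDirectionFromProjector(const Vec3 &projector) {
    return Vec3{-projector.x, -projector.y, -projector.z};
}
```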
[3/28] There was an omission in Part Ia of the raytracer: your ray renderer subclass also needs a constructor which takes the maximum depth as a parameter. See the section marked "New" in Part Ia in the handout.
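The required constructor just forwards the maximum depth to the base class. A sketch with hypothetical class names (see the handout for the actual base class name):

```cpp
#include <cassert>

// Hypothetical base class name for illustration only.
class RayRenderer {
public:
    explicit RayRenderer(int maxdepth) : maxdepth_(maxdepth) {}
    int maxDepth() const { return maxdepth_; }
private:
    int maxdepth_;
};

// The subclass constructor takes the maximum recursion depth as a
// parameter and passes it along to the base class.
class MyRayRenderer : public RayRenderer {
public:
    explicit MyRayRenderer(int maxdepth) : RayRenderer(maxdepth) {}
};
```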
[3/27] The demo has been upgraded to use thicker lines for the display of vectors in the mesh window. If you want this same feature in your own renderer, do the following:
[3/24] For project 3: Some code was rearranged in gui/rendermain.cc, and patch may have difficulties automatically patching the file. If you get rejects for this file, get a clean copy here - save this file over your copy of gui/rendermain.cc. You will need to uncomment the bits you've uncommented in previous projects - look for the "create renderer" or "create lighting" comments in the sceneKeys and main functions and uncomment the appropriate lines, including changing the names of the classes as needed. You will also need to add the appropriate #includes.
[3/5] Some information has been added to the debugging section of the project 2 handout:
To compare two images on a pixel-by-pixel basis (say, to compare your rendered scene to the demo), use:
pnmarith -difference image1.ppm image2.ppm > output.ppm
where "image1.ppm" and "image2.ppm" are the two files you want to compare, and "output.ppm" is the image file to create. You can then view output.ppm in your favorite image viewer - if the output image is entirely black, image1.ppm and image2.ppm are identical. If you are comparing your renderer to the demo and you notice that the output image has non-black regions, then something may not be quite right with your renderer.
[2/17] A new version of renderdemo has been installed - it fixes a bug with the handling of the height and radius of cylinders and cones. Probably this won't be an issue for you. The bug fix code will be included in the next patch (for the next project).
[2/14] The commented-out sections in the else if ( nodename == PROJECTIONALT_TAG ) section of configureCamera in camera/camera.h should be:
// camera_ = new ParallelCamera(vrp,lookat,vup,prp,alpha,phi,
//                              viewplane,frontclip,backclip,
//                              aspect,width);
// camera_ = new PerspectiveCamera(vrp,lookat,vup,prp,alpha,phi,
//                                 viewplane,frontclip,backclip,
//                                 aspect,width);
instead of what is written. (i.e. what was "dop" should now be "alpha,phi")
[2/10] Check out the new FAQ for project and other related questions. Each question is marked with the date it was posted, so you can easily check for new entries.
[2/9] The first update (v1.1) to the renderer support code is available. Features include:
You should apply this update because of the bug fix (and the fast zoom is mighty useful, and the teapot is cool), though you can probably get through this project without it (as long as you avoid teapots). The renderdemo executable has been updated.
To apply the update:
[2/7] About the Camera class methods in project #1 - there are two categories of methods in Camera, the get*Matrix() methods and everything else.
The "everything else" methods are accessors and mutators for the Camera class - a camera object stores its own settings in its instance variables, and these methods allow for examination or manipulation of those values. Something like moveVRP, for example, provides one way for the user of Camera to modify the camera's VRP (and look at point). The mutator methods aren't actually called anywhere in the current renderer program, but are provided to help make Camera a more useful class. (They may be used in a future version of the renderer.)
The get*Matrix() methods are purely accessors (note they are all const) and exist to make it convenient to retrieve the various viewing pipeline matrices - these matrices depend on the settings in the camera. They may use Camera accessors to retrieve the necessary camera settings (or may access Camera's instance variables directly), but they do not call any Camera mutators (e.g. even though the view translation matrix is the matrix needed to move the VRP to the origin, getViewTranslateMatrix doesn't actually call moveVRP to do this - moveVRP actually moves the camera, and getViewTranslateMatrix is only supposed to calculate the matrix based on the camera's current position).
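As a sketch of the distinction (with stand-in matrix and vector types; the real Camera class has many more settings and methods), getViewTranslateMatrix is const and only computes a matrix from the camera's current VRP, while moveVRP actually changes the camera's state:

```cpp
#include <array>
#include <cassert>

// Stand-in types for illustration only; the real classes differ.
struct Vec3 { double x, y, z; };
using Mat4 = std::array<double, 16>;   // row-major 4x4 matrix

class Camera {
public:
    explicit Camera(const Vec3 &vrp) : vrp_(vrp) {}

    // Accessor and mutator: examine or change the camera's own state.
    const Vec3 &vrp() const { return vrp_; }
    void moveVRP(const Vec3 &p) { vrp_ = p; }

    // const accessor: builds the matrix that would translate the VRP
    // to the origin, without moving the camera itself.
    Mat4 getViewTranslateMatrix() const {
        return Mat4{1, 0, 0, -vrp_.x,
                    0, 1, 0, -vrp_.y,
                    0, 0, 1, -vrp_.z,
                    0, 0, 0, 1};
    }

private:
    Vec3 vrp_;
};
```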
[2/7] A small correction has been made to the slides for part 1 of the viewing pipeline - the modeling transformation for the scene graph example was backwards.
[2/7] The newly-posted documentation for the scene file format includes a description of some features which are labelled as "version 1.1". Right now these aren't supported in your renderer code or in the demo. A patch will be made available soon to add this functionality. However, these features are not necessary for completion of the camera/wireframe project.
[1/16] Welcome to CPSC 324! This web page is your source for a great deal of important and useful material, so you should take a few minutes to familiarize yourself with the website. Check back often for announcements and new information.