CPSC 324 Fundamentals of Computer Graphics Spring 2006

FAQ

Running the Renderer

Scene File Mechanics

The Viewing Pipeline and Related Calculations

Lighting and Shading

Support Code Details

Raytracing


I've put a scene file in my cs324/scenes directory, but when I start renderdemo, it's not in the "load scene" list.

renderdemo (and your renderer) automatically look in two places for scene files: /classes/s06/cs324/scenes and ../scenes (a relative path, resolved from the directory where the program is started).

If you are running renderdemo or your renderer from your cs324/build directory, then ../scenes is your cs324/scenes directory and all is well. If you are in another directory when you start renderdemo, that's why it isn't finding your scenes.

There are two options to fix this:


I've created a file and named it hw5.xml and put it in my cs324/scenes folder. The second line of the file is: <!DOCTYPE scene SYSTEM "hw5.dtd"> Should that line say hw5.xml or .dtd? When I run the demo the file is listed, but when I try to load it nothing happens - I just get a black screen.

Look in the terminal window where you started the renderer - there should be an error message.

That second line should always be:

<!DOCTYPE scene SYSTEM "/classes/s06/cs324/scenes/scenegraph.dtd">

That line tells the parser where the DTD (which defines what the tags are and how they are arranged) can be found. Since all of the scene files use the same DTD, you don't need to change it.


I'm working on Camera::getViewCenter and I know we have to use the formula about xp and yp on page 55 of the projection slides (part 3), but what should I use for the point P shown in the top picture to get the (x,y,z) coordinates used?

Actually, you don't use that formula. That slide (and the xp and yp it is talking about) deals with computing the square up matrix for perspective projection, which is completely unrelated to the problem of calculating the center of the view window. It *is* possible to use those equations to get the x and y coordinates of the view center, but only if you start with a point P which happens to project to the center of the view window - and that's exactly the problem you are running into, because you don't know of such a point.

The strategy is thus to not pursue that line of attack. (We *could* find such a point P, but it would involve the same kind of calculation that will yield CW directly - in fact, you can think of finding the CW as the task of determining a point P which projects to CW...and which happens to lie on the view plane. Then there's no need to calculate xp and yp because x = xp and y = yp.)

So, we forget about that P and xp and yp (for this particular problem, in any case :) and instead start by identifying what we do know (because they are camera settings): the PRP, the DOP, and viewplaneZ. Since the DOP is defined as a vector which points in the direction of the PRP from the center of the window, this gives us a way to define the CW in terms of known quantities (the PRP and the DOP). The viewplaneZ is relevant because the DOP is not necessarily exactly the vector from CW to the PRP (i.e. it isn't necessarily true that PRP = CW + DOP) - all we know is that the CW lies along a line parallel to the DOP which passes through the PRP. Thus, we have a point (the PRP) and a vector (the DOP) to define a line, and we need to intersect that line with the plane z = viewplaneZ. That intersection point is the center of the view window.
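To make that concrete, here is a small sketch of the intersection. Vec3 and the function name are stand-ins for illustration, not the support code's actual classes: the parametric point PRP + t*DOP lands on the view plane when its z coordinate equals viewplaneZ, so solve for t and plug it back in.

struct Vec3 { double x, y, z; };   // hypothetical stand-in for the renderer's point/vector type

// Intersect the line through prp with direction dop against the plane z = viewplaneZ.
// Solve prp.z + t*dop.z = viewplaneZ for t, then walk that far along the line.
// (Assumes dop.z != 0, i.e. the DOP is not parallel to the view plane.)
Vec3 centerOfWindow(const Vec3& prp, const Vec3& dop, double viewplaneZ)
{
    double t = (viewplaneZ - prp.z) / dop.z;
    return { prp.x + t * dop.x, prp.y + t * dop.y, viewplaneZ };
}

For example, with PRP = (0,0,5), DOP = (1,0,-2), and viewplaneZ = 0, t comes out to 2.5 and the CW is (2.5, 0, 0).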

Wouldn't we be able to avoid calculating the intersection altogether and just take the x and y of the PRP, and the view plane z as the z coordinate for the center of the view window?

You do, in fact, just take the view plane z as the z coord for the CW, but you can only use the PRP x and y as the CW x and y if the projection is not oblique. In general, though, the DOP is not pointing directly along the z axis (in VC). See, for example, the leftmost picture on slide 49 of the projection slides (part 3). This is for perspective, but the same idea applies for parallel projections. It isn't until the shear in the first step of projection that the DOP is aligned with the z-axis, and VC is what you have just before projection begins.


For the norm matrix methods of Camera, I have the 4 corners of the view window and then it says apply the getProjector method to see where they hit the front and back clip planes...but wouldn't I then need to get out 8 points for each plane? I was thinking since I can get the z coordinates for the front and back clip planes then I should use those and then use the x and y from whatever point the getProjector gives me...

getProjector doesn't return an intersection point - it returns a vector pointing along the projector through a given point. To find one of the eight corners of the view volume - say the upper left front corner - you want to intersect the line defined by the upper left corner of the view window (a point) and the projector through that point (a vector) with the front clip plane (defined by a z coordinate). The math for this is the same as the math for computing the CW given the PRP, the DOP, and the z of the view plane.

You can compute all 8 corners of the view volume, though you can actually get away with just computing a few of them. For example, the normalization translate matrix translates the lower left back corner of the view volume to the origin - so to calculate that matrix, you just need to compute the lower left back corner.
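As a sketch of that reuse (again with stand-in types rather than the support code's classes), the corner calculation is the same line/plane intersection as before, and the lower left back corner is all the translate matrix needs:

#include <array>

struct Vec3 { double x, y, z; };                      // stand-in point/vector type
using Mat4 = std::array<std::array<double, 4>, 4>;    // stand-in 4x4 matrix type

// A view volume corner: intersect the line through a view window corner (a point)
// and the projector through it (the vector getProjector returns) with a clip plane z = clipZ.
Vec3 viewVolumeCorner(const Vec3& windowCorner, const Vec3& projector, double clipZ)
{
    double t = (clipZ - windowCorner.z) / projector.z;
    return { windowCorner.x + t * projector.x, windowCorner.y + t * projector.y, clipZ };
}

// The normalization translate matrix just moves the lower left back corner to the origin
// (translation in the last column, assuming points are treated as column vectors).
Mat4 normTranslate(const Vec3& llb)
{
    return {{ {1, 0, 0, -llb.x},
              {0, 1, 0, -llb.y},
              {0, 0, 1, -llb.z},
              {0, 0, 0, 1} }};
}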


So, Camera::getViewTranslateMatrix calls Camera::moveVRP, right? The view translation matrix moves the VRP to the origin, after all.

No. There are two categories of methods in Camera: the get*Matrix() methods and everything else.

The "everything else" methods are accessors and mutators for the Camera class - a camera object stores its own settings in its instance variables, and these methods allow for examination or manipulation of those values. Something like moveVRP, for example, provides one way for the user of Camera to modify the camera's VRP (and look at point). The mutator methods aren't actually called anywhere in the current renderer program, but are provided to help make Camera a more useful class. (They may be used in a future version of the renderer.)

The get*Matrix() methods are purely accessors (note they are all const) and exist to make it convenient to retrieve the various viewing pipeline matrices - these matrices depend on the settings in the camera. They may use Camera accessors to retrieve the necessary camera settings (or may access Camera's instance variables directly), but they do not call any Camera mutators (e.g. even though the view translation matrix is the matrix needed to move the VRP to the origin, getViewTranslateMatrix doesn't actually call moveVRP to do this - moveVRP actually moves the camera, and getViewTranslateMatrix is only supposed to calculate the matrix based on the camera's current position).
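Here is a stripped-down sketch of that split. Only moveVRP and the const-ness come from the actual Camera class; the member name, the stand-in Vec3 type, and the simplified return value are assumptions to keep the example short (the real getViewTranslateMatrix would wrap these offsets in a 4x4 translation matrix).

struct Vec3 { double x, y, z; };   // hypothetical stand-in for the point class

class Camera {
public:
    // Mutator: actually moves the camera by changing its stored VRP.
    void moveVRP(const Vec3& newVRP) { vrp = newVRP; }

    // Accessor (const): only reports the translation that would bring the current
    // VRP to the origin; the camera itself is untouched.
    Vec3 getViewTranslateOffsets() const { return { -vrp.x, -vrp.y, -vrp.z }; }

private:
    Vec3 vrp { 0, 0, 0 };   // hypothetical member name
};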


For the wireframe renderer, how can I get the camera matrix when there's no Camera parameter?

You aren't given the camera as a parameter to the render method, but you are given the scene. So go hunting in scene.h - there's a getCamera method which returns a pointer to the Camera. Once you have the camera, you can use getCameraMatrix() on it to get the camera matrix.
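In code, the chain looks roughly like this inside your render method (getCamera and getCameraMatrix are the method names mentioned above; everything else, including the variable names and the use of auto to dodge the exact matrix type, is just illustration):

Camera* cam = scene->getCamera();       // scene.h: returns a pointer to the Camera
auto camMatrix = cam->getCameraMatrix(); // the camera's viewing pipeline matrix
// ... apply camMatrix (along with the modeling transform) to the scene geometry ...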


For the workstation transformation...when I am scaling the view volume I wouldn't use the norm scale matrix method, correct? I would create my own matrix with the correct numbers in it, but then in order to get the view volume do I have to do all of the computation over (like call getViewCorners for the view window and then calculate the view volume corners again)?

Remember what comes out of the viewing pipeline - when you've applied the modeling transform and the getCameraMatrix matrix, all of the scene geometry has been squashed into a unit cube whose lower back left corner is at (0,0,0) and whose upper front right corner is at (1,1,1). The workstation transformation then takes this unit cube and resizes it in x and y to fit the window. So you do create your own matrix for the workstation transformation (it isn't exactly the same as the normalization matrices, though it will involve scaling and translation steps), but you don't have to do all the work with computing the corners of the view volume because you know where they are (they are the corners of the unit cube).
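For example, here is a sketch of that matrix under the assumption that the drawing area is a rectangle starting at (viewportX, viewportY) with the given width and height (stand-in matrix type; check what your stage expects to happen to z):

struct Mat4 { double m[4][4]; };   // stand-in matrix type

// The unit cube's x and y already run from 0 to 1, so the workstation transform
// just scales them up to the drawing area and shifts to the viewport origin.
Mat4 workstationMatrix(double viewportX, double viewportY, double width, double height)
{
    return {{{ width, 0,      0, viewportX },
             { 0,     height, 0, viewportY },
             { 0,     0,      1, 0         },   // z left alone here (an assumption)
             { 0,     0,      0, 1         }}};
}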


I've #included a file, but the compiler is still complaining about not knowing what things are.

If the file you are #include-ing is not in the same directory as the file you are writing the #include in, you need to provide more information about where to find it. The Makefile is set up so that it tells the compiler to look in your code directory, so you'll need to specify which subdirectory of that contains the header file in question. For example, if you want to #include the drawing.h file, you should write #include "gui/drawing.h".


How do I use the routines in the Drawing class?

They are static, so use the class name instead of an object:

Drawing::drawLine(10,20,30,40);

What color do I use for drawing things in my wireframe renderer?

The SceneObject class has a getMaterial method (inherited from Object), and Material has a getDiffuseColor method which returns the object's Color. Check out scene/object.h and scene/material.h.


My parallel camera works for stages 5 and 6 but the perspective camera doesn't move the view volume to the origin on stage 5 like it's supposed to...but I use the same matrices for stages 5 and 6 for both cameras, correct?

Yes, the normalization matrices are the same for both parallel and perspective.

Are you working in the proper coordinate system when you figure out the values for the normalization matrices? Keep in mind that projection (shear and square up) moves the view volume around. Thus, if you compute the lower left back corner of the view volume in view coordinates (which makes the most sense, because everything you use to compute that corner is in VC), then you need to apply the two projection matrices to that point before plugging things into the normalization matrices. (Normalization turns the post-projection view volume into the unit cube, so you need the post-projection corners of the view volume.)
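A sketch of that ordering, with a hypothetical matrix-times-homogeneous-point helper (these are not the support code's classes):

struct Vec4 { double x, y, z, w; };   // homogeneous point, stand-in type
struct Mat4 { double m[4][4]; };      // stand-in matrix type

// Multiply a 4x4 matrix by a homogeneous point.
Vec4 apply(const Mat4& M, const Vec4& p)
{
    double v[4] = { p.x, p.y, p.z, p.w };
    double r[4];
    for (int i = 0; i < 4; ++i)
        r[i] = M.m[i][0]*v[0] + M.m[i][1]*v[1] + M.m[i][2]*v[2] + M.m[i][3]*v[3];
    return { r[0], r[1], r[2], r[3] };
}

// cornerVC: the lower left back corner of the view volume, computed in view coordinates.
// shear, squareUp: the two projection matrices.  Build the normalization matrices from
// the point this returns, not from cornerVC itself.
Vec4 postProjectionCorner(const Mat4& shear, const Mat4& squareUp, const Vec4& cornerVC)
{
    return apply(squareUp, apply(shear, cornerVC));
}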


So, I use Light::getIntensity to get the intensity of the light for the lighting equation, right?

No. The naming of the getIntensity method is perhaps unfortunate. The key thing to note is that getIntensity returns a double value, but the light intensity used in the illumination equation is parameterized by wavelength - that means it may be different for red, green, and blue. What you want is the Light::getColor method.

In the scene file, lights can be configured by specifying a color and an intensity (think of a dimmer switch, which you can turn brighter or darker). The reason is that it may be more natural to think about and easier to configure a light if the color of the light is separated from its brightness - that way, you can specify a yellow light as the color (1,1,0) and then adjust the brightness until the scene looks the way you want. The actual energy leaving the light (what is called "intensity" in the lighting equation) is the product of the brightness and the color, i.e. a brightness of .6 would make the intensity of the yellow light (.6,.6,0). This is what the Light::getColor method returns.


So, the specular color is just Material::getSpecularFrac?

No, the specular color is a color - getSpecularFrac returns a double. Remember that the specular color of a material depends on the kind of material - for plastics, the specular color is the color of the light shining on the object; for metals, the specular color is the diffuse color of the object; for other things, the specular color is a combination of the object's diffuse color and the light's color. getSpecularFrac tells you how much of the object's diffuse color is used in the specular color. You need to actually compute the specular color for use in the lighting equation, adding the getSpecularFrac fraction of the object's color and the remaining fraction (1-getSpecularFrac) of the light's color.
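For instance, a sketch of that blend with a stand-in Color struct (the real Color class may have a different interface):

struct Color { double r, g, b; };   // stand-in for the renderer's Color class

// specFrac is the value Material::getSpecularFrac returns: the fraction of the object's
// diffuse color in the specular color, with the rest coming from the light's color.
Color specularColor(const Color& objectDiffuse, const Color& lightColor, double specFrac)
{
    return { specFrac * objectDiffuse.r + (1 - specFrac) * lightColor.r,
             specFrac * objectDiffuse.g + (1 - specFrac) * lightColor.g,
             specFrac * objectDiffuse.b + (1 - specFrac) * lightColor.b };
}

A specFrac of 1 then gives the metal-like behavior (specular color is the object's diffuse color), 0 gives the plastic-like behavior (specular color is the light's color), and values in between give the mix described above.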


For the attenuation function f(d), I just retrieve the three attenuation values, sum them, and divide 1 by this sum, right?

No. The attenuation function is f(d) = 1/(a0 + a1*d + a2*d*d), where d is the distance from the point being lit to the light source. The values a0, a1, and a2 are what the PositionalLight::getAttenuation function retrieves.
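In code it's a direct translation of the formula (the function and parameter names here are just for illustration):

// a0, a1, a2: the three values from PositionalLight::getAttenuation.
// d: distance from the point being lit to the light source.
double attenuation(double a0, double a1, double a2, double d)
{
    return 1.0 / (a0 + a1 * d + a2 * d * d);
}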


When it says the normal for Gouraud shading is the average of all the adjacent polygons, does that mean I'm getting the 3 different cross products for the edges of the triangle and then averaging them?

Remember when we talked about vertex normals in class. The idea is that in a smooth curved surface, the normals change smoothly as you move across the surface. With triangles, however, the normal changes abruptly as you move across the edge from one triangle to the next. Imagine two triangles, viewed from the side (so they are sticking into and out of the screen):

    _____
   / 
  /
 /

If you just use the triangle's normal to light each of the points on the triangle (similar to flat shading, just with more points), the point shared by both triangles will (potentially) get very different colors depending on which triangle (and thus which triangle normal) is being drawn.

Vertex normals aim to fix this problem, so that the same normal will be used for that shared point regardless of which triangle is being drawn. But what to use? A reasonable choice is to use something between the normals of each triangle sharing the point - say, the average. It's not that you are taking a single triangle and averaging a bunch of different things to get a vertex normal (which is what your question sounds like) - instead, you average the normals of all of the different triangles which share that point.

There are more efficient solutions, but one way to compute the vertex normal for a vertex is to find out the index in the mesh for that vertex (this is what is passed to TriangleMesh::getPoint as a parameter; the Triangle knows this information), and to run through each triangle in the mesh, including its normal in the average if that triangle shares the vertex point you are currently interested in.
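Here is a brute-force sketch of that loop, written against a stand-in mesh representation rather than the real TriangleMesh/Triangle classes (in the support code you'd use the vertex index that gets passed to TriangleMesh::getPoint and whatever accessors the mesh and triangles actually provide):

#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// Stand-in triangle: the indices of its three vertices plus its face normal.
struct TriData { int v[3]; Vec3 normal; };

Vec3 vertexNormal(const std::vector<TriData>& triangles, int vertexIndex)
{
    Vec3 sum = { 0, 0, 0 };
    for (const TriData& t : triangles) {
        // include this triangle's normal only if it shares the vertex
        if (t.v[0] == vertexIndex || t.v[1] == vertexIndex || t.v[2] == vertexIndex) {
            sum.x += t.normal.x;  sum.y += t.normal.y;  sum.z += t.normal.z;
        }
    }
    // averaging and then re-normalizing is the same as just normalizing the sum
    double len = std::sqrt(sum.x*sum.x + sum.y*sum.y + sum.z*sum.z);
    return { sum.x / len, sum.y / len, sum.z / len };
}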


It seems like there's nothing wrong with my ray generation, but all I get is background color.

Pay attention to the comments for the camera's getProjector method - it is supposed to return a projector with a positive z coordinate (i.e. pointing along the positive z axis). Yet, we want a ray which points into the scene (along the negative z axis in VC). You will need to take this into account, or you won't see much other than background in your rendered scene.
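A minimal sketch of the fix (Vec3 is a stand-in for the renderer's vector class; the real code would reverse whatever vector getProjector hands back):

struct Vec3 { double x, y, z; };   // stand-in vector type

// getProjector returns a vector with positive z, but the primary ray should head into
// the scene (negative z in VC), so reverse the whole vector - flipping only z would
// put an oblique projector onto the wrong line.
Vec3 rayDirection(const Vec3& projector)
{
    return { -projector.x, -projector.y, -projector.z };
}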

