CPSC 324 Fundamentals of Computer Graphics Spring 2006

CPSC 324 Renderer Documentation

The first three projects for this course involve building a renderer for 3D scenes. (Actually, you'll be building several renderers which produce different styles of views.) While you will be filling in the essential core of the renderer, you have been provided with a lot of support code to handle necessary (but less central) parts of the renderer (such as reading scene files, handling the user interface, and providing basic linear algebra functionality). This document describes the provided code. The code is commented, and you should supplement your reading of this document with a perusal of the header files named.

Directory Organization

The renderer code (both the support code and the parts you will be writing) is arranged into the following directories:


The camera directory contains the virtual camera. The virtual camera encapsulates all but the modeling and workstation transformation steps of the 3D viewing pipeline. (The modeling transformation, derived from the scene graph, is stored with each scene object. The workstation transformation is computed when the scene is rendered, because window information is only provided at that point.)

Camera (camera.h) defines an interface for all virtual cameras and implements projection-independent functionality. (Subclasses will provide projection-dependent functionality.) The camera encapsulates all of the parameters and transformations involved in the viewing transformation, projection, and view volume normalization parts of the viewing pipeline.

The first set of methods deal with the camera's position and orientation. All of the values are in world coordinates.

    virtual void setVRP ( const Point4d & vrp );
    virtual Point4d moveVRP ( const Point4d & vrp );
    virtual void setLookAt ( const Point4d & lookat );
    virtual void setVUP ( const Vector4d & vup );
    virtual void setPositionAndOrientation ( const Point4d & vrp,
					     const Point4d & lookat,
					     const Vector4d & vup );

    virtual Point4d getVRP () const;
    virtual Point4d getLookAt () const;
    virtual Vector4d getVPN () const;
    virtual Vector4d getVUP () const;

Of particular note is moveVRP: it moves the VRP to the specified location, updating the look-at point so that the VPN remains unchanged. This is convenient for moving the camera while keeping it pointed in the same direction (which means that the VPN doesn't change and thus the view plane remains the same). Using setVRP instead will cause the camera to move but to continue pointing at the same place (changing the VPN and thus the orientation of the view plane).

The next methods deal with projection. All quantities are in VC.

    virtual void setPRP ( const Point4d & prp );
    virtual void setDOP ( const Vector4d & dop );
    virtual void setProjectionAngles ( double alpha, double phi );

    virtual Point4d getPRP () const;
    virtual Vector4d getDOP () const;
    virtual double getAlpha () const;
    virtual double getPhi () const;

    virtual Vector4d getProjector ( const Point4d & point ) const = 0;

These methods are common to both parallel and perspective projections (though the implementation of getProjector depends on the type of projection). All projections are characterized by the PRP and the DOP vector or the equivalent alpha and phi values. getProjector returns the vector describing a projector which passes through the specified point. Since there are two such vectors (differing only in sign), it should return the vector with a non-negative z component. (Either vector may be returned if the z component is 0; this case does not generally occur, because it means that the projector is parallel to the film plane so no image would be seen.) It may return (0,0,0) as the vector if the point is the same as the PRP.

The third set of methods deal with the specification of the view volume. Since the view volume is defined in view coordinates (i.e. with respect to the camera), all of these values are in view coordinates.

    virtual void setViewPlaneZ ( double viewPlaneZ );
    virtual void setFrontClipZ ( double frontClipZ );
    virtual void setBackClipZ ( double backClipZ );

    virtual void setViewAspectRatio ( double viewAspectRatio );
    virtual void setViewWidth ( double viewWidth );

    virtual double getViewPlaneZ () const;
    virtual double getFrontClipZ () const;
    virtual double getBackClipZ () const;

    virtual double getViewAspectRatio () const;
    virtual double getViewWidth () const;
    virtual double getViewHeight () const;

    virtual Point4d getViewCenter () const;
    virtual vector<Point4d> getViewCorners () const;

The fourth set of methods construct and return individual matrices involved in the viewing pipeline.

    virtual Matrix4d getViewTranslateMatrix () const;
    virtual Matrix4d getViewRotateMatrix () const;
    virtual Matrix4d getProjShearMatrix () const = 0;
    virtual Matrix4d getProjSquareUpMatrix () const = 0;
    virtual Matrix4d getNormTranslateMatrix () const;
    virtual Matrix4d getNormScaleMatrix () const;

The getView*Matrix methods return the translation and rotation parts of the viewing transformation, the getProj*Matrix methods return the shear and square-up parts of the projection, and the getNorm*Matrix methods return the translation and scale parts of the view volume normalization. Note that the getProj*Matrix methods are pure virtual - they must be implemented in subclasses, since they depend on the type of projection.

The following method constructs and returns a portion of the viewing pipeline, from world coordinates up to and including the matrix indicated by the parameter (whose value will be one of the specified constants). This method is used by the "world" window to display intermediate stages of the viewing pipeline.

    virtual Matrix4d getStageMatrix ( ViewStage stage ) const;

    enum ViewStage { STAGE_WORLD,        // WC (no transformations applied)
                     STAGE_VIEW_TRANS,   // translation part of viewing transform
                     STAGE_VIEW,         // VC (complete viewing transform applied)
                     STAGE_PROJ_SHEAR,   // shear part of projection
                     STAGE_PROJ,         // PC (complete projection applied)
                     STAGE_CVV_TRANS,    // translation part of normalization
                     STAGE_CVV };        // in canonical view volume

Finally, the camera provides a method for constructing the entire world-coordinates-to-canonical-view-volume transformation matrix.

    virtual Matrix4d getCameraMatrix () const;

This should return the same matrix as getStageMatrix(STAGE_CVV). It does not include the modeling and workstation transformations.


The gui directory supports the renderer's user interface.

The Drawing class (drawing.h) provides routines for drawing to the screen. Its main purpose is to hide the complexity of actually getting pixels on the screen. All of the methods are static, and draw to the current drawing window. (Even though there are multiple drawing windows in the program, the rendering code never specifies which window to draw into - it merely uses Drawing to draw to the current window, and the user interface handles making the correct window "current".)

Drawing provides routines for setting the current drawing color, and for drawing several kinds of primitives (points, lines, ovals, and text). All coordinates are screen coordinates, though the drawing window interprets (0,0) as the lower left corner of the window instead of the upper left. This is different from most window systems, but it means the viewing pipeline doesn't have to worry about flipping y-coordinates.

    static void setColor ( double red, double green, double blue );
    static void setColor ( const Color & color );

    static void drawPoint ( int x, int y );
    static void drawLine ( int x1, int y1, int x2, int y2 );
    static void fillOval ( int x, int y, int rx, int ry );
    static void drawText ( const string & str, int x, int y );

Drawing commands are buffered (for efficiency), and do not appear on the screen until flushed. The support code flushes the drawing commands when the entire view has been rendered (unless the -flush command-line option is specified, in which case operations are flushed immediately). As a result, you shouldn't need to call Drawing's flushing routines directly - but if you need them, check out drawing.h.

The main program is defined in rendermain.cc. The main program handles setting up the windows and user interface, and processes user interaction (e.g. mouse actions and keyboard presses). The user interface is implemented using the OpenGL graphics package, so don't worry if rendermain doesn't make much sense.


The linalg directory provides the necessary mathematics involving points, vectors, and matrices.

Point4d (linalg.h) defines a point in 3D homogeneous coordinates. Points with an h coordinate of 0 are degenerate and should be avoided. Retrieve and set individual coordinates of a point using [] e.g. p[2] = 4 sets the z coordinate of point p to 4. You can compare two points for equality using ==, and this works correctly for homogeneous coordinates (e.g. comparing (1,2,3,1) and (2,4,6,2) using == will return true, because these points are considered to be the same). The relevant arithmetic operations are defined for points (point+vector, point += vector, point-vector, point -= vector, point-point), and are defined correctly for homogeneous coordinates (e.g. adding the vector (1,1,1) to the point (2,4,6,2) will return a point equivalent to the one obtained by adding the same vector to (1,2,3,1)). Homogenize a point with p.homogenize(). Finally, << is defined so points can print themselves.

Vector4d (linalg.h) defines a 3D vector. Since the h coordinate of a vector is always 0, this is not explicitly stored. Retrieve and set individual components of a vector using [] e.g. v[2] = 3 sets the z component of vector v to 3. You can compare two vectors for equality using == (two vectors are equal if they have the same length and point in the same direction). The relevant arithmetic operations are defined for vectors (-vector, vector+vector, vector += vector, vector-vector, vector -= vector, vector*scalar, vector *= scalar, scalar*vector, vector/scalar, vector /= scalar). Get the length of a vector with v.length() and normalize it with v.normalize(). Free functions dot and cross are provided for dot product and cross product, respectively. Finally, << is defined so vectors can print themselves.

Matrix4d (linalg.h) defines a 4x4 matrix. The default constructor creates the identity matrix, and a second constructor allows a matrix to be constructed with specified entries. Retrieve and set individual components of the matrix using [][] e.g. m[2][3] = 10 sets the element in row 2, column 3 of the matrix to 10. The relevant arithmetic operations are defined for matrices (matrix*point, matrix*vector, matrix*matrix) - of note is that matrix*point returns a homogenized point. Get the transpose of a matrix with m.transpose() and the inverse with m.invert(). There are also three static methods which are provided as a convenience for creating special kinds of matrices: Matrix4d::getTranslateMatrix, Matrix4d::getRotateMatrix, and Matrix4d::getScaleMatrix. Finally, << is defined so matrices can print themselves.


The render directory contains everything relevant to producing an image from a scene description, including support for visible surface determination, lighting models, and the code to actually compute and display the image.

Renderer (renderer.h) defines an interface which must be supported by any class which produces an image of a scene. It defines a single method, render, which renders the scene. Renderer must be subclassed to do any actual work.

Lighting (lighting.h) defines an interface for a lighting model (to determine the illumination for a particular point in the scene) and provides some convenience routines related to the implementation of a lighting model. The illuminate method is what implements the lighting model; it must be overridden in a subclass. The computeViewer, computeLR, and computeV methods handle computing vectors which are typically used in lighting computations. These methods are static since they are not dependent on any particular lighting model. Finally, the enableShadows method allows shadows to be enabled or disabled. ("Shadows enabled" means that objects between the light and the point being lit - which may partially or totally block light from reaching the point being lit - are taken into account.) Note that enableShadows does not automatically turn shadows on and off - it just sets the protected shadows_ instance variable. illuminate must still be implemented to check the value of shadows_ and act accordingly.


The scene directory contains classes relevant to the description and geometry of a scene.

Scene (scene.h) contains the full scene description. The loadScene method reads a scene description from a file into the Scene object. Accessors are provided to retrieve the camera (getCamera), objects (numObjects and getObject), and lights (numLights and getLight). Two additional methods are provided, though they generally don't need to be called directly: initialize handles any precomputation that should be done once per scene (such as computing normal vectors); it is called automatically by loadScene, but must also be called manually if the scene's camera parameters are changed. clear removes all scene elements from the scene; it is also called automatically by loadScene prior to reading the new scene description.

Object (object.h) is the top-level class for all things which contain a modeling transformation and a material. The accessors getTransform and getMaterial are provided for retrieving this information. Object is useful primarily to avoid repeating the instance variables and accessors for the modeling transformation and material, since many classes need to store this information.

SceneObject (sceneobject.h), a subclass of Object, defines the interface for all objects which can be found in a scene (excluding cameras and lights). Six methods must be implemented by all scene objects:

wireframe supports wireframe rendering, tessellate supports polygon pipeline-based renderers, and the remaining methods are used in raytracing.

SceneObject has a number of subclasses which implement specific kinds of objects: Cone (cone.h), Cube (cube.h), Cylinder (cylinder.h), Sphere (sphere.h), and PolyMesh (polymesh.h). (Additional object types may be added.) With the exception of PolyMesh, all of the objects are canonical forms of the object:

Specify transformations in the scene file to change the position, orientation, and/or size of these objects.

Wireframe (wireframe.h) represents a wireframe view of an object. It consists of a collection of Segment objects (wireframe.h) which outline the object, plus the object's material and modeling transform. (Wireframe coordinates are in MC.) The Wireframe class supports addSegment, numSegments, and getSegment methods, plus the material and modeling transform accessors provided by the superclass Object. The Segment's endpoints are retrieved via [] e.g. s[1] returns the second endpoint of the segment. Finally, << is defined for Segment.

TriangleMesh (trimesh.h) is a triangle mesh - a surface made up of triangles. The triangle mesh coordinates are in MC; the mesh is a subclass of Object and includes the material and modeling transform. The mesh provides methods for building the mesh (addPoint and addTriangle) and retrieving mesh geometry (numPoints, getPoint, numTriangles, and getTriangle). Of note is that points within the mesh are assigned an integer index and are referred to using this index (e.g. addTriangle takes three integers as parameters to specify the points of the triangle, instead of three Point4ds). This is partly for efficiency (so that points which are part of several triangles are not stored repeatedly) and partly to deal with precision issues. (Since only a certain number of bits are used to store doubles, only a certain number of decimal places can be stored. This can manifest itself in several ways, such as printing out a value which should be 1 and getting 0.99999999 or getting false for an == comparison when you expect the result to be true.) The mesh supports several additional routines (getNormal, getL, getR, and getV) for computing/retrieving various vectors used in the lighting model. All four methods take an integer index identifying the point for which the vector is to be computed.

The Triangle object (trimesh.h) returned by TriangleMesh's getTriangle supports [] to get the actual MC points of the triangle (e.g. tri[2] gets the third point of the triangle), getCenter and getCenterNormal to retrieve the triangle's center point and normal at that point (both quantities in WC), and getNormal, getL, getR, and getV to retrieve the specified vector for a particular point of the triangle (i.e. point 0, 1, or 2).

Light (light.h) defines a light in the scene. All lights may be on or off (use the accessor on to find out the light's status) and have a color (retrieve with getColor). Subclasses of Light implement specific kinds of lights. AmbientLight defines an ambient light, and has no additional methods. PositionalLight defines a light with a specific position, and has an accessor getPosition for retrieving that position. Positional lights may also have attenuation, and getAttenuation retrieves the constants for the attenuation function. PointLight, a subclass of PositionalLight, defines a point light source which radiates equally in all directions. It adds only constructors to the methods provided by PositionalLight. Finally, << is defined for all kinds of lights.

Material (material.h) defines a material - materials encapsulate all of the properties which affect the appearance of a surface including its color, shininess, transparency, and reflectiveness.

Color (color.h) defines an RGB color whose color components are between 0 and 1. Capping is not performed, so you must ensure that the color components are between 0 and 1 before passing them to the constructor. Retrieve the color components using [] e.g. color[GREEN] is the green color component. Note that the preferred way to access the color components is with RED, GREEN, and BLUE - if you want to use integers instead, you must cast them to RGBComponent e.g. color[(RGBComponent)0] to get the red color component. Finally, << is defined so colors can print themselves.

BoundingVolume (boundvol.h) defines an easy-to-intersect shape (it happens to be a sphere, though this is an implementation detail). Bounding volumes are used to speed up the intersection computations in raytracing and in handling shadows. The bounding volume coordinates are assumed to be in WC, and the ray provided to the isIntersected method must also be in WC.


The util directory contains several useful routines.

defs.h contains the definition for the constant EPSILON. This constant is useful for comparing two doubles for equality. Because of precision issues (a fixed number of bits for storing doubles means a limited number of decimal places can be stored), using == to compare two doubles will often return false when it should return true. Instead, treat two doubles as equal if they are within EPSILON of each other e.g.

  double a, b;
  // a, b are given values and manipulated
  if ( fabs(a-b) < EPSILON ) {           // test if a == b
      // ...
  }

fabs returns the absolute value.

utiltemplates.cc contains template functions, in particular, a template function for retrieving an element from a map object which is const. [] doesn't work for this task due to various technicalities about what is const and what isn't. The short version? Use the mapGet function defined in this file to retrieve elements from a map. Also keep in mind that since mapGet is a template function, you'll need to #include "utiltemplates.cc" in files where you use mapGet (and utiltemplates.cc is not listed in the Makefile).

debug.h defines a debugging mechanism - several debugging flag constants, and a DEBUG routine for printing debugging messages. The purpose of this mechanism is to facilitate turning selected debugging messages on and off without having to remove/comment out the messages. You are encouraged to use this debugging mechanism instead of using cout to print debugging messages. Two steps are required:

See debug.h to find out what debugging flags have been defined.
