CPSC 424 Computer Graphics, Fall 2025
CPSC 424 Exam 2 Review Information
Exam 2 will cover material from weeks 3-7 and labs 4-7: the rest of
the polygon pipeline (shading and lighting, image textures),
generating texture coordinates and procedural textures, and aspects of
photorealistic rendering within the polygon pipeline (bump mapping,
environment mapping, shadow mapping). three.js and animation will not
be on this exam.
The exam will be written (not on the computer), with the focus on
concepts rather than specific syntax details — you won't be
asked to write actual WebGL code, but you should have a reading
knowledge of WebGL and be able to describe or give implementations at
the pseudocode level.
- "Reading knowledge" or "reading level" means that you should be
familiar enough with WebGL (both the JavaScript and shader parts) to
understand and be able to explain or modify code you are given. For
specific WebGL operations (such as gl.drawArrays
or gl.drawElements) you should be able to plug in the right
parameter values for a given situation but you do not need to be
able to write the whole call from memory.
- "Pseudocode level" means that you know the steps involved in the
code if not necessarily the exact syntax. You can lump together
the four statements needed to set up values for shader attributes
into a single "set shader parameter X to Y".
You may have a single page of notes (8.5x11", one side), which will be
handed in with your exam. This page may be handwritten or typed and can
contain whatever you would like, but it must be a hardcopy (on a piece
of paper, not a laptop, tablet, phone, or other device) and must be
personally prepared by you: you may not copy another student's page or
hand out copies of yours to others. Creating your own notes is an
essential part of the learning process. Deciding what to include
requires engagement with the material, which reinforces understanding
and improves long-term retention; preparing the page also provides an
opportunity for review so you can identify gaps in your knowledge in
time to ask questions before the exam, increases confidence in what
you do know, and encourages taking ownership of your own learning.
Specific topics, terms, and concepts you should be familiar with:
- RGB color and the alpha color component
- terminology: color (represents the fraction of incident light reflected by a surface), spectral distribution, RGB insufficiency
- material properties: ambient color, diffuse color, specular color,
shininess (specular exponent)
- kinds of light sources: point, directional, spotlight
- light properties: intensity (ambient, diffuse, specular) +
type-specific properties (position, direction, cutoff angle, spot
exponent)
- the OpenGL lighting equation — you don't
need to be able to produce the standard lighting equation but you
should understand the ideas (e.g. the amount of light hitting the
surface for diffuse reflection decreases as the angle between the
surface normal and the light direction increases) and, seeing the
equation, be able to identify what's what
- its role (specifying how to determine color for a point)
- generally how light and material contribute to color (light
leaving source times fraction reaching surface times fraction
reflected towards viewer, at each wavelength)
- how ambient material color and ambient light contribute to the color
- how light angle and surface orientation (normal) affect
diffuse reflection
- how light angle, surface orientation (normal), and
direction to the viewer affect specular reflection
- the effect of the shininess material property
- for spotlights, the effect of the cutoff angle and spot exponent
- attenuation — what it is, how it affects color
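The ideas above (diffuse falloff with angle, the shininess exponent, attenuation, and the spotlight cutoff angle and spot exponent) can be sketched in plain JavaScript. This is a conceptual sketch of a Blinn-Phong-style per-light contribution, not the exact OpenGL 1.1 equation; all function and parameter names here are illustrative.

```javascript
// Sketch of one light's diffuse + specular contribution for one color channel.
// All direction vectors are unit-length arrays [x, y, z].
const dot = (a, b) => a[0]*b[0] + a[1]*b[1] + a[2]*b[2];

function normalize(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0]/len, v[1]/len, v[2]/len];
}

function lightContribution(N, L, V, matDiffuse, matSpecular, shininess) {
  // Diffuse: falls off as the angle between normal N and light direction L
  // grows (Lambert's cosine law); clamped so back-facing light contributes 0.
  const diffuse = Math.max(dot(N, L), 0) * matDiffuse;
  // Specular: based on the halfway vector between L and the view direction V;
  // the shininess exponent narrows the highlight.
  const H = normalize([L[0]+V[0], L[1]+V[1], L[2]+V[2]]);
  const specular = Math.pow(Math.max(dot(N, H), 0), shininess) * matSpecular;
  return diffuse + specular;
}

// Attenuation: light weakens with distance d via constant, linear, and
// quadratic coefficients.
const attenuation = (d, kc, kl, kq) => 1 / (kc + kl*d + kq*d*d);

// Spotlight factor: zero outside the cutoff cone; inside it, the spot
// exponent controls how sharply intensity falls off toward the cone edge.
const spotFactor = (cosAngle, cosCutoff, spotExp) =>
  cosAngle < cosCutoff ? 0 : Math.pow(cosAngle, spotExp);
```

For example, with the light and viewer directly above a surface, both the diffuse and specular terms are at full strength; as the light moves toward grazing, the diffuse term drops to zero and the highlight shrinks.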
- shading models: flat shading, smooth shading, Phong shading
- face/polygon and vertex normals, how to compute vertex
normals for smooth shading
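One common way to compute a vertex normal for smooth shading is to average the normals of the faces that share the vertex; a minimal sketch (the helper name is illustrative):

```javascript
// Smooth-shading vertex normal: sum the (unit) normals of the faces sharing
// the vertex, then renormalize the sum.
function vertexNormal(faceNormals) {
  let sum = [0, 0, 0];
  for (const n of faceNormals) {
    sum = [sum[0] + n[0], sum[1] + n[1], sum[2] + n[2]];
  }
  const len = Math.hypot(sum[0], sum[1], sum[2]);
  return [sum[0]/len, sum[1]/len, sum[2]/len];
}
```

A vertex shared by two faces with normals [1,0,0] and [0,1,0] ends up with the averaged normal [1/√2, 1/√2, 0], which is what makes the shading vary smoothly across the shared edge.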
- textures
- 2D vs 3D textures, image vs procedural textures, cubemap
textures: what they are, how they differ (in definition,
application, appearance), when to use what
- texture coordinates: what they are for and how they are used,
strategies and techniques for
generating 2D texture coordinates when they are not supplied as
part of the object's geometry, where texture coordinates come
from for 3D textures, what coordinate system is used for texture
coordinates and why
- texture transforms and their effects, including determining
the specific transform(s) needed to achieve a particular effect
- texture repeat modes (clamp to edge, repeat, mirrored
repeat) — what each mode does, what they apply to
- minification and magnification filtering of image textures:
what and why, the options and their pros and cons
- mipmaps — what they are, how they improve
minification filtering
- incorporating textures into lighting (mix, replace, modulate)
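As one example of generating texture coordinates when the model does not supply them, a cylindrical mapping derives (s, t) from a point's position; the repeat-mode helpers below show what each mode does to a coordinate outside [0,1]. This is a sketch only; the choice of axis and the [0,1] range conventions are assumptions.

```javascript
// Cylindrical texture-coordinate generation: s wraps around the y axis from
// the point's angle, t comes from its height. Assumes the object is roughly
// centered on the y axis with y in [yMin, yMax].
function cylindricalTexCoords(p, yMin, yMax) {
  const s = (Math.atan2(p[2], p[0]) + Math.PI) / (2 * Math.PI); // angle -> [0,1]
  const t = (p[1] - yMin) / (yMax - yMin);                      // height -> [0,1]
  return [s, t];
}

// Repeat modes: what each does to a texture coordinate outside [0,1].
const repeat = c => c - Math.floor(c);                 // REPEAT: keep fraction
const clampToEdge = c => Math.min(Math.max(c, 0), 1);  // CLAMP_TO_EDGE
const mirroredRepeat = c => {                          // MIRRORED_REPEAT
  const r = Math.abs(c) % 2;
  return r <= 1 ? r : 2 - r;                           // reflect every other tile
};
```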
- bump mapping
- what it is / what it is used for
- the idea/process for bump mapping (steps for computing and
what they accomplish, how it fits into lighting)
- limitations/shortcomings of bump mapping as a strategy
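The central step of bump mapping, perturbing the normal from a height map before lighting, can be sketched as follows. This is a conceptual tangent-space sketch; the parameter names and the strength factor are illustrative.

```javascript
// Bump mapping sketch: tilt a tangent-space normal using the height map's
// partial derivatives (dh/ds, dh/dt), then renormalize. The perturbed normal
// feeds into the usual lighting equation; the geometry itself is unchanged,
// which is why silhouettes stay smooth (a key limitation).
function perturbNormal(dhds, dhdt, strength) {
  // The unperturbed tangent-space normal is (0, 0, 1).
  const n = [-strength * dhds, -strength * dhdt, 1];
  const len = Math.hypot(n[0], n[1], n[2]);
  return [n[0]/len, n[1]/len, n[2]/len];
}
```

A flat height map (zero derivatives) leaves the normal at (0, 0, 1), so lighting is unchanged; a slope in s tilts the normal against that slope, which is what produces the illusion of surface relief.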
- environment mapping
- skyboxes: what they are
- terminology: refraction and index of refraction
- environment mapping: what it is and what it is used to
accomplish, the idea/process of using environment mapping for
reflections,
limitations/shortcomings of using environment mapping for
reflection and refraction
- dynamic cubemaps: what they are, why they are useful in
environment mapping
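The lookup directions used with an environment map come from the standard reflection and refraction vector formulas (the same ones behind GLSL's built-in reflect and refract); a sketch in plain JavaScript:

```javascript
// Reflection direction for an environment-map lookup:
// R = I - 2 (N·I) N, with I the incident direction and N the unit normal.
function reflect(I, N) {
  const d = I[0]*N[0] + I[1]*N[1] + I[2]*N[2];
  return [I[0] - 2*d*N[0], I[1] - 2*d*N[1], I[2] - 2*d*N[2]];
}

// Refraction direction via Snell's law, with eta the ratio of indices of
// refraction (n1/n2). A negative discriminant means total internal
// reflection: no refracted ray exists.
function refract(I, N, eta) {
  const d = I[0]*N[0] + I[1]*N[1] + I[2]*N[2];
  const k = 1 - eta*eta*(1 - d*d);
  if (k < 0) return [0, 0, 0]; // total internal reflection
  const f = eta*d + Math.sqrt(k);
  return [eta*I[0] - f*N[0], eta*I[1] - f*N[1], eta*I[2] - f*N[2]];
}
```

In an environment-mapping shader these directions index into the cubemap; since the cubemap is a picture of distant surroundings, nearby objects and self-reflections are not captured, which is one of the limitations noted above.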
- shadow mapping
- what it is / what it is used for
- the idea/method for shadow mapping
- limitations/shortcomings of shadow mapping as a strategy
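After the first pass renders depths from the light's point of view, the per-fragment shadow test reduces to a depth comparison; a sketch (the bias value and names are illustrative):

```javascript
// Shadow-map test: a fragment is in shadow if its depth as seen from the
// light is (significantly) greater than the depth the shadow map recorded
// in that direction, i.e. something closer to the light blocked it. The
// small bias guards against "shadow acne" from limited depth precision.
function inShadow(fragmentDepthFromLight, shadowMapDepth, bias = 0.005) {
  return fragmentDepthFromLight - bias > shadowMapDepth;
}
```

Too small a bias brings back acne; too large a bias detaches shadows from their casters ("peter panning"), which is why the value needs tuning per scene.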
- working with the WebGL programmable pipeline
- flow of data
- shaders: what computations are done (viewing pipeline,
lighting, texture transforms, application of textures,
bump mapping, ...), where (vertex or fragment shader), and
why
- materials, textures, lighting and shading in OpenGL 1.1 and their implementation in WebGL
- what it means for a light position to be in object
coordinates, world coordinates, eye coordinates and how to
achieve each
- how the lighting equation is used in OpenGL 1.1 (to
determine vertex colors), where it is implemented in WebGL
- implementing flat, smooth, and Phong shading in OpenGL 1.1 and WebGL
- implementing shaders: where the lighting equation is
computed (for flat, smooth, and Phong shading), where textures
are applied
- texture repeat modes (clamp to edge, repeat, mirrored repeat)
- minification and magnification filtering of image textures,
the options, the issues
- mipmaps — what they are, how and why they are used for
improved minification filtering
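Whether a light position is fixed in world coordinates or moves with the viewer comes down to which matrix, if any, transforms it before lighting. A sketch of the relevant point transform, using the column-major 4x4 matrix layout that WebGL uniforms use (names are illustrative):

```javascript
// A light position given in world coordinates is multiplied by the view
// (world-to-eye) matrix before being sent to the shader, so lighting can be
// done in eye coordinates; a position meant to be fixed relative to the
// viewer (e.g. a headlamp) is sent in eye coordinates as-is.
// m is a 4x4 matrix in column-major order; p is a homogeneous point [x,y,z,w].
function transformPoint(m, p) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * p[col];
    }
  }
  return out;
}
```

With the identity as the view matrix, eye and world coordinates coincide and the position passes through unchanged; a view matrix that translates by (0, 0, -5) moves a world-space light 5 units down the eye-space z axis, as expected.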
- WebGL implementation of photorealistic extensions
- bump mapping
- what coordinate system(s) computations are done in and
why
- implementation in WebGL
(where the steps for applying bump mapping are written)
- environment mapping
- what coordinate system(s) computations are done
in and why
- implementation in WebGL (the steps to render a
scene with reflective objects)
- dynamic cubemaps
- implementation in WebGL (procedure for generating
a dynamic cubemap)
- framebuffers and renderbuffers
- what they are
- their role
in generating dynamic cubemaps in WebGL
- shadow mapping
- what coordinate system(s) computations
are done in and why
- implementation in WebGL (steps needed,
things to be aware of e.g. appropriate size of view volume,
shadow map size)
- artifacts (shadow acne, peter panning,
blocky hard-edged shadows) — what they are and why they
arise, strategies for reducing/mitigating