CS424 Notes, 2 Mar 2012
- gl-matrix.js
- We begin with some basic information on gl-matrix.js from last Wednesday's notes.
- Some characteristics of functions in gl-matrix.js:
- The translation, scaling, and rotation functions take an array of three numbers as a parameter rather than separate parameters for the x, y, and z components. For example: mat4.translate(matrix,[2,3,4])
- Functions that operate on vectors and matrices have an optional final parameter that gives the destination for the result of the operation. If that parameter is not given, then the operation modifies the first input parameter. If a destination is given, the input parameter is not modified. For example: mat4.translate(matrix,[2,3,4]) modifies matrix by multiplying it on the right by a translation, while mat4.translate(matrix,[2,3,4],destMatrix) does not change matrix and stores the product of matrix and the translation into destMatrix.
- The return value of most functions is the destination of the operation (which is also one of the inputs to the function). If a destination is specified in the function call, then that destination is returned (containing the result of the operation). If no destination is specified, then the modified input object is returned (again, containing the result of the operation).
- In the documentation, parameter types are referred to as vec3, vec4, mat3, and mat4, which can be confusing because these are not names of types! Here, vec3 means an array of three numbers, vec4 means an array of four numbers, mat3 means an array of nine numbers representing a 3-by-3 matrix, and mat4 means an array of 16 numbers representing a 4-by-4 matrix.
- For rotation, gl-matrix measures angles in radians. Degrees can be converted to radians
using the following function (which is NOT predefined):
function toRadians(degrees) { return (degrees/180 * Math.PI); }
- In addition to mat4.rotate(mat,angle,axis), gl-matrix defines functions for rotations about the x, y, and z axes: mat4.rotateX(mat,angle), mat4.rotateY(mat,angle), and mat4.rotateZ(mat,angle). (All of these take an optional additional parameter for the destination of the operation.) A short sketch illustrating these conventions follows this list.
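- For example, here is a short sketch of these conventions (the variable names are just for illustration, and toRadians is the helper function defined above):
var matrix = mat4.identity();     // a new 4-by-4 identity matrix
var product = mat4.identity();    // another matrix, to be used as a destination
mat4.translate(matrix, [2,3,4]);  // no destination: matrix itself is modified (and returned)
mat4.rotateY(matrix, toRadians(90), product);  // destination given: matrix is unchanged;
                                               //   product holds matrix times the rotation
var result = mat4.translate(matrix, [1,0,0]);  // the return value is the destination -- here matrix itself,
                                               //   so result and matrix refer to the same object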
- The ModelView Transform
- A modeling transformation transforms object coordinates into world coordinates. It corresponds to positioning an object in the world.
- A viewing transformation transforms world coordinates into "eye coordinates", that is, a coordinate system in which the viewer is at the origin and is looking in the direction of the negative z-axis. The viewing transform corresponds to positioning the viewer -- or camera -- into the world.
- However, there is no principled distinction between modeling and viewing transformations. In the fixed-function pipeline, OpenGL made no distinction between them and combined them both into a single "ModelView Matrix." It's common to do the same in WebGL.
- Conceptually, though, we first position the camera (i.e. specify the viewing transformation), then
draw the scene, applying modeling transformations to the objects in the scene.
- At the start, the camera/viewer is positioned at the origin, pointing in the negative z direction, and the modelview matrix is the identity.
- Then the viewing transform is specified as a series of transforms applied to the modelview matrix. These transforms are applied to the camera/viewer in the order that they are specified in the code, but the effect on the camera is the inverse of the specified matrix operation. For example, mat4.translate(modelview,[5,0,10]) translates the camera by [-5,0,-10].
- Now the modeling transform is specified as another series of transforms applied to the modelview matrix. These transforms are applied to the objects in the scene in the order opposite to the order in which they are specified in the code.
- Finally, the objects are drawn, using object coordinates. The modelview matrix transforms these coordinates directly into eye coordinates, combining the effect of the modeling transform and the viewing transform.
- Lighting calculations are done in eye coordinates, that is, after applying the modelview transformation. This is when the actual visible colors are computed. The fact that lighting calculations have to be done at this point is probably the reason that the modelview transform is kept separate from the transformations that follow it (which ultimately produce screen coordinates).
- As an example, consider the following series of operations, implemented with the gl-matrix
library:
var modelview = mat4.identity();      // start with identity matrix (camera at [0,0,0])
mat4.translate(modelview, [0,0,-10]);
mat4.rotateY(modelview, Math.PI/2);   // 90 degrees
// draw a 2-by-2 cube centered at the origin
There are three different ways that we can interpret this simple example:
- The viewing transform is the identity; the camera is at (0,0,0) looking in the negative z direction. The cube is first rotated by 90 degrees about the y-axis, so that what was originally its left side is facing front. Then the cube is translated by -10 in the z-direction, so that it is at (0,0,-10). The camera looks directly at the left side of the cube, 10 units in front of the camera position.
- mat4.translate(modelview,[0,0,-10]) is the viewing transform. It translates the camera by (0,0,10), since the effect is the inverse of the specified transform. So, the camera is at (0,0,10), looking in the negative z direction -- that is, towards (0,0,0). The rotation is the modeling transform; it rotates the cube so that its left face is now pointed in the positive z direction. The cube is still at (0,0,0), since the translation is not part of the modeling transform. With the camera at (0,0,10) and the rotated cube at (0,0,0), the camera looks directly at the left side of the cube, 10 units in front of the camera position.
- The translation and rotation are the viewing transform, and there is no modeling transform. The cube is sitting at (0,0,0) in its usual orientation (with its left side facing left, in the direction of the negative x-axis). The camera starts at (0,0,0), facing in the negative z-direction. It is first translated to (0,0,10). Then the rotation rotates the camera by -Math.PI/2 radians about the y-axis. (Remember the thing about inverses and the view transform!) This puts the camera on the negative x-axis, still looking towards the origin. From this position, it sees the left side of the cube. So, the camera looks directly at the left side of the cube, 10 units in front of the camera position.
Note that whichever way we interpret it, the view from the camera looks exactly the same!
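- To make the viewing/modeling distinction concrete for a scene with more than one object, here is a small sketch along the same lines (drawCube and drawSphere are hypothetical drawing routines, and the modelview matrix is simply rebuilt for each object so that one object's modeling transform does not affect the next):
var modelview;
// First object: viewing transform, then its modeling transform.
modelview = mat4.identity();            // camera at (0,0,0), looking down the negative z-axis
mat4.translate(modelview, [0,0,-10]);   // viewing transform: in effect, the camera moves to (0,0,10)
mat4.translate(modelview, [2,0,0]);     // modeling transform: place this object at (2,0,0)
drawCube(modelview);                    // (hypothetical) draw the first object using object coordinates
// Second object: same viewing transform, a different modeling transform.
modelview = mat4.identity();
mat4.translate(modelview, [0,0,-10]);   // same viewing transform as above
mat4.rotateY(modelview, toRadians(45)); // modeling transform: rotate this object 45 degrees about the y-axis
drawSphere(modelview);                  // (hypothetical) draw the second object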
- mat4.lookAt
- If you think of placing a camera in the world, you can specify its placement by: (1) the point where it is located; (2) a point that it is pointing at; and (3) a direction that points upwards in the camera's view. These three things completely determine a viewing transform.
- To set up a viewing transform of this type, gl-matrix.js defines a function
mat4.lookAt( eye, center, up );
where eye, center, and up are arrays of three numbers. This creates a view transformation matrix that corresponds to putting the camera/viewer at eye, looking at center, and with up pointing upwards in the camera's view. (The points eye and center should be different, and up should not point along the direction from eye to center; up doesn't have to be perpendicular to that direction, but only the component of up that is perpendicular to that direction matters.)
- This function could be used as follows:
var modelview = mat4.lookAt( [5,5,15], [0,0,0], [0,1,0] );
// Apply modeling transform to modelview
// Draw objects using object coordinates
In this case, we start with a viewing transform that puts the camera at [5,5,15], looking towards the origin, with the y-axis pointing upwards in the camera's view. Modeling transforms for the objects in the scene are then applied to this viewing transform to produce the combined modelview matrix.
- Standard OpenGL has no "lookAt" function, but the corresponding function is defined in a standard library of OpenGL utility functions. The name of the library is GLU, and the name of the corresponding function is gluLookAt.
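- As a further illustration of mat4.lookAt, the translate/rotate example from earlier could be set up this way instead. The following sketch produces the same modelview matrix, since a camera at [0,0,10] looking at the origin with the y-axis up corresponds exactly to the translation by [0,0,-10]:
var modelview = mat4.lookAt( [0,0,10], [0,0,0], [0,1,0] );  // viewing transform: camera at (0,0,10),
                                                            //   looking at the origin, y-axis up
mat4.rotateY(modelview, Math.PI/2);   // modeling transform: rotate the cube 90 degrees about the y-axis
// draw the 2-by-2 cube centered at the origin, as before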