
Normals and Textures

OpenGL associates several quantities with every vertex that is generated. These quantities are called attributes of the vertex. One attribute is color. It is possible to assign a different color to every vertex in a polygon. The color that is assigned to a vertex is the current drawing color at the time the vertex is generated, which can be set, for example, with glColor3f. (This color is used only if lighting is off or if the GL_COLOR_MATERIAL option is on.) The drawing color can be changed between calls to glBegin and glEnd. For example:

```
gl.glBegin(GL.GL_POLYGON);
gl.glColor3f(1,0,0);
gl.glVertex2d(-0.5,-0.5);  // The color associated with this vertex is red.
gl.glColor3f(0,1,0);
gl.glVertex2d(0.5,-0.5);   // The color associated with this vertex is green.
gl.glColor3f(0,0,1);
gl.glVertex2d(0,1);        // The color associated with this vertex is blue.
gl.glEnd();
```

Assuming that the shade model has been set to GL_SMOOTH, each vertex of this triangle will be a different color, and the colors will be smoothly interpolated to the interior of the triangle.
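To make the interpolation concrete, a point in the interior of the triangle gets a color that is a weighted average of the three vertex colors, with weights that depend on the point's position. Here is a small plain-Java sketch of the idea (the class and method names are my own invention, not part of OpenGL or Jogl):

```
public class ColorInterp {
    // Interpolate three RGB vertex colors with barycentric weights a, b, c
    // (a + b + c == 1).  This mimics what GL_SMOOTH shading does per pixel.
    static float[] interpolate(float[] c1, float[] c2, float[] c3,
                               float a, float b, float c) {
        float[] result = new float[3];
        for (int i = 0; i < 3; i++) {
            result[i] = a * c1[i] + b * c2[i] + c * c3[i];
        }
        return result;
    }

    public static void main(String[] args) {
        float[] red   = {1, 0, 0};  // color at the first vertex
        float[] green = {0, 1, 0};  // color at the second vertex
        float[] blue  = {0, 0, 1};  // color at the third vertex
        // The centroid of the triangle gets equal weights 1/3, 1/3, 1/3:
        float[] center = interpolate(red, green, blue, 1/3f, 1/3f, 1/3f);
        System.out.printf("%.3f %.3f %.3f%n", center[0], center[1], center[2]);
        // prints 0.333 0.333 0.333 -- an equal mix of the three colors
    }
}
```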

In this section, we will look at two other important attributes that can be associated with a vertex. One of them, the normal vector, is essential to lighting calculations. The other, the texture coordinates, is used when applying texture images to surfaces.

2.4.1  Introduction to Normal Vectors

The visual effect of a light shining on a surface depends on the properties of the surface and of the light. But it also depends to a great extent on the angle at which the light strikes the surface. That's why a curved, lit surface looks different at different points, even if its surface is a uniform color. To calculate this angle, OpenGL needs to know the direction in which the surface is facing. That direction can be specified by a vector that is perpendicular to the surface. A vector can be visualized as an arrow. It has a direction and a length. It doesn't have a particular location, so you can visualize the arrow as being positioned anywhere you like, such as sticking out of a particular point on a surface. A vector in three dimensions is given by three numbers that specify the change in x, the change in y, and the change in z along the vector. If you position the vector (x,y,z) with its start point at the origin, (0,0,0), then the end point of the vector is at the point with coordinates (x,y,z).

Another word for "perpendicular" is "normal," and a vector that is perpendicular to a surface at a given point is called a normal to that surface. When used in lighting calculations, a normal vector must have length equal to one. A normal vector of length one is called a unit normal. For proper lighting calculations in OpenGL, a unit normal must be specified for each vertex. (Actually, if you turn on the option GL_NORMALIZE, then you can specify normal vectors of any length, and OpenGL will convert them to unit normals for you; see Subsection 2.2.2.)
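Turning an arbitrary nonzero normal vector into a unit normal is a simple computation, and it is the same one that GL_NORMALIZE performs for you: divide each component by the vector's length. A plain-Java sketch (the class and method names are mine, not part of OpenGL):

```
public class Normalize {
    // Convert an arbitrary nonzero vector to a unit vector by dividing
    // each component by the vector's length.  This is essentially what
    // OpenGL does when the GL_NORMALIZE option is enabled.
    static double[] normalize(double x, double y, double z) {
        double length = Math.sqrt(x*x + y*y + z*z);
        return new double[] { x/length, y/length, z/length };
    }

    public static void main(String[] args) {
        double[] n = normalize(0, 3, 4);  // a vector of length 5
        System.out.println(n[0] + " " + n[1] + " " + n[2]);  // 0.0 0.6 0.8
    }
}
```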

Just as OpenGL keeps track of a current drawing color, it keeps track of a current normal vector, which is part of the OpenGL state. When a vertex is generated, the value of the current normal vector is copied and associated with that vertex as an attribute. The current normal vector can be set by calling gl.glNormal3f(x,y,z) or gl.glNormal3d(x,y,z). This can be done at any time, including between calls to glBegin and glEnd. This means that it's possible for different vertices of a polygon to have different associated normal vectors.

Now, you might be asking yourself, "Don't all the normal vectors to a polygon point in the same direction?" After all, a polygon is flat; the perpendicular direction to the polygon doesn't change from point to point. This is true, and if your objective is to display a polyhedral object whose sides are flat polygons, then in fact, all the normals of one of those polygons should point in the same direction. On the other hand, polyhedra are often used to approximate curved surfaces such as spheres. If your real objective is to make something that looks like a curved surface, then you want to use normal vectors that are perpendicular to the actual surface, not to the polyhedron that approximates it. Take a look at this example:

The two objects in this picture are made up of bands of rectangles. The two objects have exactly the same geometry, yet they look quite different. This is because different normal vectors are used in each case. For the top object, I was using the band of rectangles to approximate a smooth surface (part of a cylinder, in fact). The vertices of the rectangles are points on that surface, and I really didn't want to see the rectangles at all -- I wanted to see the curved surface, or at least a good approximation. So for the top object, when I specified a normal vector at one of the vertices, I used a vector that is perpendicular to the surface rather than one perpendicular to the rectangle. For the object on the bottom, on the other hand, I was thinking of an object that really is a band of rectangles, and I used normal vectors that were actually perpendicular to the rectangles. Here's a two-dimensional illustration that shows the normal vectors:

The thick blue lines represent the rectangles. Imagine that you are looking at them edge-on. The arrows represent the normal vectors. Two normal vectors are shown for each rectangle, one on each end.

In the bottom half of this illustration, the vectors are actually perpendicular to the rectangles. There is an abrupt change in direction as you move from one rectangle to the next, so where one rectangle meets the next, the normal vectors to the two rectangles are different. The visual effect on the rendered image is an abrupt change in shading that is perceived as a corner or edge between the two rectangles.

In the top half, on the other hand, the vectors are perpendicular to a curved surface that passes through the endpoints of the rectangles. When two rectangles share a vertex, they also share the same normal at that vertex. Visually, this eliminates the abrupt change in shading, resulting in something that looks more like a smoothly curving surface.
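The difference between the two choices can be made concrete with a little arithmetic. For a cylinder about the z-axis, the normal to the true curved surface at angle theta around the circle is (cos(theta), sin(theta), 0), while the flat normal of an approximating rectangle points along the bisector of the angles that the rectangle spans. A plain-Java sketch (the class and method names are my own, not part of any OpenGL API):

```
public class CylinderNormals {
    // For a cylinder about the z-axis, the normal to the true curved
    // surface at angle theta is (cos theta, sin theta, 0) -- the same
    // vector no matter which approximating rectangle the vertex is on.
    static double[] smoothNormal(double theta) {
        return new double[] { Math.cos(theta), Math.sin(theta), 0 };
    }

    // The flat normal of the rectangle spanning angles theta1..theta2
    // points along the bisector of the two angles.
    static double[] flatNormal(double theta1, double theta2) {
        double mid = (theta1 + theta2) / 2;
        return new double[] { Math.cos(mid), Math.sin(mid), 0 };
    }

    public static void main(String[] args) {
        // A vertex shared by two adjacent rectangles gets ONE smooth
        // normal, but two DIFFERENT flat normals, one per rectangle:
        double[] shared = smoothNormal(Math.PI/4);
        double[] left   = flatNormal(0, Math.PI/4);
        double[] right  = flatNormal(Math.PI/4, Math.PI/2);
        System.out.println(shared[0] + " vs " + left[0] + " and " + right[0]);
    }
}
```

The mismatch between the two flat normals at a shared vertex is exactly what produces the visible edge in the bottom object.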

The upshot of this is that in OpenGL, a normal vector at a vertex is whatever you say it is, and it does not have to be literally perpendicular to your polygons. The normal vector that you choose should depend on the object that you are trying to model.

There is one other issue in choosing normal vectors: There are always two possible unit normal vectors at a vertex, pointing in opposite directions. Recall that a polygon has two faces, a front face and a back face, which are distinguished by the order in which the vertices are generated. (See Section 2.2.) A normal vector should point out of the front face of the polygon. That is, when you are looking at the front face of a polygon, the normal vector should be pointing towards you. If you are looking at the back face, the normal vector should be pointing away from you.

Unfortunately, it's not always easy to compute normal vectors, and it can involve some non-trivial math. I'll have more to say about that in Chapter 4. For now, you should at least understand that the solid surfaces that are defined in the GLUT library come with the correct normal vectors already built-in. So do the shape classes in my glutil package.
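For a flat polygon, though, the standard computation is short: take the cross product of two edge vectors and normalize the result. Here is a plain-Java sketch of the idea (the class and method names are mine; Chapter 4 covers the math in more detail):

```
public class TriangleNormal {
    // Compute a unit normal for the triangle with vertices p, q, r by
    // taking the cross product of two edge vectors and normalizing.
    // With the vertices in counterclockwise order, the normal points
    // out of the front face.
    static double[] normal(double[] p, double[] q, double[] r) {
        double ux = q[0]-p[0], uy = q[1]-p[1], uz = q[2]-p[2];  // edge p->q
        double vx = r[0]-p[0], vy = r[1]-p[1], vz = r[2]-p[2];  // edge p->r
        double nx = uy*vz - uz*vy;   // cross product u x v
        double ny = uz*vx - ux*vz;
        double nz = ux*vy - uy*vx;
        double len = Math.sqrt(nx*nx + ny*ny + nz*nz);
        return new double[] { nx/len, ny/len, nz/len };
    }

    public static void main(String[] args) {
        // A triangle in the xy-plane, counterclockwise as seen from +z:
        double[] n = normal(new double[]{0,0,0}, new double[]{1,0,0},
                            new double[]{0,1,0});
        System.out.println(n[0] + " " + n[1] + " " + n[2]);  // 0.0 0.0 1.0
    }
}
```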

We can look at one simple case of supplying normal vectors by hand: drawing a cube. Let's say that we want to draw a cube centered at the origin, where the length of each side of the cube is one. Let's start with a method to draw the face of the cube that is perpendicular to the z-axis with center at (0,0,0.5). The unit normal to that face is (0,0,1), which points directly out of the screen along the z-axis:

```
private static void drawFace1(GL gl) {
    gl.glBegin(GL.GL_POLYGON);

    gl.glNormal3d(0,0,1);  // Unit normal vector, which applies to all vertices.

    gl.glVertex3d(-0.5, -0.5, 0.5);  // The vertices, in counterclockwise order.
    gl.glVertex3d(0.5, -0.5, 0.5);
    gl.glVertex3d(0.5, 0.5, 0.5);
    gl.glVertex3d(-0.5, 0.5, 0.5);

    gl.glEnd();
}
```

We could draw the other faces similarly. For example, the bottom face, which has normal vector (0,−1,0), could be created with

```gl.glBegin(GL.GL_POLYGON);
gl.glNormal3d(0, -1, 0);
gl.glVertex3d(-0.5, -0.5, 0.5);
gl.glVertex3d(0.5, -0.5, 0.5);
gl.glVertex3d(0.5, -0.5, -0.5);
gl.glVertex3d(-0.5, -0.5, -0.5);
gl.glEnd();```

However, getting all the vertices correct and in the right order is by no means trivial. Another approach is to use the method for drawing the front face for each of the six faces, with appropriate rotations to move the front face into the desired position:

```
public static void drawUnitCube(GL gl) {

    drawFace1(gl);  // the front face

    gl.glPushMatrix();  // the bottom face
    gl.glRotated( 90, 1, 0, 0);  // rotate 90 degrees about the x-axis
    drawFace1(gl);
    gl.glPopMatrix();

    gl.glPushMatrix();  // the back face
    gl.glRotated( 180, 1, 0, 0);  // rotate 180 degrees about the x-axis
    drawFace1(gl);
    gl.glPopMatrix();

    gl.glPushMatrix();  // the top face
    gl.glRotated( -90, 1, 0, 0);  // rotate -90 degrees about the x-axis
    drawFace1(gl);
    gl.glPopMatrix();

    gl.glPushMatrix();  // the right face
    gl.glRotated( 90, 0, 1, 0);  // rotate 90 degrees about the y-axis
    drawFace1(gl);
    gl.glPopMatrix();

    gl.glPushMatrix();  // the left face
    gl.glRotated( -90, 0, 1, 0);  // rotate -90 degrees about the y-axis
    drawFace1(gl);
    gl.glPopMatrix();

}
```

Whether this looks easier to you might depend on how comfortable you are with transformations, but it really does require a lot less work!
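You can check that the rotations really do carry the normal vector along with the face. Rotating the front-face normal (0,0,1) by 90 degrees about the x-axis, for example, should produce the bottom-face normal (0,−1,0). A plain-Java sketch of that check (the class and method names are mine; the formula uses the same right-handed convention as glRotated):

```
public class RotateNormal {
    // Rotate a vector by the given angle (in degrees) about the x-axis,
    // using the same right-handed convention as glRotated(angle, 1, 0, 0).
    static double[] rotateX(double[] v, double degrees) {
        double r = Math.toRadians(degrees);
        double c = Math.cos(r), s = Math.sin(r);
        return new double[] { v[0], c*v[1] - s*v[2], s*v[1] + c*v[2] };
    }

    public static void main(String[] args) {
        // The front-face normal (0,0,1), rotated 90 degrees about the
        // x-axis as in drawUnitCube, becomes the bottom-face normal:
        double[] n = rotateX(new double[]{0, 0, 1}, 90);
        System.out.printf("(%.0f, %.0f, %.0f)%n", n[0], n[1], n[2]);
        // prints (0, -1, 0)
    }
}
```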

2.4.2  Introduction to Textures

The 3D objects that we have created so far look nice enough, but they are a little bland. Their uniform colors don't have the visual appeal of, say, a brick wall or a plaid couch. Three-dimensional objects can be made to look more interesting and more realistic by adding a texture to their surfaces. A texture -- or at least the kind of texture that we consider here -- is a 2D image that can be applied to the surface of a 3D object. Here is an applet that shows six objects with various textures:

(Topographical Earth image, courtesy NASA/JPL-Caltech. Brick and metal textures from http://www.grsites.com/archive/textures/. EarthAtNight image taken from the Astronomy Picture of the Day web site; it is also a NASA/JPL image. Some nicer planetary images for use on a sphere can be found at http://evildrganymede.net/art/maps.htm and http://planetpixelemporium.com/earth.html; they are free for you to use, but I can't distribute them on my web site. The textures used in this program are in the folder named textures in the source directory.)

In this applet, you can rotate each individual object by dragging the mouse on it. The rotation makes it look even more realistic. The source code for the program can be found in TextureDemo.java.

Textures are one of the more complex areas in the OpenGL API, both because of the number of options involved and because of the new features that have been introduced in various versions. The Jogl API has a few classes that make textures easier to use. For now, we will work mostly with the Jogl classes, and we will cover only a few of the many options. You should understand that the Jogl texture classes are not themselves part of the OpenGL API.

To use a texture in OpenGL, you first of all need an image. In Jogl, the Texture class represents 2D images that can be used as textures. (This class and other texture-related classes are defined in the package com.sun.opengl.util.texture.) To create a Texture object, you can use the helper class TextureIO, which contains several static methods for reading images from various sources. For example, if img is a BufferedImage object, you can create a Texture from that image with

`Texture texture = TextureIO.newTexture( img, true );`

The second parameter is a boolean value that you don't need to understand just now. (It has to do with optimizing the texture to work on objects of different sizes.)

This means that you can create a BufferedImage any way you like, even by creating the image from scratch using Java's drawing commands, and then use that image as a texture in OpenGL. More likely, you will want to read the image from a file or from a program resource. TextureIO has methods that can do that automatically. For example, if file is of type java.io.File and represents a file that contains an image, you can create a texture from that image with

`Texture texture = TextureIO.newTexture( file, true );`

The case of reading an image from a resource is probably more common. A resource is, more or less, a file that is part of your program. It might be packaged into the same jar file as the rest of your program, for example. I can't give you a full tutorial on using resources here, but the idea is to place the image files that you want to include as resources in some package (that is, folder) in your project. Let's say that you store the images in a package named textures, and let's say that "brick001.jpg" is the name of one of the image files in that package. Then, to retrieve and use the image as a texture, you can use the following code in any instance method in your program:

```
Texture texture;  // Most likely, an instance variable.
try {
    // URL class from package java.net; getResource locates the
    // image file in the textures package on the classpath.
    URL textureURL = getClass().getClassLoader()
                           .getResource("textures/brick001.jpg");
    texture = TextureIO.newTexture(textureURL, true, "jpg");
         // The third param above is the extension from the file name;
         // "jpg", "png", and probably "gif" files should be OK.
}
catch (Exception e) {
    // Won't get here if the file is properly packaged with the program!
    e.printStackTrace();
}
```

All this makes it reasonably easy to prepare images for use as textures. But there are a couple of issues. First of all, in the original versions of OpenGL, the width and height of texture images had to be powers of two, such as 128, 256, 512, or 1024. Although this limitation has been eliminated in more recent versions, it's probably still a good idea to resize your texture images, if necessary, so that their dimensions are powers of two. Another issue is that Java and OpenGL have different ideas about where the origin of an image should be. Java puts it at the top left corner, while OpenGL puts it at the bottom left. This means that images loaded by Java might be upside down when used as textures in OpenGL. The Texture class has a way of dealing with this, but it complicates things. For now, it's easy enough to flip an image vertically if necessary. Jogl even provides a way of doing this for a BufferedImage. Here's some alternative code that you can use to load an image resource as a texture, if you want to flip the image:

```
Texture texture;
try {
    URL textureURL = getClass().getClassLoader()
                           .getResource("textures/brick001.jpg");
    BufferedImage img = ImageIO.read(textureURL);  // javax.imageio.ImageIO
    ImageUtil.flipImageVertically(img);  // from com.sun.opengl.util
    texture = TextureIO.newTexture(img, true);
}
catch (Exception e) {
    e.printStackTrace();
}
```

We move on now to the question of how to actually use textures in a program. First, note that textures work best if the material of the textured object is pure white, since the colors from the texture are actually multiplied by the material color; if the material color is not white, the texture will be "tinted" with it. To get white-colored objects, you can use glColor3f and the GL_COLOR_MATERIAL option, as discussed in Subsection 2.2.2.

Once you have a Texture object, three things are necessary to apply that texture to an object: The object must have texture coordinates that determine how the image will be mapped onto the object. You must enable texturing, or else textures are simply ignored. And you have to specify which texture is to be used.

Two of these are easy. To enable texturing with a 2D image, you just have to call the method

`texture.enable();`

where texture is any object of type Texture. It's important to understand that this is not selecting that particular texture for use; it's just turning on texturing in general. (This is true at least as long as you stick to texture images whose dimensions are powers of two. In that case, it doesn't even matter which Texture object you use to call the enable method.) To turn texturing off, call texture.disable(). If you are planning to texture all the objects in your scene, you could enable texturing once, in the init method, and leave it on. Otherwise, you can enable and disable texturing as needed.

To get texturing to work, you must also specify that some particular texture is to be used. This is called binding the texture, and you can do it by calling

`texture.bind();`

where texture is the Texture object that you want to use. That texture will be used until you bind a different texture, but of course only when texturing is enabled. If you're just using one texture, you could bind it in the init method. It will be used whenever texturing is enabled. If you are using several textures, you can bind different textures in different parts of the display method.

Just remember: Call texture.enable() and texture.bind() before drawing the object that you want to texture. Call texture.disable() to turn texturing off. Call texture2.bind(), if you want to switch to using a different texture, texture2.

That leaves us with the problem of texture coordinates for objects to which you want to apply textures. A texture image comes with its own 2D coordinate system. Traditionally, s is used for the horizontal coordinate on the image and t is used for the vertical coordinate. s is a real-number coordinate that ranges from 0 on the left of the image to 1 on the right, while t ranges from 0 at the bottom to 1 at the top. Values of s or t outside of the range 0 to 1 are not inside the image.

To apply a texture to a polygon, you have to say which point in the texture should correspond to each vertex of the polygon. For example, suppose that we want to apply part of the EarthAtNight image to a triangle. Here's the area in the image that I want to map onto the triangle, shown outlined in thick orange lines:

The vertices of this area have (s,t) coordinates (0.3,0.05), (0.45,0.6), and (0.25,0.7). These coordinates in the image, expressed in terms of s and t, are called texture coordinates. When I generate the vertices of the triangle that I want to draw, I have to specify the corresponding texture coordinate in the image. This is done by calling gl.glTexCoord2d(s,t). (Or you can use glTexCoord2f.) You can call this method just before generating the vertex. Usually, every vertex of a polygon will have different texture coordinates. To draw the triangle in this case, I could say:

```
gl.glBegin(GL.GL_POLYGON);
gl.glNormal3d(0,0,1);
gl.glTexCoord2d(0.3,0.05);  // Texture coords for vertex (0,0)
gl.glVertex2d(0,0);
gl.glTexCoord2d(0.45,0.6);  // Texture coords for vertex (0,1)
gl.glVertex2d(0,1);
gl.glTexCoord2d(0.25,0.7);  // Texture coords for vertex (1,0)
gl.glVertex2d(1,0);
gl.glEnd();
```

Note that there is no particular relationship between the (x,y) coordinates of a vertex, which give its position in space, and the (s,t) texture coordinates associated with the vertex, which tell what point in the image is mapped to the vertex. In fact, in this case, the triangle that I am drawing has a different shape from the triangular area in the image, and that piece of the image will have to be stretched and distorted to fit.

Note that texture coordinates are attributes of vertices. Like other vertex attributes, values are specified only at the vertices. OpenGL will interpolate the values between vertices to calculate texture coordinates for points inside the polygon.
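To see what this interpolation means numerically, consider a point halfway along one edge of the triangle from the previous example: its texture coordinates are halfway between those of the edge's endpoints. A plain-Java sketch (the class and method names are mine):

```
public class TexCoordInterp {
    // Texture coordinates are interpolated between vertices just like
    // colors.  For a point a fraction f of the way from one vertex to
    // another, the (s,t) coordinates are the same fraction of the way
    // between the two vertices' texture coordinates.
    static double[] lerp(double[] st1, double[] st2, double f) {
        return new double[] { st1[0] + f*(st2[0]-st1[0]),
                              st1[1] + f*(st2[1]-st1[1]) };
    }

    public static void main(String[] args) {
        // Halfway along the edge from the vertex with texture coords
        // (0.3,0.05) to the vertex with texture coords (0.45,0.6):
        double[] st = lerp(new double[]{0.3, 0.05},
                           new double[]{0.45, 0.6}, 0.5);
        System.out.printf("%.3f,%.3f%n", st[0], st[1]);  // prints 0.375,0.325
    }
}
```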

Sometimes, it's difficult to decide what texture coordinates to use. One case where it's easy is applying a complete texture to a rectangle. Here is a method from Cube.java that draws a square in the xy-plane, with appropriate texture coordinates to map the entire image onto the square:

```
/**
 * Draws a square in the xy-plane, with given radius,
 * where radius is half the length of the side.
 */
private void square(GL gl, double radius) {
    gl.glBegin(GL.GL_POLYGON);
    gl.glNormal3f(0,0,1);
    gl.glTexCoord2d(0,0);
    gl.glVertex2d(-radius,-radius);
    gl.glTexCoord2d(1,0);
    gl.glVertex2d(radius,-radius);
    gl.glTexCoord2d(1,1);
    gl.glVertex2d(radius,radius);
    gl.glTexCoord2d(0,1);
    gl.glVertex2d(-radius,radius);
    gl.glEnd();
}
```

At this point, you will probably be happy to know that the shapes in the package glutil come with reasonable texture coordinates already defined. To use textures on these objects, you just have to create a Texture object, enable it, and bind it. For Cube.java, for example, a copy of the texture is mapped onto each face. For UVCylinder.java, the entire texture wraps once around the cylinder, and circular cutouts from the texture are applied to the top and bottom. For UVSphere.java, the texture is wrapped once around the sphere. The flat texture has to be distorted to fit onto the sphere, but some textures, using what is called a cylindrical projection, are made to work precisely with this type of texture mapping. The textures that are used on the spheres in the TextureDemo example are of this type.

One last question: What happens if you supply texture coordinates that are not in the range from 0 to 1? It turns out that such values are legal, but exactly what they mean depends on the setting of two parameters in the Texture object. The most desirable outcome is probably that the texture is copied over and over to fill the entire plane, so that using s and t values outside the usual range will simply cause the texture to be repeated. In that case, for example, if you supply texture coordinates (0,0), (0,2), (2,2), and (2,0) for the four vertices of a square, then the square will be covered with four copies of the texture. For a repeating texture, such as a brick wall image, this can be much more effective than stretching a single copy of the image to cover the entire square. This behavior is not the default. To get this behavior for a Texture object tex, you can use the following code:

```tex.setTexParameteri(GL.GL_TEXTURE_WRAP_S, GL.GL_REPEAT);
tex.setTexParameteri(GL.GL_TEXTURE_WRAP_T, GL.GL_REPEAT);```

There are two parameters to control this behavior because you can enable the behavior separately for the horizontal and the vertical directions. OpenGL, as I have said, has many parameters to control how textures are used. This is an example of how these parameters are set using the Texture class.
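The effect of GL_REPEAT on a coordinate value can be described arithmetically: the integer part of the coordinate is discarded, and only the fractional part selects a point in the image. A plain-Java sketch (the class and method names are mine):

```
public class WrapRepeat {
    // With GL_REPEAT, a texture coordinate outside the range 0 to 1 is
    // wrapped by dropping its integer part, so the image tiles the plane.
    static double repeatWrap(double s) {
        return s - Math.floor(s);  // always in the range [0, 1)
    }

    public static void main(String[] args) {
        System.out.println(repeatWrap(2.25));   // 0.25
        System.out.println(repeatWrap(-0.25));  // 0.75
    }
}
```

This is why texture coordinates (0,0), (0,2), (2,2), and (2,0) cover a square with four copies of the image: each coordinate sweeps through the 0-to-1 range twice.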
