
Subsections
Texture Targets
Mipmaps and Filtering
Texture Transformations
Creating Textures with OpenGL
Loading Data into a Texture
Texture Coordinate Generation
Texture Objects

Section 4.5

Textures


Textures were introduced in Subsection 2.4.2. In that section, we looked at Java's Texture class, which makes it fairly easy to use image textures. However, this class is not a standard part of OpenGL. This section covers parts of OpenGL's own texture API (plus more detail on the Texture class). Textures are a complex subject. This section does not attempt to cover all the details.


4.5.1  Texture Targets

So far, we have only considered texture images, which are two-dimensional textures. However, OpenGL also supports one-dimensional textures and three-dimensional textures. OpenGL has three texture targets corresponding to the three possible dimensions: GL_TEXTURE_1D, GL_TEXTURE_2D, and GL_TEXTURE_3D. (There are also several other targets that are used for aspects of texture mapping that I won't discuss here.) OpenGL maintains some separate state information for each target. Texturing can be turned on and off for each target with

gl.glEnable(GL.GL_TEXTURE_1D);     /     gl.glDisable(GL.GL_TEXTURE_1D);
gl.glEnable(GL.GL_TEXTURE_2D);     /     gl.glDisable(GL.GL_TEXTURE_2D);
gl.glEnable(GL.GL_TEXTURE_3D);     /     gl.glDisable(GL.GL_TEXTURE_3D);

At most one texture target will be used when a surface is rendered. If several targets are enabled, 3D textures have precedence over 2D textures, and 2D textures have precedence over 1D. Except for one example later in this section, we will work only with 2D textures.

Many commands for working with textures take a texture target as their first parameter, to specify which target's state is being changed. For example,

gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, GL.GL_REPEAT);

tells OpenGL to repeat two-dimensional textures in the s direction. To make one-dimensional textures repeat in the s direction, you would use GL.GL_TEXTURE_1D as the first parameter. Texture wrapping was mentioned at the end of Subsection 2.4.2, where repeat in the s direction was set using an object tex of type Texture with

tex.setTexParameteri(GL.GL_TEXTURE_WRAP_S, GL.GL_REPEAT);

More generally, for a Texture, tex, tex.setTexParameteri(prop,val) is simply an abbreviation for gl.glTexParameteri(tex.getTarget(),prop,val), where tex.getTarget() returns the target -- most likely GL_TEXTURE_2D -- for the texture object. The Texture method simply provides access to the more basic OpenGL command. Similarly, tex.enable() is equivalent to gl.glEnable(tex.getTarget()).
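To make the effect of GL_REPEAT concrete: with repeat wrapping, only the fractional part of a texture coordinate matters, so s = 1.25 and s = 0.25 pick out the same point in the texture. Here is a minimal sketch of that computation (the class and method names are my own, for illustration only):

```java
public class RepeatWrap {
    // With GL_REPEAT, a texture coordinate is reduced to its fractional
    // part, so the texture tiles endlessly in that direction.
    public static double repeatWrap(double s) {
        return s - Math.floor(s);
    }
    public static void main(String[] args) {
        System.out.println(repeatWrap(1.25));  // 0.25
        System.out.println(repeatWrap(-0.25)); // 0.75
    }
}
```

Note that negative coordinates wrap too: s = -0.25 lands at the same texel column as s = 0.75.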

Texture targets are also used when setting the texture environment, which determines how the colors from the texture are combined with the color of the surface to which the texture is being applied. The combination mode for a given texture target is set by calling

gl.glTexEnvi( target, GL.GL_TEXTURE_ENV_MODE, mode );

where target is the texture target, such as GL.GL_TEXTURE_2D, for which the combination mode is being changed. The default value of mode is GL.GL_MODULATE, which means that the color components from the surface are multiplied by the color components from the texture. This is commonly used with a white surface material. The surface material is first used in the lighting computation to produce a basic color for each surface pixel, before combination with the texture color. A white surface material means that what you end up with is basically the texture color, modified by lighting effects. This is usually what you want, but there are other texture combination modes for special purposes. For example, the GL.GL_REPLACE mode will completely replace the surface color with the texture color. In fact, the texture environment offers many options and a great deal of control over how texture colors are used. However, I will not cover them here.
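The arithmetic behind GL_MODULATE is simple component-wise multiplication. This sketch (my own helper, with color components as floats in the range 0 to 1) shows why a white surface color leaves the texture color essentially unchanged:

```java
public class ModulateDemo {
    // GL_MODULATE: each color component of the lit surface color is
    // multiplied by the corresponding component of the texture color.
    public static float[] modulate(float[] surface, float[] texel) {
        float[] result = new float[surface.length];
        for (int i = 0; i < surface.length; i++)
            result[i] = surface[i] * texel[i];
        return result;
    }
    public static void main(String[] args) {
        float[] white = { 1, 1, 1 };
        float[] texel = { 0.2f, 0.5f, 0.8f };
        // With a white surface, the result is just the texture color.
        System.out.println(java.util.Arrays.toString(modulate(white, texel)));
    }
}
```

With a lit surface that is darker than pure white, the texture color is dimmed proportionally, which is exactly the lighting effect described above.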


4.5.2  Mipmaps and Filtering

When a texture is applied to a surface, the pixels in the texture do not usually match up one-to-one with pixels on the surface, and in general, the texture must be stretched or shrunk as it is being mapped onto the surface. Sometimes, several pixels in the texture will be mapped to the same pixel on the surface. In this case, the color that is applied to the surface pixel must somehow be computed from the colors of all the texture pixels that map to it. This is an example of filtering; in particular, it is "minification filtering" because the texture is being shrunk. When one pixel from the texture covers more than one pixel on the surface, the texture has to be magnified, and we have an example of "magnification filtering."

One bit of terminology before we proceed: The pixels in a texture are referred to as texels, short for texture pixels, and I will use that term from now on.

When deciding how to apply a texture to a point on a surface, OpenGL has the texture coordinates for that point. Those texture coordinates correspond to one point in the texture, and that point lies in one of the texture's texels. The easiest thing to do is to apply the color of that texel to the point on the surface. This is called nearest neighbor filtering. It is very fast, but it does not usually give good results. It doesn't take into account the difference in size between the pixels on the surface and the texels. An improvement on nearest neighbor filtering is linear filtering, which can take an average of several texel colors to compute the color that will be applied to the surface.
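Linear filtering for a 2D texture is usually bilinear interpolation: a weighted average of the four texels nearest the sample point. As an illustration of the idea (a sketch, not OpenGL's exact implementation), for a single color component:

```java
public class BilinearDemo {
    // Weighted average of the four texels surrounding a sample point.
    // fx and fy, both in [0,1], give the sample's position within the
    // 2x2 block of texels (c00 at one corner, c11 diagonally opposite).
    public static double bilinear(double c00, double c10,
                                  double c01, double c11,
                                  double fx, double fy) {
        double top    = c00 * (1 - fx) + c10 * fx;
        double bottom = c01 * (1 - fx) + c11 * fx;
        return top * (1 - fy) + bottom * fy;
    }
    public static void main(String[] args) {
        // Exactly between a row of black texels and a row of white texels:
        System.out.println(bilinear(0, 0, 1, 1, 0.5, 0.5)); // 0.5
    }
}
```

The weights fall off linearly with distance, so a sample point close to one texel is dominated by that texel's color.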

The problem with linear filtering arises when a large texture is applied to a much smaller surface area. In this case, many texels map to a single pixel, and computing the average of so many texel colors becomes very inefficient. OpenGL has a neat solution for this: mipmaps.

A mipmap for a texture is a scaled-down version of that texture. A complete set of mipmaps consists of the full-size texture, a half-size version in which each dimension is divided by two, a quarter-sized version, a one-eighth-sized version, and so on. If one dimension shrinks to a single pixel, it is not reduced further, but the other dimension will continue to be cut in half until it too reaches one pixel. In any case, the final mipmap consists of a single pixel. Here are the first few images in the set of mipmaps for a brick texture:

You'll notice that the mipmaps become small very quickly. The total memory used by a set of mipmaps is only about one-third more than the memory used for the original texture, so the additional memory requirement is not a big issue when using mipmaps.
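The one-third figure comes from a geometric series: each mipmap level has one-quarter the texels of the level before it, so the total is at most 1 + 1/4 + 1/16 + ... = 4/3 times the base level. A quick check in Java (for a square texture; the helper method is mine, for illustration only):

```java
public class MipmapMemory {
    // Total number of texels in a full mipmap chain for an
    // n-by-n texture, halving each dimension down to 1x1.
    public static long mipmapTexels(int size) {
        long total = 0;
        while (size >= 1) {
            total += (long) size * size;
            size /= 2;
        }
        return total;
    }
    public static void main(String[] args) {
        long base = 1024L * 1024;           // texels in the full-size image
        long all = mipmapTexels(1024);      // texels in the whole chain
        System.out.println((double) all / base); // about 1.333
    }
}
```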

Mipmaps are used only for minification filtering. They are essentially a way of pre-computing the bulk of the averaging that is required when shrinking a texture to fit a surface. To texture a pixel, OpenGL can first select the mipmap whose texels most closely match the size of the pixel. It can then do linear filtering on that mipmap to compute a color, and it will have to average at most a few texels in order to do so.

Starting with OpenGL Version 1.4, it is possible to get OpenGL to create and manage mipmaps automatically. For automatic generation of mipmaps for 2D textures, you just have to say

gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_GENERATE_MIPMAP, GL.GL_TRUE);

and then forget about 2D mipmaps! Of course, you should check the OpenGL version before doing this. In earlier versions, if you want to use mipmaps, you must either load each mipmap individually, or you must generate them yourself. (The GLU library has a method, gluBuild2DMipmaps, that can be used to generate a set of mipmaps for a 2D texture, with similar methods for 1D and 3D textures.) The best news, perhaps, is that when you are using Java Texture objects to represent textures, the Texture will manage mipmaps for you without any action on your part except to ask for mipmaps when you create the object. (The methods for creating Textures have a parameter for that purpose.)


OpenGL supports several different filtering techniques for minification and magnification. The filters that can be used can be set with glTexParameteri. For the 2D texture target, for example, you would call

gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, magFilter);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, minFilter);

where magFilter and minFilter are constants that specify the filtering algorithm. For the magFilter, the only options are GL.GL_NEAREST and GL.GL_LINEAR, giving nearest neighbor and linear filtering. The default for the MAG filter is GL_LINEAR, and there is rarely any need to change it. For minFilter, in addition to GL.GL_NEAREST and GL.GL_LINEAR, there are four options that use mipmaps for more efficient filtering. The default MIN filter is GL.GL_NEAREST_MIPMAP_LINEAR which does averaging between mipmaps and nearest neighbor filtering within each mipmap. For even better results, at the cost of greater inefficiency, you can use GL.GL_LINEAR_MIPMAP_LINEAR, which does averaging both between and within mipmaps. (You can research the remaining two options on your own if you are curious.)

One very important note: If you are not using mipmaps for a texture, it is imperative that you change the minification filter for that texture to GL_NEAREST or, more likely, GL_LINEAR. The default MIN filter requires mipmaps, and if mipmaps are not available, then the texture is considered to be improperly formed, and OpenGL ignores it!


4.5.3  Texture Transformations

Recall that textures are applied to objects using texture coordinates. The texture coordinates for a vertex determine which point in a texture is mapped to that vertex. Texture coordinates can be specified using the glTexCoord* families of methods. Textures are most often images, which are two-dimensional, and the two coordinates on a texture image are referred to as s and t. Since OpenGL also supports one-dimensional textures and three-dimensional textures, texture coordinates cannot be restricted to two coordinates. In fact, a set of texture coordinates in OpenGL is represented internally as homogeneous coordinates (see Subsection 3.1.4), which are referred to as (s,t,r,q). We have used glTexCoord2d to specify texture s and t coordinates, but a call to gl.glTexCoord2d(s,t) is really just shorthand for gl.glTexCoord4d(s,t,0,1).

Since texture coordinates are no different from vertex coordinates, they can be transformed in exactly the same way. OpenGL maintains a texture transformation matrix as part of its state, along with the modelview matrix and projection matrix. When a texture is applied to an object, the texture coordinates that were specified for its vertices are transformed by the texture matrix. The transformed texture coordinates are then used to pick out a point in the texture. Of course, the default texture transform is the identity, which has no effect.

The texture matrix can represent scaling, rotation, translation and combinations of these basic transforms. To specify a texture transform, you have to use glMatrixMode to set the matrix mode to GL_TEXTURE. With this mode in effect, calls to methods such as glRotated, glScalef, and glLoadIdentity are applied to the texture matrix. For example to install a texture transform that scales texture coordinates by a factor of two in each direction, you could say:

gl.glMatrixMode(GL.GL_TEXTURE);
gl.glLoadIdentity(); // Make sure we are starting from the identity matrix.
gl.glScaled(2,2,2);
gl.glMatrixMode(GL.GL_MODELVIEW); // Leave matrix mode set to GL_MODELVIEW. 

Now, what does this actually mean for the appearance of the texture on a surface? This scaling transform multiplies each texture coordinate by 2. For example, if a vertex was assigned 2D texture coordinates (0.4,0.1), then that vertex will be mapped, after the texture transform is applied, to the point (s,t) = (0.8,0.2) in the texture. The texture coordinates vary twice as fast on the surface as they would without the scaling transform. A region on the surface that would map to a 1-by-1 square in the texture image without the transform will instead map to a 2-by-2 square in the image -- so that a larger piece of the image will be seen inside the region. In other words, the texture image will be shrunk by a factor of two on the surface! More generally, the effect of a texture transformation on the appearance of the texture is the inverse of its effect on the texture coordinates. (This is exactly analogous to the inverse relationship between a viewing transformation and a modeling transformation.) If the texture transform is translation to the right, then the texture moves to the left on the surface. If the texture transform is a counterclockwise rotation, then the texture rotates clockwise on the surface.
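Since texture coordinates pass through the texture matrix exactly as vertex coordinates pass through the modelview matrix, the computation can be imitated in plain Java. This sketch (my own helper, using a row-major 4-by-4 matrix) reproduces the (0.4,0.1)-to-(0.8,0.2) example:

```java
public class TexTransformDemo {
    // Apply a 4x4 matrix (row-major) to the homogeneous texture
    // coordinates (s,t,0,1) that glTexCoord2d(s,t) stands for.
    public static double[] transform(double[][] m, double s, double t) {
        double[] in = { s, t, 0, 1 };
        double[] out = new double[4];
        for (int row = 0; row < 4; row++)
            for (int col = 0; col < 4; col++)
                out[row] += m[row][col] * in[col];
        // Divide by q to get back ordinary (s,t) coordinates.
        return new double[] { out[0] / out[3], out[1] / out[3] };
    }
    public static void main(String[] args) {
        double[][] scaleBy2 = {
            { 2, 0, 0, 0 },
            { 0, 2, 0, 0 },
            { 0, 0, 2, 0 },
            { 0, 0, 0, 1 }
        };
        double[] st = transform(scaleBy2, 0.4, 0.1);
        System.out.println(st[0] + ", " + st[1]); // 0.8, 0.2
    }
}
```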

The following applet lets you experiment with texture transformations. You can apply rotation, scaling, or translation to the texture. When animation is turned on, the transformation changes from frame to frame and the texture is animated. The point is not so much the animation -- which is a rather unusual thing to do with textures -- as to show the effect of the texture transforms. (Source code is TextureAnimation.java.)


4.5.4  Creating Textures with OpenGL

Texture images for use in an OpenGL program usually come from an external source, most often an image file. However, OpenGL is itself a powerful engine for creating images. Sometimes, instead of loading an image file, it's convenient to have OpenGL create the image internally, by rendering it. This is possible because OpenGL can read texture data from its own color buffer, where it does its drawing. To create a texture image using OpenGL, you just have to draw the image using standard OpenGL drawing commands and then load that image as a texture using the method

gl.glCopyTexImage2D( target, mipmapLevel, internalFormat,
                                     x, y, width, height, border );

In this method, target will be GL.GL_TEXTURE_2D except for advanced applications; mipmapLevel, which is used when you are constructing each mipmap in a set of mipmaps by hand, should be zero; the internalFormat, which specifies how the texture data should be stored, will ordinarily be GL.GL_RGB or GL.GL_RGBA, depending on whether you want to store an alpha component for each texel; x and y specify the lower left corner of the rectangle in the color buffer from which the texture will be read and are usually 0; width and height are the size of that rectangle; and border, which makes it possible to include a border around the texture image for certain special purposes, will ordinarily be 0. That is, a call to glCopyTexImage2D will typically look like

gl.glCopyTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGB, 0, 0, width, height, 0);

As usual with textures, the width and height should ordinarily be powers of two, although non-power-of-two textures are supported if the OpenGL version is 2.0 or higher.
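When non-power-of-two textures are not available, a common workaround is to fall back to the largest power of two that fits in the available space. The computation itself is simple (the helper is mine, for illustration):

```java
public class PowerOfTwo {
    // Largest power of two that is less than or equal to n (assumes n >= 1).
    public static int powerOfTwoBelow(int n) {
        int p = 1;
        while (2 * p <= n)
            p *= 2;
        return p;
    }
    public static void main(String[] args) {
        System.out.println(powerOfTwoBelow(800));  // 512
        System.out.println(powerOfTwoBelow(1024)); // 1024
    }
}
```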

As an example, the sample program TextureFromColorBuffer.java uses this technique to produce a texture. The texture image in this case is a copy of the two-dimensional hierarchical graphics example from Subsection 2.1.4. Here's an applet version of the program:

The texture image in this program can be animated. For each frame of the animation, the program draws the current frame of the 2D animation, then grabs that image for use as a texture. It does this in the display() method, even though the 2D image that it draws is not shown. After drawing the image and grabbing the texture, the program erases the image and draws a 3D textured object, which is the only thing that the user gets to see in the end. It's worth looking at that display() method, since it requires some care to use a power-of-two texture size and to set up lighting only for the 3D part of the rendering process:

public void display(GLAutoDrawable drawable) {
    GL gl = drawable.getGL();
    
    int[] viewPort = new int[4];  // The current viewport; x and y will be 0.
    gl.glGetIntegerv(GL.GL_VIEWPORT, viewPort, 0);
    int textureWidth = viewPort[2];  // The width of the texture.
    int textureHeight = viewPort[3]; // The height of the texture.

    /* First, draw the 2D scene into the color buffer. */
    
    if (version_2_0) {
           // Non-power-of-two textures are supported.  Use the entire
           // view area for drawing the 2D scene.
        draw2DFrame(gl); // Draws the animated 2D scene.
    }
    else {
           // Use a power-of-two texture image. Reset the viewport
           // while drawing the image to a power-of-two-size,
           // and use that size for the texture.
        gl.glClear(GL.GL_COLOR_BUFFER_BIT);
        textureWidth = 1024;
        while (textureWidth > viewPort[2])
            textureWidth /= 2; // Use a power of two that fits in the viewport.
        textureHeight = 512;
        while (textureHeight > viewPort[3])
            textureHeight /= 2; // Use a power of two that fits in the viewport.
        gl.glViewport(0,0,textureWidth,textureHeight);
        draw2DFrame(gl);  // Draws the animated 2D scene.
        gl.glViewport(0, 0, viewPort[2], viewPort[3]);  // Restore full viewport.
    }
        
    /* Grab the image from the color buffer for use as a 2D texture. */
    
    gl.glCopyTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, 
            0, 0, textureWidth, textureHeight, 0);
    
    /* Set up 3D viewing, enable 2D texture, 
       and draw the object selected by the user. */
    
    gl.glPushAttrib(GL.GL_LIGHTING_BIT | GL.GL_TEXTURE_BIT);
    
    gl.glEnable(GL.GL_LIGHTING);
    gl.glEnable(GL.GL_LIGHT0);
    float[] dimwhite = { 0.4f, 0.4f, 0.4f };
    gl.glLightfv(GL.GL_LIGHT0, GL.GL_SPECULAR, dimwhite, 0);
    gl.glEnable(GL.GL_DEPTH_TEST);
    gl.glShadeModel(GL.GL_SMOOTH);
    if (version_1_2)
        gl.glLightModeli(GL.GL_LIGHT_MODEL_COLOR_CONTROL, 
                                            GL.GL_SEPARATE_SPECULAR_COLOR);
    gl.glLightModeli(GL.GL_LIGHT_MODEL_LOCAL_VIEWER, GL.GL_TRUE);

    gl.glClearColor(0,0,0,1);
    gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
    camera.apply(gl);
    
    /* Since we don't have mipmaps, we MUST set the MIN filter 
     * to a non-mipmapped version; leaving the value at its default 
     * will produce no texturing at all! */
    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);

    gl.glEnable(GL.GL_TEXTURE_2D);

    float[] white = { 1, 1, 1, 1 }; // Use white material for texturing.
    gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GL.GL_AMBIENT_AND_DIFFUSE, white, 0);
    gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GL.GL_SPECULAR, white, 0);
    gl.glMateriali(GL.GL_FRONT_AND_BACK, GL.GL_SHININESS, 128);
    int selectedObject = objectSelect.getSelectedIndex(); 
           // selectedObject tells which of several objects to draw.
    gl.glRotated(15,3,2,0); // Apply some viewing transforms to the object.
    gl.glRotated(90,-1,0,0);
    if (selectedObject == 1 || selectedObject == 3)
        gl.glTranslated(0,0,-1.25);
    objects[selectedObject].render(gl);
    
    gl.glPopAttrib();
    
}

4.5.5  Loading Data into a Texture

Although OpenGL can draw its own textures, most textures come from external sources. The data can be loaded from an image file, it can be taken from a BufferedImage, or it can even be computed by your program on-the-fly. Using Java's Texture class is certainly the easiest way to load existing images. However, it's good to also know how to load texture data using only basic OpenGL commands.

To load external data into a texture, you have to store the color data for that texture into a Java nio Buffer (or, if using the C API, into an array). The data must specify color values for each texel in the texture. Several formats are possible, but the most common are GL.GL_RGB, which requires a red, a blue, and a green component value for each texel, and GL.GL_RGBA, which adds an alpha component for each texel. You need one number for each component, for every texel. When using GL.GL_RGB to specify a texture with n texels, you need a total of 3*n numbers. Each number is typically an unsigned byte, with a value in the range 0 to 255, although other types of data can be used as well.

(The use of unsigned bytes is somewhat problematic in Java, since Java's byte data type is signed, with values in the range −128 to 127. Essentially, the negative numbers are re-interpreted as positive numbers. Usually, the safest approach is to use an int or short value and type-cast it to byte.)
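A concrete sketch of the signed/unsigned issue (the helper method is mine, for illustration):

```java
public class UnsignedByteDemo {
    // Recover the unsigned value (0..255) that OpenGL will see
    // from a signed Java byte.
    public static int toUnsigned(byte b) {
        return b & 0xFF;
    }
    public static void main(String[] args) {
        int value = 200;        // a color component in the range 0..255
        byte b = (byte) value;  // the cast keeps the same bit pattern
        System.out.println(b);              // -56 as a signed Java byte
        System.out.println(toUnsigned(b));  // 200 again
    }
}
```

The bit pattern stored in the byte is the same either way; only the interpretation differs, and OpenGL always uses the unsigned interpretation for GL_UNSIGNED_BYTE data.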

Once you have the data in a Buffer, you can load that data into a 2D texture using the glTexImage2D method:

gl.glTexImage2D(target, mipmapLevel, internalFormat, width, height, border,
                            format, dataType, buffer);

The first six parameters are similar to parameters in the glCopyTexImage2D method, as discussed in the previous subsection. The other three parameters specify the data. The format is GL.GL_RGB if you are providing RGB data and is GL.GL_RGBA for RGBA data. Other formats are also possible. Note that the format and the internalFormat are often the same, although they don't have to be. The dataType tells what type of data is in the buffer and is usually GL.GL_UNSIGNED_BYTE. Given this data type, the buffer should be of type ByteBuffer. The number of bytes in the buffer must be 3*width*height for RGB data and 4*width*height for RGBA data.
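It is easy to get the buffer size wrong; a small helper (my own, for illustration) makes the arithmetic explicit and allocates a buffer of the right size:

```java
import java.nio.ByteBuffer;

public class TextureBufferDemo {
    // Bytes needed for GL_UNSIGNED_BYTE texel data:
    // 3 per texel for GL_RGB, 4 per texel for GL_RGBA.
    public static int textureBytes(int width, int height, boolean hasAlpha) {
        return width * height * (hasAlpha ? 4 : 3);
    }
    public static void main(String[] args) {
        int size = textureBytes(256, 256, false);
        ByteBuffer buffer = ByteBuffer.allocate(size);
        System.out.println(size);              // 196608
        System.out.println(buffer.capacity()); // 196608
    }
}
```

In a real JOGL program the buffer would ordinarily be a direct buffer, such as one created with BufferUtil.newByteBuffer, rather than the heap buffer allocated here.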

It is also possible to load a one-dimensional texture in a similar way. The glTexImage1D method simply omits the height parameter. As an example, here is some code that creates a one-dimensional texture consisting of 256 texels that vary in color through a full spectrum of color:

ByteBuffer textureData1D = BufferUtil.newByteBuffer(3*256);
for (int i = 0; i < 256; i++) {
    Color c = Color.getHSBColor(1.0f/256 * i, 1, 1); // A color of the spectrum.
    textureData1D.put((byte)c.getRed());  // Add color components to the buffer.
    textureData1D.put((byte)c.getGreen()); 
    textureData1D.put((byte)c.getBlue()); 
}
textureData1D.rewind();
gl.glTexImage1D(GL.GL_TEXTURE_1D, 0, GL.GL_RGB, 256, 0,
                          GL.GL_RGB, GL.GL_UNSIGNED_BYTE, textureData1D);

This code is from the sample program TextureLoading.java, which also includes an example of a two-dimensional texture created by computing the individual texel colors. The two-dimensional texture is the famous Mandelbrot set. Here is an applet version of the program that allows you to display both the one-dimensional and the two-dimensional texture on a variety of objects. (Note, by the way, that when you display the Mandelbrot set on a curved object, the strong specular highlights on the black parts of the set add to the three-dimensional appearance of the object.)
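The idea of computing texel colors can be sketched as follows (a simplified illustration, not the sample program's actual code): each texel corresponds to a point c in the complex plane, and its color depends on how many iterations of z = z² + c it takes for |z| to exceed 2.

```java
public class MandelbrotTexel {
    // Iterate z = z*z + c, starting from z = 0, until |z| > 2 or the
    // iteration limit is reached. Points that never escape are in the set.
    public static int escapeCount(double cx, double cy, int maxIter) {
        double x = 0, y = 0;
        int count = 0;
        while (count < maxIter && x * x + y * y <= 4) {
            double newX = x * x - y * y + cx;
            y = 2 * x * y + cy;
            x = newX;
            count++;
        }
        return count;
    }
    public static void main(String[] args) {
        // The origin never escapes; a point far from the set escapes at once.
        System.out.println(escapeCount(0, 0, 100)); // 100
        System.out.println(escapeCount(2, 2, 100)); // 1
    }
}
```

The count for each texel can then be mapped to a color (black for texels that reach the limit), and the color components stored into the buffer texel by texel, just as in the one-dimensional example above.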


4.5.6  Texture Coordinate Generation

Texture coordinates are typically specified using the glTexCoord* family of methods or by using texture coordinate arrays with glDrawArrays and glDrawElements. However, computing texture coordinates can be tedious. OpenGL is capable of generating certain types of texture coordinates on its own. This is especially useful for so-called "reflection maps" or "environment maps," where texturing is used to imitate the effect of an object that reflects its environment. OpenGL can generate the texture coordinates that are needed for this effect. However, environment mapping is an advanced topic that I will not cover here. Instead, we look at a simple case: object-linear coordinates.

With object-linear texture coordinate generation, OpenGL uses texture coordinates that are computed as linear functions of object coordinates. Object coordinates are just the actual coordinates specified for vertices, with glVertex* or in a vertex array. The default when object-linear coordinate generation is turned on is to make the object coordinates equal to the texture coordinates. For two-dimensional textures, for example,

gl.glVertex3f(x,y,z);

would be equivalent to

gl.glTexCoord2f(x,y);
gl.glVertex3f(x,y,z);

However, it is possible to compute the texture coordinates as arbitrary linear combinations of the vertex coordinates x, y, z, and w. Thus, gl.glVertex4f(x,y,z,w) becomes equivalent to

gl.glTexCoord2f(a*x + b*y + c*z + d*w, e*x + f*y + g*z + h*w);
gl.glVertex4f(x,y,z,w);

where (a,b,c,d) and (e,f,g,h) are arbitrary constant coefficients.
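The generated coordinates are therefore just dot products of the vertex's homogeneous object coordinates with coefficient vectors. A sketch in plain Java (the method is mine; the coefficient arrays play the role of the "planes" configured later in this subsection):

```java
public class TexGenDemo {
    // Object-linear generation: a texture coordinate is a linear
    // combination of the vertex's object coordinates (x,y,z,w).
    public static double genCoord(double[] plane,
                                  double x, double y, double z, double w) {
        return plane[0] * x + plane[1] * y + plane[2] * z + plane[3] * w;
    }
    public static void main(String[] args) {
        // The default behavior, s = x and t = y:
        double[] sPlane = { 1, 0, 0, 0 };
        double[] tPlane = { 0, 1, 0, 0 };
        System.out.println(genCoord(sPlane, 0.4, 0.1, 7, 1)); // 0.4
        System.out.println(genCoord(tPlane, 0.4, 0.1, 7, 1)); // 0.1
    }
}
```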

To use texture generation, you have to enable and configure it for each texture coordinate separately. For two-dimensional textures, you want to enable generation of the s and t texture coordinates:

gl.glEnable(GL.GL_TEXTURE_GEN_S);
gl.glEnable(GL.GL_TEXTURE_GEN_T);

To say that you want to use object-linear coordinate generation, you can use the method glTexGeni to set the texture generation "mode" to object-linear for both s and t:

gl.glTexGeni(GL.GL_S, GL.GL_TEXTURE_GEN_MODE, GL.GL_OBJECT_LINEAR);
gl.glTexGeni(GL.GL_T, GL.GL_TEXTURE_GEN_MODE, GL.GL_OBJECT_LINEAR);

If you accept the default behavior, the effect will be to project the texture onto the surface from the xy-plane (in the coordinate system in which the coordinates are specified, before any transformation is applied). If you want to change the equations that are used, you can specify the coordinates using glTexGenfv. For example, to use coefficients (a,b,c,d) and (e,f,g,h) in the equations:

gl.glTexGenfv(GL.GL_S, GL.GL_OBJECT_PLANE, new float[] { a,b,c,d }, 0);   
gl.glTexGenfv(GL.GL_T, GL.GL_OBJECT_PLANE,  new float[] { e,f,g,h }, 0);

The sample program TextureCoordinateGeneration.java demonstrates the use of texture coordinate generation. It allows the user to enter the coefficients for the linear equations that are used to generate the texture coordinates. The same program also demonstrates "eye-linear" texture coordinate generation, which is similar to the object-linear version but uses eye coordinates instead of object coordinates in the equations; I won't discuss it further here. Here's an applet version:

Note how, with the default settings, the texture is projected from one direction onto the surface. This is especially clear with the cone and with the cylinder. For the cylinder, single texels are smeared out along the side of the cylinder. The projection is from the direction of the xy-plane, in object coordinates. Since most of the objects have been rotated by 90 degrees about the x-axis for viewing, the projection seems to be from the direction of the xz-plane from the viewer's point of view. The cube is a special case: it is formed from planes that are drawn in the xy-plane and then rotated into position. The result is that the texture shows up nicely on all six sides. On the other hand, if you change the coefficients for s to (0,1,0,0) and for t to (0,0,1,0), which gives projection from the yz-plane in object coordinates, then you get only single smeared-out texels on all six sides of the cube.

When using object-linear texture coordinate generation, the texture is "attached" to the surface and moves along with the surface when it is transformed. For eye-linear coordinates, this is not the case. The textures are specified relative to the view. When using the default coefficients with eye-linear coordinate generation in the applet, this gives an interesting effect. The texture stays in the same place even as the object rotates.


4.5.7  Texture Objects

For our final word on textures, we look briefly at texture objects. Texture objects are used when you need to work with several textures in the same program. The usual method for loading textures, glTexImage*, transfers data from your program into the graphics card. This is an expensive operation, and switching among multiple textures by using this method can seriously degrade a program's performance. Texture objects offer the possibility of storing texture data for multiple textures on the graphics card and of switching from one texture object to another with a single, fast OpenGL command. (Of course, the graphics card has only a limited amount of memory for storing textures, and texture objects that don't fit in the graphics card's memory are no more efficient than ordinary textures.)

Note that if you are using Java's Texture class to represent your textures, you won't need to worry about texture objects, since the Texture class handles them automatically. If tex is of type Texture, the associated texture is actually stored as a texture object. The method tex.bind() tells OpenGL to start using that texture object. (It is equivalent to gl.glBindTexture(tex.getTarget(),tex.getTextureObject()), where glBindTexture is a method that is discussed below.) The rest of this section tells you how to work with texture objects by hand.

Texture objects are similar in their use to vertex buffer objects, which were covered in Subsection 3.4.3. Like a vertex buffer object, a texture object is identified by an integer ID number. Texture object IDs are managed by OpenGL, and to obtain a batch of valid texture IDs, you can call the method

gl.glGenTextures(n, idList, 0);

where n is the number of texture IDs that you want, idList is an array of length at least n and of type int[] that will hold the texture IDs, and the 0 indicates the starting index in the array where the IDs are to be placed. When you are done with the texture objects, you can delete them by calling gl.glDeleteTextures(n,idList,0).

Every texture object has its own state, which includes the values of texture parameters such as GL_TEXTURE_WRAP_S as well as the color data for the texture itself. To work with the texture object that has ID equal to texID, you have to call

gl.glBindTexture(target, texID)

where target is a texture target such as GL.GL_TEXTURE_1D or GL.GL_TEXTURE_2D. After this call, any use of glTexParameter*, glTexImage*, or glCopyTexImage* with the same texture target will be applied to the texture object with ID texID. Furthermore, if the texture target is enabled and some geometry is rendered, then the texture that is applied to the geometry is the one associated with that texture ID. A texture binding for a given target remains in effect until another texture object is bound to the same target. To switch from one texture to another, you simply have to call glBindTexture with a different texture object ID.

