CS424 Notes, 6 February 2012
- Image Textures in WebGL
- The previous class introduced image textures and texture coordinates. Now, we look at how they are implemented in WebGL.
- Image textures are a fairly complicated topic. There are several things you
have to do to use one:
- load an image
- create and configure a WebGL "texture object"
- provide texture coordinates for your primitives
- use a uniform variable of type sampler2D in the fragment shader to access the texture
- We won't get through all of this today!
- Loading an image
- First of all, where to get an image? In WebGL, you can just load one,
using the standard JavaScript technique: Create an object of type Image,
add a handler for its onload method, and set the src property of
the image object to the URL of the image that you want to load. To use the
image as a texture, the onload handler can set up the texture for
WebGL. A JavaScript function to do this might look like
function loadImage(imageURL) {
    var img = new Image();
    img.onload = function() {
        .
        .  // Set up the texture!
        .
    }
    img.src = imageURL;
}
We'll see what it means to "set up a texture."
- The URL for the image would be a relative URL, often just a file name of an image file that is in the same directory as the HTML file. Although you can load an image from a different web site, you wouldn't be able to use that image as a texture because of JavaScript's security policies.
- There are other ways to get images for WebGL. It's possible to define the pixel colors directly with numerical data. You can read an image that was drawn by WebGL. You should be able to use an image from an <img> element on the web page. You should even be able to use the picture from another <canvas> element.
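- For example, here is a minimal sketch (my own, not from the notes) of the first alternative, defining the pixel colors directly with numerical data. It uses the version of gl.texImage2D that takes a width, a height, and a typed array in place of an Image object; the tiny 2-by-2 texture is just for illustration:
      var pixels = new Uint8Array([         // 2x2 texture, 4 bytes (RGBA) per texel
          255, 0, 0, 255,     0, 255, 0, 255,      // red texel, green texel
          0, 0, 255, 255,     255, 255, 255, 255   // blue texel, white texel
      ]);
      var texid = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, texid);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 2, 2, 0,
                    gl.RGBA, gl.UNSIGNED_BYTE, pixels);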
- Texture Objects
- Texture images are stored on the GPU. To do this, you have to create a texture object and then load the image into that texture object. This is similar to creating an array buffer and then loading vertex coordinates into that buffer.
- The function gl.createTexture() creates a texture object and returns a handle that is used to refer to the object. This function takes no parameters.
- The function gl.texImage2D() is used to load an image into a texture
object. This function has lots of parameters, but the texture object is not
one of them. To say which texture object to use, you have to bind the
texture object before calling gl.texImage2D; this is done by calling
gl.bindTexture(gl.TEXTURE_2D,tex) where tex is the handle to the texture
object that was returned by gl.createTexture. There are other functions
besides gl.texImage2D that act on the "currently bound texture object."
(It's also possible to call gl.bindTexture(gl.TEXTURE_2D,null), which means that no texture is bound.)
- Here is a typical example, where img is an HTML Image object:
texid = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texid);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
.
.  // configure the texture
.
The first five parameters to gl.texImage2D will always be the same, at least for now. The sixth parameter is the Image object.
- For a texture to show up on a primitive, the texture object must be bound at the time the primitive is drawn. It is possible to have several texture objects. You have to make sure that the one that is bound, using gl.bindTexture, is the one that you want, as in the sketch below.
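- For example (a sketch; the texture handles and drawing functions here are hypothetical names):
      gl.bindTexture(gl.TEXTURE_2D, brickTexture);
      drawWall();    // primitives drawn now get the brick texture
      gl.bindTexture(gl.TEXTURE_2D, woodTexture);
      drawFloor();   // primitives drawn now get the wood texture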
- Texture Configuration and Mipmaps
- Curiously, if you leave a texture set to its default configuration, it won't show up at all! This has to do with something called "mipmaps," which can be used in the process of mapping textures to surfaces. The default configuration says to use mipmaps, but doesn't provide them! You have to either create the mipmaps or change the configuration.
- To tell the GL to create mipmaps for a texture, you just have to call
      gl.generateMipmap(gl.TEXTURE_2D);
  This applies to the currently bound texture, as set by gl.bindTexture.
- Mipmaps are reduced-size versions of a texture that can be used when the texture has to be shrunk to fit a surface. A complete set of mipmaps consists of the full-size image, a half-size version, a quarter-size version, and so on. [Figure: a texture and its first few mipmaps.] The memory required for a full set of mipmaps is only a little larger (by about 33%) than the memory for the original image. (Each mipmap has one-quarter as many pixels as the one before it, so the total memory is 1 + 1/4 + 1/16 + 1/64 + ... = 4/3 times that of the original image.)
- To change the configuration of a texture, use gl.texParameteri.
There are four texture parameters that you can set:
gl.TEXTURE_MAG_FILTER, gl.TEXTURE_MIN_FILTER,
gl.TEXTURE_WRAP_S, and gl.TEXTURE_WRAP_T. A typical
configuration is:
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST_MIPMAP_LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.REPEAT);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.REPEAT);
Again, this applies to the current texture object, as set by gl.bindTexture. All the values given here are the defaults, which work for most purposes, and you don't need to use this function at all if you are happy with the defaults.
- A "MAG_FILTER" is used when a texture has to be stretched to fit an image. The value gl.LINEAR does some interpolation to give a smoother result. The alternative, gl.NEAREST, just uses one texel to color a pixel; it is faster but usually not as pretty.
- A "MIN_FILTER" is used when a texture has to be shrunk to fit an image. The setting gl.LINEAR_MIPMAP_LINEAR does averaging across mipmaps and within each mipmap to give the highest quality result. The default, gl.NEAREST_MIPMAP_LINEAR, is faster and should give decent results. The setting gl.LINEAR does not use mipmaps, so it would work without calling gl.generateMipmap; you might use this setting if your texture fits all your surfaces fairly well. There are a few other possible settings as well.
- A texture WRAP mode tells what happens when texture (s,t) coordinates outside the range 0 to 1 are used. TEXTURE_WRAP_S applies to the s coordinate; TEXTURE_WRAP_T to the t coordinate. The default setting, gl.REPEAT, means that the image is repeated to cover the entire (s,t) plane. The alternatives are gl.CLAMP_TO_EDGE and gl.MIRRORED_REPEAT. CLAMP_TO_EDGE extends the colors of the texels on the border of the image to cover the entire plane. MIRRORED_REPEAT reflects the image horizontally and vertically, to produce a picture twice as big in each direction, and then repeats that picture to cover the plane.
- Here, then, is a complete function for loading and configuring a texture image:
function loadTexture( textureURL ) {
    var img = new Image();
    img.onload = function() {
        texid = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_2D, texid);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
        gl.generateMipmap(gl.TEXTURE_2D);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
        draw();  // Assuming this function draws the WebGL image.
    };
    img.src = textureURL;
}
This is only an example. There might be more calls to gl.texParameteri. If the MIN_FILTER were set to gl.LINEAR, the call to gl.generateMipmap would not be needed. The variable texid might be local to the function, if the program only uses one texture image and you never change the texture binding again.
- Image loading is asynchronous. You start the loading process, and it completes some unpredictable time later. What happens to your WebGL canvas in the meantime? It's easiest to ignore the problem and draw the picture without waiting for the texture. If you do that, you'll probably just get black areas where the image should be. Once the image finishes loading, you want to redraw the picture with the texture. That's the point of the draw function in the above example. Alternatively, you could postpone drawing until the image is loaded. Or you might have some way of turning off texturing (with a bool uniform variable in the fragment shader, for example) until the image loads, as in the sketch below.
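- Here is a sketch of that last alternative (the uniform names are my own): a fragment shader that ignores the texture until the JavaScript side says it is ready.
      precision mediump float;
      uniform bool u_textured;       // false until the image has loaded
      uniform sampler2D u_texture;   // the image texture
      varying vec2 v_texCoords;      // texture coordinates (see the next section)
      void main() {
          if ( u_textured ) {
              gl_FragColor = texture2D(u_texture, v_texCoords);
          }
          else {
              gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);  // plain white placeholder
          }
      }
  On the JavaScript side, the onload handler would set the uniform with gl.uniform1i(texturedLocation, 1) and then redraw.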
- Texture Coordinates
- To apply a texture to a primitive, you need to have texture coordinates. That is, in the fragment shader, you need a pair of (s,t) coordinates for each fragment, to determine which point in the texture maps to that fragment.
- The most common thing is to provide texture coordinates for each vertex as an attribute variable. In this case, you need one pair of (s,t) coordinates per vertex. You also need a varying variable in the shader program to pass the texture coordinates from the vertex shader to the fragment shader. We'll see full examples of this next time; a minimal sketch is below.
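- A minimal sketch of such a shader pair (the variable names are my own, not from a sample program):
      // Vertex shader
      attribute vec2 a_coords;       // 2D vertex coordinates
      attribute vec2 a_texCoords;    // (s,t) texture coordinates for this vertex
      varying vec2 v_texCoords;      // passed on to the fragment shader
      void main() {
          gl_Position = vec4(a_coords, 0.0, 1.0);
          v_texCoords = a_texCoords;
      }

      // Fragment shader
      precision mediump float;
      uniform sampler2D u_texture;   // represents the texture image
      varying vec2 v_texCoords;      // interpolated texture coordinates
      void main() {
          gl_FragColor = texture2D(u_texture, v_texCoords);
      }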
- In fact, you can get texture coordinates from anywhere you like. One possibility is simply to reuse the vertex coordinates as texture coordinates. In 2D, you could use the x-coordinate of the vertex as the s-coordinate for the texture and the y-coordinate as t. In three dimensions, you could use other combinations of the x, y, or z coordinates. You would still need a varying variable to pass the texture coordinates on to the fragment shader.
- One other possibility is to use world coordinates, the vertex coordinates after they have been transformed into OpenGL's default coordinate system. This has the effect of fixing the image to the canvas rather than to the primitive. An example of this can be found in the sample program texture-coords-example.html from Lab 4. (Actually, it would be better to use s = (x/2.0)+0.5 and t = (y/2.0)+0.5 to get texture coordinates that go from 0 to 1 instead of from -1 to 1. This would make one copy of the texture fill the canvas.)
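- In GLSL, the vertex shader for that improved version might contain (a sketch, assuming gl_Position has already been computed and v_texCoords is the varying variable):
      v_texCoords = gl_Position.xy / 2.0 + 0.5;  // map the range -1 to 1 onto 0 to 1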
- The upside-down image problem: OpenGL assumes that the y-axis points up; usual image coordinates assume that the y-axis points down. This means that your images will probably be upside down! You can fix this by modifying the t-coordinates, but an easier way is to call gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL,true). This will flip texture images vertically. You only need to do this once, probably in an initialization function.
- Point Primitives and Their Texture Coordinates
- I haven't said previously how to use the gl.POINTS primitive. With this primitive, each vertex corresponds to a single point. Each point is actually rendered as a square. To be sure that the point will render, the vertex shader program must assign a value to the special built-in variable gl_PointSize. If this variable is not assigned a value, its value is undefined, which means it could theoretically be anything and the points might not be rendered at all.
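- For example, a minimal vertex shader for points might be (the attribute name is my own):
      attribute vec2 a_coords;
      void main() {
          gl_Position = vec4(a_coords, 0.0, 1.0);
          gl_PointSize = 32.0;   // render each point as a 32-by-32 pixel square
      }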
- Point size is measured in pixels. It is not affected by any transforms.
- Point size has a maximum value that is implementation dependent. The maximum can be determined in JavaScript by calling gl.getParameter(gl.ALIASED_POINT_SIZE_RANGE). The return value of this function is an array of two numbers giving the minimum and maximum point sizes. (The minimum is probably 1. I have seen a maximum of 64 on one machine and 8192 on another!)
- When rendering one of the points in a gl.POINTS primitive, the fragment shader is called once for each pixel in the square centered at that point. The fragment shader has a special built-in variable named gl_PointCoord that gives texture coordinates for the fragment. This variable is a vec2 whose s coordinate ranges from 0 at the left of the square to 1 at the right, and whose t coordinate ranges from 0 at the top of the square to 1 at the bottom. (Note that t increases downward, the opposite of the usual OpenGL convention.) If you use gl_PointCoord as texture coordinates for an image texture, you will map one copy of the image onto the square.
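- A fragment shader that does this mapping could be as simple as (the uniform name is my own):
      precision mediump float;
      uniform sampler2D u_texture;
      void main() {
          gl_FragColor = texture2D(u_texture, gl_PointCoord);
      }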
- Of course, you could also make every point in the square the same color. Or you could use gl_PointCoord to compute a color for the square. If you do the latter, you are basically using a procedural texture on the square.
- Here are some examples of point primitives rendered by WebGL, using computed colors for the pixels. [Figure: five rendered points, from a solid red square to disks with soft or patterned edges.] From left to right, here is how the fragment shader computed the color:

  (1) gl_FragColor = vec4(1,0,0,1);   // All points are red.

  (2) float dist = distance( gl_PointCoord, vec2(0.5) );
      if (dist > 0.5)
          discard;   // Means discard this fragment entirely.
      gl_FragColor = vec4(1,0,0,1);   // Pixels closer to center than 0.5 are red.

  (3) float dist = distance( gl_PointCoord, vec2(0.5) );
      float alpha = 1.0 - smoothstep(0.45, 0.5, dist);
      gl_FragColor = vec4(1,0,0,alpha);
          // Pixels farther from center than 0.5 are fully transparent.
          // Pixels closer than 0.45 are a fully opaque red.
          // Pixels between 0.45 and 0.5 from center are partly transparent,
          // giving a smoother, "antialiased" appearance to the edge.
          // (For this to work, blending must be enabled in WebGL.)

  (4) float dist = distance( gl_PointCoord, vec2(0.5) );
      float alpha = 1.0 - smoothstep(0.0, 0.5, dist);
      gl_FragColor = vec4(1,0,0,alpha);
          // Like the previous example, but transparency decreases
          // all the way to the center.

  (5) float dist = distance( gl_PointCoord, vec2(0.5) );
      float alpha = 0.2 + 0.8*pow( sin(11.0*dist), 2.0 );
      gl_FragColor = vec4(1,0,0,alpha);
          // Transparency is a more complex function of distance from the center.