Texture coordinates are often specified as a vertex attribute, but in some cases it can make sense to compute them from other data. This demo textures objects using texture coordinates that are computed from object coordinates or, in one case, from eye coordinates. (Object coordinates are simply the coordinates specified as vertex attributes, before any transformation is applied.) Note that a texture coordinate translation of (0.5, 0.5) is applied, since the generated texture coordinates in this demo tend to run from −0.5 to 0.5 rather than from 0 to 1.
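The translation can be sketched as a small helper. This is an illustrative function, not code from the demo itself; the function name and the default offsets of (0.5, 0.5) are assumptions matching the description above.

```python
def translate_texcoord(s, t, ds=0.5, dt=0.5):
    """Shift generated texture coordinates by (ds, dt), mapping
    the generated range [-0.5, 0.5] into the usual [0, 1]."""
    return (s + ds, t + dt)

# A generated coordinate at (-0.5, -0.5) lands at (0.0, 0.0),
# one corner of the texture image; (0.5, 0.5) lands at the opposite corner.
print(translate_texcoord(-0.5, -0.5))
print(translate_texcoord(0.5, 0.5))
```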
The first option simply uses the x and y coordinates from the object coordinate system as the texture coordinates. This works reasonably well for texturing the front face of a cube. In this demo the cube has size 1, so one copy of the texture image is mapped onto the front face. On the back face, the texture is mirror-reversed. On the other four faces, either x or y is constant, so a one-pixel-wide strip from the edge of the image is smeared across the face. The second option uses the z and x coordinates instead of x and y. It is similar to the first option, but projects the texture from a different direction.
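The first two options amount to picking two of the three object coordinates as (s, t). A minimal sketch, with hypothetical function names and the demo's (0.5, 0.5) translation folded in:

```python
def texcoords_xy(vertex):
    """Option 1: use object-space x and y as (s, t).
    This projects the texture along the z-axis."""
    x, y, z = vertex
    return (x + 0.5, y + 0.5)

def texcoords_zx(vertex):
    """Option 2: use object-space z and x as (s, t).
    Same idea, but the texture is projected from a different direction."""
    x, y, z = vertex
    return (z + 0.5, x + 0.5)

# For a unit cube centered at the origin, the front-face corner
# (0.5, 0.5, 0.5) maps to texture coordinate (1.0, 1.0).
print(texcoords_xy((0.5, 0.5, 0.5)))
```

On a side face where x is constant at 0.5, `texcoords_xy` always returns s = 1.0, which is why only an edge strip of the image appears there.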
The "Cubical" option might be the most useful for general purposes. It projects a copy of the texture onto each face of a cube. The direction of projection is along one of the coordinate axes, chosen according to the normal vector to the surface: the projection is along the axis for which the normal's component has the largest absolute value.
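One way to implement that axis choice is to compare the absolute values of the normal's components. This is a sketch of the standard technique, not necessarily the demo's exact code; the choice of which two coordinates become (s, t) for each axis is an assumption.

```python
def texcoords_cubical(vertex, normal):
    """Project the texture along the coordinate axis whose normal
    component has the largest absolute value."""
    x, y, z = vertex
    nx, ny, nz = (abs(c) for c in normal)
    if nx >= ny and nx >= nz:
        s, t = z, y   # normal is mostly along x: project along the x-axis
    elif ny >= nz:
        s, t = x, z   # normal is mostly along y: project along the y-axis
    else:
        s, t = x, y   # normal is mostly along z: project along the z-axis
    return (s + 0.5, t + 0.5)

# A point on the +x face of a unit cube uses its (z, y) coordinates.
print(texcoords_cubical((0.5, 0.25, -0.25), (1.0, 0.0, 0.0)))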
The cylindrical option wraps a copy of the texture once around a cylinder whose axis is the z-axis. It gives an undistorted copy of the texture on the side of a cylinder.
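The cylindrical mapping can be computed from the angle around the axis. A sketch under the assumption that the angle is found with `atan2` and the z coordinate supplies the vertical texture coordinate (the demo's exact formula may differ):

```python
import math

def texcoords_cylindrical(vertex):
    """Wrap the texture once around a cylinder whose axis is the z-axis."""
    x, y, z = vertex
    s = math.atan2(y, x) / (2 * math.pi)  # angle around the axis, in [-0.5, 0.5]
    t = z                                  # height along the axis
    return (s + 0.5, t + 0.5)              # shift into [0, 1]

# A point on the positive x-axis is at angle 0, so s = 0.5,
# the horizontal center of the texture image.
print(texcoords_cylindrical((1.0, 0.0, 0.0)))
```

Because s depends only on the angle and t only on the height, the texture wraps around the side of the cylinder without stretching.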
The final option takes the x and y coordinates from the eye coordinate system. Using eye coordinates effectively pins the texture to the screen instead of to the object: the texture doesn't move as you rotate the object. The effect is interesting, but maybe not very useful.
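Eye coordinates are obtained by applying the modelview transformation to the object coordinates, so the texture coordinates change whenever the object moves relative to the viewer, which is exactly why the texture appears fixed on the screen. A sketch with a plain row-major 4×4 matrix (the helper names are hypothetical):

```python
def eye_coords(vertex, modelview):
    """Transform object coordinates by a 4x4 row-major modelview matrix."""
    x, y, z = vertex
    v = (x, y, z, 1.0)
    return tuple(sum(row[i] * v[i] for i in range(4)) for row in modelview[:3])

def texcoords_eye(vertex, modelview):
    """Use eye-space x and y as (s, t), with the demo's (0.5, 0.5) shift.
    As the modelview matrix changes, so do the texture coordinates."""
    ex, ey, ez = eye_coords(vertex, modelview)
    return (ex + 0.5, ey + 0.5)

# With an identity modelview matrix, eye coordinates equal object
# coordinates, so this reduces to the first option.
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(texcoords_eye((0.25, -0.25, 0.0), identity))
```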