A final type of LOD that deserves mention deals with simplification not of the geometry of an object, but instead of its texture. The reason we wish to do this is to improve the appearance of texture mapped polygons which are viewed from a great distance. In contrast to geometry LOD techniques, which are intended to speed up processing by rendering fewer polygons, this texture LOD scheme mainly aims to improve visual quality. (In some cases this texture LOD scheme can also cause more coherent memory accesses, thus also improving performance, but this is not typically the main goal.)
The problem is that textures viewed at a great distance get mapped onto very few pixels on-screen. With ordinary texture mapping, for each rasterized pixel we choose exactly one texel from the texture map to display. This causes unwanted moiré (aliasing) artifacts, because we are effectively sampling the texture image at too low a frequency. That is, for a distant texture, several texels should all contribute to the final pixel color on-screen, because several texels all project to the same pixel; but ordinary texture mapping uses just one texel. The plotted color is therefore not representative of the actual light contribution of all the texels.
One solution to this problem is to use filtering. This method dynamically averages the contributions of several surrounding texels to arrive at the final pixel color on-screen. This is an excellent solution, but it is slow in the absence of hardware support.
Another solution to the problem is to pre-filter the textures into smaller versions, which are used when the polygon is farther away. Then, when performing the texture mapping, we use a texel from the appropriate smaller, pre-filtered texture. We can do this on a per-pixel basis or per-polygon basis. Per-pixel, we compute the distance of each pixel to the camera, select the appropriate texture based on the distance (greater distance causes selection of a smaller texture), then select the appropriate texel from the selected texture. Per-polygon, we simply compute one average distance value for the entire polygon, and then select and apply one entire pre-filtered texture to the polygon, based on the distance. Per-pixel produces better results; per-polygon is faster, but can result in popping artifacts as the entire polygon suddenly shifts between using a smaller and a larger version of the texture.
The smaller, pre-filtered versions of the textures are called MIP maps, although the collection of all such MIP maps is also often referred to as the MIP map. The idea was introduced by Lance Williams in his article "Pyramidal Parametrics" [WILL83]. The MIP in MIP mapping stands for the Latin multum in parvo, meaning many things in a small place, which in this case refers to the storage of the original texture and all of its smaller versions in a space 4/3 of the original texture's size. We arrive at this 4/3 factor as follows. Each smaller version of the texture is typically exactly 1/4 of the size of the next larger version, having exactly half of its horizontal and vertical dimensions. We can compute such smaller versions simply by averaging the colors from each 2 × 2 region of the larger version. We continue successively reducing the size of the MIP maps until the smallest map is just one pixel wide and/or high. Then, the space requirement expressed as a factor of the original texture size is 1 + 1/4 + 1/16 + 1/64 + 1/256 and so forth, a series whose limit is 1 + 1/3, or 4/3.
NOTE OpenGL supports MIP mapping; the GLU function gluBuild2DMipmaps constructs the smaller MIP maps and loads them into the texture.