This document provides a comprehensive overview of texture mapping in computer graphics, detailing its importance in enhancing visual realism by applying 2D textures to 3D surfaces. It covers various aspects including texture lookup processes, coordinate functions, antialiasing techniques, and applications such as normal mapping and bump mapping. The report emphasizes the challenges of minimizing distortions and ensuring accurate texture representation across different shapes and rendering conditions.


Report 6

Texture mapping

Miriam de Bengoa Aletta


INDEX

1. Introduction

2. Looking Up Texture Values

3. Texture Coordinate Functions

4. Antialiasing Texture Lookups

5. Applications of Texture Mapping

6. Procedural 3D Textures

7. References
Introduction
Texture mapping is a technique in computer graphics
that applies 2D images, known as textures, onto 3D
surfaces to enhance visual realism. By simulating
spatially varying surface properties such as color,
patterns, and roughness, texture mapping provides an
efficient way to create intricate surface details
without increasing geometric complexity. This
technique is essential for rendering realistic
environments and objects in both real-time and pre-
rendered applications.

Looking Up Texture Values


The process of texture mapping begins with a texture lookup. This involves determining the texture
value at a specific point on a 3D surface, which corresponds to a point in 2D texture space. A shader,
during the rendering process, calculates this value based on texture coordinates (u, v) assigned to
the surface. The result of the lookup influences the shading calculations, determining the final color,
reflectivity, or other visual properties of the surface fragment. Accurate lookups are essential for
achieving visually compelling results, as they ensure the texture aligns correctly with the geometry.
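
The lookup step can be illustrated with a minimal nearest-neighbor sketch in Python; the texture is assumed to be a plain 2D array of RGB tuples, and the function name is illustrative rather than from any specific API:

```python
def texture_lookup(texture, u, v):
    """Return the texel nearest to texture coordinates (u, v) in [0, 1]."""
    height = len(texture)
    width = len(texture[0])
    # Map (u, v) to integer texel indices, clamping to the valid range
    # so that u = 1.0 or v = 1.0 does not index past the last texel.
    i = min(int(u * width), width - 1)
    j = min(int(v * height), height - 1)
    return texture[j][i]
```

A shader performs this kind of lookup per fragment, typically with filtering (discussed later) rather than nearest-neighbor sampling.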

Texture coordinate functions


Designing the texture coordinate function φ properly is essential for achieving good results in texture
mapping. This involves determining how to deform a flat, rectangular image so it conforms to a 3D
surface without wrinkles, tears, or folds. While this task is straightforward for simple surfaces, like a
flat rectangle, it becomes significantly more challenging for complex shapes, such as the body of a
character.
This problem is analogous to cartography, where projecting the curved surface of the Earth onto a flat
map inevitably introduces distortions in areas, angles, and distances. Similarly, texture mapping seeks
to minimize distortions while covering a large surface continuously.
Designing φ requires balancing several key goals:
• Bijectivity: Each point on the surface should map to a unique point in texture space to avoid
unintentional overlaps. Intentional overlaps, like repeating patterns, can be used for effects such
as tiling.
• Size Distortion: The scale of the texture should remain consistent across the surface, ensuring
nearby points on the surface map to similarly spaced points in the texture.
• Shape Distortion: Shapes should retain their proportions. For example, a small circle on the
surface should map to a reasonably circular shape in the texture space, rather than an elongated
or squashed one.
• Continuity: Neighboring points on the surface should map to neighboring points in texture space
to avoid visible seams. While some discontinuities are often unavoidable, they should be placed in
inconspicuous areas.
For surfaces defined by parametric equations, texture coordinates can be derived directly by
inverting the parametric function. However, this approach may not always yield desirable properties.
For implicitly defined surfaces or those based on triangle meshes, other methods are needed. Broadly,
texture coordinates can be determined either:
Geometrically: Using spatial coordinates of the surface points.
Vertex-based: Storing texture coordinates at vertices and interpolating them across the surface.
These methods provide flexibility to adapt texture mapping to a variety of shapes and requirements.

Geometrically Determined Coordinates

Geometrically determined texture coordinates are used as a quick solution for simple shapes or as a
foundation for more refined mappings. Different mapping techniques adapt textures to various
shapes with varying levels of distortion:

• Planar projection: applies a texture by projecting it onto


a surface as if from a flat plane. This method works well
for flat objects but can cause overlapping in texture
space when used on closed shapes, as points from the
front and back of the object may map to the same
texture coordinates. An alternative is projective texture
mapping, which replaces planar projection with
perspective projection, improving alignment and making
it particularly useful for effects like shadow mapping.
• Spherical coordinates: This method maps points on a 3D surface to a sphere using latitude and
longitude values. While it effectively covers spherical shapes, it introduces distortion near the
poles, where texture elements can become overly stretched or compressed.
• Cylindrical coordinates: provide a better option for
columnar shapes by projecting the texture outward from
an axis, avoiding the severe distortions seen with spherical
mapping.
• Cubemaps: Using spherical coordinates to parameterize
spherical or sphere-like surfaces leads to significant
distortion of shape and area near the poles, causing visible
artifacts at two problematic points in the texture. A popular alternative is to project the texture
onto a cube, using six separate square textures, one for each face. This method, called a cubemap,
introduces discontinuities along the cube edges but keeps distortion of shape and area low.
Additionally, computing texture coordinates for a cubemap is more efficient than for spherical
coordinates, as projecting onto a plane requires only a simple division, similar to perspective
projection.
In a cubemap, texture coordinates are calculated based on the face to which a point projects. The
face is determined by the coordinate with the largest absolute value, and conventions are used to
standardize the orientation of the
u and v axes on each face.
For example, in OpenGL, the conventions ensure consistency in
texture rendering, especially when cubemaps are used for
environment mapping, where the texture is viewed from the
inside of the cube. Textures for cubemaps typically consist of six
square sections, often packed together in a single image
resembling an unwrapped cube. This approach minimizes distortion
while maintaining computational efficiency.
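
The face-selection rule described above can be sketched as follows; the axis orientations here are a simplified convention chosen for illustration, not the exact OpenGL layout:

```python
def cubemap_lookup(x, y, z):
    """Pick the cube face by the coordinate of largest magnitude, then
    project the direction onto that face with a single division."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:            # +x or -x face
        face, u, v = ('+x', -z, -y) if x > 0 else ('-x', z, -y)
        m = ax
    elif ay >= az:                       # +y or -y face
        face, u, v = ('+y', x, z) if y > 0 else ('-y', x, -z)
        m = ay
    else:                                # +z or -z face
        face, u, v = ('+z', x, -y) if z > 0 else ('-z', -x, -y)
        m = az
    # Divide by the dominant coordinate and remap [-1, 1] to [0, 1].
    return face, (u / m + 1) / 2, (v / m + 1) / 2
```

The single division per lookup is what makes cubemaps cheaper than the trigonometry required by spherical coordinates.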

Interpolated Texture Coordinates

For precise control of texture mapping on a triangle mesh surface, texture coordinates can be
explicitly assigned to each vertex. These coordinates are then interpolated across the triangles
using barycentric interpolation, similar to how smoothly varying attributes like colors or
normals are handled.
In an example with a single triangle, the texture coordinates of the vertices determine how the
texture maps onto the surface. For instance, given vertices with texture coordinates (0.2, 0.2),
(0.8, 0.2), and (0.2, 0.8), the texture mapping is defined, and interpolation handles the areas
within the triangle.
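
For this triangle, the interpolation step might look like the following sketch, where the barycentric weights (alpha, beta, gamma) sum to one:

```python
def interpolate_uv(bary, uv0, uv1, uv2):
    """Barycentric interpolation of per-vertex texture coordinates.
    bary = (alpha, beta, gamma) with alpha + beta + gamma == 1."""
    a, b, g = bary
    u = a * uv0[0] + b * uv1[0] + g * uv2[0]
    v = a * uv0[1] + b * uv1[1] + g * uv2[1]
    return u, v
```

At a vertex the weights are (1, 0, 0), so the assigned coordinates are recovered exactly; at the centroid the three coordinate pairs are simply averaged.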
A common way to visualize texture coordinates is by drawing triangles in texture space, positioning
the vertices at their assigned coordinates. This helps identify which parts of the texture are used by
specific triangles and aids in debugging texture-mapping issues.
The quality of a texture mapping depends on how well the mesh is laid out in texture space. Key
factors include:
• Continuity: Ensured when triangles share vertices, as neighboring triangles agree on texture
coordinates along shared edges.
• Injectivity: Prevents overlapping triangles in texture space, ensuring no point in the texture is
mapped to multiple surface locations.
• Size Distortion: Low when triangle areas in texture space are proportional to their areas in 3D.
For example, in a character’s face, small texture areas for the nose can cause textures to appear
enlarged, while larger areas for the temple can make textures appear smaller.
• Shape Distortion: Low when triangle shapes in 3D closely match their shapes in texture space.
High shape distortion occurs in cases like spheres near the poles.
Effective texture mapping minimizes these distortions to maintain accurate and visually appealing
results.

Tiling, Wrapping Modes, and Texture Transformations

Allowing texture coordinates to extend beyond the bounds of a texture image can be useful for both
practical and creative purposes. This approach can handle minor calculation errors or be used as a
modeling tool to scale and position textures effectively.
If a texture covers only part of a surface, instead of creating a high-resolution texture with blank
areas, one can scale the texture coordinates to a larger range. For example, scaling to
[−4.5,5.5]×[−4.5,5.5] positions the texture in the center at a smaller scale.
Out-of-bounds texture lookups can be handled in two main ways:
• Clamping: Coordinates outside the texture are clamped to the nearest edge, returning the color
at the edge. This is ideal when extending a constant background, such as a logo on a white field.
• Wrapping: For repeating patterns (e.g., checkerboards or tiles), wraparound indexing repeats the
texture by wrapping coordinates around the edges using the modulus operation.
The choice between clamping and wrapping is determined by the texture's wrapping mode, which
defines how to handle out-of-bounds coordinates. This allows the texture to act as a function over an
infinite 2D plane.
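
In texel-index terms, the two modes reduce to a clamp and a modulus; a sketch, assuming integer indices and a texture of the given size:

```python
def clamp_index(i, size):
    """Clamping mode: out-of-range indices snap to the nearest edge."""
    return max(0, min(size - 1, i))

def wrap_index(i, size):
    """Wrapping mode: wraparound indexing repeats the texture."""
    return i % size
```

Python's % operator already returns a non-negative result for a positive modulus, which is exactly the behavior wraparound indexing needs for negative coordinates.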
Texture scaling and positioning are often adjusted using matrix transformations applied to texture
coordinates. An affine or projective transformation, represented by a 3×3 matrix, can scale,
translate, or otherwise transform the coordinates without modifying the underlying texture or mesh
data. Most rendering systems support such transformations, simplifying the process of texture
mapping.

Perspective correct interpolation


Correct perspective interpolation in texture mapping ensures textures appear accurate under
perspective projection, avoiding distortions. Naïve screen-space interpolation fails because it does
not account for the non-linear compression of objects farther from the viewer. To address this,
texture coordinates are treated as attributes of 3D points in homogeneous space and interpolated
alongside spatial attributes like x, y, z. After interpolation, a perspective division recovers the
coordinates by dividing through by the interpolated reciprocal depth 1/w_r.
This process involves interpolating (u/w_r, v/w_r, 1/w_r) in screen space, recovering (u, v) by
dividing u/w_r and v/w_r by 1/w_r. Because perspective projection preserves lines and planes, linear
interpolation in homogeneous space remains valid. For efficiency, precomputations outside the
rasterization loop reduce computational overhead.
By interpolating 1/w_r correctly, texture coordinates align with world-space geometry, ensuring
accurate texture mapping. This method guarantees consistent perspective effects across triangles,
avoiding artifacts and producing visually accurate renders.
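
The procedure can be illustrated along a single edge; this sketch interpolates (u/w, v/w, 1/w) linearly in screen space and then performs the division, assuming positive homogeneous depths:

```python
def perspective_correct_uv(t, uv0, w0, uv1, w1):
    """Interpolate texture coordinates between two projected vertices.
    t is the screen-space interpolation parameter in [0, 1]; w0 and w1
    are the homogeneous depths of the two endpoints."""
    # Interpolate (u/w, v/w, 1/w) linearly in screen space...
    inv_w = (1 - t) / w0 + t / w1
    u_over_w = (1 - t) * uv0[0] / w0 + t * uv1[0] / w1
    v_over_w = (1 - t) * uv0[1] / w0 + t * uv1[1] / w1
    # ...then recover (u, v) by dividing by the interpolated 1/w.
    return u_over_w / inv_w, v_over_w / inv_w
```

At the screen-space midpoint of an edge whose far endpoint has w = 3, the recovered u is 0.25 rather than 0.5, reflecting the perspective compression that naive screen-space interpolation misses.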

Continuity and seams

Discontinuities in texture coordinates are often unavoidable due to the limitations of mapping a
closed 3D surface onto a 2D texture image. Topologically, it is impossible to create a continuous,
bijective mapping for such surfaces without introducing seams—curves where texture coordinates
abruptly change. Seams help minimize distortion across the rest of the surface and are a common
feature in mappings like spherical, cylindrical, and cubemaps.
In interpolated texture coordinates, seams require special handling because they are not
naturally continuous. Typically, texture coordinates are shared across mesh vertices, ensuring
smooth transitions. However, when a triangle spans a seam, the interpolation process creates
severe distortions or folds. For instance, a globe mapped with spherical coordinates may produce
unrealistic textures across the 180th meridian if interpolation treats discontinuous longitudes
(e.g., 167°E to 179°W) as a continuous range, compressing or folding the texture.

The solution is to duplicate vertices at the seams. By assigning separate texture coordinates to
each duplicate vertex—differing by an appropriate adjustment (e.g., 360° in longitude)—triangles
on opposite sides of the seam interpolate independently, avoiding distortions. While this approach
increases the number of vertices, it ensures clean transitions and accurate texture mapping
across seams.

Antialiasing texture lookups


Antialiasing in texture mapping addresses aliasing caused by sampling detailed textures during
rendering. The solution involves area averaging, where each pixel represents the average color
over its area instead of a single point. Supersampling, taking multiple texture samples per pixel
and averaging, achieves high-quality results but can be computationally expensive. Additionally,
reconstruction filters are used to interpolate between texels for smooth transitions. Efficient
methods for area averaging and reconstruction are essential to minimize artifacts and maintain
texture quality.

The Footprint of a Pixel


Antialiasing textures is complex because the mapping between image space and texture space
changes dynamically. Each pixel should represent the average color of its corresponding
footprint in texture space, which varies in size and shape depending on viewing conditions and the
texture coordinate function. For instance, pixels closer to the camera have smaller footprints,
while oblique viewing angles or distorted texture mappings can elongate or deform footprints.
To handle antialiasing efficiently, the mapping from image to texture space, denoted as ψ, is
approximated linearly. This linearization uses a derivative matrix J that describes how texture
coordinates change with pixel positions. The matrix helps estimate the pixel's footprint in
texture space as a parallelogram, capturing its size and shape.

The ideal antialiasing solution computes the average texture value over the parallelogram
footprint, which ensures high-quality rendering. However, calculating this exactly is
computationally expensive. Instead, practical methods approximate the process, balancing
performance and quality. These approaches focus on efficiently estimating texture averages
over pixel footprints while maintaining visual fidelity.
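
One way to approximate the derivative matrix J is by finite differences of the texture-coordinate function across neighboring pixels; a sketch, with ψ passed as an ordinary Python function:

```python
def footprint_size(psi, x, y, eps=1.0):
    """Estimate the pixel footprint in texture space.
    psi maps pixel coordinates (x, y) to texture coordinates (u, v);
    the two finite-difference vectors are the columns of J and span
    the parallelogram footprint. Returns the longest edge length D."""
    u0, v0 = psi(x, y)
    ux, vx = psi(x + eps, y)      # step one pixel in x
    uy, vy = psi(x, y + eps)      # step one pixel in y
    ex = ((ux - u0) ** 2 + (vx - v0) ** 2) ** 0.5 / eps
    ey = ((uy - u0) ** 2 + (vy - v0) ** 2) ** 0.5 / eps
    return max(ex, ey)
```

GPUs obtain essentially the same derivatives almost for free by differencing values across the 2x2 pixel quads they shade together.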

Reconstruction
When magnifying a texture (footprints smaller than a texel), the challenge is to produce a smooth
image without revealing the texel grid. This is achieved through interpolation using a reconstruction
filter, similar to image upsampling. However, unlike image resampling, texture lookups are irregular,
making large, high-quality filters impractical. Instead, bilinear interpolation is commonly used for
its efficiency.
Bilinear interpolation calculates the value at a sample point by blending the four nearest texels,
weighted based on their proximity. Although effective, this process can become a performance
bottleneck due to the memory latency of fetching these texel values. High-performance systems
often include specialized hardware to handle interpolation and optimize memory access with
caching.
While bilinear interpolation may not be smooth enough for all applications, textures can be
preprocessed at higher resolutions with better filters, ensuring sufficient smoothness for effective
bilinear interpolation.
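
A sketch of the bilinear blend, with (u, v) given in texel units and the texture as a 2D array of scalar values:

```python
def bilinear_lookup(texture, u, v):
    """Blend the four nearest texels, weighted by proximity."""
    i0, j0 = int(u), int(v)
    # Clamp the second pair of indices at the texture border.
    i1 = min(i0 + 1, len(texture[0]) - 1)
    j1 = min(j0 + 1, len(texture) - 1)
    fu, fv = u - i0, v - j0       # fractional position within the cell
    top = (1 - fu) * texture[j0][i0] + fu * texture[j0][i1]
    bottom = (1 - fu) * texture[j1][i0] + fu * texture[j1][i1]
    return (1 - fv) * top + fv * bottom
```

The four texel fetches per sample are why memory latency, rather than arithmetic, tends to dominate the cost of this filter.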

Mipmapping
When a pixel footprint covers many texels, good antialiasing requires averaging the texture values
over the footprint to prevent aliasing. Calculating this average directly is computationally expensive,
so a more efficient approach is to precompute averages for various areas using a technique called
Mipmapping.
It involves creating a sequence of textures (a mipmap) where each level represents the original
texture at progressively lower resolutions. The base level (level 0) is the full-resolution texture. Each
subsequent level is downsampled by a factor of 2 in both dimensions, with texels in each level
representing averages of 2x2 areas in the previous level. For example, a 1024×1024 texture generates
a mipmap with 11 levels, ending in a single texel at level 10.
This hierarchical structure, known as an image pyramid, allows efficient sampling by selecting or
interpolating between appropriate mipmap levels based on the pixel footprint size, significantly
improving performance and reducing aliasing.
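
Building the pyramid is a repeated 2x2 averaging; the sketch below assumes a square, power-of-two, single-channel texture:

```python
def build_mipmap(texture):
    """Build an image pyramid: each level halves the resolution, with
    every texel the average of a 2x2 block of the level below."""
    levels = [texture]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // 2
        levels.append([
            [(prev[2*j][2*i] + prev[2*j][2*i+1]
              + prev[2*j+1][2*i] + prev[2*j+1][2*i+1]) / 4
             for i in range(n)]
            for j in range(n)])
    return levels
```

Because each level is a quarter the size of the previous one, the whole pyramid adds only about one third to the storage of the base texture.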

Basic texture filtering with mipmaps

Mipmap filtering significantly improves texture sampling efficiency by using precomputed averages
at various levels. For a square pixel footprint of width D, the appropriate mipmap level k is chosen
so that 2^k ≈ D, i.e., k = log_2 D. Since k is generally not an integer, this leads to a choice
between the nearest integer levels for sampling, which may cause artifacts due to abrupt
transitions between levels.
When the footprint is elongated, simple mipmapping struggles to handle anisotropic shapes
effectively. In these cases, a practical method is to use the length of the longest edge of the
footprint, D = max(||u_x||, ||u_y||), to guide sampling between two mipmap levels. This trilinear
interpolation helps smooth out elongated footprints, but still introduces blurring in anisotropic
cases, particularly on large surfaces viewed at grazing angles.
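
The level choice and blend factor reduce to a logarithm; a sketch, assuming D >= 1 so the clamped base level is exact:

```python
import math

def trilinear_levels(D):
    """Continuous mipmap level k = log2(D), split into the two nearest
    integer levels and the blend factor between them."""
    k = math.log2(D)
    k0 = max(0, math.floor(k))
    return k0, k0 + 1, k - k0   # lower level, upper level, blend weight
```

Sampling both levels bilinearly and mixing them with the blend weight gives the trilinear result, removing the visible pops between adjacent integer levels.
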
Applications of texture mapping
Controlling shading parameters
Texture mapping is commonly used to introduce variation in color for shading, simulating
different materials like wood or stone. Beyond diffuse color, specular reflectance and roughness
can also be textured. In many cases, different texture maps are correlated, affecting multiple
parameters like roughness, color, and shininess together. For example, a printed logo on a
ceramic cup can change both its color and reflectance properties.

Normal maps and bump maps

Normal mapping enhances shading by using textures


to define the surface normals, which can differ from
the geometric normals. These normals are stored in
a texture, usually with three components
representing 3D coordinates of the normal vector.
Object-space normal maps are simple but tied to
surface geometry and cannot adapt to deformations.
To address this, tangent space-based normal maps
are used, where an orthonormal basis is defined using
tangent vectors. These normals vary less and are
near the surface normal, making them suitable for
smooth surfaces. Bump maps can be used to create
normal maps by providing height information, with
brighter areas corresponding to protrusions and
darker areas to recessions.
For example, in creating realistic textures like
woodgrain, normal maps can simulate surface
roughness and imperfections, enhancing visual
fidelity.

Displacement maps

Normal maps provide a shading effect without altering the underlying surface, which becomes
apparent in still images and animations where parallax and surface deformations are absent. To
address this, displacement maps are used to physically alter the surface geometry.
A displacement map defines how much each point on the surface should be moved along the
normal direction, creating a more realistic 3D effect. This is commonly achieved by tessellating
the surface and displacing the vertices using the displacement map.
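
The displacement step itself is simple once the surface is tessellated; each vertex moves along its unit normal by the sampled height (names are illustrative):

```python
def displace_vertices(vertices, normals, heights, scale=1.0):
    """Move each vertex along its unit normal by the sampled
    displacement value, scaled by an overall strength factor."""
    return [
        tuple(p[k] + scale * d * n[k] for k in range(3))
        for p, n, d in zip(vertices, normals, heights)
    ]
```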
Shadow maps
Shadow maps are a powerful tool for generating shadows in rasterized renderings. They represent
the illuminated volume from a point light source, storing the distance to the closest surface for all
points in the scene. During rendering, these depth values are compared to determine whether a
fragment is illuminated or shadowed.
The process involves creating a depth map in a separate pass, then using that map to evaluate
visibility for each fragment. To minimize artifacts, a shadow bias (ε) is applied to handle precision
issues and prevent artifacts near shadow boundaries. Additionally, percentage closer filtering can be
used to reduce aliasing by averaging multiple shadow samples.
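
The depth comparison and the percentage-closer average can be sketched as follows; the bias value is illustrative, and in practice it must be tuned per scene:

```python
def in_shadow(depth_map, i, j, fragment_depth, bias=1e-3):
    """A fragment is shadowed when its depth (distance to the light)
    exceeds the stored closest-surface depth by more than the bias."""
    return fragment_depth > depth_map[j][i] + bias

def pcf(depth_map, i, j, fragment_depth, bias=1e-3):
    """Percentage-closer filtering: average the binary shadow test
    over a 3x3 neighborhood to soften aliasing, clamping at borders."""
    hits, count = 0, 0
    for dj in (-1, 0, 1):
        for di in (-1, 0, 1):
            jj = min(max(j + dj, 0), len(depth_map) - 1)
            ii = min(max(i + di, 0), len(depth_map[0]) - 1)
            hits += in_shadow(depth_map, ii, jj, fragment_depth, bias)
            count += 1
    return hits / count
```

Note that the samples are filtered after the comparison, not before: averaging raw depths and then comparing would produce meaningless results at shadow edges.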

Environment maps
Environment maps are used to introduce detailed illumination in scenes without needing complex
light source geometry. By assuming that illumination depends solely on the direction of view,
environment maps represent this dependency as a function over a unit sphere, just like color
variation on spherical objects.
In ray tracing, environment maps provide colors for rays that don’t intersect with any objects
by mapping directions directly to texture coordinates. This allows shiny objects to reflect both
the scene and the background environment seamlessly. A similar approach is used in
rasterization with reflection mapping, where the environment map is sampled for mirror
reflections.
More advanced techniques include environment lighting, where the entire illumination is
computed from the environment map using methods like Monte Carlo integration or by
simulating point sources with shadow maps. Environment maps are typically stored using
spherical coordinates or cubemaps, with cubemaps being more efficient and widely used in
interactive applications.
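
Mapping a ray direction to spherical texture coordinates takes only a few lines; this sketch assumes a unit direction vector and treats y as the up axis:

```python
import math

def direction_to_uv(x, y, z):
    """Map a view direction to spherical texture coordinates:
    u from the azimuth (longitude), v from the polar angle (latitude)."""
    theta = math.acos(max(-1.0, min(1.0, y)))   # polar angle
    phi = math.atan2(z, x)                      # azimuth in (-pi, pi]
    u = (phi + math.pi) / (2 * math.pi)
    v = theta / math.pi
    return u, v
```

In a ray tracer, this is applied to rays that miss all geometry; for reflection mapping, it is applied to the mirror-reflected view direction instead.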
Procedural 3D textures
3D stripe textures
There are various approaches to creating a striped texture using two colors, c_0 and c_1. One
approach involves using a sine function to alternate between the two colors. Another approach adds
a width parameter, w, to adjust the stripe width. Additionally, a third method employs a parameter,
t, to provide a smooth interpolation between the two colors. These methods offer different levels of
control and smooth blending for creating striped textures.
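
The three variants can be sketched directly; the formulas follow the common sine-based construction, with p a 3D point and colors as RGB tuples:

```python
import math

def stripe(p, c0, c1):
    """Sine-based stripes: pick c0 or c1 by the sign of sin(x)."""
    return c0 if math.sin(p[0]) > 0 else c1

def stripe_width(p, c0, c1, w):
    """Width parameter w controls the stripe spacing."""
    return c0 if math.sin(math.pi * p[0] / w) > 0 else c1

def smooth_stripe(p, c0, c1, w):
    """Smoothly interpolate between the colors with t in [0, 1]."""
    t = (1 + math.sin(math.pi * p[0] / w)) / 2
    return tuple((1 - t) * a + t * b for a, b in zip(c0, c1))
```

The first two produce hard-edged stripes; the third replaces the sign test with a continuous blend factor, removing the sharp transition entirely.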

Solid noise
To create more complex and realistic textures like mottled patterns found on bird eggs, a
technique called Perlin noise is commonly used. Unlike simple random noise, Perlin noise provides
a smoother, more controlled random quality by interpolating between random points in a lattice
using techniques such as Hermite interpolation and the use of random vectors. These adjustments
reduce the visibility of the underlying grid structure, making the noise appear more organic.
Perlin's method involves a cubic weighting function and pseudorandom tables to generate random
unit vectors, which contribute to a more refined and visually appealing texture. Additionally, solid
noise, which ranges from -1 to 1, can be transformed into colors by adjusting the values smoothly,
resulting in more visually consistent and contrast-rich textures.

Turbulence
Many natural textures exhibit a variety of feature sizes, which can be achieved through Perlin's
pseudofractal turbulence function. This function iteratively adds scaled copies of the noise
function, creating a layered effect of increasing complexity. Turbulence can also be applied to
distort simple patterns like stripes by incorporating the turbulence into the texture
computation, blending between different scaled noise levels. This results in a more dynamic and
complex texture.
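
Turbulence is easy to sketch given any smooth noise function; below, a tiny hash-based value noise stands in for true Perlin gradient noise, purely to keep the example self-contained:

```python
import math

def value_noise(x):
    """Stand-in smooth noise in [0, 1). Perlin noise proper interpolates
    lattice gradient vectors; this hash-based value noise is only a
    self-contained substitute for illustration."""
    def hash01(i):
        # Deterministic pseudorandom value in [0, 1) for lattice point i.
        return math.sin(i * 127.1) * 43758.5453 % 1.0
    i, f = math.floor(x), x - math.floor(x)
    t = f * f * (3 - 2 * f)               # Hermite-style smoothstep
    return (1 - t) * hash01(i) + t * hash01(i + 1)

def turbulence(x, octaves=4):
    """Perlin's pseudofractal turbulence: sum scaled copies of the
    noise, each octave doubling frequency and halving amplitude."""
    total, freq, amp = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * abs(value_noise(x * freq))
        freq *= 2
        amp /= 2
    return total
```

Feeding the turbulence value into a stripe function's phase is the classic way to turn regular stripes into marble-like patterns.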

References
Fundamentals of Computer Graphics, Fourth Edition:
http://repo.darmajaya.ac.id/5422/1/Fundamentals%20of%20Computer%20Graphics%2C%20Fourth%20Edition%20%28%20PDFDrive%20%29.pdf
