
UNIT IV: MODELING

3.1 Modeling

Modeling is describing (mathematically) a situation in reality for the purpose of solving a problem or question in that situation.

 Modeling is both a way of working and a way of thinking.


 It includes an iterative process that demands creativity and inventiveness and
in which mathematical, scientific and technical knowledge is applied to
describe new situations.
 This includes determining a strategy, analyzing or getting to the bottom of the
problem, choosing variables, setting up connections, and deploying
mathematical and computational tools.
 The approach depends on the objective, the question, or the problem to be
solved.
 The objective may be a better understanding of the problem situation itself,
but also the development of new conceptual knowledge.
 Modern computer technology and advanced digital applications play a central
role in complex problems.

Fig 3.1 Three categories of activities that characterize modeling

 Figure 3.1 gives an impression of the activities that are important in modeling.
 On the left-hand side are activities related to empirical research, such as
collecting data that are used in the model and/or can be used to assess the
modeling results.
 On the right-hand side are conceptual activities that must lead to the
development of a model, including creative thinking and formulating
hypotheses to be tested.
 The modeling process is from this point of view almost synonymous with the
process of ‘doing research’.
 Modeling has a long tradition in physics, but in recent years model-based
research has been increasingly applied in other fields.

 Models such as climate and weather models, statistical models and ecological
models have a big impact on our society.

3.2 Geometric Modeling

Geometric modeling is one of the major uses of CAD systems. It uses mathematical
descriptions of geometric elements to facilitate the representation and
manipulation of graphical images on the computer's screen.

 While the central processing unit (CPU) provides the ability to quickly make
the calculations specific to the element, the software provides the instructions
necessary for efficient transfer of information between user and the CPU.
 There are three types of commands used by the designer in CAD geometric
modelling.
 The first allows the user to input the variables needed by the computer to
represent basic geometric elements such as points, lines, arcs, circles, splines,
and ellipses.
 The second is used to transform these elements that include scaling, rotation,
and translation.
 The third allows the various elements previously created by the first two
commands to be joined into a desired shape.
 During the whole geometric modelling process, mathematical operations are
at work that can be easily stored as computerized data and retrieved as
needed for review, analysis, and modification.
 There are different ways of displaying the same data on the CRT (cathode ray
tube) screen, depending on the needs or preferences of the designer.
 One method is to display the design as a 2-D representation of a flat object
formed by interconnecting lines.
 Another method displays the design as a 3-D view of the product.

Four types of Geometric modeling

The four types are: (1) wireframe modeling, (2) surface modeling, (3) solid
modeling, and (4) hybrid solid modeling.

1. Wireframe Modeling:
 The wireframe model is a skeletal description of a 3-D part.
 It consists only of points, lines, and curves that describe the geometric
boundaries of the object.
 There are no surfaces in a wireframe model.

2. Surface modeling:
 Surface modeling defines not only the edges of the 3-D part, but also its
surface.
 One of its major benefits is that it allows mass-related properties to be
computed for the product model (volume, surface area, moment of inertia, etc.)

 allows section views to be automatically generated.
 The surface modelling is more sophisticated than wireframe modelling.
In surface modelling, there are the two different types of surfaces that
can be generated: faceted surfaces using a polygon mesh and true
curve surfaces.
 It can exactly represent a wide range of curves such as arcs and cones.
 It offers greater flexibility for controlling continuity.
 It can precisely model nearly all kinds of surfaces more robustly than
the polynomial-based curves that were used in earlier surface models.

3. Solid modeling
 Solid modeling defines the surfaces of a product with the added advantages
of volume and mass.
 It takes the surface model one step further in that it assures that the
product being modeled is valid and realizable.
 This allows image data to be used in calculating the physical properties
of the final product.
 Solid modeling software uses one of two methods: constructive solid
geometry (CSG) or boundary representation (B-rep).
 The CSG method uses Boolean operations (union, subtraction, and
intersection) on two sets of objects to define composite models.
 B-rep is a representation of a solid model that defines a product in terms
of its surface boundaries that are faces, edges, and vertices.

4. Hybrid solid modeling allows the user to represent a product with a mixture
of wireframe, surface modeling, and solid geometry.

Hybrid solid Modelling

 Shading removes hidden lines and assigns flat colors to visible surfaces.
 Rendering adds and adjusts lights and materials to surfaces to produce
realistic effects. Shading and rendering can greatly enhance the realism of
the 3-D image.

3.3 Virtual Object Shape

Virtual Object Shape describes the shape of an object in a virtual or computer-
generated environment. In various fields like computer graphics, virtual reality, and
computer-aided design, creating and manipulating virtual objects with realistic
shapes is crucial for an immersive and effective user experience.

Here are a few key aspects related to virtual object shape:

Mesh Representation: In computer graphics, 3D objects are often represented
using meshes. A mesh is a collection of vertices, edges, and faces that define the
shape of the object. The arrangement of these elements creates the surface
geometry of the virtual object.

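As a minimal sketch (the variable names here are illustrative, not from the text), a mesh for a flat square split into two triangles can be stored as vertex and face lists, with edges derived from the faces:

```python
# A minimal mesh: a unit square in the z = 0 plane, split into two triangles.
# Vertices are (x, y, z) positions; faces index into the vertex list.
vertices = [
    (0.0, 0.0, 0.0),  # 0: bottom-left
    (1.0, 0.0, 0.0),  # 1: bottom-right
    (1.0, 1.0, 0.0),  # 2: top-right
    (0.0, 1.0, 0.0),  # 3: top-left
]
faces = [
    (0, 1, 2),  # first triangle
    (0, 2, 3),  # second triangle
]

# Edges can be derived from the faces: each unordered pair of
# consecutive face vertices is an edge.
edges = {tuple(sorted((f[i], f[(i + 1) % 3]))) for f in faces for i in range(3)}
print(len(vertices), len(edges), len(faces))  # 4 vertices, 5 edges, 2 faces
```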

Volumetric Representation: Another approach involves representing the object as
a volume, often using voxels (3D pixels). This method provides a way to represent
both the surface and interior of an object in a more detailed manner.

 For example, volumetric representations of detected interest points have been
shown for the six KTH actions, with response strengths depicted by colour; the
distribution of interest points and their response strengths differ across actions.

Parametric Shapes: Virtual objects can be described using mathematical equations
that define their shapes. These equations may represent simple geometric shapes
like spheres and cubes, or more complex parametric surfaces.
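For instance, a sphere can be described parametrically. The following sketch (the function name `sphere_point` is an illustrative assumption) samples surface points from the standard spherical-coordinate equations:

```python
import math

def sphere_point(radius, theta, phi):
    """Parametric point on a sphere: theta is the polar angle
    (0..pi) and phi the azimuthal angle (0..2*pi)."""
    x = radius * math.sin(theta) * math.cos(phi)
    y = radius * math.sin(theta) * math.sin(phi)
    z = radius * math.cos(theta)
    return (x, y, z)

# Sample a coarse grid of surface points for a unit sphere.
points = [sphere_point(1.0, t * math.pi / 8, p * math.pi / 8)
          for t in range(9) for p in range(16)]
```

Every sampled point lies at the chosen radius from the origin, which is what makes the representation parametric rather than mesh-based.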

Procedural Generation: In some cases, virtual object shapes are generated
procedurally, meaning algorithms are used to create shapes based on certain rules
or parameters. This is commonly used in generating landscapes, textures, and
complex structures.

Physics-based Shape Modelling: In simulations and virtual environments, the
shape of objects may be influenced by physical properties like elasticity, deformation,
and collisions. This is especially important in physics-based simulations.

Texture Mapping: Apart from the geometric shape, the surface appearance of
virtual objects is also crucial for realism. Texture mapping involves applying 2D
images (textures) onto the surfaces of 3D objects to simulate details like color,
pattern, and reflectivity.

Animation and Deformation: Virtual objects may not only have a static shape but
can also be animated or deformed dynamically. Rigging and skeletal animation are
common techniques used for character animation, while techniques like morphing
and skinning are used for shape deformation.

3.4 Object Visual Appearance

The visual appearance of objects is given by the way in which they reflect and
transmit light. The colour of objects is determined by the parts of the spectrum of
(incident white) light that are reflected or transmitted without being absorbed.

Appearance of reflective objects

Figure 3.4.1: Manifestation of surface properties upon reflection of light.

Structures with dimensions, λ, above 0.1 mm can be seen directly by the unaided
eye (focus on surface); smaller structures become manifest by their effect on the
directional distribution of the reflected light (focus on source). Structures at
and below 0.1 mm reduce the distinctness of image (DOI), structures in the range
of 0.01 mm induce haze, and even smaller structures affect the gloss of the surface.
Appearance of transmissive objects

Figure 3.4.2 Scattering of light during transmission with classification of diffuse
transmission into wide and small-angle scattering domains, resulting in haze and
reduction of clarity, respectively.

3.5 Kinematics Modelling
It is a branch of mechanics that deals only with the motion of objects and not
the forces that cause the motion. Examples include moving trains and water
flowing in a river.
 We can find objects in motion all around us.
 Even when the person is resting, the heart pumps blood through the veins.
There is the motion of atoms and molecules in all the objects.
 There is motion when the ball is hit by a player with his bat.
 The branch of classical mechanics that deals with the study of the motion of
points, objects and groups of objects without considering the causes of motion
is called kinematics.
 Kinematics has its application in astrophysics to study the motion of celestial
objects.
 It is also used in robotics and biomechanics to explain the motion of objects
with joint parts, such as engines, human skeletons, robotic arms and much
more.
 In kinematics, we study the trajectories of the objects, as well as their
differential properties like velocity and acceleration.

Reference Frames
 The position of the object relative to the reference frame has to be described
in order to understand the motion of the object.
 Mathematically, the variable ‘x’ is used to represent the position of the object.
The position variable x can be described by making two choices.
 We can decide where x = 0 has to be put and which direction has to be taken
as the positive direction. This is known as choosing the frame of reference or
the coordinate system.
 Therefore, choosing the coordinate system or the set of axes within which the
position, orientation and other properties of the object are being measured is
called the frame of reference.

Displacement
 The change in the position of the object with respect to the frame of reference
is called displacement.
 For example, if a person walks from his house to the market, the displacement
is the relative distance of the market from his house (frame of reference).

Velocity and Acceleration


 The velocity of the object is defined as the displacement divided by the time taken.
 It is a vector quantity, and has both magnitude and direction.
 The rate of change of velocity is called acceleration.

Motion Graph
There are three types of motion graphs that are studied in kinematics.
1. Displacement-time graph

2. Velocity-time graph
3. Acceleration-time graph

Motion Diagram
 The pictorial representation of the motion of the object is called the motion
diagram.
 In the same diagram, various positions of the object at equally spaced
intervals are represented in a motion diagram.
 From the diagram, we can see if the object has accelerated, retarded or is at
rest.
 We can understand that the object is accelerating if the spacing between the
plotted positions increases as time passes, and that the object is decelerating
if the spacing between the positions decreases with time.

Kinematic Equations

There are four kinematic equations when the initial starting point is taken as
the origin and the acceleration of the object is constant.

1. v = v₀ + at
2. d = ½(v₀ + v)t
3. d = v₀t + ½at²
4. v² = v₀² + 2ad

Where v is the final velocity, v₀ is the initial velocity, a is the constant
acceleration, t is the time interval, and d is the displacement.

Each of the above equations has only four of the five variables. If we know the value
of three variables in an equation, the fourth variable can be determined.
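To illustrate, the equations above can be evaluated and cross-checked in a few lines (a hedged sketch; the helper name `kinematics` is mine, not from the text):

```python
def kinematics(v0, a, t):
    """Constant-acceleration kinematics from the origin:
    returns (final velocity, displacement)."""
    v = v0 + a * t                # equation 1: v = v0 + at
    d = v0 * t + 0.5 * a * t**2   # equation 3: d = v0*t + at^2/2
    return v, d

v, d = kinematics(v0=2.0, a=3.0, t=4.0)
print(v, d)  # v = 14.0, d = 32.0

# Cross-check with equation 4: v^2 = v0^2 + 2ad
assert abs(v**2 - (2.0**2 + 2 * 3.0 * d)) < 1e-9
```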

Rotational Kinematics Equations


 In the translational motion, we saw there are five important variables.
 Each of these variables will have a corresponding variable in rotational motion.
 The position variable x is replaced by the angle θ in a rotational motion.
 The initial and the final velocity is given by the angular velocity (ω), and it is
measured in radians per second.
 The acceleration is replaced by the angular acceleration (α), which describes
the rate of change of angular velocity with respect to time.
 Angular acceleration is measured in radians per second square.
 The time is represented as t even in rotational motion.
 The rotational kinematics equations are as follows:
1. ω = ω₀ + αt
2. θ = θ₀ + ½(ω₀ + ω)t
3. θ = θ₀ + ω₀t + ½αt²
4. ω² = ω₀² + 2α(θ − θ₀)

3.6 Transformation Matrices


The transformation matrix transforms a vector into another vector, which can
be understood geometrically in a two-dimensional or a three-dimensional space. The
frequently used transformations are stretching, squeezing, rotation, reflection, and
orthogonal projection.

Types of Transformation Matrix

The various types of Transformation matrix are given below:

1. Translation Matrix
2. Rotation Matrix
3. Scaling Matrix
4. Combined Matrix
5. Reflection Matrix
6. Shear Matrix
7. Affine Transformation Matrix

1 Translation Matrix

A translation matrix simply moves an object along with one or more of the three
axes.

A transformation matrix representing only translations has a simple form. To
translate a point P (x, y, z) by T_x on the X axis, T_y on the Y axis and T_z on the
Z axis, we can define the translation matrix by:

To shorten this process, we have to use a 4×4 transformation matrix instead of a
3×3 transformation matrix. To convert a 3×3 matrix to a 4×4 matrix, we have to
add an extra dummy coordinate W. The W component of a vector is also known as a
homogeneous coordinate. Using homogeneous coordinates has several advantages:
it allows us to do matrix translations on 3D vectors. Also, whenever the
homogeneous coordinate is equal to 0, the vector is specifically known as a direction
vector since a vector with a W coordinate of 0 cannot be translated.
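A hedged sketch of this idea in plain Python (the helper names `translation_matrix` and `apply` are illustrative): a point with W = 1 is translated, while a direction with W = 0 is left unchanged:

```python
def translation_matrix(tx, ty, tz):
    """4x4 homogeneous translation matrix (row-major nested lists)."""
    return [
        [1.0, 0.0, 0.0, tx],
        [0.0, 1.0, 0.0, ty],
        [0.0, 0.0, 1.0, tz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def apply(matrix, vec4):
    """Multiply a 4x4 matrix by a 4-component column vector."""
    return [sum(matrix[r][c] * vec4[c] for c in range(4)) for r in range(4)]

T = translation_matrix(2.0, 3.0, 4.0)
point = [1.0, 1.0, 1.0, 1.0]       # W = 1: a position, gets translated
direction = [1.0, 1.0, 1.0, 0.0]   # W = 0: a direction, unaffected
print(apply(T, point))      # [3.0, 4.0, 5.0, 1.0]
print(apply(T, direction))  # [1.0, 1.0, 1.0, 0.0]
```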

2 Rotation Matrix
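The rotation matrix figure is not reproduced here; as a sketch (the helper names are illustrative), the standard rotation about the Z axis in homogeneous coordinates is:

```python
import math

def rotation_z(angle_rad):
    """4x4 homogeneous matrix for a rotation about the Z axis."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [
        [c,  -s,  0.0, 0.0],
        [s,   c,  0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ]

def apply(matrix, vec4):
    return [sum(matrix[r][c] * vec4[c] for c in range(4)) for r in range(4)]

# Rotating the point (1, 0, 0) by 90 degrees about Z gives (0, 1, 0).
R = rotation_z(math.pi / 2)
x, y, z, w = apply(R, [1.0, 0.0, 0.0, 1.0])
print(round(x, 9), round(y, 9), round(z, 9))  # 0.0 1.0 0.0
```

Analogous matrices exist for rotations about the X and Y axes, with the sine/cosine block shifted to the corresponding rows and columns.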

3 Scaling Matrix

A scaling transform changes the size of an object by expanding or contracting
all voxels or vertices along the three axes by three scalar values specified in the
matrix. When we’re scaling a vector we are increasing the length of the arrow by the
amount we’d like to scale, keeping its direction the same. Scaling can be achieved
by multiplying the original coordinates of the object with the scaling factor to get the
desired result.

4 Combined Matrix

A combined matrix is used for a combination of any two or all three operations.
Such a transformation matrix is formed by the sequential multiplication of the
individual matrices for each operation. Here the order is important, because
matrix multiplication is generally non-commutative.

Suppose we have a vector (x, y, z) and we want to scale it by 2 and then
translate it by (1, 2, 3). We need a translation and a scaling matrix for our required
steps. The resulting transformation matrix would then look like this:
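A minimal sketch of this composition (the helper names are illustrative): the matrix applied first appears rightmost in the product, and swapping the order changes the result:

```python
def matmul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def apply(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Scale by 2 on every axis, then translate by (1, 2, 3).
S = [[2.0, 0, 0, 0], [0, 2.0, 0, 0], [0, 0, 2.0, 0], [0, 0, 0, 1.0]]
T = [[1.0, 0, 0, 1.0], [0, 1.0, 0, 2.0], [0, 0, 1.0, 3.0], [0, 0, 0, 1.0]]

M = matmul(T, S)  # rightmost matrix (S) is applied first
print(apply(M, [1.0, 1.0, 1.0, 1.0]))  # [3.0, 4.0, 5.0, 1.0]

# Order matters: translating first and scaling second gives a different result.
print(apply(matmul(S, T), [1.0, 1.0, 1.0, 1.0]))  # [4.0, 6.0, 8.0, 1.0]
```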

5 Reflection Matrix

It is also called a flip matrix. Reflection produces the mirror image of the original
object; in two dimensions, a reflection through the origin is equivalent to a rotation
of 180°. In reflection transformation, the size of the object does not change.
Reflection about the X axis, for example, can be represented in matrix form as

| 1  0 |
| 0 −1 |

6 Shear Matrix

Shearing is also termed skewing. A transformation that slants the shape of an object
is called the shear transformation. There are two shear transformations, X-Shear and
Y-Shear. X-Shear shifts X coordinate values and Y-Shear shifts Y coordinate
values. In both cases, only one coordinate changes and the other preserves its value.

X-Shear: X-Shear shifts X coordinate values and preserves the Y coordinate. With
shear factor shx, it can be represented in matrix form as

| 1  shx |
| 0   1  |

Y-Shear: Y-Shear shifts Y coordinate values and preserves the X coordinate. With
shear factor shy, it can be represented in matrix form as

| 1   0 |
| shy  1 |
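A brief sketch of a 2D X-shear in homogeneous coordinates (the function names are illustrative assumptions):

```python
def x_shear(shx):
    """2D X-shear as a 3x3 homogeneous matrix: x' = x + shx*y, y' = y."""
    return [[1.0, shx, 0.0],
            [0.0, 1.0, 0.0],
            [0.0, 0.0, 1.0]]

def apply(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

# Shearing the point (1, 2) with shx = 0.5 moves it to (2, 2):
# x' = 1 + 0.5 * 2 = 2, while y is preserved.
print(apply(x_shear(0.5), [1.0, 2.0, 1.0]))  # [2.0, 2.0, 1.0]
```

A Y-shear works the same way with the shear factor placed in the second row, first column.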

7 Affine Transformation Matrix

An affine transformation, or an affinity, is a geometric transformation that preserves
lines and parallelism. It is used in modern design software.

To represent affine transformations with matrices, we can use homogeneous
coordinates. This means representing a 2-vector (x, y) as a 3-vector (x, y, 1), and
similarly for higher dimensions. Using this system, translation can be expressed
with matrix multiplication.

Uses of Transformation Matrices

Transformation Matrices are used to perform a sequence of matrix operations like:

o Translate the coordinates,
o Rotate the translated coordinates, and then
o Scale the rotated coordinates to complete the composite transformation.
o Or a transformation such as translation followed by rotation and scaling.

3.7 Object Position


In the context of 3D modeling and computer graphics, "object position" refers
to the spatial coordinates of an object within a virtual 3D environment. These
coordinates typically consist of three values: (x,y,z), representing the object's
position along the horizontal (x-axis), vertical (y-axis), and depth (z-axis) axes,
respectively.

Understanding and manipulating object positions is fundamental in 3D modeling for
several reasons:

Placement within the scene: Object position determines where an object is
located within the 3D scene relative to other objects or the scene's origin point.

Animation: Animating objects often involves changing their positions over time. By
modifying the object's position coordinates, you can create movement effects like
translation (moving from one point to another).

Interaction: In interactive applications such as games or simulations, object
positions are crucial for detecting collisions, implementing physics simulations, and
enabling user interaction.

Camera positioning: The position of the virtual camera that renders the scene is
also defined in terms of its (x,y,z) coordinates. Proper positioning of the camera is
essential for achieving desired perspectives and views of the scene.

Coordinate systems: Object positions are defined within a specific coordinate
system, which may be global (world coordinates) or local (relative to a parent object).
Understanding coordinate systems is essential for accurate positioning and
transformations.

Hierarchical modelling: In complex scenes, objects are often organized into
hierarchies where the position of child objects is relative to their parent objects. This
hierarchical structure allows for easier manipulation and organization of objects
within the scene.

Manipulating object positions can involve techniques such as translation (moving an
object along one or more axes), rotation (changing its orientation), and scaling
(resizing). These transformations are typically applied using matrix operations or
quaternion rotations, depending on the specific requirements of the application or
framework being used for 3D modelling.

Overall, understanding and effectively managing object positions are essential skills
for 3D modellers and computer graphics practitioners, enabling them to create
realistic and immersive virtual environments.

3.8 Transformation Invariants


Transformation invariants are properties of an object or a system that remain
unchanged under specific transformations. In the context of modelling,
particularly in computer graphics and geometry processing, understanding
transformation invariants is crucial for various applications, including shape
analysis, recognition, and synthesis. Here are some key concepts related to
transformation invariants in modelling.

Geometric Invariants: Geometric properties of objects that remain unchanged
under certain transformations. Examples include:
Length: The distance between two points remains the same under
translation.
Angle: The angle between two lines remains invariant under rotation.
Area: The area of a polygon remains invariant under translation and rotation.

Topological Invariants: Properties that are preserved under topological
transformations, such as stretching or bending, but not under cutting or
gluing. Examples include:
Euler Characteristic: For a connected planar graph, the Euler characteristic
(V−E+F) remains invariant under continuous deformations.
Genus: The number of handles or holes in a surface remains invariant under
deformations.
Symmetry Invariants: Properties related to symmetry that remain
unchanged under specific transformations. Examples include:
Mirror Symmetry: Objects that are symmetric with respect to a mirror plane
remain invariant under reflections.
Rotational Symmetry: Objects that exhibit rotational symmetry around a
central axis remain invariant under rotations.
Invariant Descriptors: Quantitative measures derived from objects that
remain invariant under specific transformations. These descriptors are often
used for shape analysis and recognition. Examples include:
Moment Invariants: Numerical measures computed from the moments of
an object's shape that remain invariant under translation, rotation, and
scaling.
Histogram of Oriented Gradients (HOG): Descriptor used in computer
vision that captures local shape information invariant to geometric
transformations.
Scale Invariants: Properties that remain unchanged under scaling
transformations. Examples include:
Curvature: Certain curvature-based measures, such as mean or Gaussian
curvature, remain invariant under uniform scaling.
Understanding transformation invariants in modeling is essential for
developing robust algorithms and techniques for shape analysis, recognition,
and processing.

3.9 Object Hierarchies


Object hierarchies, also known as class hierarchies or inheritance hierarchies, are a
fundamental concept in object-oriented modelling and programming. They represent
the relationships between classes in a system by organizing them into a hierarchical
structure based on their common characteristics and behaviours.

Here's an overview of how object hierarchies work:

Classes and Objects: In object-oriented programming (OOP), a class is a blueprint
for creating objects, which are instances of that class. Classes define the properties
(attributes) and behaviours (methods) that objects of that class will have.

Inheritance: Inheritance is a mechanism that allows a new class (derived class or
subclass) to inherit properties and behaviours from an existing class (base class or
superclass). This means that the subclass automatically has all the attributes and
methods of the superclass, in addition to any new attributes or methods defined
specifically for the subclass.

Superclass-Subclass Relationship: The superclass-subclass relationship forms
the basis of object hierarchies. Subclasses inherit from their superclasses, creating a
hierarchical structure where subclasses are more specialized versions of their
superclasses.

Polymorphism: Polymorphism is another key concept enabled by object hierarchies.
It allows objects of different classes in the hierarchy to be treated as objects of a
common superclass. This means that a method defined in a superclass can be
overridden in a subclass to provide specialized behavior, and the correct method will
be called based on the actual type of the object at runtime.

Abstract Classes and Interfaces: In some object-oriented languages like Java,
you can define abstract classes and interfaces to represent common behavior shared
by multiple classes in a hierarchy. Abstract classes cannot be instantiated directly
and may contain abstract methods that must be implemented by subclasses.
Interfaces define a contract that classes must adhere to by implementing the
methods declared in the interface.

Example: Consider a class hierarchy representing different types of vehicles. At the
top of the hierarchy, you might have a Vehicle superclass with subclasses such as
Car, Truck, and Motorcycle. Each subclass inherits common properties and behaviors
from the Vehicle class while also having its own unique characteristics.
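The vehicle example above can be sketched in Python (the class details here are illustrative assumptions, not from the text):

```python
class Vehicle:
    """Superclass holding behaviour common to all vehicles."""
    def __init__(self, wheels):
        self.wheels = wheels

    def describe(self):
        return f"vehicle with {self.wheels} wheels"

class Car(Vehicle):
    def __init__(self):
        super().__init__(wheels=4)

class Motorcycle(Vehicle):
    def __init__(self):
        super().__init__(wheels=2)

    def describe(self):  # overridden method: polymorphism at work
        return "motorcycle, " + super().describe()

# Both objects can be treated uniformly as Vehicle instances; the
# overridden describe() is chosen based on the actual runtime type.
for v in (Car(), Motorcycle()):
    print(v.describe())
```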

Object hierarchies are essential for organizing and modelling complex systems,
promoting code reuse, and facilitating abstraction and encapsulation in
object-oriented design.

They help developers create more maintainable, extensible, and scalable software
systems by representing real-world relationships and hierarchies in the codebase.

3.9.1 Example Diagram for object hierarchies

3.10 Viewing The 3D World


Viewing the 3D world involves various concepts and techniques used in computer
graphics and computer vision to represent and interact with 3D environments. It
combines mathematical principles, algorithms, and software techniques to create
immersive and interactive experiences. Here's a broad overview of some key aspects
involved:

3D Representation: In computer graphics, 3D objects are represented using
mathematical models such as polygons, meshes, or parametric surfaces. These
models define the geometry (shape and structure) of objects in the 3D world.

Viewing Transformation: Viewing transformation is the process of mapping 3D
objects in a scene to a 2D image that can be displayed on a computer screen. This
involves specifying the position and orientation of a virtual camera (viewpoint)
relative to the objects in the scene.

Projection: Projection is the process of converting 3D coordinates of objects into 2D
coordinates on the screen. There are different types of projections used, including
perspective projection (which simulates the way objects appear in the real world)
and orthographic projection (which preserves the relative sizes of objects regardless
of their distance from the viewer).
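As a minimal sketch of perspective projection (a simple pinhole-camera model; the function name is an illustrative assumption), dividing by depth makes the same offset appear smaller the farther it is from the camera:

```python
def project_perspective(x, y, z, focal_length=1.0):
    """Project a camera-space point onto the image plane by
    dividing by depth (simple pinhole model; z must be > 0)."""
    return (focal_length * x / z, focal_length * y / z)

# The same (1, 1) offset shrinks on screen as depth grows:
print(project_perspective(1.0, 1.0, 2.0))   # (0.5, 0.5)
print(project_perspective(1.0, 1.0, 10.0))  # (0.1, 0.1)
```

An orthographic projection, by contrast, would simply drop the z coordinate, so the offset would stay the same at any depth.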

Rendering: Rendering is the process of generating a 2D image from a 3D scene
using lighting, shading, and other effects to simulate the appearance of materials
and surfaces in the scene. This involves techniques such as rasterization, ray tracing,
and shading to calculate the colour and intensity of pixels in the final image.

Interaction: Interaction in a 3D world allows users to navigate, manipulate, and
interact with objects in the scene. This can involve techniques such as camera control
(panning, zooming, rotating), object manipulation (translation, rotation, scaling),
and user interface elements (buttons, menus, sliders).

Depth Perception: Depth perception is important for understanding the spatial
relationships between objects in a 3D scene. Techniques such as depth buffering,
depth testing, and occlusion culling are used to determine which objects are visible
and how they are layered in the scene.

Virtual Reality (VR): Virtual reality technologies immerse users in a 3D virtual
environment, often using head-mounted displays and motion tracking devices to
provide a sense of presence and interaction within the virtual world.

Augmented Reality (AR): Augmented reality overlays virtual objects onto the
real-world environment, combining computer-generated imagery with the user's
view of the physical world.

3.11 Physical Modelling


A physical model is a constructed copy of an object that is designed to represent that
object.

 A physical model can be smaller, larger, or the same size as the actual object
it represents.
 Can take advantage of one-dimensional paths in many systems.
 Strings, narrow pipes, and other such paths can often be replaced with delay
lines (waveguides).
 Any losses and some non-linearities along these paths can be lumped into
calculations at connection points.

Plucked String Model

 Delay lines model the round-trip time around the string; filters model the
effects of the instrument body.

 Excitation can be as simple as a burst of noise, or more elaborate for more
realistic sound synthesis.
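The ideas above (a noise-burst excitation circulating in a delay line, with a filter absorbing energy each round trip) resemble the Karplus-Strong algorithm; the following is a hedged sketch of that scheme, not the text's exact model:

```python
import random

def plucked_string(delay_len, n_samples, decay=0.996):
    """Karplus-Strong-style plucked string: a noise burst circulates
    in a delay line; a two-point average acts as the loss filter."""
    # Excitation: fill the delay line with a burst of white noise.
    line = [random.uniform(-1.0, 1.0) for _ in range(delay_len)]
    out = []
    for _ in range(n_samples):
        s = line.pop(0)
        out.append(s)
        # Averaging filter damps high frequencies each round trip,
        # and the decay factor models the losses along the string.
        line.append(decay * 0.5 * (s + line[0]))
    return out

samples = plucked_string(delay_len=100, n_samples=4000)
print(max(abs(s) for s in samples) <= 1.0)  # energy decays, never grows
```

The delay-line length sets the pitch (round-trip time), and the decay factor controls how quickly the note dies away.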

Clarinet Model

A simple clarinet wind instrument model: delay lines model the round-trip time
around the tube, and filters model the effects of the tone holes and bell. A
non-linear "reed" function is the heart of most wind instrument models.

Voice Model

Delay-Based Effects

3.12 Collision Detection


Collision detection is the computational problem of detecting the intersection of two
or more objects. It is a classic problem of computational geometry and has
applications in various computing fields, primarily in computer graphics, computer
games, computer simulations, robotics and computational physics.

Types of Collision Detection:

Bounding Volume Hierarchies (BVH):


Concept: Objects are enclosed in simplified shapes (bounding volumes), such as
spheres, boxes, or capsules. These bounding volumes form a hierarchy to quickly
eliminate potential collisions.
Pros: Fast and efficient, especially for large scenes with many objects.
Cons: Less accurate than other methods.
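A common bounding volume is the axis-aligned bounding box (AABB); a minimal overlap test (the function name is an illustrative assumption) is:

```python
def aabb_overlap(a, b):
    """Axis-aligned bounding-box test. Each box is a
    ((min_x, min_y, min_z), (max_x, max_y, max_z)) tuple.
    Boxes overlap only if their intervals overlap on every axis."""
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

box_a = ((0, 0, 0), (2, 2, 2))
box_b = ((1, 1, 1), (3, 3, 3))   # overlaps box_a
box_c = ((5, 5, 5), (6, 6, 6))   # far away
print(aabb_overlap(box_a, box_b))  # True
print(aabb_overlap(box_a, box_c))  # False
```

In a BVH, such cheap tests at the upper levels of the hierarchy quickly rule out most object pairs before any exact geometry is compared.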

Mesh-Based Collision Detection:


Concept: Involves using the actual geometry (meshes) of objects for collision
detection.
Pros: More accurate, suitable for complex shapes.
Cons: Can be computationally expensive, especially with high-polygon models.

Ray Casting:
Concept: Involves casting rays or line segments to check for intersections with
objects.
Pros: Efficient for specific scenarios, like shooting or visibility tests.
Cons: Limited to detecting collisions along the ray path.

Sweep and Prune:


Concept: Objects are sorted along one axis, and collisions are only checked between
neighbouring objects.
Pros: Efficient for dynamic scenes with moving objects.
Cons: Limited to certain types of motion.
Continuous vs. Discrete Collision Detection:

Discrete Collision Detection:


Concept: Checks for collisions at specific time intervals.
Pros: Simpler to implement.
Cons: May miss fast-moving objects or objects passing through each other in
between intervals.
Continuous Collision Detection:
Concept: Detects collisions at any point in time, considering the entire trajectory of
objects.
Pros: More accurate for fast-moving objects.
Cons: Can be computationally expensive.
Implementation Tips:
Bounding Volume Optimization:

Choose bounding volumes that closely fit the object shapes to balance accuracy and
efficiency.
Collision Response:
After detecting a collision, determine how objects should respond (e.g., bounce off
each other, trigger an event).

Update Frequency:
Adjust the frequency of collision checks based on the dynamics of the scene to
optimize performance.

Parallelization:
Utilize parallel processing techniques to enhance the speed of collision detection,
especially for large datasets.

Hierarchical Structures:
Implement hierarchical structures like Octrees or KD-trees for organizing and
optimizing collision checks in complex scenes.

Use Physics Engines:


Employing physics engines can simplify collision detection and response, providing
pre-built solutions for common scenarios.

3.13 Surface Deformation


Surface deformation refers to the process of altering the shape of a surface in a
controlled manner. This can be achieved using various mathematical equations that
describe the desired deformation. The choice of the equation depends on the specific
effect or deformation pattern you want to achieve. Here are a few examples:
1. Linear Deformation:
 Simple linear deformation along one or more axes.
 Equation for deformation along the x-axis: x′=x+a⋅f(x)
 Here, a is the deformation factor, and f(x) is an optional function that varies
the deformation across the surface.

2. Radial Deformation:
 Deformation based on radial distance from a central point.
 Equation for radial deformation: r′=r+a⋅f(r)
 Here, r is the radial distance from the center, a is the deformation factor, and
f(r) is an optional function.

3. Sinusoidal Deformation:
 Deformation using sinusoidal functions for periodic patterns.
 Equation for sinusoidal deformation along the x-axis: x′=x+a⋅sin(b⋅x)
 Here, a is the amplitude, b is the frequency, and sin is the sine function.
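As a sketch, the sinusoidal equation can be applied directly to a row of sample points (the names here are illustrative):

```python
import math

def sinusoidal_deform(xs, amplitude, frequency):
    """Displace each x coordinate by a sine wave: x' = x + a*sin(b*x)."""
    return [x + amplitude * math.sin(frequency * x) for x in xs]

xs = [i * 0.1 for i in range(11)]  # sample points 0.0 .. 1.0
deformed = sinusoidal_deform(xs, amplitude=0.2, frequency=math.pi)
print(round(deformed[5], 6))  # 0.5 + 0.2*sin(pi*0.5) = 0.7
```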

4. Twisting Deformation:
 Creates a twisting effect along the surface.
 Equation for twisting deformation around the z-axis: θ′=θ+a⋅f(θ)
 Here, θ is the angle around the z-axis, a is the twisting factor, and f(θ) is an
optional function.

5. Perlin Noise Deformation:


 Perlin noise can be used for more natural and random-looking deformations.
 Equation for Perlin noise deformation along the x-axis: x′ = x + a⋅Perlin(b⋅x)
 Here, a is the amplitude, b is the frequency, and Perlin is the Perlin noise
function.

6. Bulge Deformation:
 Creates a bulging effect in a localized region.
 Equation for bulge deformation: d′ = d + a⋅e^(−b⋅r²)
 Here, d is the original surface height, a is the bulge factor, b controls the
spread, and r is the radial distance from a central point.

These equations are simple representations, and you can customize them based on
the specific characteristics you want in the surface deformation. Experimenting with
different mathematical functions and parameters will help you achieve the desired
visual effect.
