
Unit 1

1 Mark
Easy Questions
1. What does GPU stand for?
2. What does VR stand for in computer graphics?
3. Name one basic geometric transformation.
4. What is the main purpose of the rasterization stage in the graphics pipeline?
5. In perspective projection, do distant objects appear smaller or larger?
6. Which part of the graphics pipeline applies textures to fragments?
Moderate Questions
1. What is foveated rendering?
2. What does LOD stand for in real-time rendering?
3. What does the term clipping mean in the context of the graphics pipeline?
4. In which stage of the graphics pipeline are transformations like translation and rotation applied?
5. What is the main difference between orthographic and perspective projection?
6. Name one type of geometric modelling technique used for creating smooth surfaces.
Hard Questions
1. What role does parallel processing play in real-time VR rendering?
2. What are subdivision surfaces used for in geometric modelling?
3. Explain the difference between implicit modelling and polygonal modelling.
4. Why is real-time rendering critical for VR experiences?
5. What is the primary challenge of maintaining high-quality graphics in real-time rendering for
VR?
6. What is the purpose of output merging in the graphics pipeline?
5 Marks
Easy Questions
1. Explain the difference between orthographic and perspective projection with examples.
2. What are the three basic geometric transformations, and how do they affect objects in a 3D
scene?
3. Describe the rasterization process in the graphics pipeline.
Moderate Questions
1. How does real-time rendering in VR work, and what challenges must be addressed to achieve
smooth user experiences?
2. Explain the process of geometric modelling using polygons and compare it with NURBS.
3. What are the steps involved in fragment processing, and why is it critical for realistic rendering?
Hard Questions
1. What is the role of culling in the graphics pipeline, and how does it optimize rendering
performance in real-time applications?
2. Compare and contrast different shading techniques used in fragment processing.
3. Explain how implicit modelling works and discuss its advantages and disadvantages compared to
polygonal modelling.
15 Marks
Easy Questions
1. Explain the complete graphics pipeline from 3D models to the final image displayed on the
screen.
2. Describe the various types of projections used in computer graphics and their practical
applications.
3. Define geometric modelling and explain different approaches used in creating 3D models,
including advantages and disadvantages.
Moderate Questions
1. Discuss the importance of transformations in computer graphics and explain how matrix
multiplication is used to perform transformations like translation, rotation, and scaling.
2. How do lighting models, such as the Phong reflection model, contribute to realistic rendering in
3D graphics?
3. Describe the process and challenges involved in implementing real-time ray tracing in modern
graphics engines.
4. What is the role of the vertex shader in the graphics pipeline?
5. Describe the process and purpose of rasterization in the graphics pipeline.
6. What is the role of the Fragment Shader, and how does it differ from the Vertex Shader?
7. What is back-face culling, and why is it important in real-time rendering?
Unit 1
1 Mark
Easy Questions
1. What does GPU stand for?
Graphics Processing Unit.

2. What does VR stand for in computer graphics?


Virtual Reality.

3. Name one basic geometric transformation.


Translation.

4. What is the main purpose of the rasterization stage in the graphics pipeline?
Converting 2D projections into pixels.

5. In perspective projection, do distant objects appear smaller or larger?


Smaller.

6. Which part of the graphics pipeline applies textures to fragments?


Fragment processing.

Moderate Questions
1. What is foveated rendering?
A technique where the system renders only the part of the image the viewer is directly looking at in high
resolution, while the rest is rendered at a lower quality.

2. What does LOD stand for in real-time rendering?


Level of Detail.

3. What does the term clipping mean in the context of the graphics pipeline?
Removing parts of objects that are outside the viewing area or view frustum.

4. In which stage of the graphics pipeline are transformations like translation and rotation
applied?
Vertex Processing.

5. What is the main difference between orthographic and perspective projection?


In orthographic projection, objects remain the same size regardless of their distance from the camera,
whereas in perspective projection, distant objects appear smaller.

6. Name one type of geometric modelling technique used for creating smooth surfaces.
NURBS (Non-Uniform Rational B-Splines).
Hard Questions
1. What role does parallel processing play in real-time VR rendering?
It distributes rendering tasks across multiple cores in the GPU to handle complex calculations and
maintain high frame rates.

2. What are subdivision surfaces used for in geometric modelling?


They iteratively refine polygonal meshes to create smooth, high-resolution surfaces.

3. Explain the difference between implicit modelling and polygonal modelling.


Implicit modelling uses mathematical functions to define surfaces, while polygonal modelling uses
discrete polygons, usually triangles, to represent surfaces.

4. Why is real-time rendering critical for VR experiences?


To achieve high frame rates (usually 90 FPS or more) and low latency, which are essential for
preventing motion sickness and ensuring immersive, smooth interactions.

5. What is the primary challenge of maintaining high-quality graphics in real-time rendering for
VR?
Balancing between rendering quality and performance to ensure smooth frame rates and low latency
while rendering complex, detailed scenes.

6. What is the purpose of output merging in the graphics pipeline?


To combine the final rendered fragments into a complete image and handle transparency and blending
before displaying it.

5 Marks
Easy Questions
1. Explain the difference between orthographic and perspective projection with examples.
Orthographic projection is a technique where the size of objects remains constant regardless of their
distance from the camera. Parallel lines remain parallel, and there is no sense of depth. This projection is
commonly used in engineering drawings where accurate measurements are necessary. Perspective
projection, on the other hand, simulates how the human eye perceives the world. Objects further away
from the camera appear smaller, and parallel lines converge at a vanishing point. This projection is
typically used in video games, movies, and VR to create a realistic sense of depth.
Example: In orthographic projection, a cube will have the same size whether it's near or far from the
camera. In perspective projection, a cube that is farther from the viewer will appear smaller, creating a
sense of depth.
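
This difference is easy to demonstrate numerically. The following Python sketch (a minimal illustration written for this answer, not code from any graphics library; the function names and focal distance are assumptions) projects two corners of a cube, one near and one far, with each method:

```python
# Illustrative sketch: orthographic vs. perspective projection of the same point.
import numpy as np

def orthographic(p):
    # Orthographic projection onto the image plane: simply drop the z coordinate.
    return np.array([p[0], p[1]])

def perspective(p, d=1.0):
    # Pinhole perspective with focal distance d (assumed value): x and y are
    # divided by depth, so farther points map closer to the centre (appear smaller).
    return np.array([d * p[0] / p[2], d * p[1] / p[2]])

near_corner = np.array([1.0, 1.0, 2.0])    # cube corner close to the camera
far_corner = np.array([1.0, 1.0, 10.0])    # identical corner, much farther away

print(orthographic(near_corner), orthographic(far_corner))  # same 2D extent
print(perspective(near_corner), perspective(far_corner))    # far corner shrinks
```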
2. What are the three basic geometric transformations, and how do they affect objects in a 3D
scene?
The three basic geometric transformations are:
• Translation: Moves an object from one position to another by shifting all its points by a certain distance along a given direction (x, y, z).
• Rotation: Rotates an object around a specific axis (x, y, or z) by a certain angle. This changes the orientation of the object in 3D space.
• Scaling: Changes the size of an object by expanding or contracting its dimensions. Uniform scaling increases or decreases the object’s size in all directions, while non-uniform scaling changes its size along specific axes.
Effect: These transformations allow objects to be positioned, oriented, and resized in a 3D scene, which is crucial for defining how objects interact within the environment.

3. Describe the rasterization process in the graphics pipeline.


Rasterization is the stage in the graphics pipeline where 2D vertices (after projection) are converted into
pixels or fragments. It involves the following steps:
• Vertex projection: The 3D vertices of an object are transformed into 2D screen coordinates using perspective or orthographic projection.
• Scan conversion: The 2D shape is mapped onto the pixel grid. For each triangle or polygon, the pixels covered by it are identified.
• Interpolation: Values such as colour, texture coordinates, and lighting information are interpolated across the pixels to create smooth transitions.
Rasterization is fast and efficient, making it ideal for real-time applications such as video games and
VR.
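
The interpolation step can be made concrete. Below is a minimal Python sketch (an illustrative assumption, not code from any specific pipeline): barycentric coordinates determine whether a pixel lies inside a screen-space triangle and, if so, blend the per-vertex colours at that pixel:

```python
# Illustrative sketch: coverage test and colour interpolation via barycentric weights.
import numpy as np

def barycentric(p, a, b, c):
    # Solve p = u*a + v*b + w*c with u + v + w = 1 for the weights (u, v, w).
    m = np.array([[a[0], b[0], c[0]],
                  [a[1], b[1], c[1]],
                  [1.0,  1.0,  1.0]])
    return np.linalg.solve(m, np.array([p[0], p[1], 1.0]))

a, b, c = np.array([0, 0]), np.array([10, 0]), np.array([0, 10])  # 2D vertices
col_a, col_b, col_c = np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 1])

u, v, w = barycentric((3, 3), a, b, c)
if min(u, v, w) >= 0:                                  # pixel is inside the triangle
    pixel_colour = u * col_a + v * col_b + w * col_c   # interpolated colour
    print(pixel_colour)
```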

Moderate Questions
1. How does real-time rendering in VR work, and what challenges must be addressed to achieve
smooth user experiences?
Real-time rendering in VR involves creating visual scenes that respond instantly to user inputs and
movements. This requires rendering at high frame rates (typically 90 frames per second or more) to
prevent latency and motion sickness.
Key techniques used include:
• Foveated rendering: Reduces rendering workload by focusing high detail only in the user’s direct line of sight.
• Parallel processing: Distributes rendering tasks across multiple GPU cores to handle the complex computations in real time.
• Level of Detail (LOD): Simplifies distant objects to reduce computational load without sacrificing visual quality up close.
Challenges: The main challenges include maintaining high frame rates, minimizing latency (below
20ms), managing the performance vs. quality trade-off, and ensuring synchronization between the user’s
head movement and the visuals.
2. Explain the process of geometric modelling using polygons and compare it with NURBS.

Polygonal Modeling: In polygonal modelling, objects are represented by a mesh of polygons, usually
triangles. Each triangle is defined by its vertices, edges, and faces. The more polygons an object has, the
smoother its surface appears. Polygonal models are popular in games and real-time applications due to
their simplicity and compatibility with hardware rendering.
NURBS (Non-Uniform Rational B-Splines): NURBS uses mathematical functions to represent curves
and surfaces. It is highly accurate for representing smooth, curved surfaces and is often used in CAD
and industrial design. Unlike polygons, NURBS can represent both curved and linear surfaces
seamlessly.
Comparison:
• Efficiency: Polygonal models are easier to manipulate and render in real time, while NURBS requires more computational power.
• Flexibility: NURBS can create more complex, smooth surfaces with fewer control points, but is less intuitive to edit compared to polygons.
• Application: Polygonal models are widely used in interactive applications like games, whereas NURBS is used in industries requiring precision, like automotive and aerospace design.

3. What are the steps involved in fragment processing, and why is it critical for realistic
rendering?

Fragment processing is the stage in the graphics pipeline where fragments (potential pixels) are
processed to determine their final color and other attributes (e.g., depth, transparency). The steps
include:
• Shading: Calculating lighting and color for each fragment using shading models (e.g., Phong, Gouraud).
• Texturing: Mapping texture images onto fragments to add detail and realism.
• Depth Testing: Ensuring only the visible fragments are rendered by comparing depth values.
• Blending: Combining fragments from different objects to handle transparency effects.
Importance: Fragment processing is critical because it determines the final appearance of the scene,
including how materials react to light and how realistic textures look. It's essential for achieving photo-
realistic graphics and believable environments in applications like VR.
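
As a small illustration of the depth-testing step above, here is a hedged Python sketch of a z-buffer (the buffer size and fragment values are invented for the example): an incoming fragment is written only if it is closer to the camera than what the buffer already holds:

```python
# Illustrative sketch: per-pixel depth testing with a z-buffer.
import numpy as np

W, H = 4, 4
depth_buffer = np.full((H, W), np.inf)    # every pixel starts "infinitely far away"
color_buffer = np.zeros((H, W, 3))

def write_fragment(x, y, depth, color):
    if depth < depth_buffer[y, x]:        # depth test: the nearer fragment wins
        depth_buffer[y, x] = depth
        color_buffer[y, x] = color

write_fragment(1, 1, 5.0, np.array([1.0, 0.0, 0.0]))  # red fragment at depth 5
write_fragment(1, 1, 2.0, np.array([0.0, 1.0, 0.0]))  # green is closer: replaces red
write_fragment(1, 1, 9.0, np.array([0.0, 0.0, 1.0]))  # blue is farther: discarded
print(color_buffer[1, 1])                             # -> [0. 1. 0.]
```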
Hard Questions
1. What is the role of culling in the graphics pipeline, and how does it optimize rendering
performance in real-time applications?

Culling is a process in the graphics pipeline that eliminates parts of the scene that do not need to be
rendered, optimizing performance. The two main types of culling are:
• Back-face culling: This removes faces of objects that are not visible to the camera (i.e., the back-facing triangles).
• Frustum culling: This eliminates objects or parts of objects that are outside the camera’s view frustum (the visible area).
Optimization: By discarding unnecessary data early in the pipeline, culling reduces the number of
vertices and fragments that need to be processed, significantly improving performance in real-time
applications like games and VR, where every frame must be rendered in milliseconds.
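
A minimal sketch of frustum culling, assuming each plane is stored as an inward-facing normal plus offset (a common but not universal convention), shows how a bounding sphere can be rejected before any of its triangles are processed:

```python
# Illustrative sketch: bounding-sphere vs. frustum-plane culling test.
import numpy as np

def sphere_in_frustum(center, radius, planes):
    for normal, offset in planes:
        # Signed distance of the sphere centre from the plane.
        if np.dot(normal, center) + offset < -radius:
            return False   # completely outside this plane: cull the object
    return True            # inside or intersecting the frustum: keep it

# Toy example: a single "near plane" at z = -1 with its normal facing down -z.
planes = [(np.array([0.0, 0.0, -1.0]), -1.0)]
print(sphere_in_frustum(np.array([0.0, 0.0, -5.0]), 1.0, planes))  # True: visible
print(sphere_in_frustum(np.array([0.0, 0.0,  5.0]), 1.0, planes))  # False: culled
```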

2. Compare and contrast different shading techniques used in fragment processing.

The three most common shading techniques used in fragment processing are:
• Flat Shading: Applies a single color to each polygon based on its normal vector. It is computationally inexpensive but results in faceted, unrealistic visuals.
• Gouraud Shading: Interpolates colors across the vertices of a polygon and then across the surface of the polygon. It produces smooth shading but can lead to inaccuracies in lighting.
• Phong Shading: Interpolates the surface normals and calculates the color for each pixel individually, providing much smoother results and more accurate lighting effects, especially for shiny surfaces.
Comparison:
• Performance: Flat shading is the fastest, followed by Gouraud, while Phong is the most computationally intensive.
• Visual Quality: Flat shading produces the least realistic results, Gouraud improves realism but can miss highlights, while Phong shading offers the most realistic lighting effects.
• Use Cases: Flat shading is often used in early development stages for simplicity, Gouraud in real-time applications with limited resources, and Phong for higher-quality rendering in games and simulations.
3. Explain how implicit modelling works and discuss its advantages and disadvantages compared
to polygonal modelling.

Implicit Modeling represents objects using mathematical functions that define surfaces implicitly, such
as using signed distance fields or isosurfaces. A surface is defined as the set of points where a function
evaluates to zero, and the volume inside or outside the object is determined by the sign of the function.
Advantages:
• Smooth Surfaces: Implicit models can represent very smooth surfaces without requiring dense polygonal meshes.
• Flexibility: It allows for complex shapes and smooth transitions between objects.
• Good for Simulations: Implicit models are often used in simulations where continuous changes occur, such as fluid dynamics.
Disadvantages:
• Performance: Implicit surfaces can be computationally expensive to render, requiring conversion to polygons or specialized algorithms like ray marching.
• Less Intuitive: Creating and manipulating implicit models can be less intuitive than polygonal modelling, which is more common and widely supported in 3D software.
Compared to polygonal modelling, implicit modelling excels in smoothness and handling complex
surfaces, but it requires more processing power and can be harder to use in real-time applications.
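
As a concrete illustration, the sketch below (assumed for this answer, not taken from any modelling package) defines a sphere as a signed distance field: the function’s zero set is the surface, and its sign tells inside from outside:

```python
# Illustrative sketch: a sphere defined implicitly as a signed distance field.
import numpy as np

def sphere_sdf(p, center, radius):
    # f(p) = |p - c| - r: negative inside, zero on the surface, positive outside.
    return np.linalg.norm(p - center) - radius

center, radius = np.array([0.0, 0.0, 0.0]), 1.0
print(sphere_sdf(np.array([0.0, 0.0, 0.5]), center, radius))  # -0.5: inside
print(sphere_sdf(np.array([0.0, 0.0, 1.0]), center, radius))  #  0.0: on surface
print(sphere_sdf(np.array([0.0, 0.0, 2.0]), center, radius))  #  1.0: outside

# Two implicit shapes blend by combining their functions, e.g. union as
# min(f1, f2) — this is the smooth-blending flexibility noted above.
```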

15 Marks
Easy Questions
1. Explain the complete graphics pipeline from 3D models to the final image displayed on the
screen.
The graphics pipeline is the sequence of steps through which 3D models are transformed into a 2D
image displayed on a screen. It consists of the following stages:
(1) Application Stage: This is where the CPU processes the scene, inputs, physics, and game logic. It
determines what objects are visible and their properties (color, texture, etc.).
(2) Geometry Processing:
• Vertex Shading: This stage applies transformations to vertices (translation, rotation, scaling) to position them in 3D space. The vertices are also shaded to calculate basic color values.
• Clipping: Portions of objects that are outside the view frustum are discarded.
• Projection: The 3D coordinates are transformed into 2D screen coordinates, either through perspective or orthographic projection.
• Screen Mapping: The transformed coordinates are then mapped to pixel positions on the display.
(3) Rasterization:
Converts 2D shapes into a set of fragments (potential pixels). This involves determining which
pixels correspond to which parts of each object’s surface.
(4) Fragment Processing:
• Shading: Complex lighting calculations, texturing, and other effects are applied to each fragment to determine its final color.
• Depth Testing: Ensures that only the fragments closest to the camera are visible (removing hidden surfaces).
• Blending: This step handles transparency by blending the colors of overlapping fragments.
(5) Output Merging: The final pixels are written to the screen buffer, and the image is displayed on the
screen.
The graphics pipeline is crucial for rendering 3D scenes efficiently, especially in real-time applications
like video games or VR.

2. Describe the various types of projections used in computer graphics and their practical
applications.

Projections are used to map 3D objects onto a 2D screen. The two main types of projections are
orthographic and perspective projection, each with its variants and applications.

• Orthographic Projection: This projection method displays objects without perspective distortion. Objects remain the same size regardless of their distance from the camera.
  o Applications: Orthographic projection is commonly used in technical drawings, CAD software, and architecture because it preserves the exact dimensions of objects, which is important for design and measurement.
• Perspective Projection: This projection simulates how the human eye perceives depth. Objects further away from the camera appear smaller, while closer objects appear larger.
  o Applications: Perspective projection is widely used in video games, simulations, virtual reality, and animations to create a sense of depth and realism.
  o Variants:
    a) One-Point Perspective: Lines converge to a single point on the horizon, used when looking straight down a hallway or tunnel.
    b) Two-Point Perspective: Lines converge to two points, often used in architectural renderings.
    c) Three-Point Perspective: Used when looking at an object from an extreme angle, where vertical lines converge as well as horizontal lines.
• Oblique Projection: This projection skews objects along the x or y axis to give a distorted yet useful view. It is less common in modern 3D graphics but still used for certain engineering or cartographic drawings.
  o Applications: Oblique projection is often used in early stages of design, such as game level design, because it shows all three axes, which helps with layout and navigation.
In summary, projection methods are essential tools for converting 3D objects into 2D views, each with
specific applications depending on the need for realism or precision.
3. Define geometric modelling and explain different approaches used in creating 3D models,
including advantages and disadvantages.
Geometric modelling refers to the process of representing 3D objects mathematically in computer
graphics. It is fundamental to creating 3D models for games, movies, simulations, and CAD
applications.
The main approaches to geometric modelling include:
a) Polygonal Modeling: Objects are represented as a collection of polygons, typically triangles or
quadrilaterals.
• Advantages:
  o Easy to manipulate and modify.
  o Supported by nearly all 3D rendering hardware.
  o Efficient for real-time rendering, ideal for video games and interactive applications.
• Disadvantages:
  o Surfaces can appear faceted unless a large number of polygons are used.
  o High-poly models can be computationally expensive to render.
b) NURBS (Non-Uniform Rational B-Splines): Uses curves and surfaces defined by control points to create smooth, accurate shapes.
• Advantages:
  o Excellent for representing smooth, curved surfaces with fewer control points.
  o Provides great precision, making it ideal for industrial design (e.g., automotive, aerospace).
• Disadvantages:
  o More complex to manipulate than polygons.
  o Less commonly used in real-time applications due to computational overhead.
c) Subdivision Surfaces: A polygonal model is repeatedly subdivided to create a smoother surface.
• Advantages:
  o Generates very smooth surfaces from relatively simple base meshes.
  o Can create both organic and mechanical shapes efficiently.
• Disadvantages:
  o Subdivision increases the polygon count exponentially, which can be expensive in terms of processing power.
d) Implicit Modeling: Surfaces are defined by implicit functions, often used for fluid and soft-body simulations.
• Advantages:
  o Very good for representing complex surfaces and volumes.
  o Can smoothly handle intersections and blending between objects.
• Disadvantages:
  o Difficult to render directly; usually requires conversion to polygons for visualization.
  o More complex to manipulate and compute.

Each modelling technique is suited to different applications, with polygonal modelling dominating real-
time applications, while NURBS and subdivision surfaces are used in industries requiring high
precision or complex organic forms.
Moderate Questions
1. Discuss the importance of transformations in computer graphics and explain how matrix
multiplication is used to perform transformations like translation, rotation, and scaling.

Transformations are fundamental in computer graphics for positioning, orienting, and resizing objects in
3D space. The three basic transformations—translation, rotation, and scaling—are all performed using
matrix multiplication. Each transformation can be represented by a transformation matrix, which, when
multiplied by the object's vertex coordinates, results in the desired transformation.
• Translation: Translation moves an object by adding values to its x, y, and z coordinates. The translation matrix is a 4x4 matrix used in homogeneous coordinates:

$$T = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

When this matrix is multiplied by a vertex position vector, it moves the object by $t_x$, $t_y$, and $t_z$.

• Rotation: Rotation rotates an object around one of the axes. For example, rotating around the z-axis uses the following matrix:

$$R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

This matrix rotates the object by an angle $\theta$ around the z-axis. Similar matrices are used for rotation around the x and y axes.
• Scaling: Scaling changes the size of an object. The scaling matrix is:

$$S = \begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

This matrix scales the object by factors $s_x$, $s_y$, and $s_z$ along the x, y, and z axes, respectively.
Matrix Multiplication is used to combine these transformations. For example, if an object needs to be
translated and then rotated, the two transformation matrices are multiplied together, and the resulting
matrix is applied to the object. This allows multiple transformations to be applied in a single step,
ensuring efficient and consistent movement, rotation, and scaling of objects in the scene.
Importance: Transformations are essential for animating objects, setting camera perspectives, and
placing objects in scenes. Without transformations, 3D graphics would be static and unrealistic.
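
The following Python/numpy sketch (a minimal illustration; the helper names are assumptions made for this answer) builds the three matrices above and combines them by matrix multiplication, applying scale, then rotation, then translation to a single vertex:

```python
# Illustrative sketch: composing 4x4 homogeneous transformation matrices.
import numpy as np

def translation(tx, ty, tz):
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]     # place the offsets in the last column
    return T

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[:2, :2] = [[c, -s], [s, c]]
    return R

def scaling(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

# Note the order: M applies right-to-left, so the vertex is scaled first,
# then rotated, then translated — all in one combined matrix.
M = translation(1, 0, 0) @ rotation_z(np.pi / 2) @ scaling(2, 2, 2)
vertex = np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous coordinates (w = 1)
print(M @ vertex)                          # -> approximately [1, 2, 0, 1]
```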
2. How do lighting models, such as the Phong reflection model, contribute to realistic rendering in
3D graphics?

Lighting models in computer graphics are used to simulate how light interacts with surfaces, giving
objects realistic appearances. The Phong reflection model is one of the most commonly used lighting
models, consisting of three components:
• Ambient Lighting: This represents the general light that is scattered throughout the scene, ensuring that all objects are visible even if no direct light hits them. It is a constant value added to all surfaces.
• Diffuse Reflection: Diffuse reflection occurs when light hits a rough surface and scatters in all directions. The intensity of the diffuse reflection is determined by Lambert’s cosine law, where the intensity depends on the angle between the light source and the surface normal. The diffuse intensity $I_d$ is calculated as:

$$I_d = k_d (L \cdot N) I_L$$

where $k_d$ is the diffuse reflection coefficient, $L$ is the light direction, $N$ is the surface normal, and $I_L$ is the light intensity.

• Specular Reflection: Specular reflection simulates the shiny highlights on smooth surfaces where light is reflected in a particular direction. This effect is most noticeable on materials like metal or polished surfaces. The specular intensity $I_s$ is calculated using:

$$I_s = k_s (R \cdot V)^n I_L$$

where $k_s$ is the specular reflection coefficient, $R$ is the reflection direction, $V$ is the view direction, $n$ is the shininess exponent, and $I_L$ is the light intensity.

Contribution to Realism:
• The ambient component ensures that objects are always visible, preventing harsh shadows from making objects completely black.
• The diffuse component simulates how light behaves on rough surfaces, ensuring that objects appear illuminated even when the light source is at an angle.
• The specular component creates realistic highlights, adding depth and realism to shiny surfaces.
The Phong reflection model provides a good balance between computational efficiency and visual
realism, which is why it’s widely used in real-time rendering applications such as video games and
simulations. Other advanced models, such as the Blinn-Phong and Cook-Torrance, further improve
realism by better simulating surface materials.
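
A compact Python sketch of the full model (illustrative only; the coefficient values are arbitrary assumptions) evaluates the ambient, diffuse, and specular terms for one light at a single surface point:

```python
# Illustrative sketch: Phong reflection for one light at one surface point.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(N, L, V, I_L=1.0, k_a=0.1, k_d=0.7, k_s=0.5, n=32):
    N, L, V = normalize(N), normalize(L), normalize(V)
    ambient = k_a * I_L
    diffuse = k_d * max(np.dot(L, N), 0.0) * I_L        # Lambert's cosine law
    R = 2.0 * np.dot(L, N) * N - L                      # reflect L about N
    specular = k_s * max(np.dot(R, V), 0.0) ** n * I_L  # shiny highlight
    return ambient + diffuse + specular

# Light and viewer both directly above the surface: strong diffuse plus highlight.
print(phong(N=np.array([0.0, 0.0, 1.0]),
            L=np.array([0.0, 0.0, 1.0]),
            V=np.array([0.0, 0.0, 1.0])))
```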
3. Describe the process and challenges involved in implementing real-time ray tracing in modern
graphics engines.

Real-time ray tracing is a rendering technique that simulates how rays of light interact with objects in a
scene to produce highly realistic images with accurate lighting, shadows, and reflections. Unlike
rasterization, which approximates lighting effects, ray tracing follows the actual paths of light rays.

Process:
1. Ray Casting: Rays are cast from the camera into the scene to determine what objects the rays
intersect. This step involves complex geometric calculations to check where the rays hit surfaces.
2. Shading: For each ray-object intersection, lighting models (like Phong) are used to determine how
light interacts with the object’s material, calculating the final color of the pixel.
3. Reflection and Refraction: Rays can bounce off reflective surfaces or pass through transparent
materials. For each bounce, additional rays are cast to determine the new intersections and their
contributions to the final pixel color.
4. Shadows: Secondary rays are cast from the intersection points towards light sources to check
whether objects are in shadow or illuminated.
5. Final Image Composition: After computing the colors for all the rays, the final image is composed
and displayed.

Challenges:
1. Performance: Ray tracing is computationally expensive due to the large number of rays and the
complex calculations needed for each intersection. Real-time ray tracing requires efficient algorithms,
such as Bounding Volume Hierarchies (BVH), to accelerate ray-object intersection tests.
2. Hardware Requirements: Real-time ray tracing demands powerful GPUs capable of parallel
processing. Modern GPUs, such as those with NVIDIA RTX technology, incorporate specialized ray
tracing cores to handle these computations more efficiently.
3. Balancing Quality and Speed: To achieve real-time performance, compromises must be made in
terms of the number of rays cast, the number of light bounces, and the accuracy of reflections.
Techniques like denoising are often used to smooth out noisy results caused by low ray counts.
4. Integration with Rasterization: Many modern engines, like Unreal Engine and Unity, use a hybrid
approach that combines rasterization with ray tracing. This allows for fast rendering of basic elements
while using ray tracing only for specific effects like reflections or global illumination.
Future Prospects: Real-time ray tracing is a major advancement in graphics, but it is still evolving. As
hardware becomes more powerful and algorithms more efficient, real-time ray tracing will continue to
improve, offering even more realistic and immersive visual experiences in games and VR.
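
The heart of the ray-casting step is an intersection test. The sketch below (a toy example written for this answer, not engine code) intersects a ray with a sphere by solving the quadratic $|o + t d - c|^2 = r^2$ for the ray parameter $t$:

```python
# Illustrative sketch: ray-sphere intersection, the core of ray casting.
import numpy as np

def ray_sphere(o, d, center, radius):
    oc = o - center
    a = np.dot(d, d)
    b = 2.0 * np.dot(oc, d)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                       # ray misses the sphere entirely
    t = (-b - np.sqrt(disc)) / (2 * a)    # nearest intersection distance
    return t if t > 0 else None

origin = np.array([0.0, 0.0, 0.0])
direction = np.array([0.0, 0.0, -1.0])    # ray cast from the camera into the scene
print(ray_sphere(origin, direction, np.array([0.0, 0.0, -5.0]), 1.0))  # -> 4.0
```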
4. What is the role of the vertex shader in the graphics pipeline?
The vertex shader is the first programmable stage in the graphics pipeline, responsible for processing
individual vertices. Its primary role includes:
• Transforming vertex positions from object space to clip space using transformation matrices (model, view, projection).
• Computing per-vertex lighting values using vertex normals for smooth shading.
• Passing additional data (e.g., texture coordinates, colors) to subsequent pipeline stages.
• Applying deformations (such as skeletal animation or vertex skinning).
By doing this, the vertex shader ensures that each vertex is properly transformed and ready for further
processing in later pipeline stages.
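
Written out in Python rather than a real shading language, the core of this stage looks roughly like the following sketch (the toy projection matrix is an assumption; real pipelines use a full perspective matrix with near/far planes):

```python
# Illustrative sketch: what a vertex shader computes, plus the perspective divide.
import numpy as np

def simple_projection(near=1.0):
    # Toy perspective matrix: copies -z into w so the divide shrinks far points.
    P = np.zeros((4, 4))
    P[0, 0] = P[1, 1] = near
    P[2, 2] = 1.0
    P[3, 2] = -1.0
    return P

model_view = np.eye(4)                     # object already in view space here
mvp = simple_projection() @ model_view     # combined model-view-projection matrix

vertex = np.array([2.0, 1.0, -4.0, 1.0])   # input position (w = 1)
clip = mvp @ vertex                        # what the vertex shader outputs
ndc = clip[:3] / clip[3]                   # perspective divide (fixed-function step)
print(ndc)                                 # -> [0.5, 0.25, -1.0]
```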

5. Describe the process and purpose of rasterization in the graphics pipeline.


Rasterization is the process of converting vector-based geometry (such as triangles) into fragments (potential pixels) for the fragment shader. It transforms the 3D representation into a 2D image:
• Vertices of 3D polygons are projected onto the 2D screen, and the resulting triangles are filled with fragments.
• Each fragment carries a position, color, texture coordinates, and depth.
Rasterization ensures each triangle is accurately represented as pixels for shading, texturing, and visibility testing.

6. What is the role of the Fragment Shader, and how does it differ from the Vertex Shader?
The Fragment Shader operates after rasterization and is responsible for computing the color and other
attributes of each fragment (potential pixel) that will be rendered to the screen. Its role includes:

• Applying lighting models (such as Phong or Blinn-Phong) to simulate realistic surface lighting.
• Texturing, which involves mapping 2D textures onto 3D surfaces.
• Implementing advanced effects like reflections, shadows, or transparency.
The Fragment Shader differs from the Vertex Shader in that:
• The Vertex Shader processes vertices (positions, normals), transforming them in 3D space, whereas the Fragment Shader works at the pixel level to determine color and lighting effects for each pixel.
• The Vertex Shader operates earlier in the pipeline, while the Fragment Shader comes later and is primarily concerned with rendering.

7. What is back-face culling, and why is it important in real-time rendering?


Back-face culling is an optimization technique used in real-time rendering to improve performance by
not rendering polygons (usually triangles) that face away from the camera. When a polygon's normal
vector points away from the camera, it is considered a back-face and is discarded:
• The algorithm determines whether a triangle is facing the camera by checking the orientation of its vertices.
• If the polygon is facing away (based on the winding order of its vertices), it is culled (not rendered).
Back-face culling is important because it reduces the number of polygons processed by the rendering pipeline, saving computational resources and improving rendering speed, particularly in scenes with a large number of hidden surfaces (e.g., the inside of a closed object).
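
The winding-order check mentioned above can be sketched in a few lines of Python (an illustrative assumption; real pipelines do this in hardware after projection):

```python
# Illustrative sketch: back-face test via the signed area of a screen-space triangle.
import numpy as np

def is_front_facing(a, b, c):
    # 2D cross product of the edge vectors; positive => counter-clockwise winding.
    signed_area = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return signed_area > 0

tri = [np.array([0, 0]), np.array([1, 0]), np.array([0, 1])]
print(is_front_facing(*tri))          # True: counter-clockwise, keep it
print(is_front_facing(*tri[::-1]))    # False: clockwise, cull it
```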
