CG Lectures Stack

The document provides an introduction to computer graphics, covering its definition, applications, history, and types, including 2D and 3D graphics. It discusses the graphics pipeline, hardware like GPUs, and graphics libraries, as well as color models and rendering techniques. The content also explores future trends in computer graphics, such as real-time ray tracing and AI applications, concluding with a homework assignment to create a simple 2D image using learned concepts.

Week#1

Installation Guide:
https://www.youtube.com/watch?v=bi-NvsFKcZg

Setting Up OpenGL for Windows — by Navraj Khanal, The Startup (Medium)

Lecture 1: Introduction to Computer Graphics

1. Overview of Computer Graphics


What is Computer Graphics?

Definition:
Computer graphics involves creating and manipulating visual content with computers. It’s how
we make pictures, animations, and interactive visual content.

Where is it used?

● Video Games: Creating characters, environments, and effects.

● Movies: Think about the amazing visuals in animated films or CGI effects.

● Virtual Reality (VR) & Augmented Reality (AR): Immersive experiences like VR
games or AR apps.

● Medical Imaging: Visualizing things like MRIs or CT scans.

● Simulations: Flight simulators or architectural visualizations.

https://docs.google.com/document/u/1/d/1x2B80Az2ekRUJWLTmdmrBCAy8i-uadwWMZMv8J_IPmA/mobilebasic 6/1/25, 19 58
Activity:
Can you think of other areas where computer graphics are used?
(Some answers might include web design, mobile apps, education, automotive design, etc.)

2. History of Computer Graphics


How Did It All Start?

Early Beginnings:
The field of computer graphics began in the 1960s. Back then, the graphics were pretty basic—just simple lines and shapes.

Important Milestones:

● Sketchpad (1963): The first program that allowed users to draw on a computer
screen using a light pen. It was created by Ivan Sutherland, often called the "father of
computer graphics."

● OpenGL (1992): An important graphics library that allows developers to create


complex 3D graphics.

● Pixar's Toy Story (1995): The first full-length movie made entirely with computer-generated imagery (CGI).

Interactive Timeline:
Imagine we have a timeline where we place these milestones. As we go through each one,
you see how the field evolved from simple line drawings to the realistic 3D worlds we have
today.

Activity:
Which of these technologies do you think had the biggest impact on modern computer
graphics?
(I would give you a poll or ask you to discuss, and then we would talk about the answers.)

3. Types of Computer Graphics


2D vs. 3D Graphics

2D Graphics:
These are flat images—think of things like icons, diagrams, or sprites in 2D video games. They don’t have depth.

3D Graphics:
These images have depth and perspective. You see 3D graphics in movies like Toy Story or
games like Minecraft. They’re made up of models that can be rotated and viewed from different
angles.

Example:
Imagine showing two images: one is a simple drawing of a circle (2D), and the other is a 3D
model of a sphere. You can see how the 3D model looks more realistic because it has shadows
and can be rotated.

Activity:
Can you identify which images in the slides are 2D and which are 3D?
We’d go through a series of images together, and you’d tell me which ones are which.

Raster vs. Vector Graphics

Raster Graphics:
These images are made up of tiny pixels (dots of color). Common formats include JPEG, PNG,
and BMP. If you zoom in on a raster image, you’ll see it gets blurry or pixelated.

Vector Graphics:
These are made up of paths, which are defined by mathematical formulas. They can be scaled
up or down without losing quality. Formats include SVG and EPS.

Example:
Imagine I draw a simple shape, like a star, in both a raster editor (like Photoshop) and a vector
editor (like Illustrator). You’d see that when I zoom in, the raster star gets blurry, while the
vector star stays sharp.

Activity:
What do you think will happen if we scale up these images?
I’d ask you to predict and then show you the results, so you see the difference between raster
and vector graphics.

4. Graphics Systems and Hardware


The Graphics Pipeline

What’s the Graphics Pipeline?
It’s the process a computer uses to turn data into an image on the screen. It’s like an assembly
line where each stage adds something to the final image.

Stages of the Pipeline:

● Modeling: Creating the 3D shapes or models.

● Transformation: Moving, rotating, or scaling those models.

● Lighting: Adding light sources to the scene.

● Rendering: Turning the models into the final image you see.

Interactive Diagram:
Imagine a flowchart showing these stages. As we click on each stage, we learn about what
happens in that part of the pipeline.

Activity:
Can you match the stages of the pipeline with their descriptions?
I’d show you descriptions and ask you to match them with the correct stage.
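To make the assembly-line analogy concrete, here is a toy Python sketch (not part of the lecture; every function and value here is made up for illustration) in which each pipeline stage is just a function and a frame is produced by composing them:

```python
def modeling():
    # a single triangle in model space
    return [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

def transformation(vertices, dx=2.0):
    # move the model along x (a translation)
    return [(x + dx, y, z) for x, y, z in vertices]

def lighting(vertices, intensity=0.8):
    # attach a flat light intensity to every vertex
    return [(v, intensity) for v in vertices]

def rendering(lit_vertices):
    # "draw": reduce each lit vertex to a printable fragment
    return [f"fragment at {v} with brightness {i}" for v, i in lit_vertices]

frame = rendering(lighting(transformation(modeling())))
print(frame[0])  # fragment at (2.0, 0.0, 0.0) with brightness 0.8
```

Real pipelines do far more at each stage, but the shape is the same: the output of one stage is the input of the next.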

Graphics Hardware

What’s a GPU?
The Graphics Processing Unit (GPU) is a special part of your computer designed to handle
complex images quickly. It’s like the brain for graphics.

Why Do We Need It?


Creating realistic images requires a lot of calculations. The GPU does these really fast, which
is why games and videos look so good today.

Activity:
Who knows the difference between a CPU and a GPU?
We’d discuss how the CPU is the general brain of the computer, while the GPU specializes in
graphics.

5. Basic Graphics Programming


Graphics Libraries

What Are Graphics Libraries?
They’re tools that make it easier for developers to create graphics. Popular ones include
OpenGL (for 3D graphics), DirectX (used in Windows games), and WebGL (used in web
browsers).

Why Are They Important?


These libraries provide pre-built functions, so developers don’t have to write everything from
scratch.

Live Coding Example:


Imagine we’re writing a simple program to draw a triangle on the screen using WebGL. We’d
go step by step, and you’d see how each line of code adds to the final image.

Activity:
What do you think will happen if we change this line of code?
I’d ask you to predict, then we’d change it and see what happens—maybe the triangle moves,
changes color, or something else.

Coordinate Systems

What’s a Coordinate System?


It’s a way of describing where things are in space. In 2D, we have x (horizontal) and y (vertical)
axes. In 3D, we add a z-axis (depth).

Example:
If I tell you to plot the point (2, 3) on a graph, you’d find the spot where x=2 and y=3 intersect.
In 3D, you’d also consider the z value to place the point in space.

Activity:
Let’s plot some points together!
I’d give you coordinates, and you’d place them on a graph or use an online tool to see where
they land.
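As a small illustration (this helper is an assumption of these notes, not lecture code), here is how a program might map Cartesian coordinates onto screen pixels, where the pixel origin sits at the top-left and y grows downward:

```python
def world_to_screen(x, y, width, height, scale=1.0):
    """Map a Cartesian point (origin at center, y up) to pixel
    coordinates (origin at top-left, y down)."""
    sx = int(width / 2 + x * scale)
    sy = int(height / 2 - y * scale)  # flip y: screen y grows downward
    return sx, sy

# The point (2, 3) on an 800x600 canvas, at 10 pixels per unit:
print(world_to_screen(2, 3, 800, 600, scale=10))  # (420, 270)
```

The y-flip is worth remembering: most windowing systems put (0, 0) at the top-left, while math classes put it at the bottom-left or center.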

Week#2

6. Color Models and Perception


Understanding Color Models

What Are Color Models?
They’re ways of representing colors using numbers. The most common is RGB, where colors
are made by mixing Red, Green, and Blue light.

Other Models:

● CMYK: Used in printing, stands for Cyan, Magenta, Yellow, and Black.

● HSV: Represents color using Hue (color), Saturation (intensity), and Value
(brightness).

Interactive Demo:
Imagine using a color picker tool. As you adjust the RGB values, you see the color change on
the screen.

Activity:
What color do you get with RGB values (255, 0, 0)?
(Answer: Pure red. We’d explore different combinations to see what colors they create.)
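Python's standard `colorsys` module can demonstrate the relationship between RGB and HSV; this is a minimal sketch to try alongside the activity, not part of the lecture:

```python
import colorsys

# RGB (255, 0, 0) normalized to the [0, 1] range is pure red
r, g, b = 255 / 255, 0 / 255, 0 / 255
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)  # 0.0 1.0 1.0 -> hue 0 (red), full saturation, full brightness

# Converting back recovers the same RGB triple
print(colorsys.hsv_to_rgb(h, s, v))  # (1.0, 0.0, 0.0)
```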

7. Rendering Concepts
Shading, Lighting, and Texture Mapping

Shading:
How light interacts with surfaces. Different shading techniques make surfaces look flat, smooth,
or shiny.

Lighting:
Adds realism by showing how light sources affect objects. For example, a lamp casts light and
shadows.

Texture Mapping:
Applying images (textures) to 3D models, like putting a picture of wood grain on a virtual table.

Interactive Demo:
Imagine we have a 3D model of a sphere. We’d apply different shading techniques (flat,
smooth) and textures (like wood or metal) to see how they change its appearance.

Activity:
Which shading technique looks most realistic?
I’d show you examples, and you’d vote on which one you think looks the most real.

Rasterization

What’s Rasterization?
It’s the process of turning vector shapes into pixels on the screen. This is how most graphics
are rendered, especially in video games.

Visualization:
Imagine we have a simple shape, like a triangle. We’d use an online tool to rasterize it,
zooming in to see how it’s made up of tiny squares (pixels).

8. Applications and Future Trends


Where Is Computer Graphics Going?

Emerging Technologies:

● Real-Time Ray Tracing: Allows for super realistic lighting in games.

● Virtual Reality (VR) & Augmented Reality (AR): Immersive experiences where
graphics play a key role.

● AI in Graphics: AI can create graphics, enhance images, and even generate entire
scenes automatically.

Discussion:
Where do you think computer graphics will be most influential in the future?
We’d brainstorm ideas, like in gaming, movies, education, or even medicine.

Challenges:

● Realism vs. Performance: Balancing how real graphics look with how fast they
render.

● Computation Power: Graphics are getting more realistic, but they require more
powerful hardware.

Activity:
Any questions or predictions about the future of computer graphics?
I’d open the floor for your thoughts and questions.

9. Conclusion and Q&A
Recap:
We’ve covered what computer graphics are, their history, different types, how they’re created,
and where they’re going in the future.

10. Homework Assignment


Task:
Create a simple 2D image using any graphics software (like Paint, GIMP, or even PowerPoint).
Try to apply concepts we learned today, like understanding color models or choosing between
raster and vector graphics.

Submission:
We’ll look at everyone’s work next time and discuss how you applied the concepts.

This concludes our interactive introduction to computer graphics! I hope you enjoyed the
session and learned a lot. Don’t hesitate to reach out if you have any questions as you explore
more about computer graphics!

Week#3

Lecture 2: Fundamental Concepts in Computer Graphics with OpenGL Examples
1. Introduction to Computer Graphics
● Definition and Importance: Computer graphics is the study of techniques for
creating and manipulating visual content using a computer. It encompasses a wide
range of applications, from simple 2D graphics to complex 3D environments and virtual
reality experiences.

● Brief History: Discuss the evolution from early vector graphics to the modern era of
real-time 3D rendering. Mention pioneers like Ivan Sutherland (Sketchpad) and the

development of the OpenGL and DirectX APIs.

2. Rendering Techniques
● 2.1 Ray-Casting (Backward Rendering)

○ Concept: In ray-casting, rays are cast from the camera into the scene. The
color of the pixel is determined by the first object the ray intersects.

○ OpenGL Example:

// Pseudo-code for Ray-Casting
for each pixel in screen:
    generate ray from camera through pixel
    for each object in scene:
        if ray intersects object:
            compute color at intersection point
            break
    set pixel color

○ Application: Non-real-time rendering scenarios like offline renderers (e.g., Blender’s Cycles).

○ Advantages: Produces highly realistic images with accurate reflections, shadows, and lighting.

○ Disadvantages: Computationally intensive, not suitable for real-time applications.

● 2.2 Rasterization (Forward Rendering)

○ Concept: Rasterization converts 3D models into a 2D image. It involves determining which pixels are covered by which triangles and computing their color based on lighting and texture.

○ OpenGL Example:

// OpenGL code snippet for setting up a basic render pipeline
glBegin(GL_TRIANGLES);
glVertex3f(-0.5f, -0.5f, 0.0f); // Vertex 1
glVertex3f(0.5f, -0.5f, 0.0f);  // Vertex 2
glVertex3f(0.0f, 0.5f, 0.0f);   // Vertex 3
glEnd();

○ Applications: Used in video games, simulations, and VR where real-time rendering is crucial.

○ Advantages: Efficient, suitable for real-time applications.

○ Disadvantages: Less realistic lighting compared to ray-casting.

3. Applications of Computer Graphics


● 3.1 Game Engines

○ Role: Used for creating interactive 3D worlds. They manage rendering, physics, AI, and more.

○ Examples: Unity, Unreal Engine.

○ Features: Real-time rendering, object interaction, dynamic lighting.

○ OpenGL Example: Basic setup for a game engine-like window.

// Initialize GLFW window
if (!glfwInit()) {
    return -1;
}

GLFWwindow* window = glfwCreateWindow(800, 600, "OpenGL Game Window", NULL, NULL);
if (!window) {
    glfwTerminate();
    return -1;
}
glfwMakeContextCurrent(window);

while (!glfwWindowShouldClose(window)) {
    // Rendering code here
    glClear(GL_COLOR_BUFFER_BIT);

    // Swap buffers
    glfwSwapBuffers(window);
    glfwPollEvents();
}
glfwTerminate();

● 3.2 CAD (Computer-Aided Design)

○ Role: Used for designing mechanical parts, architectural structures, and more.

○ Applications: AutoCAD, SolidWorks.

○ OpenGL Example: Rendering a basic 3D cube that can be used in a CAD application.
// Function to render a simple 3D cube
void renderCube() {
    glBegin(GL_QUADS);
    // Front face
    glVertex3f(-1.0f, -1.0f, 1.0f);
    glVertex3f(1.0f, -1.0f, 1.0f);
    glVertex3f(1.0f, 1.0f, 1.0f);
    glVertex3f(-1.0f, 1.0f, 1.0f);
    // (Other faces omitted for brevity)
    glEnd();
}

● 3.3 Visualization

○ Role: Representing complex data graphically to aid understanding.

○ Applications: Scientific visualization, business analytics.

○ Example: Heatmaps, bar graphs, 3D data plots.

● 3.4 Virtual Reality (VR)

○ Role: Creating immersive environments.

○ Applications: Oculus Rift, HTC Vive.

○ Example: VR simulations for training and education.

○ OpenGL Example: Setting up a basic VR scene with two viewports (for left
and right eyes).

// Pseudo-code for VR rendering
glViewport(0, 0, windowWidth / 2, windowHeight);
renderScene(LEFT_EYE);

glViewport(windowWidth / 2, 0, windowWidth / 2, windowHeight);
renderScene(RIGHT_EYE);

4. Polygonal Representation
● Concept: 3D objects are represented as meshes made up of polygons (triangles).

● Advantages: Easy to render, good for hardware acceleration.

● Disadvantages: Complex shapes require many polygons.

● OpenGL Example: Rendering a 3D model from an array of vertices and indices.

GLfloat vertices[] = {
    -0.5f, -0.5f, -0.5f,
     0.5f, -0.5f, -0.5f,
     0.5f,  0.5f, -0.5f,
    -0.5f,  0.5f, -0.5f,
    // Other vertices
};

GLuint indices[] = {
    0, 1, 2, 2, 3, 0,
    // Other indices
};

glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, indices);

Week#4

5. Basic Radiometry
● Radiometric Quantities:

○ Radiance: Measures light emitted or reflected in a given direction.

○ Irradiance: Measures light incident on a surface.

○ Importance in Graphics: Accurate lighting calculations are crucial for realism.

○ OpenGL Example: Basic lighting using the Phong model.

// Setup basic Phong lighting
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
GLfloat light_position[] = { 1.0, 1.0, 1.0, 0.0 };
glLightfv(GL_LIGHT0, GL_POSITION, light_position);

6. Geometric Concepts
● Similar Triangles:

○ Definition: Triangles that have identical angles and proportional sides.

○ Importance: Used in perspective projections and texture mapping.

○ OpenGL Example: Using similar triangles for perspective correction in texturing.

// Applying perspective correction for textures
glEnable(GL_TEXTURE_2D);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glBindTexture(GL_TEXTURE_2D, textureId);

● Projection Models:

○ Orthographic Projection:

■ Definition: Parallel projection where size does not diminish with distance.

■ Applications: Technical drawings, CAD.

■ OpenGL Example:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);

○ Perspective Projection:

■ Definition: Objects appear smaller as they are further from the camera.

■ Applications: Realistic rendering, video games, VR.

■ OpenGL Example:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0f, (GLfloat)width / (GLfloat)height, 0.1f, 100.0f);

7. Conclusion
● Summary: Recap of key concepts such as rendering techniques, polygonal
representation, radiometry, and projection models.

● Q&A Session: Open floor for students to ask questions or clarify doubts.

8. Interactive Demonstration
● Practical Example: Implement a simple 3D scene using OpenGL that includes
multiple polygons, basic lighting, and textures.

● Software Tools: Use OpenGL with C++ to demonstrate live coding.

9. Homework Assignment
● Research Task: Explain the difference between forward and backward rendering, and
provide real-world examples of where each is used.

● Practical Task: Write a program in OpenGL to create a 3D scene with at least one
light source and a textured object.

Week#5

Lecture 3: Standard Graphics APIs and GUI

1. Introduction to Standard Graphics APIs and GUI Construction
● Overview:

○ Standard graphics APIs like OpenGL, DirectX, and Vulkan are used for
rendering graphics in a wide range of applications.

○ GUI frameworks like Qt, GTK, and Swing are used for constructing user
interfaces in applications.

● Code Example:

○ We will use Qt with OpenGL to create a basic interactive GUI.

#include <QApplication>
#include <QMainWindow>
#include <QOpenGLWidget>
#include <QOpenGLFunctions>

class OpenGLWidget : public QOpenGLWidget, protected QOpenGLFunctions {
protected:
    void initializeGL() override {
        initializeOpenGLFunctions();
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    }

    void paintGL() override {
        glClear(GL_COLOR_BUFFER_BIT);
        // Draw a simple triangle
        glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(0.0f, 0.5f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex2f(0.5f, -0.5f);
        glEnd();
    }
};

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    QMainWindow window;                              // setCentralWidget lives on QMainWindow
    OpenGLWidget *glWidget = new OpenGLWidget();
    window.setCentralWidget(glWidget);
    window.resize(800, 600);
    window.show();
    return app.exec();
}

2. Basic Rendering Concepts


● Emission and Scattering of Light:

○ Emission refers to objects that emit light, like the sun or a lamp.

○ Scattering is how light interacts with particles, affecting its path and color.

● Numerical Integration in Rendering:

○ Numerical integration is used to calculate light's behavior when it hits a


surface, contributing to realistic lighting and shading.

● Code Example:

○ Basic ray tracing for rendering a scene with light emission and scattering.

#include <iostream>
#include <cmath>

// Define a simple vector class
struct Vec3 {
    float x, y, z;
    Vec3() : x(0), y(0), z(0) {}
    Vec3(float x, float y, float z) : x(x), y(y), z(z) {}
    Vec3 operator+(const Vec3& v) const { return Vec3(x + v.x, y + v.y, z + v.z); }
    Vec3 operator-(const Vec3& v) const { return Vec3(x - v.x, y - v.y, z - v.z); }
    Vec3 operator*(float scalar) const { return Vec3(x * scalar, y * scalar, z * scalar); }
    Vec3 normalize() const {
        float len = std::sqrt(x * x + y * y + z * z);
        return Vec3(x / len, y / len, z / len);
    }
};

// A function to simulate ray-sphere intersection
bool raySphereIntersect(const Vec3& origin, const Vec3& direction, const Vec3& center,
                        float radius, float& t) {
    Vec3 oc = origin - center;
    float a = direction.x * direction.x + direction.y * direction.y + direction.z * direction.z;
    float b = 2.0f * (oc.x * direction.x + oc.y * direction.y + oc.z * direction.z);
    float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - radius * radius;
    float discriminant = b * b - 4 * a * c;
    if (discriminant < 0) {
        return false;
    }
    t = (-b - std::sqrt(discriminant)) / (2.0f * a);
    return true;
}

Vec3 rayTrace(const Vec3& origin, const Vec3& direction) {
    // Define a light emitting sphere
    Vec3 sphereCenter(0, 0, -5);
    float sphereRadius = 1.0f;
    float t;
    if (raySphereIntersect(origin, direction, sphereCenter, sphereRadius, t)) {
        return Vec3(1, 0, 0); // Red color for the sphere
    }
    return Vec3(0, 0, 0); // Background color
}

int main() {
    // Render a simple 2D image with ray tracing
    const int width = 800;
    const int height = 600;
    Vec3 cameraOrigin(0, 0, 0);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float u = (2.0f * (x + 0.5f) / width - 1.0f) * width / (float)height;
            float v = 2.0f * (y + 0.5f) / height - 1.0f;
            Vec3 direction = Vec3(u, v, -1).normalize();
            Vec3 color = rayTrace(cameraOrigin, direction);
            // Output color values (simple grayscale for now)
            int grayscale = static_cast<int>(255 * color.x);
            std::cout << grayscale << " " << grayscale << " " << grayscale << "\n";
        }
    }
    return 0;
}

3. Practical Application: GUI with Rendering


● Combining GUI and Rendering:

○ Use a GUI framework to control rendering parameters (e.g., light intensity,


material properties) in real-time.

● Code Example:

○ Extend the previous OpenGL example to control the triangle's color using Qt
sliders.

#include <QSlider>
#include <QVBoxLayout>
#include <QWidget>
#include <QApplication>
#include <QOpenGLWidget>
#include <QOpenGLFunctions>

class OpenGLWidget : public QOpenGLWidget, protected QOpenGLFunctions {
    Q_OBJECT

private:
    float red = 1.0f, green = 0.0f, blue = 0.0f;

public:
    void setColor(float r, float g, float b) {
        red = r;
        green = g;
        blue = b;
        update(); // schedule a repaint with the new color
    }

protected:
    void initializeGL() override {
        initializeOpenGLFunctions();
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    }

    void paintGL() override {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);
        glColor3f(red, green, blue);  glVertex2f(0.0f, 0.5f);
        glColor3f(green, blue, red);  glVertex2f(-0.5f, -0.5f);
        glColor3f(blue, red, green);  glVertex2f(0.5f, -0.5f);
        glEnd();
    }
};

class MainWindow : public QWidget {
    Q_OBJECT

public:
    MainWindow() {
        QVBoxLayout *layout = new QVBoxLayout(this);
        OpenGLWidget *glWidget = new OpenGLWidget();
        layout->addWidget(glWidget);

        QSlider *redSlider = new QSlider(Qt::Horizontal);
        redSlider->setRange(0, 100);
        connect(redSlider, &QSlider::valueChanged, [glWidget](int value) {
            float red = value / 100.0f;
            glWidget->setColor(red, 0.0f, 0.0f);
        });
        layout->addWidget(redSlider);
    }
};

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    MainWindow window;
    window.show();
    return app.exec();
}

#include "main.moc"

4. Conclusion and Advanced Topics


● Advanced Rendering Techniques:

○ Discuss more complex topics like global illumination, path tracing, and photon
mapping.

● Hands-On Project:

○ Create a mini-project to integrate GUI elements with rendering, allowing real-time control of rendering parameters.

Lecture 4:
1. Affine and Coordinate System Transformations
Concepts:

● Affine Transformations: Operations that preserve points, straight lines, and planes.
Examples include translation, scaling, rotation, and shearing.

● Coordinate System Transformations: Transformations that change the frame of reference for an object. Includes transforming points and objects from one coordinate system to another, such as model to world, world to view, and view to screen coordinates.

Examples:

● Translation: Moving an object from position (x, y) to (x + dx, y + dy).

# Translation matrix in 2D (homogeneous coordinates)
| 1  0  dx |
| 0  1  dy |
| 0  0  1  |

● Rotation: Rotating a point (x, y) around the origin by angle θ.

# Rotation matrix in 2D
| cosθ  -sinθ  0 |
| sinθ   cosθ  0 |
|   0      0   1 |
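The matrices above can be applied to points in homogeneous coordinates; this illustrative Python sketch (helper names like `mat_vec` are made up for these notes) does exactly that:

```python
import math

def mat_vec(m, p):
    """Apply a 3x3 matrix (row-major nested lists) to a 2D point,
    treating it as the homogeneous column vector (x, y, 1)."""
    x, y = p
    v = (x, y, 1.0)
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(2))

def translation(dx, dy):
    return [[1, 0, dx], [0, 1, dy], [0, 0, 1]]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

# Translate (1, 2) by (3, 4) -> (4, 6)
print(mat_vec(translation(3, 4), (1, 2)))
# Rotate (1, 0) by 90 degrees -> approximately (0, 1)
x, y = mat_vec(rotation(math.pi / 2), (1, 0))
print(round(x, 6), round(y, 6))
```

The homogeneous third coordinate is what lets translation (which is not linear) be expressed as a matrix at all.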

2. Ray Tracing
Concepts:

● Simulates the way light interacts with objects by tracing the path of rays from the
camera through each pixel and into the scene.

● Calculates intersections with objects and determines the color based on material
properties and lighting.

Example:

● Single Ray Tracing: Calculate intersection of a ray with a sphere.

# Ray-sphere intersection formula
# Ray: P = O + tD, where O = origin, D = direction, t = distance
# Sphere: (P - C) . (P - C) = r^2, where C = center, r = radius
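Substituting the ray into the sphere equation gives a quadratic in t; this illustrative Python sketch (assuming a normalized direction, so the quadratic's a coefficient is 1) returns the nearest hit distance:

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    """Nearest positive t where P = O + tD hits the sphere, else None.
    Assumes direction is normalized, so the quadratic's 'a' term is 1."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # the ray's line misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None  # reject hits behind the ray origin

# A ray down -z from the origin hits a unit sphere at z = -5 at t = 4
print(ray_sphere_t((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # 4.0
# A ray pointing away from the sphere reports no hit
print(ray_sphere_t((0, 0, 0), (0, 0, 1), (0, 0, -5), 1.0))   # None
```

Note the t > 0 check: a negative root means the intersection lies behind the camera, which a renderer must ignore.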

3. Visibility and Occlusion


Concepts:

● Determines which objects, or parts of objects, are visible from a certain viewpoint.

● Solutions:

○ Depth Buffering: Uses a buffer to keep track of the closest depth value for
each pixel.

○ Painter’s Algorithm: Draws objects from back to front, painting over objects
behind others.

○ Ray Tracing: Handles visibility inherently by checking intersections along the


ray’s path.

Examples:

● Depth Buffering: Store the depth value of each pixel and update it only if a new
object has a smaller depth (closer to the camera).

● Painter’s Algorithm: Sort objects by depth and render them in order from farthest to
closest.
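A minimal depth-buffer sketch in Python (illustrative only, with string "colors" standing in for pixel values) shows the update rule — write only when the new fragment is closer:

```python
def paint(depth, color, x, y, z, c):
    """Write color c at pixel (x, y) only if z is closer than the
    depth already stored for that pixel."""
    if z < depth[y][x]:
        depth[y][x] = z
        color[y][x] = c

W, H = 4, 3
depth = [[float("inf")] * W for _ in range(H)]  # start infinitely far away
color = [["bg"] * W for _ in range(H)]

paint(depth, color, 1, 1, 5.0, "far")   # drawn: buffer was empty
paint(depth, color, 1, 1, 2.0, "near")  # drawn: 2.0 < 5.0
paint(depth, color, 1, 1, 9.0, "back")  # rejected: 9.0 > 2.0
print(color[1][1])  # near
```

Unlike the painter's algorithm, no sorting is needed: fragments may arrive in any order and the buffer still keeps the closest one.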

4. Forward and Backward Rendering Equation


Concepts:

● Forward Rendering Equation: Calculates the outgoing radiance based on the sum
of emitted and reflected radiance.

● Backward Rendering (Ray Tracing): Starts from the camera and traces rays
backward to the light sources.

Example:

● Forward Rendering:

L_o(x, ω_o) = L_e(x, ω_o) + ∫_Ω f_r(x, ω_i, ω_o) L_i(x, ω_i) (n · ω_i) dω_i

where L_o is outgoing radiance, L_e is emitted radiance, and f_r is the BRDF.

5. Simple Triangle Rasterization


Concepts:

● Converts triangles into pixel values. Determines which pixels lie inside the triangle
and calculates their color values.

Example:

● Barycentric Coordinates: Determines if a point lies inside a triangle and interpolates attributes like color.

# Barycentric coordinates calculation
P = A + u*(B-A) + v*(C-A);  P is inside the triangle when u >= 0, v >= 0, and u + v <= 1
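A Python sketch of that inside test (illustrative only; the function names are made up), solving P = A + u(B−A) + v(C−A) for (u, v) with Cramer's rule:

```python
def barycentric(a, b, c, p):
    """Solve P = A + u*(B-A) + v*(C-A) for (u, v) via Cramer's rule."""
    (ax, ay), (bx, by), (cx, cy), (px, py) = a, b, c, p
    d = (bx - ax) * (cy - ay) - (cx - ax) * (by - ay)  # 2x signed triangle area
    u = ((px - ax) * (cy - ay) - (cx - ax) * (py - ay)) / d
    v = ((bx - ax) * (py - ay) - (px - ax) * (by - ay)) / d
    return u, v

def inside(a, b, c, p):
    u, v = barycentric(a, b, c, p)
    return u >= 0 and v >= 0 and u + v <= 1

A, B, C = (0, 0), (4, 0), (0, 4)
print(inside(A, B, C, (1, 1)))  # True
print(inside(A, B, C, (3, 3)))  # False (u + v > 1)
```

A rasterizer runs this test for every candidate pixel inside the triangle's bounding box, and reuses (u, v) to interpolate color, depth, and texture coordinates.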

6. Rendering with a Shader-Based API


Concepts:

● Vertex Shaders: Transform vertices to screen space.

● Fragment Shaders: Calculate the final color of pixels.

Example:

● GLSL Vertex Shader:

#version 330 core

layout(location = 0) in vec3 position;

uniform mat4 modelViewProjection;

void main() {
    gl_Position = modelViewProjection * vec4(position, 1.0);
}

7. Texture Mapping
Concepts:

● Mapping a 2D image onto a 3D surface. Includes handling minification and


magnification using techniques like mipmapping.

Examples:

● Trilinear MIP-Mapping: Combines linear interpolation in both the texture and mipmap
levels.

// OpenGL call selecting trilinear filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
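Mipmap levels are successively halved copies of the texture. This illustrative Python sketch (a simplification of the chain that GL_LINEAR_MIPMAP_LINEAR samples from) builds one for a grayscale image by box-averaging 2x2 blocks:

```python
def next_mip_level(img):
    """Halve a square grayscale image by averaging each 2x2 block."""
    n = len(img) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(n)] for y in range(n)]

def mip_chain(img):
    """Full chain from the base image down to a single texel."""
    levels = [img]
    while len(levels[-1]) > 1:
        levels.append(next_mip_level(levels[-1]))
    return levels

base = [[0, 0, 8, 8],
        [0, 0, 8, 8],
        [4, 4, 4, 4],
        [4, 4, 4, 4]]
chain = mip_chain(base)
print(chain[1])  # [[0.0, 8.0], [4.0, 4.0]]
print(chain[2])  # [[4.0]]
```

Trilinear filtering then interpolates both within a level and between the two levels nearest the required detail.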

8. Application of Spatial Data Structures to Rendering


Concepts:

● Data structures like BSP trees, Octrees, and KD-Trees to optimize rendering by
reducing the number of objects to consider for each pixel.

Example:

● Bounding Volume Hierarchy (BVH): Efficiently tests which objects might intersect a
ray.


# BVH node example
class BVHNode:
    def __init__(self, boundingBox, left, right):
        self.boundingBox = boundingBox
        self.left = left
        self.right = right
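A BVH needs a fast ray-versus-bounding-box test at each node. A common choice is the slab test, sketched here in Python (illustrative, not from the lecture):

```python
def ray_intersects_aabb(origin, direction, box_min, box_max):
    # Slab test: intersect the ray with each pair of axis-aligned
    # planes and keep the overlapping t interval.
    t_near, t_far = float('-inf'), float('inf')
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            # Ray is parallel to this slab: must already be inside it.
            if o < lo or o > hi:
                return False
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
    return t_near <= t_far and t_far >= 0
```

Traversal then descends into a child only when its bounding box passes this test, which is what prunes most of the scene per ray.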

9. Sampling and Anti-Aliasing


Concepts:

● Reducing aliasing artifacts by averaging colors over a pixel or subpixels.

Example:

● Super-sampling Anti-Aliasing (SSAA): Render the scene at a higher resolution and average the results down to the display resolution.
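The resolve step of SSAA, averaging each high-resolution block down to one display pixel, can be sketched for the 2x2 case (illustrative):

```python
def downsample_2x(img):
    # Average each 2x2 block of a high-res image (a list of rows of
    # intensities) down to a single pixel; assumes even dimensions.
    out = []
    for y in range(0, len(img), 2):
        row = []
        for x in range(0, len(img[0]), 2):
            s = img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]
            row.append(s / 4.0)
        out.append(row)
    return out
```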

10. Scene Graphs and the Graphics Pipeline


Concepts:

● Scene Graphs: Hierarchical structure to manage and render complex scenes. Each
node represents a transformation or object.

● Graphics Pipeline: Stages like vertex processing, rasterization, and fragment processing.

Example:

● Scene Graph:


class Node:
    def __init__(self, transform, children=None):
        # Use None rather than a mutable default list, which would be
        # shared across all Node instances.
        self.transform = transform
        self.children = children if children is not None else []
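To show how a scene graph composes transforms down the hierarchy, here is a minimal sketch that treats each node's transform as a 2D translation offset (a simplifying assumption; real pipelines compose 4x4 matrices the same way):

```python
class Node:
    def __init__(self, transform, children=None):
        self.transform = transform            # (dx, dy) offset for this sketch
        self.children = children if children is not None else []

def world_positions(node, parent=(0, 0), out=None):
    # Compose each node's offset with its parent's accumulated offset,
    # collecting world-space positions in depth-first order.
    if out is None:
        out = []
    x = parent[0] + node.transform[0]
    y = parent[1] + node.transform[1]
    out.append((x, y))
    for child in node.children:
        world_positions(child, (x, y), out)
    return out
```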

Summary
Each of these concepts forms the foundation of computer graphics. Understanding
transformations, rendering equations, rasterization, and the use of shaders is critical for
developing sophisticated graphics applications. Practical application of these techniques
through examples like ray tracing, depth buffering, and texture mapping helps build an intuition
for solving visibility and rendering challenges.

Home Assignment on Computer Graphics Topics


Instructions: This assignment is divided into several questions, each focusing on a particular
topic covered in the lecture. Attempt all questions with detailed explanations and code samples
where applicable. You are encouraged to use graphics programming libraries such as OpenGL,
WebGL, or any other tool you're comfortable with.

1. Affine Transformations and Coordinate Systems

● Task: Implement a 2D transformation pipeline that allows the user to apply a
sequence of transformations (translation, rotation, and scaling) to a shape (e.g., a
triangle or rectangle) using a graphical interface.

● Requirements:

○ Implement functions for translation, rotation, and scaling.

○ Display the transformed shape on a canvas after each transformation.

○ Provide an option to reset the shape to its original position.

● Deliverable: A code implementation with a brief description of how the transformation matrices are applied sequentially.
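As a starting point (not a full solution), 2D affine transforms can be represented as 3x3 homogeneous matrices and composed by multiplication; the helper names below are our own:

```python
import math

def mat_mul(a, b):
    # 3x3 matrix product; mat_mul(T, S) applies S first, then T.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def apply(m, p):
    # Transform the point (x, y, 1) and drop the homogeneous coordinate.
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])
```

Resetting the shape then simply means reapplying the identity matrix instead of the accumulated product.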

2. Ray Tracing

● Task: Implement a simple ray tracer that renders a scene with at least one sphere and
one light source. Calculate the color of each pixel based on the intersection of rays with
the sphere and use Phong shading for lighting.

● Requirements:

○ Define the sphere using its center and radius.

○ Implement ray-sphere intersection logic.

○ Calculate ambient, diffuse, and specular components for shading.

● Deliverable: A rendered image of the scene and the code used to generate it, along
with a brief explanation of the shading calculations.
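A starter sketch for the ray-sphere intersection step: substituting the ray into the sphere equation yields a quadratic in t (names are illustrative, and the Phong shading part is left to you):

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    # Smallest positive t where origin + t*direction hits the sphere,
    # or None if the ray misses (discriminant test on the quadratic).
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None
```

The hit point origin + t*direction and the surface normal (hit - center)/radius are the inputs the shading calculation needs.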

3. Visibility and Occlusion

● Task: Create a simple 3D scene with multiple overlapping objects (e.g., cubes or
spheres). Implement both the depth buffering technique and the painter’s algorithm to
handle visibility.

● Requirements:

○ Render the scene using the depth buffer to correctly display overlapping
objects.

○ Implement the painter’s algorithm by sorting objects based on their depth and
rendering them back to front.

○ Compare and contrast the results of both methods.

● Deliverable: Screenshots of the scene rendered with both techniques, along with an
explanation of how each method was implemented.

4. Rasterization and Shaders

● Task: Implement a basic triangle rasterizer that interpolates vertex colors across the
triangle. Additionally, write a simple fragment shader that applies a lighting effect (e.g.,
diffuse shading) to the triangle.

● Requirements:

○ Write code for barycentric interpolation of vertex colors.

○ Create a vertex shader to transform the triangle and a fragment shader for
color interpolation and lighting.

● Deliverable: A screenshot of the rendered triangle with smooth color transitions and
the code for both the rasterizer and shaders.

5. Texture Mapping

● Task: Map a texture image onto a 3D object (e.g., a cube). Implement both nearest-neighbor and bilinear filtering for texture sampling. Allow the user to switch between different filtering methods.

● Requirements:

○ Implement texture coordinates for the object.

○ Apply the texture using nearest-neighbor and bilinear interpolation.

○ Include a user interface option to switch between filtering methods.

● Deliverable: Screenshots showing the object with both texture filtering methods, and
the code used to implement texture mapping and filtering.

6. Sampling and Anti-Aliasing

● Task: Implement super-sampling anti-aliasing (SSAA) for a 2D scene with several shapes (e.g., circles, rectangles). Compare the results with and without anti-aliasing.

● Requirements:

○ Implement a function to render the scene at a higher resolution.

○ Downsample the high-resolution image to the display resolution.

○ Display the scene with and without anti-aliasing for comparison.

● Deliverable: Screenshots of the scene with and without SSAA, and a brief
explanation of how anti-aliasing was implemented.

7. Spatial Data Structures

● Task: Implement a simple 2D scene containing multiple objects and use a spatial data
structure (e.g., QuadTree) to optimize rendering and collision detection.

● Requirements:

○ Define a set of objects with positions and sizes.

○ Implement a QuadTree to manage these objects.

○ Use the QuadTree to efficiently detect collisions and only render visible
objects.

● Deliverable: A brief report on how the QuadTree improves performance, along with
code and screenshots demonstrating the optimization.

8. Scene Graphs and Graphics Pipeline

● Task: Build a simple scene graph that includes a hierarchical structure of nodes
representing a robot arm with rotating joints. Render the scene using a basic graphics
pipeline.

● Requirements:

○ Define a scene graph with nodes for each part of the robot arm.

○ Implement transformations for each node based on user input (e.g., rotate the
arm or move the base).

○ Render the scene using a basic graphics pipeline.

● Deliverable: An interactive application or video demonstrating the robot arm movement, along with the code and a brief explanation of the scene graph structure.

Submission: Submit your code, along with a document explaining each task, the approach
used, and any challenges you faced. Include screenshots or videos wherever possible to
demonstrate your implementation.

Deadline: One week from the date of assignment.

Tools /Bots List

Yes, there are several online platforms and bots that can assist you with OpenGL coding. While there's no dedicated "OpenGL bot" per se, the following platforms provide tools, compilers, and real-time code assistance for OpenGL development:

1. Replit (replit.com)
● Replit is an online IDE that supports OpenGL development. You can write, run, and
debug OpenGL code directly in your browser.

● It supports multiple programming languages, including C/C++, which are typically used for OpenGL development.

● You can also collaborate in real time, making it easier to share code with others and
get assistance.

Steps to Use OpenGL on Replit:

1. Create a new C++ project.

2. Add OpenGL libraries (GLEW, GLFW) if needed, and start coding your OpenGL
application.

3. Run the code directly in the browser.

Visit: https://replit.com

2. GitHub Copilot
● GitHub Copilot is an AI-powered coding assistant that can help you write OpenGL
code by suggesting code snippets, autocompleting lines, and even writing entire
functions based on comments.

● It's available as an extension for IDEs like Visual Studio Code and JetBrains.

To Use Copilot for OpenGL:

1. Install Visual Studio Code or another supported IDE.

2. Install the GitHub Copilot extension.

3. Start coding OpenGL in C/C++, and Copilot will offer code suggestions based on your
input.

Visit: GitHub Copilot

3. Codingame
● Codingame provides a platform for solving coding challenges, including ones related
to OpenGL or graphics programming. It's more challenge-based, but it offers a fun way
to practice OpenGL logic and algorithms.

Visit: https://www.codingame.com

4. ShaderToy (for OpenGL shaders)


● ShaderToy is an online community and tool specifically designed for writing and
sharing GLSL shaders, which is a key component of OpenGL.

● You can write and test shader code (like fragment and vertex shaders) in real-time,
view live output, and get community feedback.

Visit: https://www.shadertoy.com

5. StackOverflow and OpenGL Forums


● While not a "bot," StackOverflow and various OpenGL forums offer instant access to a
vast number of OpenGL coding questions and answers. These are great for
troubleshooting specific issues you encounter.

● You can ask specific OpenGL-related questions and get fast responses from the
community.

These platforms and tools can be very helpful when working with OpenGL. You can choose any of them based on your development environment or specific requirements. Let me know if you need help with any specific platform or tool!

Mega Lecture 5

Part 1: Introduction to Geometric Modeling

What is Geometric Modeling?

Geometric modeling involves creating, analyzing, and manipulating mathematical representations of shapes in 2D and 3D space. It is essential in various fields like:

● Engineering: CAD (Computer-Aided Design) for product design.

● Entertainment: Animation and game development.

● Medicine: 3D imaging for surgeries and diagnostics.

● Architecture: Virtual building models for planning and simulation.

Overview of Basic Geometric Operations

• Intersection Calculation: Determining where two geometric entities intersect.

• Proximity Tests: Checking how close two entities are in space.

• Polynomial Curves and Surfaces: Representing curves and surfaces using polynomial equations.

• Approximation Techniques: Techniques like Bézier curves and spline surfaces to model smooth and complex shapes.

Importance of Geometric Modeling

● Enables precision in design and manufacturing.

● Provides tools to simulate physical phenomena.

● Facilitates creativity in digital media industries (e.g., AI-generated video).

● Key in scientific visualization, where datasets need to be presented graphically; the same techniques power films like Avatar.

Part 2: Basic Geometric Operations

1. Intersection Calculation

Real-Life Example:

● Calculating where a ray of light intersects a window surface in ray tracing for realistic
rendering.

Mathematics:

For a line-plane intersection, a line is represented as:

L(t) = P + t·D

Where:

● P: Point on the line.

● D: Direction vector.

The plane equation is:

N · (X − Q) = 0

Where:

● N: Plane's normal.

● Q: A point on the plane.

Intersection Calculation

● Definition: Determining where geometric entities such as lines, planes, or surfaces intersect.

● Applications:

○ Ray tracing in rendering (finding where a ray intersects a 3D object to calculate lighting).

○ Collision detection in video games and simulations.

Theory: Surface-Surface Intersection

● The intersection of two surfaces often produces a curve.

● Common in CAD for generating blending curves between two objects.

Example:

#include <GL/glut.h>
#include <iostream>

using namespace std;

// Intersect the line L(t) = P + t*D with the plane N . (X - Q) = 0.
// Substituting gives t = N . (Q - P) / (N . D).
void linePlaneIntersection() {
    float linePoint[3] = {1, 2, 3};     // P
    float lineDir[3] = {1, 1, 1};       // D
    float planePoint[3] = {0, 0, 0};    // Q
    float planeNormal[3] = {0, 1, 0};   // N

    float num = 0, den = 0;
    for (int i = 0; i < 3; ++i) {
        num += planeNormal[i] * (planePoint[i] - linePoint[i]);
        den += planeNormal[i] * lineDir[i];
    }
    float t = num / den; // assumes the line is not parallel to the plane (den != 0)

    float intersection[3] = {
        linePoint[0] + t * lineDir[0],
        linePoint[1] + t * lineDir[1],
        linePoint[2] + t * lineDir[2]
    };

    cout << "Intersection: (" << intersection[0] << ", " << intersection[1]
         << ", " << intersection[2] << ")" << endl;
}

2. Proximity Tests

Real-Life Example:

● Detecting how close two cars are in an autonomous vehicle system.

● Applications:

○ Autonomous vehicles: Determining how close an obstacle is.

○ Pathfinding algorithms in robotics.

Theory: Point-to-Line Distance

1. The shortest distance from a point P to a line is measured along the perpendicular dropped from P onto the line.

2. For higher dimensions, proximity tests involve solving optimization problems.

Theory: Point-to-Plane Distance

1. For a point P and a plane N · (X − Q) = 0, the distance is |N · (P − Q)| / |N|.

2. This test is commonly used in simulations and physics engines.
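The point-to-plane test is a one-liner once the plane is given by a point Q and a normal N; a Python sketch (assuming a unit-length normal, and a function name of our choosing):

```python
def point_plane_distance(p, q, n):
    # Distance from point p to the plane N . (X - Q) = 0,
    # assuming the normal n has unit length.
    return abs(sum((pi - qi) * ni for pi, qi, ni in zip(p, q, n)))
```

A physics engine runs exactly this test to decide whether a body has crossed a ground or wall plane.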

Mathematics:

For the distance between a point and a line:

d = |(P − A) × D| / |D|

Where:

● P: Point.

● A: A point on the line.

● D: Line direction vector.

Example:

#include <cmath>
#include <iostream>

using namespace std;

void pointLineDistance() {
    float point[3] = {1, 1, 1};       // P
    float linePoint[3] = {0, 0, 0};   // A
    float lineDir[3] = {1, 1, 0};     // D

    // cross = D x (P - A); its length equals |(P - A) x D|
    float cross[3] = {
        lineDir[1] * (point[2] - linePoint[2]) - lineDir[2] * (point[1] - linePoint[1]),
        lineDir[2] * (point[0] - linePoint[0]) - lineDir[0] * (point[2] - linePoint[2]),
        lineDir[0] * (point[1] - linePoint[1]) - lineDir[1] * (point[0] - linePoint[0])
    };

    float distance = sqrt(cross[0] * cross[0] + cross[1] * cross[1] + cross[2] * cross[2]) /
                     sqrt(lineDir[0] * lineDir[0] + lineDir[1] * lineDir[1] + lineDir[2] * lineDir[2]);

    cout << "Distance: " << distance << endl;
}

Part 3: Polynomial Curves and Surfaces

● Definition: Curves represented using polynomial equations.

● Common forms:

○ Linear: y = mx + b

○ Quadratic: y = ax² + bx + c

○ Cubic: y = ax³ + bx² + cx + d

● Applications:

○ Designing smooth trajectories for robots.

○ Fitting curves to data points in machine learning.

1. Bézier Curves

● Developed by Pierre Bézier for designing car bodies.

● Properties:

○ Start and end at the first and last control points.

○ Fully determined by the control points.

● Applications:

○ Drawing tools in software like Adobe Illustrator.

○ Animating smooth camera movements.

Real-Life Example:

● Designing smooth roads or roller coaster paths in simulation software.

Mathematics:

Bézier curves use control points:

B(t) = Σ_{i=0}^{n} C(n, i) (1 − t)^(n−i) t^i P_i, where C(n, i) is the binomial coefficient and the P_i are the control points.

Example:

// Quadratic Bézier with control points (0,0), (0.5,1), (1,0)
void drawBezierCurve() {
    glBegin(GL_LINE_STRIP);
    for (float t = 0.0; t <= 1.0; t += 0.01) {
        float x = (1 - t) * (1 - t) * 0.0 + 2 * (1 - t) * t * 0.5 + t * t * 1.0;
        float y = (1 - t) * (1 - t) * 0.0 + 2 * (1 - t) * t * 1.0 + t * t * 0.0;
        glVertex2f(x, y);
    }
    glEnd();
}
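For Bézier curves of arbitrary degree, de Casteljau's algorithm evaluates B(t) by repeated linear interpolation of the control points, which is numerically stable. A Python sketch (illustrative, using the same control points as the example above):

```python
def de_casteljau(points, t):
    # Evaluate a Bézier curve of any degree at parameter t by
    # repeatedly lerping adjacent control points until one remains.
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p0, p1))
               for p0, p1 in zip(pts, pts[1:])]
    return pts[0]
```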

2. Spline Curves

• A spline is a piecewise polynomial function that maintains smoothness at the joints.

• Types:

o Linear Splines: Straight-line segments.

o Cubic Splines: Smooth curves with continuous first and second derivatives.

• Applications:

o Animation: Smooth transitions between keyframes.

o CAD: Generating smooth paths for machining.

Real-Life Example:

● Modeling car paths in racing games.

Example:

void drawSpline() {
    // GL_MAP1_VERTEX_3 expects 3 components per control point,
    // so the points carry an explicit z = 0.
    float controlPoints[4][3] = {{-1, -1, 0}, {-0.5, 1, 0}, {0.5, -1, 0}, {1, 1, 0}};

    glMap1f(GL_MAP1_VERTEX_3, 0.0, 1.0, 3, 4, &controlPoints[0][0]);
    glEnable(GL_MAP1_VERTEX_3);

    glBegin(GL_LINE_STRIP);
    for (int i = 0; i <= 30; ++i)
        glEvalCoord1f((GLfloat)i / 30.0);
    glEnd();
}

Part 4: Approximation Techniques

1. Bézier Surfaces

● Extension of Bézier Curves:

○ Created by tensor products of Bézier curves.

● Applications:

○ Designing car surfaces, airplane wings, and consumer products.

● Mathematics:

○ Control points arranged in a grid.

Real-Life Example:

● Designing car bodies or aircraft surfaces.

Example:

void drawBezierSurface() {
    float controlPoints[4][4][3] = {
        {{-1, -1, 0}, {-0.5, -1, 0}, {0.5, -1, 0}, {1, -1, 0}},
        {{-1, -0.5, 0}, {-0.5, -0.5, 0.5}, {0.5, -0.5, 0.5}, {1, -0.5, 0}},
        {{-1, 0.5, 0}, {-0.5, 0.5, 0.5}, {0.5, 0.5, 0.5}, {1, 0.5, 0}},
        {{-1, 1, 0}, {-0.5, 1, 0}, {0.5, 1, 0}, {1, 1, 0}}
    };

    glMap2f(GL_MAP2_VERTEX_3, 0, 1, 3, 4, 0, 1, 12, 4, &controlPoints[0][0][0]);
    glEnable(GL_MAP2_VERTEX_3);
    glMapGrid2f(20, 0.0, 1.0, 20, 0.0, 1.0);
    glEvalMesh2(GL_FILL, 0, 20, 0, 20);
}

2. Spline Surfaces

● Formed by blending spline curves in two dimensions.

● Widely used in 3D modeling and graphics.

● Example: NURBS (Non-Uniform Rational B-Splines) surfaces.

Why Approximation?

● Many shapes cannot be represented exactly with simple polynomials.

● Approximation provides computationally efficient solutions.

Bézier Approximation

● Used when precision is not critical.

● Example:

○ Representing the profile of a bottle for a manufacturing design.

Spline Approximation

● Provides better accuracy for complex shapes.

● Example:

○ Modeling the trajectory of celestial bodies.

Part 5: Animation as a Sequence of Still Images

● Definition

○ Animation is created by displaying a sequence of still images rapidly to create the illusion of motion.

What is Animation?

● Animation involves displaying a sequence of still images to create the illusion of motion.

● A flip book is the classic example: flipping its pages quickly makes the still drawings appear to move.

Key Concepts

1. Frames per Second (FPS):

○ Determines the smoothness of animation.

○ Typical values: 24 FPS (film), 60 FPS (video games).

2. Interpolation:

○ Smoothly transitioning between two states.

○ Example: Linear interpolation (LERP) for position changes.

○ Example: interpolating the sky color of a sunset from 4 pm to 6 pm.

3. Real-Life Examples:

○ Animating a bouncing ball.

○ Simulating water ripples.
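Linear interpolation (LERP) and its use to generate per-frame positions can be sketched as follows (function names are illustrative):

```python
def lerp(a, b, t):
    # Linear interpolation between a and b at parameter t in [0, 1].
    return a + (b - a) * t

def animate(start, end, fps, seconds):
    # One position per frame, moving at constant speed from start to end.
    n = max(int(fps * seconds), 2)
    return [lerp(start, end, i / (n - 1)) for i in range(n)]
```

Easing functions simply remap t before the lerp, e.g. t*t for a slow start.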

Key Techniques

● Keyframing: Animators define key positions, and software interpolates intermediate frames.

● Motion Capture: Recording real-world movements for realistic animations.

Example:

#include <unistd.h> // for usleep

void displayAnimation() {
    for (int frame = 0; frame < 60; ++frame) {
        glClear(GL_COLOR_BUFFER_BIT);
        float angle = frame * 6.0; // Rotate by 6 degrees per frame

        glPushMatrix();
        glRotatef(angle, 0.0, 0.0, 1.0);
        glColor3f(1.0, 0.0, 0.0);
        glBegin(GL_TRIANGLES);
        glVertex2f(-0.5, -0.5);
        glVertex2f(0.5, -0.5);
        glVertex2f(0.0, 0.5);
        glEnd();
        glPopMatrix();

        glutSwapBuffers();
        usleep(100000); // 100 ms delay, roughly 10 FPS
    }
}

Real-Life Applications and Discussion

1. CAD (Computer-Aided Design)

● Examples:

○ Designing industrial machinery.

○ Creating architectural blueprints.

2. Gaming

● Collision detection using proximity tests.

● Real-time animation of characters using Bézier curves.

3. Film and Media

● Smooth camera paths for filming using splines.

● Generating realistic models of characters using surfaces.

4. Robotics

● Path planning using polynomial curves.

● Obstacle avoidance with proximity calculations.

5. Virtual Reality

● Designing immersive environments with surfaces and animations.

Conclusion

Geometric modeling is the backbone of modern technology, enabling creativity and precision
across industries. Understanding its mathematical and computational principles provides the
tools to innovate in areas like CAD, gaming, animation, and robotics. Encourage students to
experiment with these concepts using OpenGL or other graphical libraries to deepen their
understanding.

● Geometric modeling bridges the gap between mathematical theory and real-world
applications.

● Understanding these techniques is crucial for fields like animation, CAD, and virtual
reality.

● Encourage further exploration of advanced modeling techniques like NURBS and real-time rendering.
