Graphix
Graphix
1.1 Background
The field of computer graphics has seen remarkable advancements, enabling the creation of
immersive 3D environments and realistic visualizations. The 3DEngine project exemplifies this
progress by implementing a comprehensive 3D rendering engine using modern OpenGL. This engine
demonstrates fundamental concepts in computer graphics, including 3D transformations, camera
manipulation, lighting, shader programming, and various rendering modes. The project leverages a
combination of technologies, including OpenGL for rendering, custom implementations for core
functionalities, and industry-standard libraries for enhanced capabilities.
1.2 Motivation
The primary motivation behind the 3DEngine project is to gain an in-depth understanding of the core
principles underpinning modern 3D graphics programming. By constructing a rendering engine from
the ground up, developers can explore the intricacies of 3D visualization, including:
1.3 Objectives
The key objectives of the 3DEngine project are:
• To implement a flexible shader system for managing vertex and fragment shaders
• To create a robust 3D rendering pipeline using modern OpenGL techniques
• To develop efficient structures for representing 3D models, including vertices and textures
• To incorporate real-time lighting and shading effects with customizable parameters
• To implement multiple rendering modes, including solid and wireframe representations
• To optimize performance for smooth real-time rendering of complex 3D models
• To provide a platform for experimenting with different 3D models, textures, and shaders
• To implement efficient texture loading and management systems
1
• Optimizing performance to handle large scenes with multiple objects and effects
• Implementing robust error handling for shader compilation and asset loading
• Balancing between visual quality and performance for real-time rendering
• Handling diverse 3D model formats and texture types efficiently
• A custom Shader class for compiling, linking, and managing GLSL shaders
• Structures for representing vertices and textures, supporting various attribute types
• A texture loading system using the stb_image library for handling multiple image formats
• Support for loading and rendering complex textured 3D models
• Efficient management of OpenGL buffer objects (VAO, VBO, EBO) for optimal rendering
• Implementation of uniform setters in the Shader class for easy manipulation of shader
parameters
• Error handling and logging for shader compilation and asset loading processes
• A flexible system for representing and managing different texture types (e.g., diffuse, specular)
While not aiming to compete with professional game engines, 3DEngine serves as a comprehensive
educational tool for understanding the fundamentals of 3D graphics programming. It provides a solid
foundation for further exploration into advanced topics such as complex shading techniques, physics
simulations, and advanced rendering algorithms.
2 Literature Review
2
2.4 3D Model Representation and Rendering
The project's approach to 3D model representation, using custom Mesh and Model classes, reflects
current practices in computer graphics. The integration of the Assimp library for model loading
demonstrates an understanding of the complexities involved in handling various 3D file formats and
the importance of standardized approaches to model importing.
3
3 THEORETICAL BACKGROUND
3.2 3D Transformations
3.2.1 Translation
Translation involves moving an object from one position to another in 3D space. It's represented by
adding a translation vector to each vertex of the object.
4
3.2.2 Rotation
Rotation involves rotating an object around an axis in 3D space. It is typically represented using
rotation matrices derived from trigonometric functions.
Rotation matrices can be defined for rotations around the X, Y, and Z axes:
Rotation around X-axis:
5
Rotation around Z-axis:
3.2.3 Scaling
Scaling involves changing the size of an object by multiplying its vertices by a scaling factor.
6
Mathematically, scaling can be expressed as:
7
Fig: example of combined translation, scaling and rotation
The view transformation positions and orients the camera in the scene.
• Flat Shading: Applies a single lighting calculation to an entire polygon, resulting in a faceted
appearance.
• Gouraud Shading: Calculates lighting at each vertex and interpolates the results across the
polygon, resulting in smoother shading.
9
• Phong Shading: Interpolates the surface normal across the polygon and calculates lighting at
each pixel, resulting in the most accurate and realistic shading.
10
4. METHODOLOGY
Advantages of RAD
1. Faster Development Time: The iterative nature of RAD and the use of prototypes allow for
quicker development cycles compared to traditional approaches like the Waterfall model. This
leads to faster delivery of working software.
2. User-Centered Development: RAD involves users directly in the development process,
ensuring the final product aligns with user expectations. Continuous feedback helps in refining
features and making adjustments based on real-world usage.
3. Flexibility to Changes: RAD’s iterative process makes it easier to accommodate changes in
requirements, even late in the development process. New features or modifications can be
introduced without disrupting the entire development cycle.
4. Lower Costs: By delivering functional prototypes early in the development process, RAD
reduces the need for rework. Additionally, the use of pre-built components and rapid iteration
can lower overall development costs.
5. Improved Quality: Continuous testing, user feedback, and refinements lead to a better-quality
product. Issues are often identified early, and the software can be improved incrementally.
Disadvantages of RAD
12
1. Limited Scalability: RAD is best suited for small to medium-sized projects. Large and complex
systems may not benefit from the RAD approach, as the time and resources required to create
numerous prototypes or manage multiple iterations can become unwieldy.
2. Less Focus on Documentation: RAD often sacrifices comprehensive documentation in favor
of faster development. For teams that need detailed specifications or rely on documentation for
long-term maintenance, this can pose challenges.
3. Requires Highly Skilled Developers: Since RAD relies on rapid iteration and prototyping, it
requires developers to be highly skilled and experienced. They need to be able to quickly build
working software, troubleshoot problems, and handle changes efficiently.
4. User Availability: RAD demands constant user involvement and feedback. If users are not
available or are unable to provide timely feedback, it can delay the development process or
lead to misalignment with user needs.
5. Limited Tooling Support: RAD relies heavily on prototyping tools, and the lack of suitable
tools for specific use cases can slow down development.
Conclusion
Rapid Application Development is a highly effective methodology for delivering software applications
quickly and efficiently. It thrives in environments where user feedback is vital, and the development
cycle needs to be short. RAD may not be suitable for every project, especially large or highly complex
systems, but for many applications, especially prototypes and smaller-scale systems, it offers an
excellent solution for reducing development time and improving customer satisfaction through
continuous user involvement.
A 3D graphics renderer is a system responsible for converting 3D models into visual representations
displayed on a screen. It operates through a series of interconnected modules that work together to
process user input, render graphics, and optimize performance.
The process begins with the User Input Module, which captures input from devices like a keyboard
and mouse. This allows the user to interact with the scene, manipulating the camera and objects in the
3D space. The Application Core manages the main loop, initializing system components, processing
inputs, and updating the scene objects and camera based on user interactions.
The heart of the renderer lies in the Rendering Pipeline. Initially, Model Loading occurs, where 3D
models, often in formats like .obj or .fbx, are parsed and converted into a format suitable for
13
processing. Vertex Processing applies transformations to these models, such as translation, rotation,
and scaling, to position them correctly in the 3D scene. After this, Rasterization takes place,
converting the 3D data into 2D pixel data for display. During Shading, lighting models (e.g., Phong
shading) are applied, and textures are mapped to the surfaces of the models to enhance realism.
Finally, the rendered scene is stored in the Frame Buffer, which holds the final pixel data before it’s
sent to the display. The process is optimized with Clipping & Culling to remove unnecessary
computations and improve rendering performance by excluding off-screen objects or invisible
surfaces.
Below is the high-level system block diagram illustrating the structure of the 3D graphics renderer:
14
15
5. SYSTEM DESIGN
17
5.4 Activity Diagram
18
5.5 Class Diagram for System
19
The class diagram represents the relationships and interactions between major system components.
Key classes include:
20
5.7 Sequence Diagram
Description: The sequence diagram outlines the interaction flow between system components during
key operations. It illustrates sequences such as:
21
5.9 Data Flow Diagram
Description: The data flow diagram represents the movement of data through the system. It includes
processes such as:
22
5.10 Deployment Diagram
Description: This diagram illustrates the physical deployment of the system, including:
23
FOLDER STRUCTURE
24
Main.cpp
Algorithm
1. Initialize GLFW:
a. Initialize GLFW with specified OpenGL version and profile.
b. If initialization fails, output an error message and terminate.
2. Create GLFW Window:
a. Create a GLFW window with the primary monitor's resolution.
b. If window creation fails, output an error message and terminate.
3. Initialize GLAD:
a. Load OpenGL functions using GLAD.
b. If GLAD initialization fails, output an error message and terminate.
4. Initialize ImGui:
a. Initialize ImGui with GLFW and OpenGL bindings.
b. Set ImGui style and colors.
5. Create Grid:
a. Generate vertex data for a grid.
b. Create and configure a Vertex Array Object (VAO) and Vertex Buffer
Object (VBO) for the grid.
6. Set OpenGL States:
a. Enable depth testing and multisampling.
b. Set the viewport to match the window size.
7. Load Shader and Model:
a. Load and compile shaders from specified file paths.
b. Load the initial model from a specified file path.
c. If the model fails to load, output an error message and terminate.
8. Main Rendering Loop:
a. While the window should remain open:
i. Poll for and process input events.
ii. Render ImGui frames.
iii. Display a control panel using ImGui for various settings:
1. Use perspective projection or orthographic projection.
2. Load a new model from a specified path.
3. Adjust projection settings (FOV, near/far planes,
orthographic size).
4. Toggle grid visibility and adjust background color.
5. Adjust model transformations (position, rotation, scale,
flip).
6. Adjust lighting position.
7. Adjust camera orbit parameters (pitch, yaw, distance).
8. Toggle wireframe mode.
iv. Render the 3D view:
1. Set the viewport and clear the color and depth buffers.
25
2. Set the polygon mode based on wireframe mode.
3. Use the shader program.
4. Update the camera vectors.
5. Calculate the projection and view matrices.
6. Apply model transformations (position, rotation, scale,
flip).
7. If the grid is enabled, draw the grid.
8. Draw the model.
v. Render ImGui elements.
vi. Swap buffers to display the rendered frame.
9. Cleanup:
a. Shut down ImGui.
b. Terminate GLFW.
10. End Program:
a. Return from the main function.
FLOWCHART
26
SOURCE CODE
#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <iostream>
#include <imgui.h>
#include <imgui_impl_glfw.h>
#include <imgui_impl_opengl3.h>
#include "model.h"
#include "shader.h"
#include "imgui_impl.h"
#include "camera.h"
#include "input.h"
#include "globals.h"
// Grid data
unsigned int gridVAO, gridVBO;
const float gridSize = 10.0f;
const int gridDivisions = 40;
void createGrid() {
std::vector<float> vertices;
const float step = gridSize * 2 / gridDivisions;
glGenVertexArrays(1, &gridVAO);
glGenBuffers(1, &gridVBO);
27
glBindVertexArray(gridVAO);
glBindBuffer(GL_ARRAY_BUFFER, gridVBO);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(float),
vertices.data(), GL_STATIC_DRAW);
glBindVertexArray(0);
}
int main() {
if (!glfwInit()) {
std::cerr << "Failed to initialize GLFW" << std::endl;
return -1;
}
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_SAMPLES, 4);
if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress)) {
std::cerr << "Failed to initialize GLAD" << std::endl;
return -1;
}
InitImGui(window);
createGrid();
Shader shader("D:/3DEngine/shaders/vertex.glsl",
"D:/3DEngine/shaders/fragment.glsl");
Model model(modelPath.c_str());
if (model.isEmpty()) {
std::cerr << "Failed to load initial model" << std::endl;
return -1;
}
while (!glfwWindowShouldClose(window)) {
glfwPollEvents();
processInput(window);
RenderImGui();
ImGui::SetNextItemOpen(true, ImGuiCond_Once);
if (ImGui::CollapsingHeader("Projection Settings")) {
if (usePerspective)
ImGui::SliderFloat("FOV", &camera.fov, 1.0f, 120.0f);
else
ImGui::SliderFloat("Ortho Size", &orthoSize, 1.0f, 100.0f);
ImGui::SliderFloat("Near Plane", &nearPlane, 0.1f, 10.0f);
29
ImGui::SliderFloat("Far Plane", &farPlane, 10.0f, 1000.0f);
}
if (ImGui::CollapsingHeader("Appearance")) {
ImGui::Checkbox("Show Grid", &showGrid);
ImGui::SliderFloat("R", &backgroundColor.r, 0.0f, 1.0f);
ImGui::SliderFloat("G", &backgroundColor.g, 0.0f, 1.0f);
ImGui::SliderFloat("B", &backgroundColor.b, 0.0f, 1.0f);
}
if (ImGui::CollapsingHeader("Model Transforms")) {
ImGui::Checkbox("Flip X", &flipX);
ImGui::SameLine();
ImGui::Checkbox("Flip Y", &flipY);
if (ImGui::CollapsingHeader("Lighting")) {
ImGui::SliderFloat3("Light Position", &lightPos.x, -10.0f, 10.0f);
}
if (ImGui::CollapsingHeader("Camera Orbit")) {
ImGui::SliderFloat("Vertical Orbit", &camera.pitch, -180.0f,
180.0f);
ImGui::SliderFloat("Horizontal Orbit", &camera.yaw, -180.0f,
180.0f);
ImGui::SliderFloat("Distance", &camera.cameraDistance, 1.0f,
100.0f);
}
30
ImVec2 renderSize = ImGui::GetContentRegionAvail();
glViewport(0, 0, static_cast<GLsizei>(renderSize.x),
static_cast<GLsizei>(renderSize.y));
glClearColor(backgroundColor.r, backgroundColor.g, backgroundColor.b,
1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPolygonMode(GL_FRONT_AND_BACK, wireframeMode ? GL_LINE : GL_FILL);
shader.use();
camera.updateCameraVectors();
float aspect = renderSize.x / renderSize.y;
glm::mat4 projection = usePerspective ?
glm::perspective(glm::radians(camera.fov), aspect, nearPlane,
farPlane) :
glm::ortho(-orthoSize * aspect, orthoSize * aspect, -orthoSize,
orthoSize, nearPlane, farPlane);
glm::vec3 flipScale(
flipX ? -modelScale : modelScale,
flipY ? -modelScale : modelScale,
modelScale
);
model_matrix = glm::scale(model_matrix, flipScale);
glBindVertexArray(gridVAO);
glDrawArrays(GL_LINES, 0, gridDivisions * 4 + 4);
glBindVertexArray(0);
}
31
shader.setMat4("projection", projection);
shader.setMat4("view", view);
shader.setMat4("model", model_matrix);
shader.setVec3("viewPos", camera.cameraPos);
shader.setVec3("lightPos", lightPos);
model.draw(shader);
ImGui::End();
ImGui::Render();
ImGui_ImplOpenGL3_RenderDrawData(ImGui::GetDrawData());
glfwSwapBuffers(window);
}
ShutdownImGui();
glfwTerminate();
return 0;
}
Camera.cpp
Algorithm
1. Initialize Camera:
a. Set initial values for camera attributes:
i. yaw: Initial yaw angle.
ii. pitch: Initial pitch angle.
iii. lastX and lastY: Initial mouse coordinates.
iv. firstMouse: Flag indicating the first mouse movement.
v. cameraDistance: Distance from the camera to the target.
vi. fov: Field of view.
vii. cameraPos: Initial camera position.
viii. cameraFront: Initial camera front direction.
ix. cameraUp: Up vector for the camera.
x. cameraSpeed: Speed of camera movement.
2. Update Camera Vectors:
a. Calculate the direction vector based on yaw and pitch:
i. Use trigonometric functions to determine the x, y, and z components of the
direction vector.
b. Update the camera position (cameraPos) using the direction vector and
cameraDistance:
i. The position is calculated as the inverse of the direction vector multiplied by the
distance.
32
c. Normalize the direction vector to get the camera front (cameraFront).
3. Get View Matrix:
a. Calculate the view matrix using the glm::lookAt function:
i. The view matrix is determined by the camera position (cameraPos), the target
position (origin), and the up vector (cameraUp).
b. Return the calculated view matrix.
FLOWCHART
33
SOURCE CODE
#include "camera.h"
#include <glm/gtc/matrix_transform.hpp>
Camera camera;
Camera::Camera() :
yaw(-90.0f), pitch(0.0f), lastX(400.0f), lastY(300.0f),
firstMouse(true), cameraDistance(20.0f), fov(45.0f),
cameraPos(0.0f, 0.0f, 20.0f), cameraFront(0.0f, 0.0f, -1.0f),
cameraUp(0.0f, 1.0f, 0.0f), cameraSpeed(0.05f) {}
void Camera::updateCameraVectors() {
glm::vec3 direction;
direction.x = cos(glm::radians(yaw)) * cos(glm::radians(pitch));
direction.y = sin(glm::radians(pitch));
direction.z = sin(glm::radians(yaw)) * cos(glm::radians(pitch));
Globals.cpp
Global Variables
1. usePerspective:
a. Type: bool
b. Purpose: Determines whether to use perspective projection (true) or
orthographic projection (false) for rendering.
2. orthoSize:
a. Type: float
b. Purpose: Specifies the size of the orthographic view volume when
orthographic projection is used.
3. modelPosition:
a. Type: glm::vec3
b. Purpose: Stores the position of the model in 3D space. Initialized
to the origin (0.0f, 0.0f, 0.0f).
4. backgroundColor:
a. Type: glm::vec3
b. Purpose: Defines the background color of the rendering view.
Initialized to a dark gray color (0.2f, 0.3f, 0.3f).
34
5. modelRotation:
a. Type: glm::vec3
b. Purpose: Stores the rotation angles of the model around the x, y,
and z axes. Initialized to zero rotation.
6. modelScale:
a. Type: float
b. Purpose: Specifies the scale factor for the model. Initialized to
1.0f, meaning no scaling.
7. wireframeMode:
a. Type: bool
b. Purpose: Determines whether to render the model in wireframe mode
(true) or solid mode (false).
8. flipX and flipY:
a. Type: bool
b. Purpose: Controls whether to flip the model along the x-axis (flipX)
or y-axis (flipY).
9. showGrid:
a. Type: bool
b. Purpose: Determines whether to display a grid in the rendering view.
10. mouseInputEnabled:
a. Type: bool
b. Purpose: Indicates whether mouse input is enabled for controlling
the camera or other interactions.
11. modelPath:
a. Type: std::string
b. Purpose: Stores the file path of the initial model to be loaded.
Initialized to a specific path.
FLOWCHART
35
36
SOURCE CODE
#include "globals.h"
bool usePerspective = true;
float orthoSize = 5.0f;
glm::vec3 modelPosition(0.0f);
glm::vec3 backgroundColor(0.2f, 0.3f, 0.3f);
glm::vec3 modelRotation(0.0f);
float modelScale = 1.0f;
bool wireframeMode = false;
bool flipX = false; // Add this line
bool flipY = false;
bool showGrid = false;
bool mouseInputEnabled = false;
std::string modelPath = "D:/3DEngine/assests/infernus.glb"; // Initial path
Imgui_impl.cpp
Algorithm
1. Initialize ImGui:
a. Check Version: Ensure the ImGui version is compatible using
IMGUI_CHECKVERSION().
b. Create Context: Initialize the ImGui context using
ImGui::CreateContext().
c. Set Style and Colors:
i. Set the ImGui style to dark mode using
ImGui::StyleColorsDark().
ii. Customize the ImGui style properties:
1. Set WindowRounding to 5.0f for rounded window corners.
2. Set FrameRounding to 3.0f for rounded frame corners.
3. Set the window background color to a dark gray using
ImVec4(0.08f, 0.08f, 0.08f, 0.94f).
d. Initialize ImGui for OpenGL: Set up ImGui to work with OpenGL using
ImGui_ImplOpenGL3_Init with the specified GLSL version.
e. Initialize ImGui for GLFW: Set up ImGui to work with GLFW using
ImGui_ImplGlfw_InitForOpenGL.
2. Render ImGui Frame:
Start New OpenGL Frame: Begin a new OpenGL frame using
ImGui_ImplOpenGL3_NewFrame().
a. Start New GLFW Frame: Begin a new GLFW frame using
ImGui_ImplGlfw_NewFrame().
b. Start New ImGui Frame: Begin a new ImGui frame using
ImGui::NewFrame().
37
3. Shutdown ImGui:
a. Shutdown OpenGL ImGui: Clean up ImGui resources for OpenGL using
ImGui_ImplOpenGL3_Shutdown().
b. Shutdown GLFW ImGui: Clean up ImGui resources for GLFW using
ImGui_ImplGlfw_Shutdown().
c. Destroy ImGui Context: Destroy the ImGui context using
ImGui::DestroyContext().
FLOWCHART
38
Source code
#include "imgui_impl.h"
#include <imgui.h>
#include <imgui_impl_glfw.h>
#include <imgui_impl_opengl3.h>
Input.cpp
Algorithm
39
b. Purpose: Handle mouse movement events.
c. Steps:
i. Pass the mouse movement event to ImGui using
ImGui_ImplGlfw_CursorPosCallback.
3. Scroll Callback:
a. Function: scroll_callback
b. Purpose: Handle mouse scroll events.
c. Steps:
i. Pass the mouse scroll event to ImGui using
ImGui_ImplGlfw_ScrollCallback.
4. Key Callback:
a. Function: key_callback
b. Purpose: Handle keyboard events.
c. Steps:
i. Pass the keyboard event to ImGui using
ImGui_ImplGlfw_KeyCallback.
ii. If ImGui is not capturing the keyboard, process additional key
events:
1. Toggle wireframeMode when the 'F' key is pressed.
2. Reset model position, rotation, scale, and camera
settings when the 'R' key is pressed.
5. Process Input:
a. Function: processInput
b. Purpose: Handle continuous keyboard input for controlling the model
and camera.
c. Steps:
i. Close the window if the 'Escape' key is pressed.
ii. Determine the movement speed, doubling it if the 'Left Shift'
key is pressed.
iii. Adjust the model position based on 'W', 'A', 'S', 'D', 'Q',
and 'E' key presses:
1. 'W' and 'S' keys control forward and backward movement.
2. 'A' and 'D' keys control left and right movement.
3. 'Q' and 'E' keys control upward and downward movement.
iv. Adjust the model scale based on 'Z' and 'X' key presses:
1. 'Z' key decreases the scale, ensuring it does not go
below 0.1.
2. 'X' key increases the scale
40
FLOW-CHART
Source code
#include "input.h"
#include "camera.h"
#include "imgui_impl.h"
#include <imgui_impl_glfw.h>
#include <imgui_impl_opengl3.h>
#include <glm/gtc/matrix_transform.hpp>
#include "globals.h"
41
void mouse_callback(GLFWwindow* window, double xpos, double ypos) {
ImGui_ImplGlfw_CursorPosCallback(window, xpos, ypos);
}
void key_callback(GLFWwindow* window, int key, int scancode, int action, int
mods) {
ImGui_ImplGlfw_KeyCallback(window, key, scancode, action, mods);
if (ImGui::GetIO().WantCaptureKeyboard)
return;
Algorithm
1. Initialize Mesh:
a. Constructor: Mesh(const std::vector<Vertex>& vertices, const
std::vector<unsigned int>& indices, const std::vector<Texture>&
textures)
b. Purpose: Initialize a mesh with vertices, indices, and textures.
c. Steps:
i. Store the provided vertices, indices, and textures in member
variables.
ii. Call setupMesh() to configure the mesh for rendering.
2. Setup Mesh:
a. Function: setupMesh
b. Purpose: Configure OpenGL buffers and vertex attributes for the
mesh.
c. Steps:
i. Generate a Vertex Array Object (VAO), a Vertex Buffer Object
(VBO), and an Element Buffer Object (EBO).
ii. Bind the VAO.
iii. Bind the VBO and upload vertex data to the GPU.
iv. Bind the EBO and upload index data to the GPU.
v. Configure vertex attribute pointers for positions, normals,
and texture coordinates.
vi. Unbind the VAO.
3. Draw Mesh:
a. Function: draw(Shader& shader)
b. Purpose: Render the mesh using the provided shader.
c. Steps:
i. Initialize counters for diffuse and specular textures.
ii. For each texture in the mesh:
1. Activate the corresponding texture unit.
2. Determine the texture type (diffuse or specular) and set
the corresponding shader uniform.
3. Bind the texture to the active texture unit.
iii. Bind the VAO to prepare for drawing.
iv. Draw the mesh using glDrawElements with the index data.
v. Unbind the VAO.
vi. Reset the active texture unit to GL_TEXTURE0.
43
FLOWCHART
Source code
#include "mesh.h"
void Mesh::setupMesh() {
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &VBO);
glGenBuffers(1, &EBO);
44
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex),
vertices.data(), GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned
int), indices.data(), GL_STATIC_DRAW);
// Vertex positions
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)0);
// Vertex normals
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
(void*)offsetof(Vertex, normal));
// Texture coordinates
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
(void*)offsetof(Vertex, texCoords));
glBindVertexArray(0);
}
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES, static_cast<unsigned int>(indices.size()),
GL_UNSIGNED_INT, 0);
45
glBindVertexArray(0);
glActiveTexture(GL_TEXTURE0);
}
Model.cpp
Algorithm
1. Initialize Model:
a. Constructor: Model(const std::string& path)
b. Purpose: Initialize a model by loading it from the specified file
path.
c. Steps:
i. Call loadModel(path) to load the model data.
2. Load Model:
a. Function: loadModel(const std::string& path)
b. Purpose: Load a 3D model from a file using Assimp.
c. Steps:
i. Use Assimp to import the model from the file path with
specified processing flags (triangulate, flip UVs, calculate
tangent space, generate normals).
ii. Check for errors in the import process.
iii. Store the scene pointer globally for accessing embedded
textures.
iv. Extract the directory path from the model file path.
v. Process the root node of the scene to extract mesh data.
3. Process Node:
a. Function: processNode(aiNode* node, const aiScene* scene)
b. Purpose: Recursively process nodes in the scene to extract mesh
data.
c. Steps:
i. For each mesh in the node, process the mesh and add it to the
model's mesh list.
ii. Recursively process each child node.
4. Process Mesh:
a. Function: processMesh(aiMesh* mesh, const aiScene* scene)
b. Purpose: Convert Assimp mesh data to a custom Mesh object.
c. Steps:
i. Extract vertex data (position, normal, texture coordinates)
from the Assimp mesh.
ii. Extract index data from the mesh faces.
iii. Load material textures associated with the mesh.
iv. Create and return a Mesh object with the extracted data.
5. Load Material Textures:
46
a. Function: loadMaterialTextures(aiMaterial* mat, aiTextureType type,
const std::string& typeName)
b. Purpose: Load textures from the material data.
c. Steps:
i. For each texture of the specified type in the material:
1. Check if the texture is already loaded to avoid
duplicates.
2. If the texture is embedded (name starts with '*'), load
it from the scene's embedded textures.
3. If the texture is external, load it from the file system
using TextureFromFile.
4. Store the loaded texture in the textures vector and the
loadedTextures cache.
ii. Return the vector of loaded textures.
6. Draw Model:
a. Function: draw(Shader& shader)
b. Purpose: Render the model using the provided shader.
c. Steps:
i. For each mesh in the model, call the mesh's draw function with
the shader.
7. Check if Model is Empty:
a. Function: isEmpty()
b. Purpose: Check if the model contains any meshes.
c. Steps:
i. Return true if the meshes vector is empty, indicating no
meshes were loaded.
ii. Return false otherwise.
47
Flowchart
48
Source code
#include "model.h"
#include <iostream>
#include <stb_image.h>
#include <cstdlib>
#include <cstring>
gScene = scene;
directory = path.substr(0, path.find_last_of('/'));
processNode(scene->mRootNode, scene);
}
49
Mesh Model::processMesh(aiMesh* mesh, const aiScene* scene) {
std::vector<Vertex> vertices;
std::vector<unsigned int> indices;
std::vector<Texture> textures;
// Process vertices
for (unsigned int i = 0; i < mesh->mNumVertices; i++) {
Vertex vertex;
vertex.position = glm::vec3(mesh->mVertices[i].x, mesh->mVertices[i].y,
mesh->mVertices[i].z);
if (mesh->HasNormals())
vertex.normal = glm::vec3(mesh->mNormals[i].x, mesh->mNormals[i].y,
mesh->mNormals[i].z);
if (mesh->mTextureCoords[0])
vertex.texCoords = glm::vec2(mesh->mTextureCoords[0][i].x, mesh-
>mTextureCoords[0][i].y);
else
vertex.texCoords = glm::vec2(0.0f, 0.0f);
vertices.push_back(vertex);
}
// Process indices
for (unsigned int i = 0; i < mesh->mNumFaces; i++) {
aiFace face = mesh->mFaces[i];
for (unsigned int j = 0; j < face.mNumIndices; j++)
indices.push_back(face.mIndices[j]);
}
Texture texture;
// Handle embedded textures (names starting with '*')
if (str.C_Str()[0] == '*') {
int texIndex = std::atoi(str.C_Str() + 1);
if (gScene && texIndex < static_cast<int>(gScene->mNumTextures)) {
const aiTexture* aiTex = gScene->mTextures[texIndex];
unsigned int textureID;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
if (aiTex->mHeight == 0) {
int width, height, nrComponents;
unsigned char* data = stbi_load_from_memory(
reinterpret_cast<unsigned char*>(aiTex->pcData),
aiTex->mWidth,
&width, &height, &nrComponents, 0
);
if (data) {
GLenum format = (nrComponents == 1) ? GL_RED :
(nrComponents == 3) ? GL_RGB : GL_RGBA;
glTexImage2D(GL_TEXTURE_2D, 0, format, width, height,
0, format, GL_UNSIGNED_BYTE, data);
stbi_image_free(data);
} else {
std::cerr << "Failed to load embedded compressed
texture" << std::endl;
}
} else {
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, aiTex->mWidth,
aiTex->mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, aiTex->pcData);
}
glGenerateMipmap(GL_TEXTURE_2D);
51
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,
GL_LINEAR);
texture.id = textureID;
texture.type = typeName;
texture.path = std::string(str.C_Str());
textures.push_back(texture);
loadedTextures.push_back(texture);
continue;
} else {
std::cerr << "Embedded texture index out of bounds: " <<
str.C_Str() << std::endl;
}
}
52
FINAL WORKING FLOWCHART
Shader.cpp
Algorithm
1. Initialize Shader:
a. Constructor: Shader(const std::string& vertexPath, const
std::string& fragmentPath)
b. Purpose: Initialize a shader program by compiling vertex and
fragment shaders from file paths.
c. Steps:
i. Read the vertex and fragment shader code from the specified
file paths.
ii. If reading the files fails, output an error message.
iii. Compile the vertex shader:
1. Create a vertex shader object.
2. Set the shader source code.
3. Compile the shader and check for compilation errors.
iv. Compile the fragment shader:
1. Create a fragment shader object.
2. Set the shader source code.
3. Compile the shader and check for compilation errors.
53
v. Link the shaders into a shader program:
1. Create a shader program object.
2. Attach the compiled vertex and fragment shaders to the
program.
3. Link the program and check for linking errors.
vi. Clean up by deleting the shader objects.
2. Use Shader:
a. Function: use()
b. Purpose: Activate the shader program for rendering.
c. Steps:
i. Call glUseProgram with the shader program ID.
3. Set Uniforms:
a. Functions: setInt, setFloat, setVec3, setMat4
b. Purpose: Set uniform variables in the shader program.
c. Steps:
i. For each uniform type (int, float, vec3, mat4):
1. Get the location of the uniform variable in the shader
program.
2. Set the uniform value using the appropriate OpenGL
function (glUniform1i, glUniform1f, glUniform3fv,
glUniformMatrix4fv).
4. Check Compile Errors:
a. Function: checkCompileErrors(unsigned int shader, const std::string&
type)
b. Purpose: Check for shader compilation or program linking errors.
c. Steps:
i. If the type is not "PROGRAM", check for shader compilation
errors:
1. Get the compilation status of the shader.
2. If compilation failed, retrieve and output the error log.
ii. If the type is "PROGRAM", check for program linking errors:
1. Get the linking status of the program.
2. If linking failed, retrieve and output the error log.
54
Flowchart
Source code
#include "shader.h"
#include <fstream>
#include <sstream>
#include <iostream>
#include <glm/gtc/type_ptr.hpp>
// Clean up
glDeleteShader(vertex);
glDeleteShader(fragment);
}
void Shader::use() {
glUseProgram(ID);
}
Texture.cpp
Algorithm
Flowchart
58
59
source code
#define STB_IMAGE_IMPLEMENTATION
#include <stb_image.h>
#include "structures.h"
#include <glad/glad.h>
#include <iostream>
#include <string>
if (data) {
GLenum format = (nrComponents == 1) ? GL_RED :
(nrComponents == 3) ? GL_RGB : GL_RGBA;
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, format,
GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D);
stbi_image_free(data);
std::cout << "Texture loaded successfully: " << filename << std::endl;
} else {
std::cerr << "Failed to load texture: " << filename << std::endl;
std::cerr << "STB Error: " << stbi_failure_reason() << std::endl;
60
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0, GL_RGBA,
GL_UNSIGNED_BYTE, whitePixel);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
return textureID;
}
Shaders/fragment.glsl
// Specular
float specularStrength = 0.5;
vec3 viewDir = normalize(viewPos - FragPos);
vec3 reflectDir = reflect(-lightDir, norm);
float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32);
vec3 specular = specularStrength * spec * vec3(1.0);
// Texture
vec4 texColor = texture(texture_diffuse1, TexCoords);
// Combine
vec3 result = (ambient + diffuse + specular) * texColor.rgb;
FragColor = vec4(result, texColor.a);
61
Shaders/vertex.glsl
CMakeLists.txt
cmake_minimum_required(VERSION 3.31.2)
project(ComputerGraphics)
# Platform-specific configurations
if(WIN32)
message("Configuring for Windows")
target_link_libraries(ComputerGraphics
glfw3
63
opengl32
assimp-vc142-mt
imgui
)
elseif(UNIX)
message("Configuring for Linux")
target_link_libraries(ComputerGraphics
${OPENGL_LIBRARIES}
glfw
dl
X11
Xxf86vm
Xrandr
pthread
Xi
${ASSIMP_LIBRARIES}
imgui
)
endif()
64
6. TOOLS AND TECHNOLOGIES
The Engine3D repository primarily focuses on C-based 3D rendering, but modern 3D graphics engines
often integrate additional technologies for automation, data processing, and web-based interfaces.
• Preprocessing 3D Models → Convert, optimize, or clean .obj, .fbx, or .glTF files before feeding
them into the C-based engine.
• Rendering Automation → Automate tasks like batch rendering multiple camera angles.
• Shader Prototyping → Test and develop shader algorithms in Python before implementing
them in C.
• Web-Based 3D Model Upload System → Users can upload .obj or .glTF models through a web
interface.
• Rendering Job Manager → A web dashboard for submitting rendering tasks, monitoring
progress, and retrieving rendered images.
• Cloud-Based Rendering → Distribute rendering jobs across multiple servers for high-
performance rendering.
• User Authentication & Asset Management → Store and manage user-generated content (3D
models, textures, shaders).
• Matrix Transformations (Rotation, Scaling, Translation) → NumPy can handle vector and
matrix operations efficiently.
• Geometric Computations → Compute distances, angles, and collision detection for objects.
• Performance Logging → Store and analyze FPS (frames per second), render time, and resource
usage.
• Scene Data Management → Handle large scenes with multiple objects and materials in an
organized way.
• Shader & Texture Analysis → Analyze which shaders and textures are used most frequently for
optimization.
• Scene Configuration UI → Let users change rendering settings (e.g., lighting, textures,
materials).
• Dashboard for Rendering Management → Control and visualize rendering jobs through an
interactive web interface.
While Engine3D is a C-based 3D graphics renderer, integrating Python, Django, NumPy, Pandas, and
HTML/CSS can extend its functionality for web-based visualization, automation, and data
management.
• Django → Provides a web-based interface for model uploads and rendering job management.
• HTML/CSS → Builds a user-friendly front-end for viewing and interacting with 3D models.
66
7. IMPLEMENTATION
The implementation of the '3D Renderer' project involves integrating various components of a
3D rendering engine, including model loading, shader management, lighting, and user
interaction. This section provides an overview of the key features implemented in the project,
accompanied by screenshots to illustrate their functionality.
6.1 User Interface (Control Panel)
The project includes a Control Panel that allows users to interact with the 3D scene in real-
time. The panel is built using ImGui (Immediate Mode GUI) and provides intuitive sliders,
checkboxes, and input fields for controlling various parameters.
Features:
• Projection Settings: Users can toggle between perspective and orthographic projections and
adjust parameters like Field of View (FOV), near plane, and far plane.
• Appearance Settings: Allows customization of background color using RGB sliders and toggling
grid visibility.
• Model Transformations: Enables real-time manipulation of the loaded 3D model's position,
rotation, and scale.
• Lighting Controls: Adjusts light position and intensity to dynamically illuminate the scene.
• Camera Orbit: Provides controls for orbiting the camera around the model using vertical and
horizontal sliders.
• Wireframe Mode: A toggle to switch between solid rendering and wireframe mode for
visualizing the model's geometry.
67
Key Features:
• Path Input: Users can specify the file path of the 3D model to load.
• Texture Mapping: The renderer applies textures to models for realistic visual effects.
• Flip Axes: Options to flip the X or Y axes for correcting orientation issues during model loading.
6.3 Lighting System
The lighting system is implemented using OpenGL shaders, enabling dynamic lighting effects.
Users can adjust light position and intensity through the Control Panel to observe how it
interacts with the 3D model in real time.
Lighting Techniques:
• Diffuse Lighting: Simulates light scattering on rough surfaces.
• Specular Highlights: Creates realistic reflections on shiny surfaces.
• Ambient Lighting: Provides overall illumination to ensure no part of the scene is completely
dark.
68
69
Fig: Two different images of the model with different light positions
6.4 Camera System
70
The camera system allows users to navigate through the scene with ease. It supports both
perspective and orthographic projections and includes orbit controls for exploring models
from different angles.
Features:
• Orbit Controls: Sliders for vertical and horizontal orbiting.
• Zoom Functionality: Adjusts camera distance from the object.
• Projection Modes: Toggle between perspective (realistic depth) and orthographic (parallel
projection).
6.5 Wireframe Mode
Wireframe mode provides a structural view of the 3D model by rendering only its edges. This
feature is particularly useful for debugging geometry or understanding a model's topology.
Features:
• Toggle wireframe mode on/off via a checkbox in the Control Panel.
• Visualizes underlying geometry without applying textures or shading.
71
Summary
The implementation of '3D Renderer' combines modern OpenGL techniques with an intuitive
user interface to deliver an interactive experience for exploring 3D graphics concepts. Each
feature has been carefully designed to provide flexibility while maintaining performance,
making this project both educational and functional.
8. CONCLUSION
• Overview of the 3D Engine
The 3D engine in this repository is designed to provide developers with a robust, flexible
platform for building 3D applications. Whether the goal is to develop interactive games,
simulations, or visualizations, this engine offers a solid foundation for a variety of projects.
• Target Audience: The engine is well-suited for indie developers, hobbyists, and even small to
medium-sized studios. Its flexibility and modularity make it an ideal choice for those looking to
build custom solutions without relying on large commercial engines.
The engine strives to provide an accessible but powerful toolkit for creating real-time 3D
applications. With an open-source structure, it encourages customization and community-
driven development.
Technical Implementation
The technical underpinnings of the engine are built to ensure both performance and flexibility.
It is structured in a way that facilitates ease of use without sacrificing power.
• Architecture:
o The engine uses a component-based architecture, allowing for better separation of
concerns and scalability.
o Different systems (rendering, physics, input) are loosely coupled, making it easier to
update or swap components.
• Code Quality:
o The code is clean and well-documented, making it accessible for developers of varying
skill levels.
o The engine adheres to standard design principles, ensuring maintainability and ease of
integration with third-party libraries.
• Platform Support:
o The engine is cross-platform, supporting major operating systems like Windows, Linux,
and macOS.
o However, there may be room for improvement in mobile platform support, especially
for high-performance mobile gaming.
74
Looking ahead, there are several areas where this engine could evolve to meet the growing
demands of the 3D development community:
• Performance Enhancements: Optimizing memory management and performance for larger
scenes, complex simulations, and mobile devices could enhance its appeal.
• Advanced Rendering Features: Integrating ray tracing or other advanced rendering techniques
would significantly improve the visual fidelity of the engine, making it more competitive with
established players.
• Community Contributions: As an open-source project, the community can continue to drive the
engine’s evolution. Increased contributions can lead to a faster pace of development and
broader support for emerging technologies.
Conclusion
In conclusion, this 3D engine offers a robust, flexible platform for a variety of 3D development
needs. Its core features, modularity, and extensibility make it an excellent choice for
developers looking for a customizable solution. While it may not yet compete with the largest
engines in terms of advanced features or scalability, its open-source nature, solid architecture,
and ease of use make it a great option for indie developers, hobbyists, and those working on
smaller projects. With continued development, this engine has the potential to become a
powerful tool in the 3D engine landscape.
9. REFERENCES
➢ Hearn, D., & Baker, M. P. (2011). Computer graphics: C version (2nd ed.). Prentice Hall.
This book is a classic resource for understanding the fundamentals of computer graphics,
including 3D rendering techniques and algorithms.
➢ Foley, J. D., van Dam, A., Feiner, S. K., & Hughes, J. F. (1996). Computer graphics:
Principles and practice (2nd ed.). Addison-Wesley.
A comprehensive reference for computer graphics, providing detailed information on the
mathematics and algorithms used in 3D rendering.
➢ Shirley, P., & Marschner, S. (2016). Fundamentals of computer graphics (4th ed.). CRC
Press.
This textbook covers the core concepts of 3D rendering, shading, and ray tracing, making it
ideal for an introduction to 3D graphics rendering.
➢ Haines, E., & Cohen, M. (1993). Real-time rendering of complex scenes. Proceedings of
the ACM SIGGRAPH Symposium on Interactive 3D Graphics, 123-134.
This paper discusses real-time rendering techniques, focusing on how to render complex 3D
scenes efficiently, an essential aspect of modern graphics renderers.
➢ Akenine-Möller, T., Haines, E., & Hoffman, N. (2018). Real-time rendering (4th ed.). CRC
Press.
This is a well-regarded textbook that covers real-time rendering techniques, which are highly
relevant for modern 3D graphics engines and renderers.
75
➢ Seitz, S. M., & Dyer, C. R. (1991). Photorealistic rendering techniques. ACM SIGGRAPH
Course Notes.
This paper addresses photorealistic rendering techniques, including ray tracing and global
illumination.
➢ Schaufler, G. (2007). Real-time 3D rendering techniques. In GPU Gems 3 (pp. 247-264).
Addison-Wesley Professional.
This chapter covers techniques that are used in modern 3D graphics rendering, including
shaders and the use of GPUs for real-time rendering.
76