Tutorial 9: WebGPU Encoding and Compression

This tutorial covers the setup and usage of WebGPU for creating interactive 3D graphics in a webpage, including rendering multiple 3D objects and handling user interactions. It provides step-by-step instructions for enabling WebGPU in Chrome, along with complete code examples for generating various geometries and implementing camera transformations. Additionally, it discusses performance statistics related to hidden surfaces and data savings during rendering.


TUTORIAL 9 - Encoding and Compressing WebGPU Interactive 3D Graphics

What you will learn:

• Basics of setting up WebGPU in a webpage.
• Creating and rendering multiple 3D objects.
• Handling user interaction (mouse drag and keyboard).
• Applying basic camera/view transformations.
• Simulating hidden-surface statistics and data savings.
• Periodic UI updates reflecting scene changes.

Make sure your Chrome browser supports WebGPU:

• It must be version 113 or later.

Enable it manually:

1. Go to chrome://flags
2. Search for WebGPU
3. Set it to Enabled
4. Restart Chrome

You may also need to enable:

• Unsafe WebGPU
• Dawn backend

On Linux, WebGPU may still be behind a flag or disabled, depending on your GPU and drivers.

Then open this site in your Chrome browser:

https://webgpureport.org

It will tell you:

• Whether WebGPU is supported.
• Your GPU adapter name.
• Whether you need to enable any flags.
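Once WebGPU is enabled, you can also sanity-check support from the DevTools console. The sketch below wraps the check in a tiny helper (the `supportsWebGPU` name is ours, not a standard API); it only tests for the presence of the `navigator.gpu` entry point:

```javascript
// Hypothetical helper: returns true when the given navigator-like object
// exposes the WebGPU entry point. Taking the object as a parameter keeps
// the check easy to exercise outside a browser.
function supportsWebGPU(nav) {
  return Boolean(nav && 'gpu' in nav);
}

// In a real page you would pass the global navigator:
// if (!supportsWebGPU(navigator)) console.warn('WebGPU unavailable');
```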
Complete Code (save as webgpu_interactive_tutorial.html):
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>WebGPU Interactive 3D Graphics Tutorial</title>
<style>
body, html {
margin: 0; height: 100%; overflow: hidden;
background: #111;
color: #0f0;
font-family: monospace;
}
#info {
position: absolute;
top: 10px; left: 10px;
background: rgba(0, 0, 0, 0.7);
padding: 10px;
border-radius: 5px;
max-width: 350px;
line-height: 1.5em;
user-select: none;
}
canvas {
width: 100vw; height: 100vh;
display: block;
}
</style>
</head>
<body>
<div id="info">Initializing WebGPU...</div>
<canvas id="gpu-canvas"></canvas>
<script type="module">
import { mat4 } from 'https://cdn.jsdelivr.net/npm/gl-matrix/esm/index.js';

// 1. Check for WebGPU support
if (!navigator.gpu) {
document.getElementById('info').textContent = 'WebGPU not supported in this browser.';
throw new Error('WebGPU not supported');
}

// Grab elements
const canvas = document.getElementById('gpu-canvas');
const info = document.getElementById('info');

// 2. Initialize WebGPU device and context
async function initWebGPU() {
const adapter = await navigator.gpu.requestAdapter();
if (!adapter) throw new Error('No GPU adapter available');
const device = await adapter.requestDevice();
const context = canvas.getContext('webgpu');
const format = navigator.gpu.getPreferredCanvasFormat();

// The drawing-buffer size is set via canvas.width/height;
// context.configure() does not accept a size option.
canvas.width = canvas.clientWidth * devicePixelRatio;
canvas.height = canvas.clientHeight * devicePixelRatio;

context.configure({
device,
format,
alphaMode: 'opaque',
});

return { device, context, format };
}

// 3. Geometry generation functions

// Cube vertices and indices
function createCube() {
return {
positions: new Float32Array([
-0.5,-0.5,-0.5, 0.5,-0.5,-0.5, 0.5,0.5,-0.5, -0.5,0.5,-0.5,
-0.5,-0.5,0.5, 0.5,-0.5,0.5, 0.5,0.5,0.5, -0.5,0.5,0.5,
]),
indices: new Uint16Array([
0,1,2, 2,3,0,
1,5,6, 6,2,1,
5,4,7, 7,6,5,
4,0,3, 3,7,4,
3,2,6, 6,7,3,
4,5,1, 1,0,4,
])
};
}

// Pyramid vertices and indices
function createPyramid() {
return {
positions: new Float32Array([
0,0,0, 1,0,0, 1,0,1, 0,0,1, 0.5,1,0.5
]),
indices: new Uint16Array([
0,1,2, 2,3,0, // base
0,1,4,
1,2,4,
2,3,4,
3,0,4,
])
};
}

// Sphere generation: latitude-longitude mesh
function createSphere(radius = 0.5, segments = 16, rings = 16) {
const positions = [];
const indices = [];
for(let y = 0; y <= rings; y++) {
const v = y / rings;
const theta = v * Math.PI;
for(let x = 0; x <= segments; x++) {
const u = x / segments;
const phi = u * Math.PI * 2;
const px = radius * Math.sin(theta) * Math.cos(phi);
const py = radius * Math.cos(theta);
const pz = radius * Math.sin(theta) * Math.sin(phi);
positions.push(px, py, pz);
}
}
for(let y = 0; y < rings; y++) {
for(let x = 0; x < segments; x++) {
const i1 = y * (segments + 1) + x;
const i2 = i1 + segments + 1;
indices.push(i1, i2, i1 + 1);
indices.push(i1 + 1, i2, i2 + 1);
}
}
return {
positions: new Float32Array(positions),
indices: new Uint16Array(indices)
};
}

// Torus generation
function createTorus(radius = 0.5, tube = 0.2, radialSegments = 16, tubularSegments = 32) {
const positions = [];
const indices = [];
for(let j = 0; j <= radialSegments; j++) {
const v = j / radialSegments * 2 * Math.PI;
const cosV = Math.cos(v);
const sinV = Math.sin(v);
for(let i = 0; i <= tubularSegments; i++) {
const u = i / tubularSegments * 2 * Math.PI;
const cosU = Math.cos(u);
const sinU = Math.sin(u);
const x = (radius + tube * cosV) * cosU;
const y = tube * sinV;
const z = (radius + tube * cosV) * sinU;
positions.push(x, y, z);
}
}
for(let j = 0; j < radialSegments; j++) {
for(let i = 0; i < tubularSegments; i++) {
const a = (tubularSegments + 1) * j + i;
const b = (tubularSegments + 1) * (j + 1) + i;
const c = b + 1;
const d = a + 1;
indices.push(a, b, d);
indices.push(b, c, d);
}
}
return {
positions: new Float32Array(positions),
indices: new Uint16Array(indices)
};
}

// 4. Utility: Create GPU buffers from geometry
function createBuffers(device, geometry) {
const vertexBuffer = device.createBuffer({
size: geometry.positions.byteLength,
usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(vertexBuffer, 0, geometry.positions);

const indexBuffer = device.createBuffer({
size: geometry.indices.byteLength,
usage: GPUBufferUsage.INDEX | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(indexBuffer, 0, geometry.indices);

return { vertexBuffer, indexBuffer, indexCount: geometry.indices.length };
}

// 5. Create shaders
function createShaderModule(device) {
return device.createShaderModule({
code: `
struct Uniforms {
mvpMatrix : mat4x4<f32>
};
@group(0) @binding(0) var<uniform> uniforms : Uniforms;

struct VertexOutput {
@builtin(position) Position : vec4<f32>,
@location(0) color : vec3<f32>
};

@vertex
fn vs_main(@location(0) position : vec3<f32>) -> VertexOutput {
var output : VertexOutput;
output.Position = uniforms.mvpMatrix * vec4<f32>(position, 1.0);
output.color = (position + vec3<f32>(0.5, 0.5, 0.5));
return output;
}

@fragment
fn fs_main(@location(0) color : vec3<f32>) -> @location(0) vec4<f32> {
return vec4<f32>(color, 1.0);
}
`
});
}

// 6. Create render pipeline
function createPipeline(device, format, shaderModule, bindGroupLayout) {
return device.createRenderPipeline({
layout: device.createPipelineLayout({ bindGroupLayouts: [bindGroupLayout] }),
vertex: {
module: shaderModule,
entryPoint: 'vs_main',
buffers: [{
arrayStride: 3 * 4,
attributes: [{ shaderLocation: 0, offset: 0, format: 'float32x3' }]
}]
},
fragment: {
module: shaderModule,
entryPoint: 'fs_main',
targets: [{ format }]
},
primitive: {
topology: 'triangle-list',
cullMode: 'back',
},
depthStencil: {
format: 'depth24plus',
depthWriteEnabled: true,
depthCompare: 'less',
}
});
}

// 7. Camera projection matrix (WebGPU clip space: z in [0, 1])
function getProjectionMatrix(aspect) {
const fov = Math.PI / 4;
const near = 0.1;
const far = 100;
const f = 1.0 / Math.tan(fov / 2);
const nf = 1 / (near - far);
// Column-major, zero-to-one depth convention (WebGPU, unlike OpenGL's -1..1)
return new Float32Array([
f / aspect, 0, 0, 0,
0, f, 0, 0,
0, 0, far * nf, -1,
0, 0, far * near * nf, 0
]);
}

// 8. Camera view matrix with translation and rotation
function getViewMatrix(tx, ty, tz, rx, ry) {
const view = mat4.create();
mat4.translate(view, view, [tx, ty, tz]);
mat4.rotateX(view, view, rx);
mat4.rotateY(view, view, ry);
return view;
}

// Main program
async function main() {
const { device, context, format } = await initWebGPU();

// Uniform buffer and bind group
const uniformBuffer = device.createBuffer({
size: 64,
usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST
});

const bindGroupLayout = device.createBindGroupLayout({
entries: [{ binding: 0, visibility: GPUShaderStage.VERTEX, buffer: {} }]
});

const bindGroup = device.createBindGroup({
layout: bindGroupLayout,
entries: [{ binding: 0, resource: { buffer: uniformBuffer } }]
});

const shaderModule = createShaderModule(device);
const pipeline = createPipeline(device, format, shaderModule, bindGroupLayout);

// Create objects with positions spaced on the X axis
const objects = [
{ name: 'Cube', buffers: createBuffers(device, createCube()), position: [-4, 0, 0] },
{ name: 'Pyramid', buffers: createBuffers(device, createPyramid()), position: [-1.5, 0, 0] },
{ name: 'Sphere', buffers: createBuffers(device, createSphere()), position: [1.5, 0, 0] },
{ name: 'Torus', buffers: createBuffers(device, createTorus()), position: [4, 0, 0] }
];

// Interaction state
let rotationX = 0, rotationY = 0;
let isDragging = false;
let lastMouseX = 0, lastMouseY = 0;

canvas.addEventListener('mousedown', e => {
isDragging = true;
lastMouseX = e.clientX;
lastMouseY = e.clientY;
});
window.addEventListener('mouseup', () => { isDragging = false; });
window.addEventListener('mousemove', e => {
if (!isDragging) return;
const dx = (e.clientX - lastMouseX) * 0.005;
const dy = (e.clientY - lastMouseY) * 0.005;
rotationY += dx;
rotationX += dy;
lastMouseX = e.clientX;
lastMouseY = e.clientY;
});

// Zoom controls via keyboard arrows
let zoom = -10;
window.addEventListener('keydown', e => {
if (e.key === 'ArrowUp') zoom += 0.3;
if (e.key === 'ArrowDown') zoom -= 0.3;
});

// Depth texture for depth testing
let depthTexture = device.createTexture({
size: [canvas.clientWidth * devicePixelRatio, canvas.clientHeight * devicePixelRatio],
format: 'depth24plus',
usage: GPUTextureUsage.RENDER_ATTACHMENT
});

// Resize depth texture if canvas size changes
function updateDepthTexture() {
const width = canvas.clientWidth * devicePixelRatio;
const height = canvas.clientHeight * devicePixelRatio;
if (depthTexture.width !== width || depthTexture.height !== height) {
depthTexture.destroy();
depthTexture = device.createTexture({
size: [width, height],
format: 'depth24plus',
usage: GPUTextureUsage.RENDER_ATTACHMENT
});
}
}

// Total triangles in the scene
const totalTriangles = objects.reduce((sum, o) => sum + o.buffers.indexCount / 3, 0);

// Render loop
function render() {
// Resize the drawing buffer if the canvas size changed; the context was
// already configured in initWebGPU and does not need reconfiguring per frame
const width = canvas.clientWidth * devicePixelRatio;
const height = canvas.clientHeight * devicePixelRatio;
if (canvas.width !== width || canvas.height !== height) {
canvas.width = width;
canvas.height = height;
}

updateDepthTexture();

const aspect = canvas.width / canvas.height;
const projectionMatrix = getProjectionMatrix(aspect);
const viewMatrix = getViewMatrix(0, 0, zoom, rotationX, rotationY);

const commandEncoder = device.createCommandEncoder();
const passEncoder = commandEncoder.beginRenderPass({
colorAttachments: [{
view: context.getCurrentTexture().createView(),
clearValue: { r: 0.05, g: 0.05, b: 0.08, a: 1 },
loadOp: 'clear',
storeOp: 'store',
}],
depthStencilAttachment: {
view: depthTexture.createView(),
depthClearValue: 1,
depthLoadOp: 'clear',
depthStoreOp: 'store',
}
});

passEncoder.setPipeline(pipeline);
passEncoder.setBindGroup(0, bindGroup);

// Draw each object with its own model matrix
for (const obj of objects) {
const modelMatrix = mat4.create();
mat4.translate(modelMatrix, modelMatrix, obj.position);
const mvpMatrix = mat4.create();
mat4.multiply(mvpMatrix, projectionMatrix, viewMatrix);
mat4.multiply(mvpMatrix, mvpMatrix, modelMatrix);

// Upload uniform MVP matrix for this object
device.queue.writeBuffer(uniformBuffer, 0, mvpMatrix.buffer, mvpMatrix.byteOffset, mvpMatrix.byteLength);

passEncoder.setVertexBuffer(0, obj.buffers.vertexBuffer);
passEncoder.setIndexBuffer(obj.buffers.indexBuffer, 'uint16');
passEncoder.drawIndexed(obj.buffers.indexCount);
}

passEncoder.end();
device.queue.submit([commandEncoder.finish()]);

requestAnimationFrame(render);
}

// Statistics update loop: dynamic estimation based on rotation
let lastReport = 0;
function reportStats(timestamp) {
if (timestamp - lastReport > 100) { // update every 100 ms
// Vary hidden ratio with horizontal rotation (sinusoidal between 20% and 80%)
const normalizedRot = (Math.sin(rotationY) + 1) / 2;
const hiddenRatio = 0.2 + 0.6 * normalizedRot;

const hiddenTriangles = Math.floor(totalTriangles * hiddenRatio);
const savedBytes = hiddenTriangles * 3 * 2; // 3 indices per triangle, 2 bytes per index

info.innerHTML = `
Total triangles: ${totalTriangles}<br>
Estimated hidden triangles: ${hiddenTriangles} (${(hiddenRatio * 100).toFixed(1)}%)<br>
Estimated data saved by removing hidden surfaces: ~${savedBytes} bytes<br>
<br>
<em>Controls:</em><br>
Drag mouse to rotate scene<br>
Arrow Up/Down to zoom
`;
lastReport = timestamp;
}
requestAnimationFrame(reportStats);
}

// Start loops
render();
requestAnimationFrame(reportStats);
}

main();

</script>
</body>
</html>
Detailed Explanation

1. WebGPU Initialization

• We request a GPU adapter and device.
• We configure the canvas to render with WebGPU, choosing the preferred color format.
• We create a depth texture for depth testing, so overlapping 3D geometry is handled

2. Geometry Creation

• We define four 3D objects: Cube, Pyramid, Sphere, and Torus.
• Each object has vertex positions and triangle indices.
• The Sphere and Torus are procedurally generated.
• We create GPU buffers for the vertex and index data.
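The two procedural meshes have predictable sizes, which follows directly from the nested loops in createSphere and createTorus: both walk a grid of (rings + 1) x (segments + 1) vertices and emit two triangles per grid cell. A small sketch of the arithmetic (the helper names are ours):

```javascript
// Mesh sizes implied by the generation loops: one vertex per grid point,
// two triangles per grid cell (helper names are ours, for illustration).
function sphereCounts(segments, rings) {
  return {
    vertices: (rings + 1) * (segments + 1),
    triangles: rings * segments * 2,
  };
}

// Same idea for the torus: a (radial x tubular) grid of quads, each split in two.
function torusCounts(radialSegments, tubularSegments) {
  return {
    vertices: (radialSegments + 1) * (tubularSegments + 1),
    triangles: radialSegments * tubularSegments * 2,
  };
}

// Tutorial defaults: a 16x16 sphere and a 16x32 torus.
// sphereCounts(16, 16) → { vertices: 289, triangles: 512 }
// torusCounts(16, 32)  → { vertices: 561, triangles: 1024 }
```

These counts also explain the Uint16Array index buffers: with well under 65,536 vertices per mesh, 16-bit indices are sufficient.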

3. Shaders and Pipeline

• Simple WGSL shaders transform vertex positions by the MVP (Model-View-Projection) matrix.
• The fragment shader colors vertices based on their positions (for visual clarity).
• The pipeline config includes backface culling and depth testing.
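The WGSL fragment color is simply the model-space position shifted by 0.5 per component, so a cube corner at (-0.5, -0.5, -0.5) renders black and (0.5, 0.5, 0.5) renders white. A plain-JS mirror of that mapping, for illustration only (note the pyramid spans [0, 1], so some of its color components exceed 1 and are clamped on output):

```javascript
// Plain-JS mirror of the WGSL mapping `position + vec3(0.5)`.
// The real mapping runs per-vertex on the GPU; this is just the arithmetic.
function positionToColor([x, y, z]) {
  return [x + 0.5, y + 0.5, z + 0.5];
}

// Cube corners span the full black-to-white range:
// positionToColor([-0.5, -0.5, -0.5]) → [0, 0, 0]
// positionToColor([ 0.5,  0.5,  0.5]) → [1, 1, 1]
```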

4. Interaction

• Mouse drag changes the rotation angles (rotationX and rotationY).
• Arrow keys zoom the camera in and out along the Z axis.
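The drag handler converts pixel deltas to rotation deltas with a fixed sensitivity of 0.005 radians per pixel (the constant used in the mousemove listener). Extracted here as a pure function for illustration (the function name is ours):

```javascript
// Drag-to-rotation math from the mousemove handler: 0.005 radians per pixel.
const SENSITIVITY = 0.005;

function dragToRotation(dxPixels, dyPixels) {
  return {
    deltaRotY: dxPixels * SENSITIVITY, // horizontal drag spins around Y
    deltaRotX: dyPixels * SENSITIVITY, // vertical drag tilts around X
  };
}

// A 200 px horizontal drag rotates the scene by about 1 radian (~57 degrees).
```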

5. Rendering

• On each frame:
o Update canvas size and depth texture if changed.
o Compute projection and view matrices.
o For each object:
▪ Compute model matrix (object position).
▪ Calculate MVP matrix and upload it to GPU uniform buffer.
▪ Issue draw call.
• Submit command buffer for execution.
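Because gl-matrix stores matrices column-major, the model matrix built by mat4.translate keeps its translation in elements 12 to 14, and transforming the local origin lands exactly on the object's position entry. A minimal plain-JS sketch of that convention (helper names are ours):

```javascript
// Column-major 4x4 helpers matching the gl-matrix memory layout
// (element index = column * 4 + row). Helper names are ours.
function translationMatrix([tx, ty, tz]) {
  return [
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    tx, ty, tz, 1,
  ];
}

// Transform a point (w = 1) by a column-major matrix.
function transformPoint(m, [x, y, z]) {
  return [
    m[0] * x + m[4] * y + m[8] * z + m[12],
    m[1] * x + m[5] * y + m[9] * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14],
  ];
}

// The tutorial's model matrix is a pure translation, so each object's local
// origin lands exactly at its position entry, e.g. the cube at [-4, 0, 0]:
// transformPoint(translationMatrix([-4, 0, 0]), [0, 0, 0]) → [-4, 0, 0]
```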

6. Statistics Reporting

• Every 100 milliseconds:
o The total triangle count stays constant.
o The number of hidden triangles is estimated from rotationY.
o The hidden ratio varies smoothly between 20% and 80% as you rotate.
o The estimated memory saved is computed from the hidden triangles (3 indices * 2 bytes per index).
o The info box is updated with these values and the control instructions.
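The estimate in reportStats can be extracted into a pure function, which makes the 20 to 80 percent swing easy to verify (the function name is ours; the formula matches the tutorial code):

```javascript
// The stats loop's estimate as a pure function: the hidden ratio swings
// sinusoidally between 20% and 80% with rotationY, and each hidden triangle
// saves 3 Uint16 indices (6 bytes).
function estimateSavings(totalTriangles, rotationY) {
  const normalizedRot = (Math.sin(rotationY) + 1) / 2; // 0..1
  const hiddenRatio = 0.2 + 0.6 * normalizedRot;       // 0.2..0.8
  const hiddenTriangles = Math.floor(totalTriangles * hiddenRatio);
  return { hiddenRatio, hiddenTriangles, savedBytes: hiddenTriangles * 3 * 2 };
}

// At rotationY = 0 the ratio sits at the midpoint, 50%:
// estimateSavings(1000, 0) → { hiddenRatio: 0.5, hiddenTriangles: 500, savedBytes: 3000 }
```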
