The Fundamentals of C/C++ Game Programming
Using Target-Based Development on SBC's
Brian Beuken
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
1. Getting Started 1
Mine Looks Different? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1
First Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
Setting Things Up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
Introducing Visual Studio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
Hello World . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10
Hello Place of My Choosing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11
Getting the Machines to Talk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .17
Sending Our First Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18
Debugger Hangs Too Much? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .28
Did We Hit It? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .101
Box Checks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .102
Circle Checks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .102
Give Me Shelter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .104
So Which Is Better? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .110
Final Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .110
Simple Text Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .111
A Simple Font . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .113
How Did We Do? The Infamous Postmortem . . . . . . . . . . . . . . . . . . . . . . . . . .118
Fix Question 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .119
A Pat on the Back . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Kamikazi Invaders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .123
The Ship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Da Baddies! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .130
Now We’re Talking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .132
Make Them Move . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .135
Get Them Flying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .136
A Nice Arc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Step by Step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .141
Dive Dive Dive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .142
Bombs Away . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .144
Get Back to Where You Once Belonged . . . . . . . . . . . . . . . . . . . . . . .145
Home Again! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .145
Vectors, Our Flexible Friends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .146
Let's Get Lethal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .151
Bombs Away for Real Now . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Danger UXB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .161
Stepping Back, Deciding When to Go . . . . . . . . . . . . . . . . . . . . . . . .162
Breaker Breaker Rubber Duck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .167
Fred Reacts! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .169
Tidy Up the Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .173
Twiddles and Tweaks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .173
Postmortem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .173
Jumping around a Bit Though? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .174
Crawling Over, Time for Baby Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .175
Object-Oriented Programming Is Not an Error . . . . . . . . . . . . . . . . . . . . . . . . .175
Encapsulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .176
Abstraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .176
Inheritance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .176
Polymorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .176
Start the Music . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .176
Welcome to OpenAL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .177
Installing OpenAL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .177
Getting OpenAL Working . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .179
Dealing with Sound as Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .182
How Does OpenAL Work? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .183
How Does Alut Work? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .184
Horrible Earworms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Streaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .191
The War against Sloppy Code Storage! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .192
Our Own Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .193
Using This New Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .196
Let's Get a Bit More Compiler Speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .198
5.2 Tiles and Backgrounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .198
What Do We Mean by Tiles? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .199
Working with Tiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .199
What a Wonderful World . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .199
Homing in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Wrapping It Up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .211
Is This All We Need? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .211
5.3 Single-Screen Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .214
A World with Gravity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .214
Routine Bad Guys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .221
Point-to-Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .221
Patrolling Enemy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Homing Enemy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
Ladders and Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Data, Our Flexible Friend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Loading Our Maps (and Other Resources) . . . . . . . . . . . . . . . . . . . .235
5.4 Let's Scroll This . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .239
Simple Scrolling Shooter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Let Them Eat Lead . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Bring on the Bad Guys! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Process Everything? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .251
No More Mr Nice Guy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .252
What Will Make It Better? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
There’s No Wrong Way…But There Are Always Better Ways . . . . . . . . . . . . . .255
For a FireWork, Life Is Short But Sweet! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .255
A New Dawn for Particle Kind! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .261
There’s Always a Price to Pay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .270
Handling Large Numbers of Objects . . . . . . . . . . . . . . . . . . . . . . . . .271
Locking the Frame Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .271
Recapping the 2D Experience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .272
Installing a Maths Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
Danger, Will Robinson! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Normal Programming Resumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Three Types of Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .291
Model Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .291
View Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .291
Projection Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
The Relationship of These Three Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
Other Matrix Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .295
Moving Around . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Revisiting Hello Triangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Let’s Try a Cube . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
Mistakes Waiting to Happen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
A Quick Word about Using Quaternions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
HelloCubes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
I Thought We Could Move 100’s of Thousands of Them? . . . . . . . . . . . . . . . . . . 306
How the GPU Gets Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Buffers, Buffers Everywhere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Vertex Buffers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Attribute Pointers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .311
Texture Buffer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .312
Frame Buffer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .312
Render Buffer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .313
Buffers Are Not Free . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .313
Let’s Get Back to It . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .314
Time to Texture Our Cube . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .315
The Fixed Pipeline Isn’t Quite Dead . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .318
Mapping a Texture to Our Faces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .319
Choose the Size to Suit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Limited Numbers of Textures? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Everyone Loves a Triangle But! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .322
3D Lets Get into the Heart of Our GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .325
What Else You Got? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .325
Loading Models (OBJ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .327
Locate and Install an OBJ Model Loader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .327
Installing and Using TinyObjLoader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .329
Do We Care about the Data? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .330
Lights Camera Action . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .332
The Return of the Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .335
Dot Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .335
Another Fun Vector Fact-Cross Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .336
Who's to Say What's Normal? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .336
Types of Light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .338
Light Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .339
Shadows, a Place Where Danger Hides . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .339
Shaders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
So What Is a Shader? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
Keeping Track of Them . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
Introducing the Shader Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
Let’s Light It Up! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
The Camera Never Lies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .350
But What Does It All Do? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .351
In Space, No One Can Hear You Smashing Your Keyboard As You Scream
"Why Don't You Work!!!" . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .353
Render Culling! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .398
Adding the Functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
Physics, More Scary Maths Stuff? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .401
Subtitle…More Long Winded Explanations! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .401
Introducing Bullet Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
How It Works, and Finally Quaternions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
Let’s Get to It, at Last . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Setting Things Up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Stepping Through . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
Visualizing Things . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
Force, Torque, and Impulse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .410
Collisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .412
The Downside . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .415
Basic Racing Game . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .416
Getting and Making the Car Controllable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .417
I Like to Move It, Move It . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .418
Staying on Track . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .421
Using Bullet to Collide with the Terrain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
Can’t Find My Way Home? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
Other Optimizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
Other Performance Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .431
Expanding to an SDK? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .461
The Last Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
Ready-Made or Roll Your Own . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
Limitations of Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Cross-Platform Compilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Appendix I 477
Where Files Live on Non-Raspberry Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
Appendix II 479
Updating versus New SD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .479
Appendix IV 483
Bits Bytes and Nibbles Make You Hungry! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
Appendix V 485
OpenGL ES 3.0+ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
Appendix VI 487
The Libs We Used . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
On the PC End . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
Index 493
“From Hello World to
Halo—It’s Just Code!”
are doing rather than the specific functions. If you still don't understand the syntax of the code, you should undertake a beginners' coding course; there are several online.
In addition, despite the title, this isn't a book solely about programming Single Board Computers (SBCs). The use of a cheap target system is a means to an end: it encourages the reader to limit expectations and work within tight constraints, as game programmers, especially console programmers, have to do. I want primarily to focus on gameplay concepts and game structures, which will let us get games up and running really quickly.
However, we do have to introduce some technical concepts later, when we're a bit more comfortable, because most of these technical concepts will have a direct impact on the performance of your games. You will need to know just enough to avoid some pitfalls and get things up and running correctly.
SBCs are usually quite simple systems, so building a working knowledge of the fairly generic hardware used to produce graphics, sound, and data storage is generally easier on them than it would be on your up-to-the-minute PC, which shields you from errors by virtue of massive processing performance and near unlimited memory.
Once understood, all of the concepts and projects in this book are easily transferable to any development target, where the reader can stretch their growing skills on more powerful systems while still being mindful of the need to work within the constraints of hardware, which are hard to push, and personal limits, which should always be pushed.
But SBCs are really fun to work with, cheap to acquire, and give a real sense of achievement when you make them do more than just act as media servers or control units.
Most important, this is not a "how to do x with y" kind of book. I want to take you through a journey of discovery, mine as well as yours, and provide suggestions and working examples of how to do the things that games need, and let you decide if the approach I've taken is valid. I want to make you question things and hopefully come to different conclusions, using what I supply as a base for debate and expansion rather than a gospel to be followed. When working with beginners, I don't believe in imposing the right way; I prefer to have faith in "this works for me, can I make it better?" The right way, for you at least, will come with practice and the joy of achievement.
2π | !2π
Ok, let's stop beating about the bush; after all, the section heading gives it away. I chose the Raspberry Pi because it's cheap, it's freely available all over the world, it connects to a PC via network cables or wirelessly, and it has consistent hardware, so what works on one is sure to work on another even if there are a few differences in speed. And, in my opinion, it's a machine that has largely been ignored by the games development community, so you're going to be treading in largely virgin sand, which is quite exciting.
I need a consistent, fun bit of hardware, with sufficient power, reasonable graphics abilities, and onboard memory to create a range of decent little games to learn with. The Raspberry Pi gives me all that, and most of its clones are close enough to also give us some insight into fragmentation issues on a small enough scale to cope with!
Now it also has to be said, once I settled on the Raspberry machine I had my eyes opened to the fact that there are several Raspberry Pi-type machines out there; in fact, there is a thriving family of similar small-board System on Chip (SoC) machines, each with its own community. So I expanded my remit a little to include as many of the main ones as I could find, with a simple limit on cost: I only looked at units I could buy for under U.S. $100.
So if you have one of these other systems, it's only fair that I make sure what we do here works on them too, so long as they run some form of Linux and have OpenGL ES 2.0 for their graphics. We should be able to get our games running on them as well. I'll try to give a summary of machines I've tried, and maintain an update on the support site.
Of course, technology never stands still, and when I was a quarter of the way through writing this book, the Raspberry Pi Foundation announced a new model, the Raspberry Pi 3, and as usual it sold out within hours of its announcement. Not quite as big a leap in performance as the 2 was over the 1, but still another boost in performance for the same price is much appreciated.
So I guess most of you will be on model 4 by the time this comes out. But the nice thing about the Raspberry range is that, aside from memory and speed, they are all based on the same hardware principles and they have maintained the mantra of compatibility. So even though I'm going to continue with my Raspberry Pi Model 2B for now, swapping over to the Model 3B quite soon I am sure, everything in this book will be checked on the latest models before it goes to the printers.
I should say to owners of earlier Raspberry Pi models: all the things in this book will work for you, but the later explanations on multicore processing will be of no use, as earlier models only have a single core.
The Target
We need an SBC, of course; for the most part I will assume a Raspberry Pi, as 12 million+ users would indicate that most of you are using one. So that's your first purchase; if you haven't done so already, you need to do this now. At the time of writing, I'm using the current model, a Raspberry Pi 2 Model B, but am soon going to plug in my new Raspberry Pi 3 Model B. I will do the odd sanity check with the older Raspberry Pi Model A+/B+/Zeroes I have to hand. I also have picked up quite a few of the so-called Raspberry Pi beaters
1. Write your own interface between Visual Studio and a Linux-based Raspberry Pi
I'm pretty sure you will opt for option 2. If you went for option 1, close the book now, go write your interface, and be sure to write to me in a few months' time when you're ready to start again.
Support Website
Almost all the code in this book, and some other things that didn't make it into the final edit, will be available online at my website (https://www.scratchpadgames.net). Most of the missing parts are things I want you to enter yourself for the practice. I'll also maintain an errata, updates on systems or tools I use, color versions of all screenshots, and final and much more complete versions of all the demos in this book for you to download, review, and try out.
For brevity, the listings in this book are sometimes incomplete or have had formatting altered to fit on a page. Later in this book, when you should be more proficient, I'm not even going to provide the code as a listing; you can review the downloaded source code itself, which will be commented and tied in with the text. I'll provide suggestions on how to deal with a problem and some outlines, confident that you already have the skills to fill in the rest yourself.
Thanks
There are a lot of people to thank for this book, but at the top of the list as always for me is my daughter, Danielle, who has somewhat reluctantly featured in the credit list of every game I've written since she was born, giving her the dubious distinction of numerous mentions on several game credit sites without ever having any interest in playing or making computer games.
The addition in December 2015 of her son Harvey, my first grandchild, gives me even more cause to consider her the greatest achievement of my life, games being a very distant second, or probably third as I am quite proud of my guitar collection, though not my actual playing!
Thanks to my friend Professor Penny De Byl for helping me to find a means to publish this nonsense, for her help with checking my concept, and for invaluable advice and encouragement on what to add and take away from the original concept to keep it fun and interesting.
Thanks also to friend and now former colleague Jamie Stewart for taking the time to go through this book at different times and comment on any mistakes I made, deliberate or otherwise.
Thanks to Grumpy old Git developers (Facebook group, not an insult) Gareth Lewis, Rob Wilmot, and Paul Carter; Paul for helping me find and convert some low-poly car models for producing some nice LOD versions of the cars, and Gareth for his timely help with a Raspbian-compatible key-reading routine when I was on the point of throwing the Raspberry Pi out the window. Thanks also to my student Petar Dimitrov, who came up with a
neat keyboard scan system to determine which keyboard event was actually active, which
was much tidier than the one I had, so I shamelessly stole it, with his consent ☺
Thanks also to old friends Shaun McClure and Ocean Legend Simon Butler for their pixel-pushing prowess on the 2D art you can find on the site and use, and to Colin Morrison for his 3D race track tiles and a few other models I wasn't able to fit in but which you can find on the site.
I have to give a huge shout out to the incredibly talented Pim Bos, one of our family of NHTV students studying Visual Arts, who did the cartoons that illustrate this book. His fun take on complex concepts is inspiring and made me chuckle every time he sent one in.
A special shout out to the small band of unsuspecting volunteers who ran through this book for me, finding multiple spelling errors and more than a few issues with my coding, especially to colleague David Jones, who I now owe free drinks for life for his proofreading and eye for detail.
Finally, my thanks to the management, staff, and all students past and present at the International Games Architecture and Design (IGAD) programme of NHTV University of Applied Sciences* in Breda, The Netherlands, which has been my home for the last 9 years. I've learned much from them and I hope I've given a little bit back at times.
Brian Beuken: Who Is He?
Brian Beuken is a veteran games developer, having started in the early 1980s writing his first games on the venerable Sinclair ZX81. Self-taught as many were at the time, Brian wrote games in Basic and Assembler, selling them via mail order before branching out to form his own small company specializing in conversion of projects from one popular machine to another. A chance to work for Ocean Software in Manchester, England, then one of the largest games companies around, saw Brian leave his native Scotland and become a full-time game programmer, staying in the center of the tech bubble that was Manchester and working for several companies producing a host of projects in quick succession.
Eventually, the Manchester bubble burst as companies began to merge and moved away. Brian became a well-established freelance coder specializing in Z80 systems and handheld devices from Nintendo and Sega, before again taking a leap into entrepreneurship and forming his own company, Virtucraft. Virtucraft grew from 3 to 30+ people in the space of 4 years, until once again the bubble burst and the company was forced to close. Brian went on to become Head of Development at an emerging mobile games company, which was later sold and became part of the mighty Square Enix. But Brian had left
before that happened; unhappy with the distance from development in the management role, he returned to coding and once again entered the freelance market for several years, again specializing in handheld consoles.
A chance encounter with a tutor at NHTV in Breda, The Netherlands, resulted in Brian being offered a teaching position on the still new IGAD program they had established to bring the game development skills that the industry needed to education. Finding the program offered far more than he'd seen in any comparable education, Brian signed up thinking he'd try it for a year. Nine years later, he's still there, still coding, adding to his 75+ published titles, and enjoying watching his students find the joy of game development, which they can take with them to an industry that sorely needs more programmers.
1 Getting Started
First Steps
It's a common rule that programmers must never assume anything, so I'm immediately going to break that rule and assume you know how to install Visual Studio and VisualGDB, and how to get your target, in this case a Raspberry Pi, set up and ready to go. No? Ok, well, let's do the simple things first.
Set up the Raspberry Pi: This is relatively easy, especially if you opted for a preformatted SD card when you purchased. If so, insert the card into the Pi; hook up your power, keyboard, mouse, wifi (if you bought one), and display; and fire it up. If you didn't opt for the preformatted card, you have a bit more work to do, but it's always best to go to the Raspberry Pi website and follow the latest instructions: https://www.raspberrypi.org/help/quick-start-guide/.
Install Visual Studio: This also should be pretty simple; Microsoft downloads tend to be painless, if a little slow because of their size. Installation can take a little while but there's not a lot of input required from you, so once you've started it and ticked all the right boxes, you can go and make a few cups of your favorite beverage and come back when it's done.
Install VisualGDB: One thing you should do before you install this is make sure you have run Visual Studio at least once, and closed it down. On its first run, Visual Studio sets up a lot of things, and that can interfere with the settings of some plug-ins, which is what VisualGDB is: a plug-in, a piece of software that extends Visual Studio's features.
Once you've run it, the installation of VisualGDB is totally painless, but do not activate it yet!
Setting Things Up
Sadly, we still have a few confusing steps to go through to write our first Raspberry Pi program, so let's start by introducing ourselves to Visual Studio. If you've already used it and know some C/C++, you can skip to the section titled setting up the Raspberry Pi and other targets.
Depending on your version of Windows, you should have a link somewhere on your start menu or taskbar for the version of Visual Studio that you just installed. I prefer to keep it on my taskbar at the bottom, so it's always accessible.
Fire it up, and if you have already sneaked ahead and installed VisualGDB, it will immediately ask you if you want to start the VisualGDB trial…answer no at this point; we have things to set up and we're going to do one or two little PC programs to get ourselves comfortable with Visual Studio.
You're going to get something like this, a start screen. It won't look exactly the same as my screen; I've got a lot of different plug-ins on my version, and I've used it for several projects already, but the main areas should be similar.
For now, ignore the Start options in the main window and look at the top left corner: can you see the FILE tab? Click on it and select New, then Project.
Notice we have a group of templates; I've selected the Visual C++ group, which we'll quickly use to get started, but there is also an option for VisualGDB…we'll click that soon.
Now this is a very, very simple project; it actually doesn't do much, but you can run it…press F5.
And then a black box appears and then disappears on your screen. Well done, you just ran your first ever Visual Studio-built program. Of course, it really didn't do too much, but if you look carefully at the code in the large window, you'll see there's very little code to run.
// ConsoleApplication2.cpp : Defines the entry point for the console application.
#include "stdafx.h"
int _tmain(int argc, _TCHAR* argv[])
{
return 0;
}
There's only one function, called _tmain, and it has one instruction, to return; that's exactly what it did. But how did it do it?
Running a program consists of at least three main stages: (1) compiling, (2) linking, and (3) then running.
If you look at the Build output, you can see that two cpp files were compiled; they were then linked together (though it does not explicitly tell you this) to make the resulting EXE file. The actual program was stored in a directory, called, on my machine:
c:\users\brian\documents\AndroidWorks\Projects\ConsoleApplication2\Debug\ConsoleApplication2.exe
Visual Studio then automatically ran the program in something called Debug Mode and executed the first instruction it saw…which was to return.
Now we don't really need to know much about how the black box gets created and the program starts up, but let's try to slow things down.
Visual Studio is an editor, but it is also a very powerful debugger. That allows us to examine code while it's running and also to stop code at certain points. Hover your cursor over the gray margin bar next to the return 0; instruction and click (or press F9), and a gray dot will appear.
This is a breakpoint. Hit F5 again.
Now what do we see?
We should see an empty window; this was the black box that popped up and disappeared before we could see it. This is called a console window; it's basically a small user output box that we will often display some text in, usually to tell us something important about our programs.
But why can we see it now?
Look at Visual Studio again…it seems to be doing something interesting.
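Breakpoints really earn their keep once there is a variable whose value you want to watch change. As a tiny sketch (my own example, not part of the project above), put a breakpoint on the line inside this loop and each press of F5 will stop there again, letting you inspect i and total each time around:
int total = 0;
for (int i = 0; i < 10; i++)
{
	total = total + i; // break here and watch total grow in the debugger's watch window
}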
Hello World
In this very simple console application, we really only can do one or two things: we can print some text, and input some text. So let's start with printing some text.
The simplest way to print text in a C++ program is to use cout; it's part of a family of routines from what's called the STL, or Standard Template Library. All C++ programs have access to these, and there are a lot of them. However, we're only interested for now in cout, which a quick check of my favorite C++ reference site, www.cplusplus.com, tells me is provided by the <iostream> header.
Now, if I want to use things declared in a header, I need to make sure my program includes it; I also need to check the format for cout, again on my favorite site.
For speed, I've added them here.
Add the lines
#include <iostream>
and
std::cout << "Hello World";
as I have in the screenshot. Notice the breakpoint is still there; hit F5 again and you should see Hello World at the top of the console window. If you get a build error, carefully check the typing; at this point, the only thing that can really go wrong is a typo.
Congratulations! You have your first Hello World, or whatever childish expletives you decided to use. There's nothing funnier for beginners than making a computer swear.
The std:: part indicates that cout belongs to the std namespace of the Standard Template Library. A namespace is a concept in C++ that allows us to group things together and reuse common names, a bit like a family name. There are many people called Brian in the world, but not so many in the Beuken family, I think I'm the only one; Brian is a common name, Brian Beuken is specific to me. So, I'm Brian from the Beuken namespace.
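To make that a little more concrete, here is a minimal sketch of the idea; the namespace and function names are mine, made up purely for illustration:
#include <iostream>
namespace Beuken                    // our own "family name"
{
	void SayHello()                 // a Brian that lives inside the Beuken namespace
	{
		std::cout << "Hello from the Beuken namespace";
	}
}
int main()
{
	Beuken::SayHello();             // the :: says which family's SayHello we mean
	return 0;
}
Now, back in our console application, let's extend the program to read some input as well as print it: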
#include "stdafx.h"
#include <iostream>
#include <string>
int _tmain(int argc, _TCHAR* argv[])
{
std::string mystring;
std::cin >> mystring;
std::cout << "Hello "<< mystring;
return 0;
}
You should still have a gray dot on your return 0; line to indicate a breakpoint; if not, add one, so your program will stop long enough for you to read your efforts. Press F5…the screen will be blank, but you can enter some location, or a friend's name, and when you press Enter, it will display Hello…then whatever you typed.
Not bad…we've done our first input and output and compiled and run our first two Visual Studio programs, only a few million more to go till you master it. As an exercise, see if you can remove the breakpoint and add another instance of cin to create a pause for you…hint, cin will finish and move to the next instruction when you press Enter.
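If you get stuck, here is one possible answer to that exercise (a sketch only, and certainly not the only way): read into a second, throwaway string, so the program sits waiting for your input instead of closing straight away.
	std::string dummy;   // we never use this, it just makes the program wait
	std::cin >> dummy;   // execution pauses here until you type something and press Enter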
Now it's time to focus on the target system we are using; we'll come back to Visual Studio soon.
2 Getting Our Target Ready
Now before we get into the setting up, it might be wise at this point for you to skip ahead to the Appendix section of this book and quickly read Appendix III, which covers the importance of using source control. You don't need to use source control, especially if you are finding it a bit confusing at this point, but after a while source control is going to be something you really will need to start using. It's ok, there are many free options, but you can make up your own mind whether now is the time to start using it. I want you to be comfortable in what you are doing first before adding new tools you may find confusing.
But probably the first and most important thing you need to know is how to make sure your version of Raspbian (or other Linux) is up to date.
I wish I could spend some time explaining to anyone not using a Raspberry machine how to get it set up and running, but in truth there are just too many of them out there, and I really don't want this to be a book about single board computers (SBCs) and their idiosyncrasies; it's meant to be about programming. So I will apologize straight away if I miss out some small detail that your particular SBC has or does not have that will prevent you from moving forward. I am trusting that you know just enough about your unit to get it set up and working. Nearly every SBC maker I've looked at maintains forums/wikis or other means of keeping their communities updated; that's where you need to look for specific information on your system. I will post some info on the support site, but I can't really guarantee to cover all the possible systems.
Throughout this book I am only going to show code for the Raspberry range, but fear not, the only significant* differences relate to the graphics setup and whether it works in a window or full screen. So you will find that on the non-Raspberry machines everything should work if you use the standard Linux version downloads from the support site, and do NOT set up a Raspberry value in the Preprocessor Macros (explained later). Later I will provide a Graphics Class that allows you a dual option of Linux X11 and Raspberry, which should cover pretty much any standard Linux display. In addition, take care to note that some of your include and library directories will be different. Most versions of Linux keep the files we need in the same places, but reality can bite sometimes, and the fact is that a few systems will have different locations for those files. I will record as many as I can on the support site but really it's up to you to find them, as I won't have every single machine available to test and check.
That said, I've collected quite a few of the little beasties, so I'll try to maintain a running list of issues on the support site.
Generally, the version of the OS you get from the target maker is going to be the most up-to-date, and you will use that to burn your first SD card; you'll find instructions on how to do that in your documentation or on the maker's website, so I won't repeat them here. If you find, however, that your maker's OS is not up to scratch, and a lot of them are quite poor, Armbian from Armbian.com is a very good alternative, often providing a more stable and driver-equipped OS on a very wide range of chipsets.
However, it is wise to know that OSs update from time to time, as do several standard apps that come with your OS, and these are not all updated at the same time. Makers will post new builds for you to create new SDs. However, be aware that every time you make a fresh SD you lose any package libraries or projects that you installed on the previous version. So rather than continually burning SDs every time a new build comes out, and potentially losing your useful tools and libs, use the update/upgrade routine described in Appendix II. It's not a bad idea to make a backup ISO file of your SD card from time to time, in case you need to reburn it for some reason.
From our viewpoint as programmers, it's just a target to run code, whether it runs in its console text mode or a Graphical User Interface (GUI); we're going to make it do other things, and the thing we want most in the world right now is the IP address of the target machine.
Personally I prefer to have the GUI running while developing keyboard-based games, and console for mouse games, for reasons that will become clear later, but it should be
* There is one other significant difference, which relates to Shaders, but so far I have only found this to be a problem on Intel-based machines; full info is on the support site.
Oh Wait…Did We Plug-In?
Of course, we have to assume that it is either hooked to the Dev PC via a cable, or it plugs into a router on the Dev PC's network, or that it is connected to the same wifi network as the Dev PC with a USB→Network dongle. The key point is that your target and your development PC/Mac need to be on the same network and they need to be connected. This needs to be a network connection, not USB.
Let's get back to those numbers: whether you entered ifconfig in the console mode or in the terminal window, you should get something like this.
A lot of confusing numbers. But we're looking for the wired or wifi Ethernet connection. Either will do, but ideally the wired one, which is described as eth0.
If you don't have a wired connection but you do have a wifi dongle plugged into your Raspberry Pi, you should then have a wlan0 connection. We can use that too; it's going to be a lot slower, but we're not trying to shift gigabytes of data, so we can use it.
We're looking for the IP address, here called the inet addr. This is a number that identifies your computer on your network; each computer on your network will have a different IP number, and when connecting to the Internet, this is like your computer's name or phone number to the network.
The default IP number for any unassigned computer connection is usually 127.0.0.0 or 127.0.0.1, and you will probably see that on the second set of numbers, if your Raspberry Pi is still all fresh and new with nothing else added.
If you see 127.0.0.0/1 on both the first and third set of numbers it means there may be a connection issue with your Raspberry Pi and you'll have to try to resolve that…I can't offer a lot of help beyond: try turning it on and off again, check the dongle, change the cable, and review the Raspberry Pi forums for help.
The number on my machine is currently 192.168.178.13, a pretty common standard internal IP address for a home network. If you're connected directly to your target, or using a network system in an office or school, you may have a completely different set of numbers decided on by your ICT dept. But the key is, it's not 127.0.0.0/1.
So find the wired IP address, and if not wired, find the wifi IP address.
Take a note of the number; we'll need that to set up VisualGDB, so it knows what target machine to talk to.
You probably will have to do this again from time to time, especially if you are responsible and switch all your equipment off when done. The IP address is assigned by the system when it starts up and connects, and though, if all things are equal, it will provide the same number, you can't be 100% certain. The order things power up, or adding new computers onto your network, may change the assigned IP address. But it's simple enough to reset, now that you know how to get the IP address.
You can also hover your mouse cursor over your network signal indicator; this works on the Raspberry range and most others. Ignore the /24 part at the end.
We are going to be always using the Linux Project Wizard option; we'll discuss this more when we start our first project. Before that, let's get a connection sorted.
You can choose one of four build systems. I want you to use GNU Make: as you are a beginner, it exposes more of the values we need to see directly inside VisualGDB, so select that and hit next. MSBuild is a generally nicer system to use when you are a little more experienced, but it does hide some things from VisualGDB, which we don't want you to be hunting for. CMake is an awesome system if you have a more advanced understanding of how make files work. I've never used QT, so won't make any comment on that.
However, GNU is the system we need to use for now, so hitting next will bring up the next popup, which is a nice and important dialog, as this is where we tell our PC where our target system is by entering the IP address and name.
Here you can see a blank box; you have to give it the IP address as the Host name, the user name (pi), and the password (raspberry), and also tick the box for setting up the public key authentication. That, by the way, is why I have so many options on my computer for target machines, they all are getting saved.
Hit save, and the machine will now be locked into your system's memory, saving you a lot of time later.
You'll then see a box appearing as it does various tests and checks, which will vanish if all is ok. Again this probably won't happen to you as a first-time user, but you may get this box appearing.
Take note of the transferred files section. This lets the system know what kind of files we have to send down to the target to compile. Again you may not see this on your first try, but as I have been using the system for some time, it has remembered some of the file types I want to send…You probably only have a few and at this point that's ok; we'll add to them as we go, and this can be edited later if we need to.
That's it, hit Finish; the system will do some testing, a few boxes will pop up, and progress bars will seem to do their thing. They are indicating the testing of compiles and the sending of data to the target machine, including setting up some annoyingly complex directories. But when you check things out in your Visual Studio it should seem remarkably familiar.
It's not actually an error; it is simply telling us that the program has stopped suddenly…and it found that a bit odd!
But also look closely at your output window (if you have one open, I hope you do). Can you see it says Hello World?
Instead of opening up a console window on our target, VisualGDB has intercepted that output and sent it to an output window.
The update and upgrade commands are just to make sure you have the latest version; if you are sure you already have that, you can skip those two lines, but it's usually better to be safe than sorry (also, if using a single-core Pi, you can leave out the -j4 arguments, which are really for multicore compiling). After doing all this, we should be good to go; reset your Raspberry Pi, and when it comes back, you can open a terminal and type gdb --version and it should now report a new generic version 7.12.1 (or later) of GDB, which is not the Raspbian-specific version. I can't say if this will have an impact on any programs or projects that are relying on the Raspbian-specific version, so you install this fix at your own risk.
What I can say with some happy certainty is that it cured my constant and very random hang-up issues, and saved a few keyboards from coder rage smashes. I hope that the Raspbian version gets an update soon, so this fix will no longer be needed.
All this only impacts us if we are using default toolchain settings when we start a project up to build on a remote target; if, when starting, you choose the option available to try loading the GDB installed on your target, it may be happy enough to use it. As I recommend all beginners should start with default settings, there's a pretty fair chance that you will have this issue on systems with a GDB earlier than 7.12.1.
Graphics Explained!
So far we've managed to get a bit of code going and print some text; however, to write games we need nice pretty graphics in glorious color with lovely animations. But to do that we need to ask the video system in our target to draw our graphics. And there lies a problem for us as novice coders. Asking a very complex hardware chip to do even the simplest things is really hard work, and we generally don't have access to all the relevant registers and access protocols the chip wants…How then do we do graphics?
Back in the (good) old days, screen displays were generally memory mapped in some way, so if you wrote a byte to the memory area representing the screen then a pixel would appear.
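Purely to illustrate that old memory-mapped idea, here is a sketch of what such code looked like; the address and screen layout are invented for the example, and this is not how we will be drawing on our targets:
#include <cstdint>
int main()
{
	// pretend the display hardware exposes a 320x240, 1-byte-per-pixel screen
	// that lives at this (made up) address
	volatile uint8_t* screen = reinterpret_cast<uint8_t*>(0xA0000);
	int x = 10, y = 20;
	screen[(y * 320) + x] = 0xFF; // writing a byte here would light up one pixel
	return 0;
}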
Sadly those days are gone; with the advent of ever more powerful graphics chips whose sole purpose is to produce 3D objects in virtual space on a flat panel, actually drawing flat-panel graphics directly has fallen out of favor (though it can still be done on some machines).
The preferred method these days is to ask our Graphics Processing Unit (GPU) to draw things for us, usually triangles, the most beloved of all coder graphic primitives.
But getting our GPU chip to draw even simple triangles means a lot of very low-level requests for hardware to set up registers, send data, confirm data, attach data, and so on…it's a pain.
To relieve that pain, hardware manufacturers make their systems compliant with graphics Application Programming Interfaces (APIs), or more accurately, graphics APIs are produced for hardware systems; but given that there is no sense in limiting your market, hardware makers generally introduce new features to their hardware slowly, to allow the APIs a chance to incorporate them.
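To give a taste of what one of those APIs looks like in use, here is roughly how a single triangle is handed to the GPU in OpenGL ES 2.0, which we will meet properly below; this fragment is only a sketch and assumes a working graphics context and shader program have already been set up, with the position attribute bound to location 0:
#include <GLES2/gl2.h>
static const GLfloat triangleVerts[] =
{
	 0.0f,  0.5f, 0.0f,   // top
	-0.5f, -0.5f, 0.0f,   // bottom left
	 0.5f, -0.5f, 0.0f    // bottom right
};
void DrawTriangle()
{
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, triangleVerts); // tell the GPU where the points are
	glEnableVertexAttribArray(0);
	glDrawArrays(GL_TRIANGLES, 0, 3);                                  // and ask it to draw them as one triangle
}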
There are two dominating APIs in the games field. DirectX from Microsoft is used by PCs and most other Windows-based systems, and Microsoft's Xbox consoles. It's a vast, frequently updated, and powerful API, which on recent versions of Windows has been rolled into the OS itself, so it is no longer an additional download.
This is very much a workhorse API for any PC, and its constant development over the years has created a standard whose market dominance was so powerful that hardware was forced to comply with it, allowing for widespread standardization of desktop GPUs, which, in turn, allowed the hardware makers to focus more on performance ahead of fancy new graphic gimmicks that most users could not access. This also had the benefit of allowing coders to specialize and drive the graphics tech forward to the levels that we see today.
New hardware features do come along, but under controlled release conditions and almost always with the APIs updated and ready to use them when they come to market.
The other giant is OpenGL, which is available for almost every computer-based system imaginable. It is a more open and community-friendly API than DirectX, but it does have a standards body, the Khronos Group, who maintain and update it when needed, enhance performance, and maintain the reference materials that users and hardware makers can use to develop new software and hardware.
It also has a very popular subbranch called OpenGL ES; the ES stands for Embedded Systems, and it is considered a low-overhead, high-performance version for use in machines that need low power consumption and do not have massively powerful chips or memory. It does have some limitations, and a lot of previously deprecated but still usable features of full OpenGL have been removed to keep it slim. But it is a popular API because
(You don’t need the sudo if you are using a machine that gives you a root terminal) .
Now you should find the EGL and GLES2 folders in your /usr/include folders, which is
where most normal forms of Linux seem to install the files . You’ll also almost certainly get
at least one binary lib, libGLESv2 .so, which is compatible with your hardware or provides
the same functions in emulation . It will be in your /usr/lib/name_depends_on_cpu folder .
I have to make clear, though, that on SBCs these are not always optimal; they will let you create and build graphic games, but the performance is going to be variable if your hardware has not made direct access available. As soon as possible you need to replace them with proper drivers for your machine, and if they are not available, hassle the makers on their forums. A board without proper graphic drivers is simply not going to perform at optimal levels, and that does not help their cause of selling a board for multiple uses. This is, in my view, one reason why the Raspberry range is such a success: everything you need is there ready and waiting, even if it's in odd directories.
I’ve noted that all this works for most machines, sadly there are a few that just don’t
have their GPUs open to our code and nothing we do is going to get them working . Chase
the makers, that’s all you can do, or buy a cheap simple unit with OpenGLES2 .0, such as
So Much Typing?
One of the problems with OpenGLES 2.0 is that, unlike its predecessor OpenGLES 1.1, it needs a lot of setting up; specifically, it needs fancy bits of code called Shaders, because OpenGL 2.0 onward uses them and OpenGLES 1.1 doesn't. Coding ES 1.1 was, therefore, sometimes a lot easier, and setting it up didn't need as much effort.
So why not take the easy route and start with OpenGLES 1 .1, after all Shaders are also
a little bit complex for a beginner, and in some ways are a special kind of new code mystery
If you have this, just hit next and then repeat the earlier process for connecting your
machine, though if the IP address has not changed you no longer need to set up a new
SSH connection .
Click Next and let the wizard do its thing to take you to the second page of New Linux Project, where you can enter your target's details. Then once more click on Next; it will do a bunch of checks to make sure that the connection is good.
#include <stdio.h>
#include <assert.h>
#include <math.h>
#include <sys/time.h>
#include "bcm_host.h"
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#define TRUE 1
#define FALSE 0
typedef struct
{
//save a Handle to a program object
GLuint programObject;
} UserData;
typedef struct Target_State
{
uint32_t width;   // screen size, filled in during initialization and used later by the window setup code
uint32_t height;
EGLDisplay display;
EGLSurface surface;
EGLContext context;
EGL_DISPMANX_WINDOW_T nativewindow;
UserData *user_data;
void(*draw_func)(struct Target_State*);
} Target_State;
Target_State state;
Target_State* p_state = &state;
/*
Now we are able to create a shader object, pass in the shader source,
and then compile the shader.
*/
GLuint LoadShader(GLenum type, const char *shaderSrc)
{
// 1st create the shader object
GLuint TheShader = glCreateShader(type);
// pass the shader source in and compile it
glShaderSource(TheShader, 1, &shaderSrc, NULL);
glCompileShader(TheShader);
GLint IsItCompiled;
// After the compile we need to check the status and report any errors
glGetShaderiv(TheShader, GL_COMPILE_STATUS, &IsItCompiled);
if (!IsItCompiled)
{
GLint RetinfoLen = 0;
glGetShaderiv(TheShader, GL_INFO_LOG_LENGTH, &RetinfoLen);
if (RetinfoLen > 1)
{ // standard output for errors
char* infoLog = (char*) malloc(sizeof(char) * RetinfoLen);
glGetShaderInfoLog(TheShader, RetinfoLen, NULL, infoLog);
fprintf(stderr, "Error compiling this shader:\n%s\n", infoLog);
free(infoLog);
}
glDeleteShader(TheShader);
return 0;
}
return TheShader;
}
GLbyte fShaderStr[] =
"precision mediump float;\n"
"varying vec2 v_texCoord;\n"
"uniform sampler2D s_texture;\n"
"void main()\n"
"{gl_FragColor=vec4 (1.0,0.0,0.0,1.0);}\n";
// now we have the V and F shaders attach them to the program object
glAttachShader(programObject, vertexShader);
glAttachShader(programObject, fragmentShader);
dest_rect.x = 0;
dest_rect.y = 0;
dest_rect.width = state->width;   // it needs to know our window size
dest_rect.height = state->height;
src_rect.x = 0;
src_rect.y = 0;
DispmanDisplayH = vc_dispmanx_display_open(0);
DispmanUpdateH = vc_dispmanx_update_start(0);
DispmanElementH = vc_dispmanx_element_add(
DispmanUpdateH,
DispmanDisplayH,
0/*layer*/,
&dest_rect,
0/*source*/,
&src_rect,
DISPMANX_PROTECTION_NONE,
0 /*alpha value*/,
0/*clamp*/,
(DISPMANX_TRANSFORM_T) 0/*transform*/);
state->nativewindow.element = DispmanElementH;
state->nativewindow.width = state->width;
state->nativewindow.height = state->height;
vc_dispmanx_update_submit_sync(DispmanUpdateH);
/*****************************************
Draw a triangle this is a hard coded
draw which is only good for the triangle
******************************************/
void Draw(Target_State *p_state)
{
UserData *userData = p_state->user_data;
GLfloat TriVertices[] =
{
0.0f , 0.5f, 0.0f,
-0.5f, -0.5f, 0.0f,
0.5f , -0.5f, 0.0f
};
// set the viewport to our window, clear it, and select our shader program
glViewport(0, 0, p_state->width, p_state->height);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(userData->programObject);
// feed the triangle vertices to attribute location 0 (set up when the program was linked) and draw them
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, TriVertices);
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLES, 0, 3);
}

// …and at the end of main(), once everything is set up:
if (!Init(p_state))
return 0;
esRegisterDrawFunc(p_state, Draw);
// now go do the graphic loop
esMainLoop(p_state);
}
Sorry for making you type all that, but you'll thank me one day. You will probably make a few errors, and it won't work the first time. Even if it seems to compile and run but doesn't do what you expect, it will have errors. That's normal; you need to develop some practice in finding mistakes, usually small ones, such as a missing line or a mistyped symbol. But whatever you do, do not cling to the dogma that you typed it in exactly…because if it does not work, you didn't! Be cool with that. Learning to accept that you will make errors, often, oh so often, and taking responsibility for those errors so you can find and fix them, is vitally important to being a responsible programmer.
There’s not really a lot to this code, but it is confusing that you have to do so much to
do so little . At this point, I’m not going to explain everything in here; the comments can
do that for you . But this will serve the purpose of making sure you can compile and run
an OpenGLES 2 .0 project .
Compile and run, and, all being well, lo and behold, we have a triangle .
Now all this is fine and dandy and you can take some pride in getting this up and running
if you typed it in yourself. But I find most entry-level programmers who are trying to write games come across this or similar small start-up programs, and then realize there are a few essential things every game program needs:
◾ An initialization system.
◾ A main loop.
◾ A processing system, which will/can include user input or counters and will later include enemies/AI.
◾ An exit.
These four things are vital to almost any game program; the concept of a main loop is especially important, and though there are variations on how it's done, almost all games have these. But it's time now to make something cool happen.
To 1920 and 1080, and I'd get it filling up the screen, but if I did I'd be making a big mistake, which some, but not all, of you reading will have issues with. Let me explain:
On most HDMI-displayed targets the physical pixel size of our screen is fixed, that’s
probably going to be 1920 × 1080 but notice I said probably . We really cannot always
assume that our potential end user has the same screen setup as us . There are some who
are using 320 × 200 pixel LCD panels, some on 800 × 600 panels, some on 720p HDMI,
and so on .
Common sense allows us to discount anything that is way too small to use, or way
too big for our limited target to handle, we can put minimum and maximum limits on
This simple little routine does exactly what it says it does: it gets the size of the current display and stores the result in the variables width and height.
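On the Raspberry Pi that routine is built around the graphics_get_display_size call from the Broadcom host library; a minimal sketch of using it (the variable names here are just examples) looks like this:

uint32_t width, height;
// display 0 is the default LCD/HDMI display; the call fills in the physical size in pixels
int32_t success = graphics_get_display_size(0, &width, &height);
if (success < 0) printf("Could not get the display size\n");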
After that we can then use the values in the init_ogl routine to create a full-size screen.
init_ogl(p_state, width, height);
Now that little trick will, for now, on a Raspberry Pi,* allow our projects to run on pretty much any screen size and be visible, regardless of the actual pixel resolution we set.
Let’s get on with making a project that puts this to use .
2D
We are going to start our game-programming journey with some 2D games; I know some of you want to leap right into your grand, photo-quality, Hi-Def, 3D, massively multiuser online, cloud-processing, realistic-AI-with-real-lasers game. But trust me, getting the basics right is very important and also a load of fun!
Many if not all the concepts used in 2D games are directly transferred to 3D games
and allow you the chance to properly visualize what you are doing, and measure it against
what you were expecting .
I often try to explain to my students with my tongue loosely in my cheek, how Space
Invaders and Halo are basically the same game . You control a character; you can move around
and shoot things, while at the same time trying to avoid being shot by those same things .
* Other systems use slightly different means to determine the screen size; this is noted on the support site .
They are essentially the same game, only the visualization, level of complexity, and
amounts of data being shoved onto our screen are variable .
It’s quite possible to break any shooting style game down to the basics of moving
around, shooting while avoiding being shot . If we simply consider that as being what we
have to code, we can add the complexity and additional eye candy as we go .
Of course, there are many other types of 2D game, Puzzle games, platform games,
and racing games, the list is probably endless, and trying to cover them all, even in a book
as big as this, is going to extremes, but you will be able to employ the concepts explained
here in any kind of game, and more importantly you’ll be able to expand and develop the
concepts to suit any new game genre you are lucky enough to think of .
So now we’ve reached the point we need to load something from the computer’s stor-
age system, which ideally we have made or downloaded from our PC . Since we are doing
remote building, we are going to take advantage of the transfer of data that VisualGDB
handles for us more or less automatically .
Anything that is not actually compiled but is instead used by your program is called
a resource or an Asset, the terms are often interchangeable, but I prefer Assets when
talking about graphics, or sound data . I use the term Resources to describe other files
that may provide information to or be used in our project, such as a script or list file of
some kind . I may be wrong with this terminology, but no CS graduate has ever told me
different .
But assets are what we need to get access to, at this point we need to load some simple
images, and then find a way to display them .
Loading files is a pretty simple thing; the C/C++ STL gives us loading and saving abilities. But graphics are not just files, they are data files, data which are actually encoded in a particular format, and there are many formats.
We simply don't have the time (our 30 days of free VisualGDB will fade away in no time) to write a decoding system to turn graphic files into simple pixel data that we can actually draw on screen, so we need some way to do that.
Lucky for us the world is full of programming geeks, who like to write graphic sys-
tems that we can use freely .
As I write this after a short session of getting things set up, I have come up against a
few issues . My intention was to use a simple standard library called Simple OpenGL Image
Library (SOIL), because it is free, available for Linux-based systems, and quite easy to get
hold of . But as I tried to install it I had nothing but hassle, because of my Raspberry Pi
being unable to install the libs . Now it may just be that the mirror site is down today, or
that since I’m not that up-to-date with Linux, I was doing something wrong, but I didn’t
want to spend days waiting for it, so I decided to try another approach .
SOIL itself is a wrapper for a collection of image-manipulation programs, including an image loader called stb_image, which is a widely supported and reasonably easy to use set of routines to load various common formats. It's actually all we really want from SOIL, and it works just fine for Linux…so let's work just with that. It's simple enough to get hold of. If you have a Git client on your PC it's available from GitHub; if not, go to…
https://github.com/nothings/stb
And download the Zip file (button on the top right).
This will give you a whole load of files contained in the zip, but for now, we only want
stb_image .h, but you will probably use a few of the others later as we get more up to speed
with the process, and maybe I’ll get SOIL to actually install properly one day!
Let’s set about adding stb_image to our project .
I won’t ask you to type in another long initialization system, we can use the Hello
Triangle for now; we’re just experimenting with a file loader .
Look inside the stb zip for stb_image.h and copy it into your project's source directory; in this case, it's just going to be in the root directory of our project. If you are unsure where your root directory is, just right click on your HelloTriangle.cpp file and select Open Containing Folder, which will open it up for you, allowing you to copy things into it with ease. Go ahead and copy stb_image.h from the zip into your project directory.
#pragma once
class MyFiles
{
public:
MyFiles();
~MyFiles();
int height;
int width;
int comp;
char* Load(char const *filename,int*,int*);
};
That’s, it’s pretty easy, we’re creating a small wrapper class that will allow us to load a
graphic file and convert it into normal raw memory .
The actual code for this now goes into the MyFiles .cpp file and looks like this:
#include "MyFiles.h"
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
MyFiles::MyFiles()
{}
MyFiles::~MyFiles()
{}
char* MyFiles::Load(char const *filename,int* width, int* height)
{
unsigned char *data = stbi_load(filename, width, height, &comp, 4);
// we are always going to ask for 4 components for RGBA
return (char*) data;
}
Now we have two very useful files…which will allow any program that includes them to be
able to load some standard image* formats .
Try compiling this, it won’t do much but there should be something to take note of!
Did you notice that adding that file means that our build process has become a lot slower?
Try compiling again? It was very fast that time .
The first slow build is because this stb_image.h header file is actually quite large and includes many other files. Also, because it's a header file, if we included it in our main project it's going to get compiled EVERY time we alter the file that includes the header…that's a bit of a pain. Because we put the stb_image.h include in MyFiles.cpp, it only gets recompiled when MyFiles.cpp itself changes.
* Standard images usually include png, bmp, tga, and jpg, but do be aware that there are some variations in the
formats and you need to be sure that stb_image can handle your image .
If we don’t keep our folders and files under control, it can get quite messy quite quickly,
it’s very tempting to put all our files in the same root folder and just use filters . I’ve seen
this done often but it’s not a good practice, even though we’re doing it now we are going to
change it later, it will all depend mostly on where we decide to put files when we add them
to the project . But folders containing assets are best located on the root directory .
Root
    Assets
        Pictures
        Scripts
    Source
    Header
We need some data; I’ve added a few nice pics from my photo album in the assets listed
on the support site . But for this first attempt at doing graphics, let’s use the most famous
public domain graphic test image there is . Lenna .png .
Our task now is to load this in, convert it to a texture, and then display it. We have the load part, though we must first make sure that the image is located in our source directory for now. Later we'll tidy up the directories, and more importantly we have to ensure that we add *.png to the types of files that are transferred to our target.
Note that I have purposefully put in a small test to report a failed load, because at the moment we're not actually able to display this, so we'd never know if it loaded. Error traps like this are invaluable to make sure you don't write buggy code while blind, and then assume later that it's the draw routines that have failed. Try mistyping Lenna, then compile and run to make sure you get a notice telling you that it failed.
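For reference, the load-and-check code looks something like this sketch, using the MyFiles class we just wrote (OurRawData is the name the later snippets use for the returned pixel data):

MyFiles FileHandler;
int width, height;
// load the image into raw RGBA pixel data
char* OurRawData = FileHandler.Load((char*)"Lenna.png", &width, &height);
if (OurRawData == NULL) printf("We failed to load\n");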
Loading, is done, we’re also pretty sure that converting to texture code is done, so
time to worry about drawing .
Remember I told you OpenGLES 2.0 used Shaders? Well, there's a first clue as to what will need changing: our current triangle shader does very little, it simply draws a red pixel; we now need a shader that will take a pixel from a texture and draw it.
Look at the shader code in Init again . Check out this line
"{gl_FragColor=vec4 (1.0,0.0,0.0,1.0);}\n";
This is the main line that needs to be changed so that it can do more than just write a red pixel .
You may have noticed the shader code itself, references other variables: a_position,
a_texCoord, v_texcoord, and so on .
We never actually did anything to set up those variables, because for the most part
they were ignored as we are just dumping a hard pixel to screen .
They need to come into play now, and be part of our program somewhere as these
provide a means to access particular pixels for any modifications we need . I don’t want to
get into this too much as I am going to assume that you’re still finding your feet with coding,
so explaining how this shader works could be confusing, so for the moment all I’m going to
do is set it . Explanations will come later when we really need them .
Here’s what your Shaders should now look like . The gl_FragColour is now going to
take a value from a texture coordinate .
GLbyte vShaderStr[] =
"attribute vec4 a_position;\n"
"attribute vec2 a_texCoord;\n"
"varying vec2 v_texCoord;\n"
"void main()\n"
"{gl_Position=a_position;\n"
" v_texCoord = a_texCoord;}\n";
GLbyte fShaderStr[] =
"precision mediump float;\n"
"varying vec2 v_texCoord;\n"
"uniform sampler2D s_texture;\n"
"void main()\n"
"{\n"
"gl_FragColor = texture2D( s_texture, v_texCoord );\n"
"}\n";
// Attribute locations
GLint positionLoc;
GLint texCoordLoc;
// Sampler location
GLint samplerLoc;
// Texture handle
GLuint textureId;
} UserData;
Our structure now has values we can use to store the shader's attribute and sampler locations, so let's add some code to our Init routine to do that. At the end of Init, you can see we are loading a programObject value (which is actually a duplication of a redundant programObject variable you can remove)…then continue with the next three lines, which save our location information so that we can set up the shader.
// Store the program object
p_state->user_data->programObject = programObject;
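The three location lookups themselves will look something like this, assuming the attribute and uniform names from the shaders we just wrote:

// ask the linked program where our shader variables ended up
p_state->user_data->positionLoc = glGetAttribLocation(programObject, "a_position");
p_state->user_data->texCoordLoc = glGetAttribLocation(programObject, "a_texCoord");
p_state->user_data->samplerLoc = glGetUniformLocation(programObject, "s_texture");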
Now that they exist they can be used, our draw routine needs quite a bit of modification,
so rather than talk through it, replace it all with this .
/*****************************************
Draw a Rectangle with texture this is a hard coded
draw which is only good for the Rectangle
******************************************/
GLfloat RectVertices[] = {
-0.5f, 0.5f, 0.0f,   // Position 0
GLushort indices[] = { 0, 1, 2, 0, 2, 3 };
glEnableVertexAttribArray(p_state->user_data->positionLoc);
glEnableVertexAttribArray(p_state->user_data->texCoordLoc);
// Bind the texture
glActiveTexture ( GL_TEXTURE0 );
glBindTexture(GL_TEXTURE_2D, p_state->user_data->textureId);
//actually draw the rect as 2 sets of 3 vertices (2 tris make a rect)
glDrawElements ( GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices );
It’s not too different, but you can see now that there’s a bit more to it, we’re setting up more
vertices, because a rectangle has six being made from two triangles, and it’s also having
to deal with the textures points and setting up the Attribute Pointers so that the data in
our p_state is used by the shader .
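The attribute pointer calls aren't listed above, but they follow the standard OpenGLES 2.0 pattern; a sketch, assuming RectVertices interleaves three position floats and two texture coordinate floats per vertex, would be:

// positions: 3 floats per vertex, with a stride of 5 floats across the interleaved array
glVertexAttribPointer(p_state->user_data->positionLoc, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), RectVertices);
// texture coordinates: 2 floats per vertex, starting just after the first position
glVertexAttribPointer(p_state->user_data->texCoordLoc, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), &RectVertices[3]);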
Compile and run, and you will still get a blank rectangle window on screen . There’s
one last important part to add to this .
Go back to the main loop, which in this case, is the while (TRUE) loop . After we
check if our data are valid with this;
if (OurRawData == NULL) printf("We failed to load\n");
This final bit of the puzzle will tell our p_state structure, that it has to use the texture ID
supplied by the CreateTexture2D routine .
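The missing line simply stores the handle that texture creation gives back; assuming a CreateTexture2D helper that takes the image size and the raw pixel data (the exact parameters depend on how you wrote yours), it will be something like:

// hand the GL texture handle to the data the shader set-up uses
p_state->user_data->textureId = CreateTexture2D(width, height, OurRawData);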
Compile and run, and we should see this lovely screen .
Well done, we can now display images . That’s a very important step . Of course, this code
is just horrible; it’s a mashup of HelloTriangle and a few bits of things we added . It’s very
rigid and is only really good for one rectangle on screen at a time . It’s been hacked together
to do one particular thing . But it works; now we have to make it into something more
usable and a lot tidier . It also helps to demonstrate that the shader code we need to use is a
little confusing; that can sometimes be the second hurdle after the triangle disconnection
that holds new programmers up and scares them off .
But not to worry, we’re going to try to get past that confusion, trust me for a while so
that I can set some code up that will allow us to largely ignore the hardware and get some
actual game coding done .
glDeleteTextures(1, &p_state->user_data->textureId);
So how do we recycle the memory? Our load routine in the stb_image.h file uses a method of allocation called malloc(). This returns an address, called a pointer, which points to where the data are; our load routine then filled that up with our image. To free up that memory, the corresponding function is called free().
By passing the pointer to the data we want to release into the free() function, the memory goes back into general use. So just after you delete the texture, add this line:
free(OurRawData);
Now, run our program, in fact run it, go make a nice hot beverage, mix some pizza dough,
let it rise, knock it back, let it rise again…and you get the idea . This program can now be
left on its own forever, because memory will now be allocated and deallocated correctly .
…Adding even more kinds of buffers makes my eyes bleed when I see them. But there is method in this madness!
This is one of those deliberately bad (very bad) design choices I mentioned at the
beginning of this book, so it’s a fair question to ask why are we doing this?
It’s to do with the history and the fact that a lot of old online tutorials you may find,
used systems like this, you may come across them and will want to implement them on
your chosen target . In addition, I want to remove the use of the GPU from your mind
for the moment so that we can focus on developing some coding concepts and not worry
too much about how to display things . A straight display to screen relationship is simple
to visualize and code for . An abstract, point in space, drawing machine, that works in a
virtual space if you feed it just with the right data, is a slightly scarier concept for you to
digest right at this moment .
// subtractive blending
inline Pixel SubBlend(Pixel a_Color1, Pixel a_Color2)
{
int red = (a_Color1 & REDMASK) - (a_Color2 & REDMASK);
int green = (a_Color1 & GREENMASK) - (a_Color2 & GREENMASK);
int blue = (a_Color1 & BLUEMASK) - (a_Color2 & BLUEMASK);
if (red < 0) red = 0;
if (green < 0) green = 0;
if (blue < 0) blue = 0;
return (Pixel)(red + green + blue);
}
class Surface
{
public:
// constructor / destructor
Surface(int a_Width, int a_Height, Pixel* a_Buffer, int a_Pitch);
Surface(int a_Width, int a_Height);
Surface(char* a_File,MyFiles* FileHandler);
~Surface();
// member data access
Pixel* GetBuffer() { return m_Buffer; }
void SetBuffer(Pixel* a_Buffer) { m_Buffer = a_Buffer; }
int GetWidth() { return m_Width; }
int GetHeight() { return m_Height; }
You can see that the class actually only contains four variables, m_Buffer being the most important because it contains our data; width and height are obvious; Pitch, however, will become apparent later.
As C++ variables in a class are known as members, the prefix m_ is commonly used to indicate that the variable is a member of a class.
The Pixel stuff at the top is there to allow us to grab the data in a recognizable pixel format, which will be much easier later. It's using the inline directive because this isn't something we want the overhead of a function call for; if it is used it needs to be pretty quick, and inlining is the best way to do that.
Also notice this section here:
Surface(int a_Width, int a_Height, Pixel* a_Buffer, int a_Pitch);
Surface(int a_Width, int a_Height);
Surface(char* a_File, MyFiles* FileHandler);
We have three different declarations of constructors for our Surface Class, each of which takes different parameters, which identify them as specific overloads. This will allow us to create a surface from a filename, or from a width and a height, or from a width and height and the address of another buffer.
You’ll also notice that all the variable names start with a_; this is another common
way to indicate that a parameter is a variable that is given to the method . It’s not a hard and
fast rule, and indeed in the header you don’t even need to name the variables, you could
just as easily write declarations like this;
Surface(int, int, Pixel*, int);
Surface(int, int);
Surface(char*, MyFiles*);
The only thing the compiler actually wants to know in the method declaration is what
kind of values are the methods going to use . How many parameters, what type they are,
and the order are very important, because it defines which particular version of the con-
structor we are going to use when we call it .
Even though it’s valid, we don’t need to use logical variable names, but adding a vari-
able name, which is also descriptive, makes for easier to read code; for example, look at
the following:
You can see the problem! Nondescriptive variable names, in a header, might be valid code,
it will compile, but for the poor human trying to read it, it’s not much fun and gives little
away . Always try to use descriptive variable names, even in the header declarations .
Surface(int a_Width, int a_Height, Pixel* a_Buffer, int a_Pitch);
Surface(int a_Width, int a_Height);
Surface(char* a_File, MyFiles* FileHandler);
Much nicer to read, and easier to understand what we are passing, or trying to return .
We’ll start to write the constructors first as they are the most important .
It’s a rather nice feature of C++ that, when we define a class, we can add the concept
of methods we want to have in our class, but don’t actually have to write them yet . It lets us
think about it first . That’s basically all a declaration is, a concept of what we plan to write .
We only actually have to write it if we try to call it .
The accessor functions are usually nice short Get and Set routines whose job is to access and perhaps consistently modify a value before returning it. Since these are nearly always very small, they are often best done in the header. The Get and Set functions are basically commands that will let us keep our variables private, and they allow flexibility later if we perhaps have to modify our values in some way before we return them to a calling routine. If they need more than a couple of commands to return their values, then move them into the .cpp files, because they have become proper code methods rather than simple accessors.
Private members are variables that are usually only accessible by the class methods
themselves . I’m not a massive fan of private members as you will discover, as an old school
assembler coder I like to have all variables open and available to me and not worry about
additional calls using up CPU time . But they are a C++ convention and if you are working
with code that’s likely to evolve and change or you work with other programmers, keeping
variables safe from interaction with other classes can have great benefits, and accessors
do allow you to make controlled consistent alterations to data you want to give to other
classes .
Ok, let’s look at the constructors, which will let us create these lovely little pixel buf-
fers! Of course, you’ve downloaded it, put it in your source folder, and added the existing
file, but here’s the code anyway, so I can explain it .
Surface::Surface(int a_Width, int a_Height, Pixel* a_Buffer, int a_Pitch)
: m_Width(a_Width)
, m_Height(a_Height)
, m_Buffer(a_Buffer)
, m_Pitch(a_Pitch)
{}
Our first one assumes a pixel buffer has been set up and simply passes it to our m_Buffer
pointer along with size and pitch . So our class is complete .
The second is a meatier one: it actually makes some space in memory that's big enough to accommodate our buffer. I am using the malloc instruction here for the moment, which is a standard way to allocate some memory, but it does not guarantee that the start of that memory is on what we call an alignment, usually, for 32-bit machines, at every fourth byte, which is far easier for the CPU to access when grabbing memory. For now, we'll work with it and check later if it is aligned and see what we need to do to fix it; there are workarounds.
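For reference, a minimal sketch of that second constructor, assuming Pixel is a 32-bit type and that the pitch is simply the width, looks like this:

Surface::Surface(int a_Width, int a_Height)
    : m_Width(a_Width)
    , m_Height(a_Height)
    , m_Pitch(a_Width)
{
    // malloc gives us enough raw bytes for width * height pixels, but makes no alignment promises
    m_Buffer = (Pixel*)malloc(a_Width * a_Height * sizeof(Pixel));
}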
The third actually loads up an image, in the same way as our picture display program, and uses its data in memory as our m_Buffer. Notice we are passing it a file handler to use, because I don't really like to create two of them; any class that needs to use a file handler should be given the address of the one we created at the start. Later though we'll close the one in the Main argument and explore other options.
You may notice I also like to inform my user of the files being loaded, or when they
failed to load . This is me being a bit overcareful with my errors, but it would be wiser to
use a function of my own, something like notify_user(char* msg); which can actually be
stopped from outputting when running in a release mode . But I’ve not written it yet, so
we’ll do it later when we refactor the code and we’re sure it’s all doing what it should do .
First, pretty obviously we need two buffers: one to work on and one to display .
Second, we need some way to keep track of which one we are drawing to .
Third, we need a means to do the swapping and put our texture on screen .
Part 1: This is the easy bit; let’s make two buffers that are big enough to hold our
screens data .
Pixel* Locations[2];
bool createFBtexture()
{
malloc and memset are those old throwback systems from the days of C. malloc allows us to allocate an area of memory of a specific size; it returns the start of that area as a pointer and we store it in our Locations array. memset lets us quickly set that buffer to a value; I don't really need to do that here, but when debugging it can be useful to have the memory all set to a simple-to-view value so we can see whether it has been cleared or written to.
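The body of createFBtexture isn't reproduced here; the allocation part of it might look something like this sketch, assuming SCRWIDTH and SCRHEIGHT hold the buffer size we settle on:

for (int i = 0; i < 2; i++)
{
    Locations[i] = (Pixel*)malloc(SCRWIDTH * SCRHEIGHT * sizeof(Pixel));
    if (Locations[i] == NULL) return false;                          // allocation failed
    memset(Locations[i], 0, SCRWIDTH * SCRHEIGHT * sizeof(Pixel));   // easy-to-spot cleared value for debugging
}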
Part 2: Let’s create a few simple variables and a Surface Class, which we are going to
work with as our screen surface . But whose m_buffer value will point to the buffer
we are currently drawing to .
Surface* m_Screen;
GLuint framebufferTexID[2];
Part 3: The messy bit, promise me you will never show this to a real OpenGL/ES
coder…ever . The death and rebirth of the textures to create the display .
Finally, let’s set the screen size to work on any size of screen . For Raspberry Pi, we have a
nice little function called
graphics_get_display_size(0,&scr_width, &scr_height);
which gives us the physical pixel size of the screen we have. Sadly, we don't have that on our other possible targets, but I've added a best-guess system to the downloaded versions that will work on most of them. We can use the display size to set up our EGL window, and then the screen will fill up. But…if we use the full HD 1080 size, it gives us a screen size of 1920 × 1080; that's rather large, and our textures are allocated internally at the closest power of two (POT), so it would actually create two 2048 × 2048 textures, our max size. Testing shows this to be a bit too much effort for our humble targets, so I've set the maximum buffer size to 1024 × 800, a decent-sized screen, which the display system will scale to fit on whatever size screen we are using, even those little 3.2″ LCDs. Internally, the texture generated is 1024 × 1024, so that means we're not wasting too much actual space.
So there you have it, a working double buffer system, and a Surface Class we draw to that effectively becomes a screen to see the results…
We’ve done rather a lot so far without blowing anything up, but we now have a viable
system, albeit clunky, that will let us write our first basic games .
Yes, we’re ready to make a game now . Save our project somewhere safe, this is going to
be our starting block for the next couple of projects .
Looks familiar?
Using the OS
I said right at the start that I didn’t want to add any third party libraries and to stay away
from the OS as much as possible, BUT this is one of the places where that is going to create
some issues for us .
Our projects need to have some kind of input, keys/mouse/joystick, to be usable. This is exactly the thing our OS is designed to do for us: handle I/O. We've already used it for loading files, though C/C++ abstracted that away from us and we didn't need to see the underlying code that actually switched on the drives, moved the heads, and pulled in the data. Now we have to use the OS for our key handling, but it's not quite as simple as entering and printing text was, which was also our higher-level C++ code talking to the OS, which controls the console.
We only really want to detect key/button presses and, in the case of the mouse, position changes. The STL does not provide for that; it really only provides for character-based concepts; it supplies the character A, not the fact that the A key was pressed. Testing individual keys or device positions is problematic.
Normally, we'd probably have a console's SDK or, on a PC, another library in place like SDL/SDL2 to take care of things like that for us. We have no SDK, and SDL/SDL2 is rather
Now I really would like you to type this in again, because the practice you get typing in
code is far more valuable than just blindly adding a file . But if you really don’t want to type
this in, the input .h/cpp files are on the download site . But please . Type it in!
If you are adding the files, then copy them into your folders and add an existing file to
include them; otherwise, if you are taking the sensible approach and practicing your code
entry skills, go back to the Visual Studio Solution explorer and right click on the filter for
Header Files, and add a new file called Input .h, and start typing this in .
class Input
{
#define TRUE 1
#define FALSE 0
public:
typedef struct // A very simple structure to hold the mouse info
{
int PositionX; // contains the relative position from the start point (take care to not confuse it with the GUI mouse position)
int PositionY;
float RelY;
float RelX;
unsigned char LeftButton; // TRUE when pressed FALSE otherwise
unsigned char MidButton;
unsigned char RightButton;
} MouseData;
char Keys[512]; // the maximum possible number of keys is a little less than this, but best to be careful in case of future expansion
MouseData TheMouse;
pthread_t threadKeyboard;
pthread_t threadMouse; // handles for the threads
int iterations;
bool KeyPressed;
/**************************************************************************
This thread processes the keys, and stores TRUE/FALSE values in the Keys[]
array.
**************************************************************************/
}; // end of class
((Input*)arg)->TheMouse.RelX = mousex;
((Input*)arg)->TheMouse.RelY = -mousey;
((Input*)arg)->TheMouse.PositionX += (mousex / 1.0f); // 1.0 can be replaced by a scale factor (entirely optional)
bool Input::SimpleTest()
{
return KeyPressed ;
}
DIR *dir;
struct dirent *ent;
Do take notice of the small comment in the code regarding Linux machines, you may have
to manually change permissions to access the key event handlers, which are basically files
in a particular directory .
Regardless of how you entered it, it's time to make use of it. To use this we just need to add a #include "Input.h" to our list of header files.
In our main program loop, we will need to have an instance of an Input Class like this:
Input Input;
And thereafter in our initialization system we can switch on keyboard and mouse scans
by using
Input.Init();
You can find a complete list of all the KEY_XXX codes in the linux/input .h header, we’ll
talk about the mouse later, for now we don’t need to use it, but we now have some very
useful access to our keys!
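As a quick example, once Init has been called you can poll a key anywhere in your loop; assuming the TestKey method we use later and a done flag controlling the main loop, quitting on Escape looks like this:

if (Input.TestKey(KEY_ESC)) done = true;   // KEY_ESC is one of the codes from linux/input.h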
There is a downside to this code though! When our project is running, our background system, Linux, is running its own version of this key scan, so its keys are still working. That means that when we press keys we provide input to both our terminal/Graphical User Interface (GUI) and our project, which is not desirable. We need to stop that happening. But for now, if you run in GUI mode, the GUI will ignore all the key inputs. The GUI takes up some memory though, so it's important we come back to this issue and find a way to either switch off the key input or ensure key input is transferred only to our project.
We’ll do this later, for two reasons;
2. It’s actually a bit of a bug . VisualGDB uses a bit of a hacky means to send info back
to our development PC, and that basically stops any normal attempts we have to
redirect the native OS’s outputs…so for now, we need to live with it, it will only be
So taking stock: now we can load graphics, display screen graphics, read keys, and even have a pretty good handle on how our four main game states work. But we do need a few more things in our arsenal before we can make an actual interactive game. We need the ability to create and display some form of graphic object. In this case, sprites.
Sprites is a pretty old-fashioned name for graphical objects that are displayed on screen and move over/in/out of the screen area under programmed control. So far we have screens; now we need things that move within the screens.
This is where it starts to get complex, but fun…Let’s start coding up some means to
display sprites and then use them to make our first game .
Start as We Mean to Go on
Yup, you guessed it, we have to do some setting up before we can actually write the game, which means a new tidy project and, perhaps more importantly, a simple way to keep track of the initializing and setting up of our OpenGLES 2.0 graphic setups, as well as things that will be used a lot and can be expanded on as we go forward.
I'm not going to be cruel again and make you type in everything…there are a couple of setup files you need to have, which probably could use some explaining, but that would stop us getting to the good stuff.
So on the Support site, you’ll find a project called InvaderStart, download it and fire
it up .
It clearly doesn’t do much yet but the basic structure of our 2D games is there and
we’ll improve on it as we go .
For now, let’s ignore the OpenGLES code files, they are tech files doing what we need
to do to get our game image on screen . We’ll make more sense of that later .
The most important thing is that our project fires up into an application entry point,
which is used to set everything up . On this project it’s called Invaders, we’ll rename it in
different projects and expand it a bit but really it has one purpose, to get our game up and
running .
Let’s look at the main function, which is where everything starts:
createFBtexture();
p_state->user_data = &user_data;
You can see it starts off making a small structure for some program data that will be used to draw. It then does a little bit of machine-specific magic to get the width and height of our screen stored in variables. Note, graphics_get_display_size is a Raspberry Pi function; I'll find something equivalent for non-Raspberry machines.
Then two cool things happen . We initialize our Input and File handlers . And notice
we do them a different way? Input is declared as an instance at run time with this simple
line near the top of the file
Input TheInput;
So it’s created and constructed when the project fires up, so that when we use TheInput, we
are actually always talking about the instance of the class we created there .
FileHandler though is declared like this;
MyFiles* FileHandler;
Unlike the Input instance, which is created and instantiated at the location where it is declared, this only declares a pointer; it might seem like it's the same thing, but there are subtle differences.
We have to create an instance of MyFiles and store it in FileHandler before we can use it. That's done with
FileHandler = new MyFiles();
This is referred to as a Dynamic allocation, in other words, we create it when we need it,
and also we can remove it when we don’t .
Addressing variables or functions inside the classes will also depend on how they are
created . We’ll talk more on this as it comes up .
Compile and run your project, it won’t do very much but you should get something
simple on screen .
So we now have a very straightforward system that initializes our machine, jumps to a Game Class, and goes about its business: initializing on the first pass, creating our instances, and then returning to a main loop, which checks for an Esc key press to allow a clean exit. This is a good framework to build on.
◾ And we can be hit by our bullets—we blow up, and we lose a life .
◾ The invaders move left or right then move down at the edge .
It’s important to think about what we are going to display, this little list and a rough screen
drawing give us a lot of information . We need the game to able to draw different images,
respond to key presses, move objects semi-intelligently, create new objects when needed,
bullets and missiles, and somehow do it all at the same time . Take some time before ever
putting fingers to keyboard, to think about what kind of code we need .
Seems simple enough, Ok, so we’ve described the main features, these are things we
need to code and the order we do seems fairly simple, we’ll start by putting a shooter on
screen .
On the support site, you will find two projects InvadersStart and InvadersFin . I really
want you to use the InvadersStart, which is the basic framework and assets we are going to
use, I’ll put down the code in here that you need to enter in to get it all working, and let you
try out things as you go . The InvadersFin project is just so you can see the finished work,
but only use it if you really mess up . You should get as much practice as you can entering
code and trying out things before moving the next steps .
Our first task is to put a player on screen, let’s call him Bob, cos I like the name Bob . .
BOB, it has a nice ring to it . But before we do that we need to think a bit more about what
other kinds of things are going on screen .
Now that isn’t too bad, but that type variable in the members list is a bit clunky, every
time we do the update, we have to use a switch system or condition test, to get to the bit
of code we want that relates to the type of thing we are updating, and we are now going to
fill our Object Class with update code for all the different types of thing we have, even if
the instance of thing does not need them . We really only want the type to be used for ID
purposes, not for decisions on which update routine to use .
There has to be a better way? Of course there is, it's called inheritance! Inheritance is a key part of C++'s makeup, which allows it to do a range of nice things. If we consider that Objects is a Base Class that contains all the prime concepts of an object, such as its position, its image, and an update routine, then that's all we need. But if we want to create different types of objects, we should be able to create a new class that has access to all the base values.
Think of it like this: you, me, and everyone we know are humans (I hope?). We all derive from a basic concept of what a human is: bipedal, four-limbed mammals with big brains and stereo vision. But if we are all built from the same base model, why are we all different, and how do we even begin to explain males and females?
A male human is derived from the basic concept of a human template, but has a few
extra bits . A female human is also derived from the basic concept of a human template,
and has a few different extra bits .
The human genome, defines a whole range of different variables and values, they alter our
looks, build, skin color, eye color, hair, and so on . The vast range of these variations are com-
mon variables to both and our genes switch different things off and on . But some genes are only
found in our Y Chromosomes, meaning only Males can have them or set things to on/off in our
base variables . Making Males and Females quite different (as if we didn’t know that already) .
But, both derive from a human template, so both are human . If we wrote that in code
we could use this
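The listing that goes with this idea would look something like the following sketch; the member details are left out, it's only the inheritance that matters here:

class Human
{
public:
    Human();
    // all the shared genome-style variables: build, eye color, hair, and so on
};

class Male : public Human
{
public:
    Male();     // the few extra bits
};

class Female : public Human
{
public:
    Female();   // the few different extra bits
};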
What this basically means is that we have two types of Human, each has their own con-
structor and can be identified as being of a class Female, or Male . They are unique, but
share the fact they are Human, and their differences can be listed in the class definitions,
or used to set up the values contained in variables in the base human class .
So how does this amazingly oversimplistic explanation of the battle of the sexes help us draw and update things? We have a base class, called Objects; it's going to grow a little as we go, but basically even now it tells us all we need to know about things that are going to be drawn on screen. But the behavior is going to be different. Say, however, we write a Bullet Class which inherits from Objects; we can put the behavior of our bullet in its very own class definition and not clutter up the Objects Class.
The definition of our bullets can then look like this .
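Something along these lines, mirroring the Shooter class we'll write in a moment (the exact members are up to you):

#pragma once
#include "MyFiles.h"
#include "surface.h"
#include "Objects.h"
#include "Input.h"

class MyBullet : public Objects
{
public:
    MyBullet();
    ~MyBullet();
    bool Update(Surface* a_Screen, Input* a_Input);   // bullet-only behavior lives here
};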
So now we have a Bullet Class (actually I called it MyBullet), which derives from Objects, so it has all the Objects traits and variables. It even still has a variable called type, though we may not need it any more.
What is different, though, is now neatly contained in its own class definition; the code that is unique to it and its update function are now cleanly held in a simple class and readable file.
Now let’s get back to Bob, Bob, is a shooter, so we need to create a class for him, so to
your project add a Shooter .h file, which contains this class definition code
#pragma once
#include "MyFiles.h"
#include "surface.h"
#include "Objects.h"
#include "Input.h"
class Shooter :public Objects
{
public:
Shooter();
Shooter(char* f, MyFiles* fh);
~Shooter();
bool Update(Surface*,Input*);
};
We could also add the destructor here, but if we don't enter anything it will simply use the Base Class destructor, which for our likely needs is fine. It's up to you; you can add a destructor of your own if you want.
The Constructor is only setting up a few variables. MarkForRemoval will become apparent later.
Now the meat of this class is the update routine, which is actually going to move old
Bob around, so add your movement routines now
bool Shooter::Update(Surface* a_Screen, Input* a_Input)
{
bool fire = false;
if (a_Input->TestKey(KEY_LEFT))
{
Xpos--;
if (Xpos < 0) Xpos = 0;
}
if (a_Input->TestKey(KEY_RIGHT))
{
Xpos++;
if (Xpos > SCRWIDTH - Image->GetWidth()) Xpos = SCRWIDTH - Image->GetWidth();
}
Image->CopyAlphaPlot(a_Screen, (int)Xpos, (int)Ypos);
return fire;
}
Ok, so Bob is now happily moving left and right and being drawn . We’re done with him
for a few minutes .
Now we can see more clearly that MyBullet and Shooter have a lot in common. They both need coordinates to tell us where on the screen they are; they both have surfaces, which need to be copied onto the active screen buffer before they can be seen. These are now nicely contained in our Objects Class. The only real difference between Bob and our nameless bullet is that Bob responds to key controls, and bullets just fly up until they go offscreen.
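In the Objects Class that update is declared as a virtual method, which, assuming the same signature the derived classes use, looks like this:

virtual bool Update(Surface* s, Input* InputHandler);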
Now any class that derives from this, which has an Update function with the same return
value and argument list, will replace this call .
We could also do this;
virtual bool Update(Surface* s,Input* InputHandler) = 0;
That would indicate this function is a pure virtual, in other words the Base Class has no
update routine at all, and therefore any derived classes are forced to provide it . But for now
we’ll stick with an overridden Update to give us a bit of flexibility .
Objects* Galroth[55];
What this code does is create space for 55 pointers to Objects, and we can reference them
by using Array indexing, so Galroth1, who sadly will never live long enough to attend a
naming ceremony on his victorious return to his home planet, can simply be addressed as
Galroth[0]…ermm wait he’s Galroth1 not Galroth0 .
Well that’s to do with a little quirk of array-based indexing, we need to start with 0 as
our first index, so our index range for our 55 Galroths is Galroth[0]… . .Galroth[54], and
besides we really don’t care what he’s called . From now on he’s an index, and the cool thing
about indexes is we can use variables to get to them .
If we define a variable with a number, we can access the objects update function like
this:
int I = 25;
Galroth[I]->Update();
is the same as
Galroth[25]->Update();
which, if we had kept their individual names, would have been Galroth26->Update();
But to update or otherwise do anything with 55 individually named Galroths, we would have needed 55 individual calls to their proper names. That's not practical, so an array is best, because we can loop through it with ease.
Of course, so far all we have is just an array full of empty spaces or random rubbish; there are no actual values in there at the moment, so our Game initialize routine must now create our Galroths, line them all up, and put them on screen. A bit of work, but it does not now have to create and keep track of 55 individually named objects.
We can still keep Bob as Bob, because there’s only one and he’s a fairly important
object, but these invader scum don’t deserve our time to name them all .
We’ll talk about update in a few moments, but for now this is enough . You can work out
the default constructor and destructors, which are not going to do much, so let’s look at
the main constructor for this .
Aliens::Aliens(char* fName, MyFiles* fh)
{
Image = new Surface(fName, fh);
MarkForRemoval = false;
this->Type = Alien;
}
And the update function, which for now can be a simple call to the base class's update, to provide a draw to the screen.
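A sketch of that update, assuming the base Objects::Update does the drawing for us:

bool Aliens::Update(Surface* a_Screen, Input* a_Input)
{
    return Objects::Update(a_Screen, a_Input);   // just let the base class draw us for now
}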
That will now give us a simple basic ability to create and update an alien. Now go into the Game.cpp file, at the init, and let's set about creating 55 invaders. It might seem a simple task, but it is slightly complicated by having different graphics on different lines, so we'll create a small array of filenames for the graphics for each line first.
char* Names[] =
{
(char*)"../Assets/invaders8x8/InvaderA-1.png",
(char*)"../Assets/invaders8x8/InvaderB-1.png",
(char*)"../Assets/invaders8x8/InvaderB-1.png",
(char*)"../Assets/invaders8x8/InvaderC-1.png",
(char*)"../Assets/invaders8x8/InvaderC-1.png",
};
AlienCount = 0;
for (int i = 0; i < 5; i++)
{
for (int x = 0; x < 11; x++)
{
Aliens* T = new Aliens(Names[i],a_FileHandler);
T->Xpos = (x * 11) + 5;
T->Ypos = (i * 11) + 40;
AlienList[AlienCount] = T;
AlienCount++;
}
}
Now there was a perfect example of how to use an array when defining a set of objects at
compile time, I could have used a hard number 5 like this:
char* Names[5] =
But as I was defining them as I created them, the compiler was happy to count how many
strings I entered . So the number of entries was clearly known at compile time .
AlienList on the other hand, had to be defined in our Game .h file as an array of point-
ers to Alien instances with 55 entries . As there is no way for the compiler to know how
many things were going to be entered into it, compilers have no way to look through your
code and understand what you mean by a pair of nested loops creating 55 aliens . So you
have to explicitly tell them .
Aliens* AlienList[55];
As it’s in the Invaders .cpp file, only that file will know about it and we really want the
Game Class methods to see it, so we need to tell the Game Class, which wants to use that
vector that it actually exists, we can do that with this line of code, at the top after the
headers
extern std::vector<Objects*> MyObjects;
So now the Game Class can use the MyObjects vector . Its nasty, but it works .
If we now ensure that our init system in Game loads the aliens into the MyObjects vector, although we're not doing it yet, it means we can also load and process other objects in that list, exactly the same way we would with an Aliens* AlienList[55];
char* Names[] =
{
(char*)"../Assets/invaders8x8/InvaderA-1.png",
(char*)"../Assets/invaders8x8/InvaderB-1.png",
(char*)"../Assets/invaders8x8/InvaderB-1.png",
(char*)"../Assets/invaders8x8/InvaderC-1.png",
(char*)"../Assets/invaders8x8/InvaderC-1.png",
};
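The creation loop only changes in where the new aliens are stored; a sketch, assuming the MyObjects vector declared in Invaders.cpp and exposed with the extern line above:

AlienCount = 0;
for (int i = 0; i < 5; i++)
{
    for (int x = 0; x < 11; x++)
    {
        Aliens* T = new Aliens(Names[i], a_FileHandler);
        T->Xpos = (x * 11) + 5;
        T->Ypos = (i * 11) + 40;
        MyObjects.push_back(T);   // into the vector rather than the fixed array
        AlienCount++;
    }
}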
And the principle is now exactly the same . But I know that no matter how many other
objects I put on this list, or even perhaps remove from it, they will update .
Move Em Out!
Let’s get them moving, they move left and right and sometimes down…which means we
need a value for them… Since they all move in the same direction at the same time, we can
keep that direction as a game variable . Let’s use an enumeration to specify the directions
and give them easy to remember names . But one thing to remember is they all move as a
group, so we need to work out a way to do that, moving them individually isn’t going to
work .
Add this to the Game .h file before the class is described because this will be used in
other places not just in the class
enum Directions {Left, Right, Down };
And also in the class itself, let’s keep a variable, which is going to be one of those directions
Directions Direction;
Now back to our Game .cpp file . Our update routine only draws our guys at the moment,
and it would be nice if we could get them to move but we don’t have that . Yet!
So we need our Game loop to do the movements, even though it's against our golden rule to have too much code in there. But at the moment we don't seem to have a choice; if we want them all to move, we need to do that at the point of access. Let's make it simple…add this code to your Game loop just before the update loop.
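That first, naive version of the movement code isn't reproduced here; it would be something like this sketch, where each alien changes the shared Direction the instant it touches an edge:

for (int i = 0; i < 55; i++)
{
    switch (Direction)
    {
    case Left:
        MyObjects[i]->Xpos -= 1;
        if (MyObjects[i]->Xpos < 1) Direction = Right;   // changing it mid-loop is the flaw we're about to see
        break;
    case Right:
        MyObjects[i]->Xpos += 1;
        if (MyObjects[i]->Xpos > SCRWIDTH - MyObjects[i]->Image->GetWidth() - 1) Direction = Left;
        break;
    default:
        break;
    }
}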
Now at first glance this looks ok; we're moving all 55 aliens depending on the direction they travel, and when one gets to the edge, the direction changes…easy, huh?
Ok, run it and see…
Kinda cool to see the aliens bouncing left and right, but it's not right, is it? We've somehow got one guy sticking out and creating a gap.
Why?
Well there’s a simple flaw in our system, yes we are moving a group in the same direc-
tion, but we also need every single one to check if they are at an edge . That means that five
guys at different times in the update cycle are going to change the direction and the guys
in front of them, will get the message, the guys behind won’t, so there’s going to a gradual
change in their motion . Not good .
So rather than let them cause the change direction when they detect the edge, we must
let them signal that a direction change is needed, and only after all the aliens have moved
can we then set a change in direction .
The code for that now looks like this:
Directions ShallWeChange = Direction; // keep track of the current Direction
for (int i = 0; i < 55; i++)
{
float Xstep = 0;
float Ystep = 0;
switch (Direction)
{
case Left:
Xstep = -1;
if (MyObjects[i]->Xpos < 1) ShallWeChange = Right;
break;
case Right:
Xstep = 1;
if (MyObjects[i]->Xpos > SCRWIDTH - MyObjects[i]->Image->GetWidth() - 1)
ShallWeChange = Left;
break;
case Down:
Ystep = 1;
break;
default:
printf("Huston, we have a problem");
break;
}
MyObjects[i]->Xpos += Xstep;
MyObjects[i]->Ypos += Ystep;
}
Direction = ShallWeChange; // only now, after every alien has moved, apply any change
Run that, and check out our cool new left <> right aliens .
Nice, but there’s another issue, it’s far too smooth, we don’t want our aliens to glide
like that, we want them to step, and soon to animate . So let’s make them move in slightly
bigger steps, and also add a timer variable to our Game .h file after direction .
int StepTime;
Every time we want to change those step values by hand, that's two sets and two checks, four places to edit, but by using a defined value we only have to change the define once. And there's no danger of us forgetting to change a value somewhere in the code.
Now go ahead play with the TIMEPERSTEP and SIZEOFSTEP values, until you get a
nice jerky movement in your aliens . I’m going to stick with 50 and 2 for now .
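If you want something to compare against, the defines and the gating idea can be as simple as this (a sketch; exactly where the check sits in your loop may differ):
#define TIMEPERSTEP 50
#define SIZEOFSTEP 2
// only step the aliens when enough update cycles have passed
StepTime++;
if (StepTime >= TIMEPERSTEP)
{
	StepTime = 0;
	// ...do the left/right movement here, moving by SIZEOFSTEP instead of 1...
}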
All pretty neat so far, we’ve got nice jerky moving aliens, but it’s still not right, we
need to go down . And Down is a direction all on its own, we also need to change direction
after we’ve gone down… So a bit more logic is needed, we want to go down, then change
to a new direction .
Our Directions ShallWeChange variable is a local variable, which means it’s
going to be lost once this routine is finished, so we need a more permanent variable
Directions SavedDirectionToChangeTo;
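The simple Direction = ShallWeChange; we added earlier now grows into something like this after the loop (a sketch; StepsDown is an assumed extra counter you would add to Game.h, and dropping for 4 steps is just a guess to play with):
// if any alien flagged an edge, remember where to go next and drop down first
if (ShallWeChange != Direction)
{
	SavedDirectionToChangeTo = ShallWeChange;
	Direction = Down;
	StepsDown = 0;
}
else if (Direction == Down)
{
	StepsDown++;
	if (StepsDown >= 4) Direction = SavedDirectionToChangeTo; // done dropping, carry on
}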
So there we have it, aliens moving left to right and dropping as they go . We have our bad-
dies . This is an example of the game loop doing the logic for our enemies, which is not
very Object-Oriented Programming (OOP), and also not very flexible . Though for these
particular very basic baddies we can live with it for now .
But do you see how cluttered our main loop and Game.cpp file has become… It's really only supposed to process our baddies and check if we're done with our slaughter of the invading hordes, but now it's handling the main logic of our aliens. We'll look at this later; for now, let's pat ourselves on the back.
So we've got baddies, we've got Bob, the last savior of the human race, we're almost ready to start killing baddies. But there is something missing before we start on the shooting and the killing and the maiming and the ewuugh. We want to put a bit of animation in here….
Animation 101
Animation at this level really is nothing more than changing our displayed image every
so often so that we can create some semblance of motion . It’s very much the same concept
as flicker books; as you flick through the book, each image is seen by the eye, and for a
moment that image is retained, so that when you see another image, slightly different it
appears to be a transition .
Cartoons have relied on this concept since the dawn of film. And computers are pretty much like a bank of Disney animators at a giant desk drawing 50 or 60 frames of screens
It’s not the worst warning ever, it just means that because we are using char* to track our
file names, it’s asking us to use the more up-to-date C++ string type . So we can do that…
or…we can tell it, no, I really want to use chars please…and that’s done by explicitly cast-
ing to char* .
Adding a (char*) cast to the start of the strings ensures that our compiler will know that these are char*; you are telling the compiler, "it's what I expect them to be compiled as, now stop complaining and do as I ask" (use an appropriate power-crazed internal voice for that statement). It might seem a bit of a chore but it's very good practice to be clear to the compiler!
The key point you must keep in mind: don't let warnings pile up. They are there for a reason, and while most of the time you can ignore what they are telling you, sometimes, often in fact, there are good reasons for the compiler to warn you that you might be making a mistake, and those cases can be obscured by the thousands of simple situations where being clear and explicit costs you nothing. Treat warnings as errors, and make sure your code tells your compiler exactly what you want it to do at all times.
So now our Aliens Class is the type of object we want to create, we can make a few
simple changes to our Game::Init so it now looks like this . Much tidier…We moved all the
main init code into the Aliens constructor, which you’ll find in your folder, but not added
to the project .
Add a #include "Aliens.h" to your Game.h file under the current list of #includes, then alter Game.cpp's init to the new version. Now you can add the new Aliens files into your project. They are very simple, so have a quick look at them. By adding the Aliens.cpp and Aliens.h code to your project and compiling, we will see our baddies are animating and moving… we're nearly there.
Did you notice though that in Game::Update() we didn’t do anything to the update
loop itself
for (int i = 0; i < MyObjects.size(); i++)
{
MyObjects[i]->Update(a_Screen,a_InputHandler);
}
It's still happily processing all the objects in the vector by calling their Update(Surface*, Input*) functions. Aliens are all Objects Class things, in the same way that Bob, our Shooter, is an Objects Class thing!
Now this still isn't ideal, and the shortcomings might become apparent soon, but for now we've got movement and animation and we're ready to do the next cool bit. Shooting!
So now the bool that Bob returns, which indicates a desire to Fire, true or false, is stored in another bool, and then tested. We only allow Bob one bullet at a time, so a check to see if a Bullet currently exists is needed, and if we do not currently have a bullet in place, we can now create one, set its values so it starts just at Bob's nose, and then leave it to do its thing. Now we have firing! There's no end to the damage we can do!
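As a rough sketch of that flow, using the names we've been using (the Bullets Class name, the start offsets, and where exactly Bob's update is called from are all assumptions; your project's details may differ):
// Bob's Update returns true when the fire button is pressed
bool WantsToFire = Bob->Update(a_Screen, a_InputHandler);
if (WantsToFire && Bullet == NULL)   // only ever one bullet at a time
{
	Bullet = new Bullets();          // assumed Bullets Class, loads its own graphic for now
	Bullet->Xpos = Bob->Xpos + 4;    // start it just at Bob's nose (offsets are guesses)
	Bullet->Ypos = Bob->Ypos - 8;
	Bullet->MarkForRemoval = false;
	MyObjects.push_back(Bullet);     // let the normal update loop look after it
}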
This is the very common box check, or more accurately the axis-aligned bounding box check, or AABB. It's an algorithm which is easy to implement, reasonably fast, and pretty accurate for objects that are made up of squares/rectangles. It does have a few limits though; as you can see in the diagram, it only detects the fact that the boxes have overlapped, it does nothing to test whether the overlap area actually contains any part of the graphic. Testing for that needs a bit more work, but usually it's not needed if our game is moving reasonably fast.
The basic code format for the box check is something like this;
if (
(rect1.x < rect2.x + rect2.width) &&
(rect2.x < rect1.x + rect1.width) &&
(rect1.y < rect2.y + rect2.height) &&
(rect2.y < rect1.y + rect1.height)
)
{ printf("overlap detected"); }
Not too shabby, though even laid out like this you can see there are a number of compares, tests, and && checks going on, so it has to check quite a few things. You could speed it up a little by breaking the test down into separate pass/fail tests and returning as soon as a fail is encountered, but generally we leave it like this for clarity and ease of use.
Circle Checks
Another very popular 2D collision system is the circle-to-circle test. It works in a very similar way to the box check but is a little bit faster. It uses our old friend Pythagoras' theorem to test whether the distance between the centres of two circles, which encapsulate the main parts of our sprites, is less than the two radii combined.
This can be optimized a little by not actually caring about the square root, and by keeping the squared radius values in the object somewhere, because they're unlikely to change. This is my personal preference for a quick, reasonably accurate obj<>obj collision test.
// do a simple circle/circle test
float R1 = TheObject->RadSq;
float R2 = this->RadSq;
// assuming sprite ref is top left, move to the centre
int diffx = ((Xpos + My_Width) - (TheObject->Xpos + Ob_Width));
int diffy = ((Ypos + My_Height) - (TheObject->Ypos + Ob_Height));
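The final compare isn't shown above, so here is the whole idea as a tiny self-contained sketch, written with plain radii so the intent is clear; in the class version you would precompute and store the squared combined radius so it stays just as cheap:
// circle test with no square root: compare squared distance with squared combined radius
bool CirclesOverlap(float x1, float y1, float r1, float x2, float y2, float r2)
{
	float dx = x1 - x2;
	float dy = y1 - y2;
	float combined = r1 + r2;
	return (dx * dx + dy * dy) <= (combined * combined);
}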
Collision checks are one of the most intensive things we do in a game. Because we have 55 invaders, all of them need to check against our shooter's bullet to see if they have been hit. That's a lot of tests. And in other games where objects may collide with other objects, each of those objects will have to test against every other object as it tests with you. The number of tests in most games can become very large very quickly, so an effective, fast test is essential.
Now you may think that these shelters are totally passive and uninteresting things. They don't move, they don't shoot, they simply stay on screen and degenerate/vanish with hits; surely it's the bullets and missiles that do all the work? But they are still objects, so they need to exist as a class.
Create and add Shelter .h and Shelter .cpp files to our projects .
The header is going to look very familiar .
#pragma once
#include "Objects.h"
using namespace std;
class Shelter :
public Objects
{
public:
Shelter();
~Shelter();
bool Update(Surface*, Input*);
bool TestForHit(Objects*);
};
It's not dissimilar to a Bullet Class, but we're not going to need any graphics for it, so it does not need a file-based constructor. It needs an update and a hit test, but unlike the bullets, it has to check for hits from both the player's and the enemies' bullets. So it's going to need its own hit routine.
And that presents a problem.
Neither the bullets nor the Invaders have any idea what the shelters are; they have no access to the Shelter instances, which are going to be contained, most likely, in the Game Class's MyObjects list.
Look at that, isn't it horrible? Right at the end of an already cluttered loop, after we update an object, we then have to make sure we're not the actual Bullet, then test if our current test object is hit. Fair enough, it works, we can use that. But that game update loop is starting to look more than a little untidy.
#side note- One thing you might notice is the way my brackets line up . And I also have
a couple of comments to show which bracket is associated with which condition . This is a
format I like to use, when loops or conditions start to get intense . Having the brackets line
up like this, and the code within indented, gives an immediate visual clue as to the way
the code works, at least as long as it stays on screen . Massive global wars have been fought
and many kittens killed over the correct way to use brackets like this . I try to stay 100%
neutral, and use the correct, proper, and only sensible way . But it’s up to you! Just make
your code as readable as possible, especially when you are going to be prone to errors at
this stage of your development .
Right, let's get back to it. We've got maybe 10–12 invader bullets and 16 shelter blocks to test; to make it even worse, our own player bullets can also hit the shelters and kill them. Things are starting to get a bit strange. It's no longer a simple case of 1 object testing 1 object, it's 10 or more, each testing 16. And to make it even more fun, we don't really know how many bullets are in play, or how many shelters are still standing.
Now you should start to realize why collision tests need to be fast, there’s a lot of them
likely to happen and for the most part 99% of them are going to result in a negative test .
With the system we have at the moment, there’s no real way round this, but, we do
have a pretty simple and fast test and a reasonably small number of objects, so let’s allow
it to do its thing, because trying to test if a collision is needed is probably not going to give
us a great advantage .
The first problem though is identifying our interested parties . We know where in the
vector things started, but things might move around, that’s the nature of dynamic arrays .
As our bullets are more variable in their numbers, we’ll let the bullets test for the
shelters . But for that to happen we must know where all the remaining (active) shelters are .
This is where we have to hold our hands up and admit that we can't keep that game loop nice and simple; if we put our Shelters into the normal MyObjects list we will just lose track of them.
Now, in the Game.cpp file, let's create the shelters in the init function, and also use a pregenerated array to work out where to place them, to make our lives a bit easier when using a loop.
// because these are not at equidistant points, let's keep a simple table
// of x locations to place them
#define BARRIER1 32
#define BARRIER2 (BARRIER1 + 64)
#define BARRIER3 (BARRIER2 + 64)
#define BARRIER4 (BARRIER3 + 64)
int BarrierPositions[] = {
BARRIER1, BARRIER1 + 9, BARRIER1 + 18, BARRIER1 + 27,
BARRIER2, BARRIER2 + 9, BARRIER2 + 18, BARRIER2 + 27,
BARRIER3, BARRIER3 + 9, BARRIER3 + 18, BARRIER3 + 27,
BARRIER4, BARRIER4 + 9, BARRIER4 + 18, BARRIER4 + 27
};
for (int i = 0; i < 16; i++)
{
Shelter* s = new Shelter();
s->Xpos = BarrierPositions[i]; // use the counter as an index
s->Ypos = SCRHEIGHT - 48 - 16;
s->Type = AShelter;
s->MarkForRemoval = false;
Shelters[i] = s;
}
Pretty simple, and not really much different from the way we created the enemies, but rather than using a push_back() function, we just load the empty array position with the pointer to the shelter we just made.
And now the game update needs to also loop through the shelters to draw them . After
the main MyObjects loop, add this very simple loop .
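It only needs to walk the fixed array (assuming Shelter::Update draws the shelter, just like our other objects do):
// let each shelter update/draw itself; they live in their own fixed array
for (int i = 0; i < 16; i++)
{
	Shelters[i]->Update(a_Screen, a_InputHandler);
}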
First let’s add to the Bullets test so it can shoot shelters, currently we should have this
// if we have a bullet check if it hit anything
if (Bullet)
{
if (MyObjects[i] != Bullet)
{
if (Bullet->TestForHit(MyObjects[i]))
{
MyObjects[i]->MarkForRemoval = true;
Bullet->MarkForRemoval = true;
}
} // if MyObjects[i] != Bullet
} // if Bullet
So our own Bullet first checks that it exists, makes sure it isn't trying to test against itself, and then tests against every Object in the MyObjects list, which contains the Aliens and their missiles. AFTER which it then tests all 16 shelters.
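One way to arrange that extra test, still guarded by the if (Bullet) check, is a small loop of its own after the MyObjects tests (a sketch; we'll improve it in a moment):
// after the MyObjects tests, also let the bullet test all 16 shelters
for (int i = 0; i < 16; i++)
{
	if (Bullet->TestForHit(Shelters[i]))
	{
		Shelters[i]->MarkForRemoval = true;
		Bullet->MarkForRemoval = true;
	}
}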
Yeah, it's starting to hurt a bit now; our lovely clean Game Update is far from clean, with loops inside of loops. But it works. Though the Shelters don't yet remove themselves when hit; in fact nothing does.
This is where that MarkForRemoval comes in, it’s been there from the beginning, but
now its purpose can be coded .
When we hit something with a bullet, we want it to blow up and vanish. Blowing up we'll get to another time, but certainly we need it to vanish, because a dead character that does not go offscreen and out of the MyObjects list is just going to create zombie objects that will never die. We could remove them from the list as we do the update, but to be honest that creates a few problems, since objects just killed might have info that objects not killed still need. Not that that is the case here, but we need to think about it. Removing them should really be done after the update loop has completed, in a clean-up operation, getting rid of any dead objects and removing them totally from the processing lists so they are no longer updated.
As we have two lists of objects, we need two clean-up loops. MyObjects is the easiest, so let's do that first.
Remember to add this AFTER the update loop has closed .
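It's a loop along these lines (a sketch; the detail of yours may differ):
// clean-up: walk the vector backward and remove anything marked for removal
for (int i = (int)MyObjects.size() - 1; i >= 0; i--)
{
	if (MyObjects[i]->MarkForRemoval)
	{
		delete MyObjects[i];                    // free the instance itself
		MyObjects.erase(MyObjects.begin() + i); // then take it out of the vector
	}
}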
Notice that the loop runs backward. If you remove things from the list, the number of items in the list is reduced, which can cause a loop counter moving forward to lose track of what it's testing; running backward and decrementing the counter avoids that problem.
The Shelters clean-up, on the other hand, does not have to run backward, because our Array size is fixed and never changes, so we're not going to have to worry about resizing the array and losing track of how big it is.
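So the Shelters clean-up can be a plain forward loop, something like this (the NULL check will make sense very shortly):
// Shelters live in a fixed array, so just delete the instance and clear the slot
for (int i = 0; i < 16; i++)
{
	if (Shelters[i] != NULL && Shelters[i]->MarkForRemoval)
	{
		delete Shelters[i];
		Shelters[i] = NULL; // don't leave a dangling pointer behind
	}
}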
Enter this, just after the MyObjects clean-up, compile and run, and shoot a shelter, and…
we get this
Uggh that’s not good . Pressing break will give us a yellow arrow at this line:
Shelters[i]->Update(a_Screen, a_InputHandler);
which might seem odd, as that's not the code we just entered? But the code we just entered did something very simple to a hit shelter.
Shelters[i] = NULL;
It now has a NULL, or 0, value…and the update routine cannot call a method through a 0, so it created a break.
* Please note, I cleared the Shelters[i] value with NULL after the delete, to not do so would leave a dangling
pointer, that is, a pointer to an instance of something, which is no longer in use . It might keep working, but
the reality is that memory has now gone back into general usage and the next new command that is used
could re-use that memory . This is a time bomb of a bug, and often not immediately obvious . Always, NULL or
nullptr any pointer immediately after you delete it .
There are two other cases where we are potentially going to use a NULL Shelter, one is in
the Bullet test for collision with the Shelters, so you can do a double test on that;
if (Shelters[i] != NULL && Bullet->TestForHit(Shelters[i]))
I’ll leave you to find the other yourself, it’s quite simple, Visual Studio will give you a break
when you hit it .
Once you’ve done that, we will now have the basics of our game, go ahead, unleash
hell on the alien scum .
So Which Is Better?
So which is the better option, Arrays or Vectors? It really depends on what you need them for and how you use them. We've seen some of the issues when using them to store pointers. Both have their pros and cons.
◾ Removing from an Array isn't possible, leading to extra checks for NULL values, assuming you use them, to prevent accidental use of dead instances.
◾ Once we hit the assigned number of items we define in an array, we can't (easily) create any new entries, so the fixed size limits us.
◾ Removing from a Vector needs a slightly tricky backwards loop, and it's a lot slower, but we can track and alter its size as we please.
But ultimately, you choose what works for you. In an ideal world we could have all the objects, including shelters, in one single list, but keeping track of where the shelters are as the vector alters size becomes a challenge, though hardly a difficult one.
Final Details
There are still a last few details we really should add; we need to actually include some
game rules, allow ourselves to actually be killed and detect game over when all our lives
are gone .
As you can see it goes from a blank space, through punctuation, then numbers, upper case
letters, then lower case, up to a few characters beyond a lower case . It’s also laid out in a grid,
eight characters per line . So we should be able to load it into a Surface, and work out how
to copy the right tile into a nice receptive surface that will then contain the new graphic .
There are several ways we can work with this, some more efficient on memory or
speed than others . But let’s focus on a simple system to get it working . First thing to do is
create a new font class and load this image up so we can use it .
Now one thing about our font image is, it’s not actually an RGBA image, it has no
alpha value and we basically have an image that has 0’s for transparent and 1’s for any black
pixels in it . That presents us with some interesting opportunity to manipulate the data to
produce different colors . We could for example decide to create a Red character, which is
simply done by taking the 0 or 1 value of the pixel and multiplying by our desired color
value, making sure to add an Alpha value so it can be visible .
I’d suggest a fairly simple method we can use, first of all load this into a surface which
will act as our store for the font . Then we need a new method in our Surface Class that will
draw a selected character using simple x y offsets to locate the 32 × 32 pixels that relate to
our character and draw the pixels .
That will give us a very effective font drawing system, our new method can be set up
to provide useful features such as color, alpha values, and perhaps even scale . But we do
have to write it ourselves .
So we can see a familiar looking class formation, a constructor and destructor, and some
useful methods, which should be fairly obvious .
Notice this time I made the variables, or members as they should be called, private to the class. That forces me to use get and set routines to make changes to them; in this case three set routines. I don't need any get systems just yet, so I've not added them.
I've done that because, despite my normal preference to make everything public, it's not considered good coding practice, and this class needs to be totally portable and independent. Ensuring that its members are private, and that it uses its own routines to set them, keeps it cleaner for later reuse.
The cpp file will look like this:
// very basic tile font display system
#include "TileFont.h"
TileFont::TileFont()
{
MyFiles* FileHandler = new MyFiles();
this->TheImage = new Surface((char*)"../Assets/fontwhite.png",
FileHandler);
delete FileHandler; // we opened some memory, make sure it is reclaimed
}
Although for and while loops both test for conditions, for is usually associated with an iteration; while is simply looking to see if a particular test condition is met, with no regard to how often the loop repeats. We're also using an interesting way to access memory with *Text, which reads the value contained at the address Text holds. Since I used a "set of characters contained in quotes", the internal method C++ has for storing such strings adds a 0 to the end to terminate it. My while loop is then checking to see if we've reached that 0 yet, and if not it prints the character and then increments the pointer in Text to look at the next character.
The parsing through the characters is fairly mundane, but the drawing of the charac-
ters is a bit more fun, and relies on the idea that we have eight characters in a line, starting
with the space character . We then take the ASCII value of the letter we want to print, use
modulus to work out which column we want, and a simple divide to an int to get the row .
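In code that boils down to something like this (a sketch, assuming c holds the character we want and that the grid is 8 tiles wide starting at the space character):
int index = c - ' ';    // space is tile 0
int column = index % 8; // which tile along the row
int row = index / 8;    // integer divide gives us the row
// the pixel offsets into the font image are then column * tile width and row * tile height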
void CopyBox(Surface* Dst, int X, int Y, int Width, int Height, int
dstPitch, int DestX, int DestY, Pixel Colour);
And the corresponding routine in Surface.cpp
void Surface::CopyBox(Surface* Dest, int SourceX, int SourceY, int Width,
int Height, int dstPitch, int DestX, int DestY, Pixel colour)
{
Pixel* dst = Dest->GetBuffer();
Pixel* src = GetBuffer();
src += SourceY*(this->GetPitch()*Height); // move down SourceY tile rows (each Height pixels tall)
src += SourceX*(Width);                   // then across SourceX tiles (each Width pixels wide)
if ((src) && (dst))
{
dst += DestX + (dstPitch * DestY);
for (int y = 0; y < Height; y++)
{
for (int x = 0; x < Width; x++)
{
if (src[x] & ALPHAMASK) dst[x] = colour;
}
dst += dstPitch;
src += GetPitch();
}
}
}
We can now print any color of text by setting the color and we can choose the location
where it starts and write the text, each frame (remember our screen is cleared every cycle) .
Of course, we need to have an instance of a font, so let’s get the Game Class to make
it like this in the Game .h Class
TileFont Font;
Not a pointer this time but an example of an automatic instance, which is constructed
when the instance of Game is created so there is no pointer . We don’t use the “->” operator
to access its methods, we use the “ .” operator as you can see here .
Print systems usually print text at character positions, within the size of the character
itself, so 8 × 8 in this case, rather than pixel positions, but it’s a simple thing to change this
from character to pixel if you want to add that ability .
So now we can display text . It’s not very advanced at the moment, and needs a little
help to be able to print a variable since this will only print ASCII chars, but we have STD
functions to do that .
If you are wondering about these lines;
#define PRINT_AT 1
#define NEWLINE 13
They are a maybe, for later . I nearly always use a Print At style text draw when I print text
to allow me to, well, print at, any point on the screen and have the locations built into the
string I send . But for simplicity and demonstration I used a separate set function . Newline
should be obvious too, for the moment we can only print along one line, and this will allow
us later to add the concept of a print at newline. If you decide to code it up.
This line below is fine if we want to just print a text line, but it does not really work if
we want to print a mix of text and numbers, or indeed even numbers since a variable like
score is not stored as a set of characters .
this->Font.FontPrint((char*)"This is my test text", a_Screen);
This is a cool, old, but clearly still useful, C system that lets us print a formatted string into
a buffer, and can also convert numbers into strings even giving them leading 0's (%04d),
that buffer is then passed to our text print, which can remove that horrible cast to char*
like this
this->Font.FontPrint(buffer, a_Screen);
Aside from the fact we have to have a buffer somewhere accessible, and that it has to be big enough to cope with our largest possible string+1, this is a very easy and simple system, which will let us print scores, lives, and hello world on screen.
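The C routine being described is sprintf (or its safer cousin snprintf); a quick sketch, where the buffer size and the Score/Lives names are just examples:
char buffer[64]; // somewhere accessible, big enough for the largest string + 1
sprintf(buffer, "SCORE %04d LIVES %d", Score, Lives); // numbers become characters, with leading 0's
this->Font.FontPrint(buffer, a_Screen);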
And now we can print some text and numerical data, it’s a little primitive but it will
serve our purpose of displaying the score and number of lives we have .
I want to move on now, I know it’s far from a finished product, but we have learned all
we can from this project for now, it’s time to move forward . If you can think of any other
2. Why were we not able to get the invaders themselves to fire their missiles?
These, and a few others you can think about for yourself, are extremely important questions. As our games become more complex we may find ourselves becoming more limited in our ability to do things, so we need to review this. We really need to ask: is this the best way to do this project? Even though it works, and works very nicely, it's a very poor example of a C++ program.
We had the extern because our vector of objects was based in free memory; in other words, it was not held in a class. That made it very hard for us to pass the address of the vector to any routine which might have wanted to manipulate it. It was convenient to use an extern, but not at all elegant!
This also answers our second question: we could have passed the vector address, and a load of other addresses, to the Game Class, and it, in turn, could have passed the address on to the update routines, had the update routines themselves been written to take a vector address. It would have been far easier, though, to simply pass the address of a Startup Class which held these main game variables. If you have the class address, you can access all the values in that class, meaning instead of passing 2, 3, 4, or more different important addresses or variables, we just pass the address of the Master Class. We'll do that next project.
Question 3 is a good one, because in some ways we certainly needed the Game Class to control the directional aspects of our aliens' movement, but we let it do the movement as well, and the animation of our aliens. We're also passing flags back from objects which tell our game loop to make new things such as bullets and bombs. That's messy. Ideally, we want this class to simply provide a means to service the objects, not do their actual logic for them.
Fix Question 4
So yes, this one bothers me so much, and it's hopefully bothering you enough that you want to fix it now before we move on. Let's consider this: we've got 55 invaders, all loading 2 images for their animation, but there are only actually 6 unique images…our game is loading and storing space for 110 images. But we only display 6. 6 is all we display, not 110. There are only 6 images!
You get the idea. This is kind of wrong. Surely there is some way to re-use the images?
'Course there is…let's consider what our Surface Class calls an Image; it's just a pointer to a pixel buffer. So if we create six pixel buffers containing our images, we can then tell the Objects to simply point at the right one.
Let’s make a new constructor, which does not create a new pixel buffer, but uses an
existing image . Like this
Objects::Objects(Surface* a_Image)
{
Image = a_Image; // just point at an existing Surface; no loading, no new pixel buffer
MarkForRemoval = false;
}
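Used like that, any number of objects can share one pixel buffer; a tiny sketch (the file name is just one of our existing assets, and a_FileHandler is the file handler we already pass around in init):
// load the image once...
Surface* Shared = new Surface((char*)"../Assets/invaders8x8/InvaderA-1.png", a_FileHandler);
// ...then hand the same pointer to as many objects as we like
Objects* A = new Objects(Shared);
Objects* B = new Objects(Shared); // no extra load, no extra pixel buffer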
Images2[4] = Images2[2];
Images2[3] = Images2[2];
Images2[2] = Images2[1];
for (int i = 0; i < 5; i++)
{
for (int x = 0; x < 11; x++)
{
#else
for (int i = 0; i < 5; i++)
{
for (int x = 0; x < 11; x++)
{
Aliens* T = new Aliens(i, a_FileHandler);
T->Xpos = (x * 11) + 5;
T->Ypos = (i * 11) + 40;
MyObjects.push_back(T);
AlienCount++; // keep track of how many we create so we can tell when
              // they are all dead
}
}
#endif
>>>cont
I used a nice preprocessor feature here too, with #ifdef, #else, and #endif. These are preprocessor directives that ask if the label I am testing exists, not whether it has a value, but whether it has been defined; if yes, it will compile the snazzy new code, and if not it will compile the old and wasteful code. If I want to use the old code, I just comment out the #define PRELOAD.
The snazzy code needs a different constructor in Alien Class, which can use the two
Surfaces; can you work out how to add that yourself?
The Aliens will still work exactly the same way, and we use them exactly the same way, but we saved 104 * 32 * 32 * 4 bytes, that's 416 K… Also, though it might not be so noticeable, we're not loading 104 extra images, which saves a fair chunk of time in our initialization; that's nothing to sniff at.
There are reasons of course why we might want to have 110 separate images but not
today! It’s generally better to ensure images are not duplicated and loaded at times when
speed is not important .
We may not really have noticed this though, because we only did these loads when
we set up the aliens for the first time, so it was our initialization system that was a little
bit slow, however that was largely done in under a second just before the game started .
Chances are we simply never thought about how long it was taking, and never gave much
thought to the waste of memory .
It's MUCH more worrying to think that every time we fire a bullet or missile and create a new instance of one of those things, this loading process is happening for every single thing, so we must resolve that.
So, as with the aliens, make sure you load the graphics in the Game Class initialization, and make sure the update routine passes the location of the Game Class to the actual invaders' update. That will then ensure that the Aliens can get access to the bullet graphics to make a new instance of a bullet, and are able to look at the list of currently active objects for any other tests we may want them to do.
* FPS also stands for Frames Per Second in relation to programming, I will probably use this term a few times
but you should get the meaning from the context .
This game involves some nice new graphics, this time in color . The original was one
of the first games to use color sprites, which really made it stand out and become a
classic .
Rather than just simply moving left and right, sometimes we’re going to let a few of
our invaders fly and drop a hail of bombs on your poor shooter, which operates pretty
much the same way as before but without any shelters to hide behind .
We’re also going to tart it up a bit, the plain black background of our invaders game
needs a bit of pizazz to make it more fun .
Download the Kamikazi base project from the support site; you will see it's essentially still the InvaderStart project we achieved before adding gameplay, but I've renamed a few things. This is going to be our baseline project for the next few games.
We're going to keep our original Game Objects Classes, Collision Classes, and input systems from Invaders, so I've copied them from the Invaders Project directory into the Kamikazi directory and added them to the new project, but the graphics and Game Class are not going to be usable, so we'll need new ones for this.
Also it's time for us to increase our screen resolution to 1024 × 720, still an older resolution but one that will give us a better level of detail. We'll use some much bigger sprites, though still quite simple.
Finally we’re going to organize our classes a little better so we can do away with those
nasty extern commands and get better access for all our objects to info held in the main
Game Class .
Good, we’re ready .
◾ We have five rows of baddies, with slightly different numbers in each row, which change frames and have slight differences in graphics.
◾ Our bullet sits on our ship's nose when available, only one at a time.
So a lot of similarities, and a few interesting differences . We know how to create things that
move left and right . We know how to make bullets fly, we know how to do the collisions .
The unknowns are the dive-bombing aliens and the star field .
We also have the advantage of experience now; we know we did some things badly last
time, so let’s work to avoid that this time .
Star fields are pretty easy, as you can see the stars are all moving in one direction, they
are randomly placed, and for fun we’ll make them twinkle a bit . The StarField Class is all
set up for you but has no actual code yet. First up, decide on a number of stars and add this in StarField.h
#pragma once
#include "simplebob.h"
#include "surface.h"
#include "Game.h"
It's a very small class because it really does not need much. It derives from SimpleBob, an equally simple class that is essentially empty at the moment but may be useful as a graphic object that by definition is going to be simple. At this point I am thinking it might end up being removed, but for now I'll allow it to stay, in case I think of something else I want to create that derives from it.
We could even make our stars purely as SimpleBobs, but as I want to do a few different things I prefer to keep the SimpleBob Class as separate as possible. As often, this is a choice you need to make: create a Star Class on top of SimpleBob, or replace SimpleBob with a Star Class?
SimpleBob does itself derive from our standard GameObject, so all the usual game object values we might need are there for us. Also, since SimpleBob supplies versions of the GameObject's virtual functions, I only need to supply one Update routine and one Draw routine for Star.
The actual star code in Star .cpp looks like this
#include "Star.h"
Star::Star()
{
this->Xpos = Rand(SCRWIDTH);
this->Ypos = Rand(SCRHEIGHT);
This is really nice simple code, a constructor, to place it randomly within the top of the
screen and give it a color, which for the moment is fixed, an update routine to move it
down, and reposition it when it gets to the end, and lastly a draw routine to display it .
Notice there is no Surface or Image to draw here; it's just a pixel. If we used a surface we could have used a simple copy-to function, but here we can see the idea of using a single-dimensional array, in this case our screen, being accessed by the X and Y values.
This is a nice feature of C/C++, using a base pointer address as the start of an array; it allows us to directly access memory using index values from that base. Though a small caution is needed: it never checks whether your index is outside the range of the array space/buffer you allocated, so an incorrect address will still be written to, often with unpredictable results.
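In other words, something like this, where the guard is entirely our responsibility (a sketch using the Surface methods we already have):
Pixel* dst = TheScreen->GetBuffer(); // base address of the screen's pixel buffer
int x = (int)Xpos;
int y = (int)Ypos;
// nothing stops us writing outside the buffer, so check before we index into it
if (x >= 0 && y >= 0 && x < SCRWIDTH && y < SCRHEIGHT)
	dst[y * TheScreen->GetPitch() + x] = Colour1;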
Run your new code with stars and see what happens?
Cool, we have stars; they are currently a bit dull though, so let's work on that twinkle idea.
Our update routine needs to move them down at a steady pace.
Our Update can do more things though, let’s add a little animation counter and other
things, and we can if we want to use an image rather than a pixel . The choice is entirely ours .
Let’s try drawing a couple of very small images and creating some effect .
Now if you look in the Star Class header file, you will see we only have 1 Pixel, called Colour1.
Add three more, called Colour2, Colour3, and Colour4.
Pixel Colour1;
Pixel Colour2;
Pixel Colour3;
Pixel Colour4;
Run your code…fixing any typos you might encounter; there shouldn't be much to worry about here, as we only added some Pixels into the mix, we're not using them yet.
So, now we have colors, all currently defined but empty, let’s load them up with nice
values . Our Pixels are made up of different intensities, or levels, of Red, Blue, and Green,
and what’s called an Alpha value, to decide how transparent it is on screen .
This effectively sets the Colour1 to white, because these Mask values have all the binary
digits set for those colours . We’ll discuss masks and binary a bit more later .
We could make Colour2–4 variations after you set the value for Colour1
Colour2 = BLUEMASK + GREENMASK + ALPHAMASK;
Colour3 = REDMASK + GREENMASK + ALPHAMASK;
Colour4 = REDMASK + BLUEMASK + ALPHAMASK;
Now hopefully you've been paying attention, and you remember that having to reference individual variables by their specific names is a bit of a pain, especially when we want to reference them using a variable… So we're going to change the code a bit here and get rid of Colour1, replacing it with a nice array called Colours.
Pixel Colours[4];
All we have to do now is alter the code in our Star constructor to create four colours, which
live in this array, like this .
Colours[0] = REDMASK+BLUEMASK+GREENMASK+ALPHAMASK;
Colours[1] = BLUEMASK + GREENMASK + ALPHAMASK;
Colours[2] = REDMASK + GREENMASK + ALPHAMASK;
Colours[3] = REDMASK + BLUEMASK + ALPHAMASK;
Notice I made sure ALPHAMASK was always in there; this will make sure that we can see it, but we can still play with it later. Now in the Star::Draw method, let's add a line and alter one.
Pixel Col = Colours[Counter];
dst[TheScreen->GetPitch()*(int)Ypos + (int)Xpos] = Col;
I have a Counter variable, be sure to add that as an int into your Class definition, and
in the update method, we’ll add this
Counter++;
if (Counter > 3 ) Counter = 0;
Ok, so we’re good to go? Compile, and run, if you’ve made any typos, or forgotten to add
the Counter in the Class definition then go do your fixes and try again .
Not bad eh, but you can barely see the blinking, it is happening, but the update rate is
so fast, probably well over 30 fps, that it just makes it seem a bit blurry . It’s up to you if you
want to keep this, but for me, it’s not quite having the impact I wanted, how can we slow
the animation down?
Simply using numbers 0–3 creates too fast an animation so let’s try something else,
use a much bigger number and scale it down, let’s allow the counter to go up to a large
multiple of 4, try this;
Counter++;
if (Counter >= 4 * 32 ) Counter = 0;
And there you can see the /32, which makes sure we never have a value bigger than 3; since the counter and the divide are both ints, the result always rounds down, so we'll get 0, 1, 2, and 3 as possible index values. There is however a better way to do this, I won't say what just now, but if you know a bit of maths and C++, feel free to change it. Run that and you should see them all changing color.
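If you want to check yours against something, the altered draw line is simply along the lines of:
Pixel Col = Colours[Counter / 32]; // Counter runs 0–127, so the index stays in the 0–3 range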
Ok but they are a bit uniform…let’s give them different numbers, and keep a copy of
it, (in the Star’s specific member list) . Let’s make sure that the counter is slightly different
for each one when we start them, using our friend Rand(32), add this after your array
setup in the Star Constructor;
Counter = Rand(32);
Much better, 50–100 little stars moving down screen at a nice steady pace twinkling…
hmmm should we try changing the speeds? It might add a little more variation? Add
more color, change the Alpha values? Feel free, it won’t do any harm and it’s a good little
enhancement you can work out yourself .
That's our stars done; we can be comfortable with that. Or can we? The Pixels work very well, but you could just as easily do the same effect with small images, four small representations of stars, so why not try that for yourself? Try adding a few surfaces in your Star Class, and draw those rather than pixels; change the image to create animation and watch it do its thing autonomously!
So…Stars, not quite the dumb things we thought, they move, they animate and they
reposition themselves when they go offscreen . It’s not complex logic, but it is logic . You can
add to that any way you want, make them move diagonally for example?
All the code is neatly contained in the update method of the Star Class so to make
changes we just alter the update routine, we can even have variations of type, as we’ll see
a little later .
The Ship
Our shooter is a little better defined in this game, so let’s call it a ship this time . It’s not
really very different from the Invaders ship, we still create a class, load a graphic, and con-
trol it left and right with firing, there is a slight difference, though it keeps its bullet at its
nose ready to fire .
So let’s write our Ship Class:
#pragma once
#include "GameObject.h"
class Shooter :
public GameObject
No real difference here, is there? Everything should be pretty familiar from the original Invaders shooter.
The class itself is supplied to you as an empty class; all we really have at the moment is the draw routine. I'll let you add the update function, which uses the InputHandler to make your ship move left and right.
#include "Shooter.h"
Shooter::Shooter(){}
Shooter::~Shooter(){}
bool Shooter::Update(Surface* s, Input* InputHandler){ return false; } // stub, yours to fill in
bool Shooter::Update(Game* g){ return false; }                         // stub
void Shooter::Draw(Surface* TheScreen)
{
Image->CopyAlphaPlot(TheScreen, Xpos, Ypos);
}
The bullet presents a minor problem: when it is available to fire, it needs to be on the nose of the ship. But for that to happen, we need to know where the bullet is, or the bullet needs to know where the ship is; the choice is up to us.
I prefer to have the ship know where the bullet is, because the ship is going to be a fairly constant object in our world, though in truth so is our bullet, as it never actually gets deleted. But the Ship is where most of our control goes, and we want the bullets to be as autonomous as possible, so let's make sure the Ship knows, by adding a value into the Shooter Class which can point to where the bullet is (once it's created).
Ok, so make a little note somewhere that when you initialize the bullet you must make sure you tell the Ship where the bullet is; we're going to come back to this later. For now, just make sure that value has a null in it when you create the Shooter, so we don't accidentally end up pointing at mad memory. We do that with this bit of code in the Shooter constructor
Shooter::Shooter()
{
TheBullet = NULL;
}
This seems fine so far, let’s get our init systems to create our ship and try moving him
around a bit .
#define Row1 48
#define Row2 (Row1 + 32 + 8)
#define Row3 (Row2 + 32 + 8)
#define Row4 (Row3 + 32 + 8)
#define Row5 (Row4 + 32 + 8)
#define Row6 (Row5 + 32 + 8)
By defining our Row1 as 48, we can accumulate the previously defined rows to give us a fairly easy set up for the y positions of our aliens. Also, we are going to keep our aliens a specific distance away from each other, so we can define that distance as well... And then lay out our new aliens.
#define DIST 40
int AlienCoords[46 * 3] = // we could use a [46][3] but it's not so hard
                          // to use a single-dimensional array
{
// top row ALIENS we use X,Y,Type
(SCRWIDTH / 2) - 100,Row1, 0,
(SCRWIDTH / 2) + 100 - 32, Row1, 0,
//2nd row 6 aliens
(SCRWIDTH / 2) - (3 * DIST), Row2, 1,
(SCRWIDTH / 2) - (2 * DIST), Row2, 1,
(SCRWIDTH / 2) - (1 * DIST), Row2, 1,
(SCRWIDTH / 2) + (0 * DIST), Row2, 1,
(SCRWIDTH / 2) + (1 * DIST), Row2, 1,
(SCRWIDTH / 2) + (2 * DIST), Row2, 1,
// 3rd row 8
(SCRWIDTH / 2) - (4 * DIST), Row3, 2,
(SCRWIDTH / 2) - (3 * DIST), Row3, 2,
(SCRWIDTH / 2) - (2 * DIST), Row3, 2,
(SCRWIDTH / 2) - (1 * DIST), Row3, 2,
(SCRWIDTH / 2) + (0 * DIST), Row3, 2,
(SCRWIDTH / 2) + (1 * DIST), Row3, 2,
(SCRWIDTH / 2) + (2 * DIST), Row3, 2,
(SCRWIDTH / 2) + (3 * DIST), Row3, 2,
...continues (see source)
Notice that we are also using SCRWIDTH, because it's a predefined value that we can be confident of using, and it allows us to center these aliens in the middle of the screen. We are still using a few hard numbers, but we can make simple adjustments by altering the value of Row1 and DIST; rather than altering 46 entries we only need to make a few small changes.
So this is an Array, just like we had with our invaders, that has 46 entries with 3 values in each entry. You look down it to find the relevant alien you want, using a variable or a hard number to pick it.
Like this:
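A sketch of that indexing, assuming i is the alien's number in the table (0–45):
int AlienX    = AlienCoords[(i * 3) + 0]; // 3 values per entry, so multiply the index by 3
int AlienY    = AlienCoords[(i * 3) + 1];
int AlienType = AlienCoords[(i * 3) + 2];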
Two-dimensional arrays are very important, as we'll discover later, but it's not always necessary to set them up; where we have an equal number of elements per entry, it's simple enough to calculate the Y index * the number of elements and add an X offset/index to get the value we want.
Two-dimensional arrays need a bit more typing, and can also present a few minor issues when passing a 2D array as a parameter, though none of this is an issue in this project. I'm just being a bit lazy, and making sure you are fully comfortable with single-dimensional arrays before we move on.
Now that we have the positions, and types of aliens we can go right ahead and create
our aliens and put them on screen .
So we have our ship and baddies, we might as well get the bullet and collision done, exactly
the same as we did before . I’ll leave you to add that?
1. Doing an Arc
There is one other feature that would be nice to have: it would be very cool to have the aliens rotate around their center, to make the arc and dive more impressive and create the impression that they are banking as they change direction. However, we don't have a rotation method in our Object Class or draw system. Hmmm, well for now we'll have to leave it…but how cool would it be? Why not try to create a new draw system that can allow rotation? Put it on a to-do list.
Let's consider the steps, or as coders like to call them, the states. We have six distinct behaviors that our individual alien needs to do…and lucky for us they can do them in order, which lets us track certain conditions that will indicate when one state should transition to the next.
2. Arcing. Well, arcing needs a bit of maths; if you remember your high-school maths you will know you can draw a circle using sin and cos of an incrementing angle to plot points. We can use that here, and so long as we keep an incrementing angle, it can also indicate to us when our arc is finished, since once it goes past 180°, we are done with arcing and ready to fly.
3. Diving, so this state is easy, we want to dive bomb our ship, so we are going to aim
our alien just to the left or the right of the ship depending on our direction and set
up something called a vector, which will be explained in a bit .
4. Dropping bombs . While diving, at a certain point we are going to open the
bomb bay and let a load of bombs drop, hopefully catching the ship at its current
position .
5. Going offscreen, again continuing to Dive but now checking to see if we have gone
offscreen .
6. This is a slightly tricky move because they need to come back in from the top of
the screen and home into the position they were in before their dive, so we need to
know where our position is to return to, and as the aliens will have moved while
we are diving that position is changing every single frame… So in Step 1 we need
to keep track of their left<>right positions and whatever position they have when
they are flying .
So again, by laying out the features we have identified some new variables we are going to need to keep track of, and a problem: we need to keep track of our normal left/right movement position. We also need to give the top aliens a priority for when to start a dive, and decide when to dive. Perhaps we can even let them take a few of their chums with them.
In code, we can define these different states using an enum command, because it’s
easier to add to the states and change values when we don’t have to look for hard numbers .
This means a command like:
if (State == 0) DoMove();
Why? I hope you're asking yourselves. Well, if you remember that moving is 0, then that's good, but suppose you decide to add another state, or you end up with 20 different states and then need to add one in the middle; it can get a bit hard to keep track of which number means what.
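From the description, the enum is along these lines (the state names match the switch we're about to write):
typedef enum Status
{
	Moving = 100, // start at 100 so these values don't clash with another enum's
	Arcing,       // 101
	Diving,       // 102
	Bombing,
	FlyOff,
	FlyBack
} Status;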
So what this means is that Moving has the value 100, and Arcing 101, Diving
102, and so on . But more importantly typedef creates a new type of variable, which
in this case can consist only of these listed values, Moving, Arcing, and Diving .
We can insert or remove values into this list quite easily and the numbers assigned will
change but the code does not need to . So we can now define our variable State as a type
Status like this;
Status State;
Finally, why did I start with 100? Well, when all is said and done, the Status type is still basically an int, and it's not impossible for us to accidentally test for something using another typedef'd int that has the same value, so it's wise to make each enum start at a different base. It's not essential, but it is wise. The compiler will take care of the numbers for us, we just have to use the names we've assigned to them; the numbers can change each compile, but the code will always look the same and be understandable to us.
Now we can put a simple switch/case combo into our Alien::Update methods
like this:
switch (this->State)
{
case Moving:
break;
case Arcing:
break;
case Diving:
break;
case Bombing:
break;
case FlyOff:
break;
case FlyBack:
break;
default:
printf("Something undefined happened in Alien Update!\n");
break;
}
We now have to fill in the code for each step. Notice the default state, which is there to catch anything we forget to write code for. If you see that text in your console, fix it right away! It's also another reason why I set up the Moving value as 100, because if you don't give State a proper value, a stray 0 will drop straight into that default case and tell you about it.
Just before the switch statement we also need something to provide a value to return; with that in place, our first task now looks like this in Alien.cpp.
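That value is presumably just a flag declared above the switch, along these lines, with a matching return ReturnState; as the very last line of the update:
bool ReturnState = false; // set to true when this alien reaches a screen edge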
case Moving:
if (TheGame->AlienGroup->Direction == true)
{
this->MovingX++;
if (this->MovingX > (SCRWIDTH - 64))
ReturnState = true;
}
else
{
this->MovingX--;
if (this->MovingX < 64)
ReturnState = true;
}
this->Xpos = MovingX;
break;
Return that value at the end of the update routine, and that's it for moving. I trust none of this needs much explaining now?
In the GroupManager .cpp code, we just have to make sure we check for this return
value, add this to the GroupManager::Update
// this only needs to update the aliens left and right
bool Changed = false;
for (int i = 1; i < 47; i++) // 0 is the ship
{
if (Game::MyObjects[i]->Update(G) == true)
Changed = true; // we could get a few things returning true, so be
                // sure to change here
}
if (Changed == true)
Direction = !Direction; // then check if Changed was set, which indicates
                        // a direction change
Now, did you notice that my Alien code did not actually update the Xpos directly? It updated MovingX, then moved that value into Xpos to create the movement. The reasons for this were briefly discussed, I hope you were paying attention, but I'll let it become more apparent later.
case Moving:
this->Xpos = MovingX;
this->Ypos = MovingY;
if (Rand(1000000.0f) < 1100.0f && TheGame->Fred->State == Shooter::Normal)
{
State = Arcing;
ArcInit = false;
}
break;
There, couldn't be simpler, could it? Though why am I using such a big number, 1,000,000, and testing for 1100, which is about 0.1%? Why not test for 100 and <1? Not great odds, is it, if you were only throwing the dice once?
We're throwing the dice 30 times a second or more, and you really will throw up a lot of 0's and 1's, triggering a near constant stream of diving Aliens, because even low odds will eventually hit if you try often enough. Having a larger number as the range, when you are repeatedly calling a random system, reduces those chances quite a bit. But when all is said and done it's a case of try it and see; I found 1100 out of 1,000,000 to be a good result, increase or decrease as you need.
A Nice Arc
Ok, so we are ready for Step 2 of our Kamikazi's attack, the Arc, which if you remember your high school trig is a semicircle, and that's neat because we can draw semicircles using very simple sin and cos calculations. Ah! Reasons to love triangles are never far away when you are doing programming.
So let's think about how to plot a circle. If we assume a radius of r and 360°, we can make a circle using two simple formulas to calculate the x and y points.
xPoint = cos(angle) * r
yPoint = sin(angle) * r
Really simple, and if you do a for loop, of 0–360 as the angle value, you will get 360
dots that will more or less form a circle…we can actually plot that… try entering this
code?
Put it in the Game::Update method, just after the check for init, it’s not going to stay
around long, it’s just so we can see what’s happening .
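Something along these lines will do; the radius of 30 and the 100,100 offset are just example values to get the dots on screen:
for (int angle = 0; angle < 360; angle++)
{
	float Xpos = cos(angle) * 30;
	float Ypos = sin(angle) * 30;
	// push the circle away from the 0,0 origin so we can actually see it
	TheScreen->Plot(Xpos + 100, Ypos + 100, REDMASK + ALPHAMASK);
}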
Now, we added an offset to our point to move it away from the 0,0 origin point of the screen, and in fact we can use any value as that offset, which means we can replace that offset and use it to plot a circle from any specific start point.
In fact, that could be a nice way to create a shield of some kind around our ship; suppose we move the offset to Fred's position with this code.
for (int angle = 0; angle < 360; angle++)
{
float Xpos = cos(angle) * 30;
float Ypos = sin(angle) * 30;
TheScreen->Plot(Xpos + Fred->Xpos, Ypos + Fred->Ypos, REDMASK +
ALPHAMASK);
}
Hmmm, something went a bit wrong? No, not really. You see, our circle's center point is based on Fred's top left corner, because that's how we define the x and y start points of our sprites. To balance it out we would need to add half the width and half the height to get to the center of the sprite image.
Ok, that actually looks quite nice; it’s not really part of the game though, unless you feel
like adding a shield or some kind of impact marker, I’ll leave it up to you .
But I do want you to look at this image carefully… .and then ask yourself, how this
line;
if ((xpos >= 0) && (ypos >= 0) && (xpos < m_Width) && (ypos < m_Height))
Hidden away in the plot routine in your Surface Class, has prevented you from doing some
massive damage? Any ideas?
Of course a semicircle only needs 180 steps so try running the code with 180 steps
instead of 360 . What happened? Not quite what you thought? You were maybe expecting a
nice arc pointing up, but we ended up with a circle again, but with holes in it? That’s ok, I
was expecting it . There are two minor things wrong with what we are doing, even though
it seemed at first to be doing what we wanted .
First is our use of degrees. Degrees are easy things for humans to understand, but not so easy for computers; they tend to use things called Radians, which are essentially proportions of the circumference of a circle. Radians go from 0 to 2PI (6.2831…), and the cos and sin functions effectively wrap whatever number you put in so that it ranges between 0 and 2PI. So let's fix that first and make sure we send Radians to our sin and cos routines.
It’s still ok for us to use Degrees, as I say, humans understand them better than
Radians, but we do need to convert, this is done with a simple formula;
Radians = (Degrees*Pi)/180
As I’m a bit old fashioned and like to use Degrees when possible, I swap back and forward
as I need to, so I define a couple of simple macros to do this for me;
#define DEG2RAD(x) (((x)*PI)/180)
#define RAD2DEG(x) ((x)*(180/PI))
These usually live in a common file, such as a Defines .h file, but here for now I will use the
Surface .h file, immediately after the PI define that it relies on . Now, let’s try again this time
sending Radians to our sin and cos functions, like this:
for (int angle = 0; angle < 180; angle++)
{
float Xpos = cos(DEG2RAD(angle)) * 30;
float Ypos = sin(DEG2RAD(angle)) * 30;
TheScreen->Plot(Xpos + Fred->Xpos + (Fred->Image->GetWidth()/2), Ypos
+ Fred->Ypos+(Fred->Image->GetWidth()/2), REDMASK + ALPHAMASK);
}
We got a semicircle ok, but it's the wrong way up; 0° to 180° should have drawn half a circle from the top to the bottom. Has the world and the laws of maths gone mad?
No, not really. It's because we tend to think of 0° as being straight up, but our C/C++ maths systems think of 0° as pointing to the right, a quarter turn away from where we expect it. There are a couple of ways round this: we could add/subtract an offset to the angle, but that's not ideal, as we already have a lot of calculations going on in the loop. Or we simply accept that the start point we want, 270° as we see it, is 90° less at 180°, and change the step values.
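In other words, keep the loop exactly as it was and just run the angle from 180 to 360:
for (int angle = 180; angle < 360; angle++) // starts at the left and sweeps over the top
{
	float Xpos = cos(DEG2RAD(angle)) * 30;
	float Ypos = sin(DEG2RAD(angle)) * 30;
	TheScreen->Plot(Xpos + Fred->Xpos + (Fred->Image->GetWidth() / 2),
		Ypos + Fred->Ypos + (Fred->Image->GetWidth() / 2), REDMASK + ALPHAMASK);
}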
Good, that was a nice and interesting bit of simple maths wasn’t it? Let’s try to use it to get
our aliens to move in an arc . We know the formula; we just have to work out how to do
them in steps so that our movement is visible over a period of time .
Step by Step
We already understand the basic idea of steps, because we are doing our state processing .
We know that if we are in a certain mode our switch statement is going to take us to the
right case to process that particular movement . In the case of arcing though, we only really
need to be sure of one thing… That we have set up the step value and that we don’t keep
resetting it . So we need a variable to indicate that Arc initialization has been taken care of .
case Arcing:
if (ArcInit == false)
{// so now we set up the variables
ArcInit = true;
ArcStep = 180;
}
So add this, and remember to make sure you have declared ArcInit and ArcStep as a
bool and an int in your class definition .
else
{
Xpos = MovingX + ARCRADIUS + (float)(cos(ArcStep * PI / 180) * ARCRADIUS);
Ypos = MovingY + (float)(sin(ArcStep * PI / 180) * ARCRADIUS);
ArcStep += 3;
if (ArcStep > 360) State = Diving;
}
break;
Yes, that really is all that’s needed . Define ARCRADIUS as some reasonable value,
I have 44, and your Aliens will now do a semicircle centered on their MovingX,
MovingY position, until they have completed it . Before we did our circle in a loop,
this time we do it in a step each time the method is called, the only variable changing
is ArcStep, which like the angle in our circle drawing for-loop, dictates the new place-
ment of our alien .
Try it, you have a very simple trigger routine ready, which we’re going to refine shortly
but let’s see if you can get your Aliens to arc . Notice I added three to the step rather than
one . Well it’s all a matter of balance; I want them to move fairly smoothly and quick, one
step at a time is too slow, three is nearly right, play with the value and see what works best
for you .
Ok good, that should not be too hard. When the Arc is complete it changes into the Diving state, which we can now cover. I personally find the Arc to be a little too regular; I'd like a more oval shape, any thoughts on how that could be achieved?
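If you fancy experimenting, a minimal sketch of one way is simply to use a different radius for the X and Y parts of the arc; here the vertical radius is just halved as an example (this is a suggestion, not the book's own code):
Xpos = MovingX + ARCRADIUS + (float)(cos(ArcStep * PI / 180) * ARCRADIUS);
Ypos = MovingY + (float)(sin(ArcStep * PI / 180) * (ARCRADIUS / 2)); // a smaller vertical radius squashes the circle into an oval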
[Figure: a right-angled triangle from the alien to Fred, with Xvalue and Yvalue as the two sides.]
Clearly we need to move along the hypotenuse of a triangle to get to our man in the shortest possible time. If only there was some way to work out what those Xvalues and Yvalues were, I hear you cry?
Please tell me you have worked this out? Remember, triangles are a programmer's best friend, because they allow us to get so much information: angles, distance, how much to move along the X coordinate to be at a point, how much to move along the Y coordinate to be at a point.
And that's what we want here…how much to move? But we can't just move by Xvalue and Yvalue in one go. We need to move in steps, ideally converting the amount to move in the X direction and Y direction into small useful chunks.
First things first, calculate those X and Y values by subtracting the X position of the alien from the X position of the Ship, and the same for the Y positions.
As we are going to do this one time, it becomes part of the initialization process, we
can make a choice here to do it at the point where the state changes, or use a flag to indicate
that an initialization is needed the first time we do the Diving code .
On the off chance that you might want to go into a dive without first doing an Arc, let's give the Diving system its own init flag, but do remember to set it to false when you set the State to Diving; we should also have done the same with Arcing, since the same alien may arc more than once.
Diving now looks like this, a simple case of working out the X and Y Values and
reducing their scale to provide a value that should mean in 32 frames it will hit our man .
case Diving:
if (DiveInit == false)
{
float Xvalue = TheGame->Fred->Xpos - this->Xpos;
float YValue = TheGame->Fred->Ypos - this->Ypos;
this->StepX = Xvalue / 32;
this->StepY = YValue / 32;
DiveInit = true;
} else
{
Xpos = Xpos + StepX;
Ypos = Ypos + StepY;
}
break;
Bombs Away
This looks like it works wonderfully, if we actually leave out the check for change state,
our aliens will come heading toward us at the end of every arc . We can choose when to
decide to start Bombing at around the midpoint of the screen . Bombing will basically
repeat the dive code, but also add some random factor to decide if we should drop some
bombs .
case Bombing:
// now we must decide if we are going to drop bombs
if (Rand(100) < 5)
{ // drop a bomb once you create the class;
printf("bombs away\n");
}
Xpos = Xpos + StepX;
Ypos = Ypos + StepY;
if (Ypos > SCRHEIGHT * 2) // let them get off screen before changing
{
State = Moving; // temp, normally change to FlyBack
}
We don’t yet have a Bomb Class, so consider the printf to be a temp thing that will show us
the code is working, and let us continue .
Starting to look like a game now, you can alter your random timers to increase the
incidence of dives just so we can continue the tests, but for now all we really want to do is
step through the different states and make sure our aliens do what they are supposed to .
Without actually causing any virtual loss of life . You can change the State to Moving if you
want them to just reappear at the end of their dives, while we work on the tricky Fly back
to the top, and the return to formation .
1. The easy way… Actually place them back at the top of the screen .
2. Let them move up to the top of the screen, usually off the side but just enough to
see them heading back .
On face value the easy way is clearly the best, and simplest . But a small part of me would
love to have them on the edge of the screen being seen to be moving back up .
I’ll tell you what, for brevity, I’ll do the easy way . You do the cool way, you already
know how to make a step system work for a target point, so you have all the tools
needed .
But both systems need a target point . I’m going to pick a point above and outside the
screen, on the left and another on the right, depending on what side of the screen the alien
is when we trigger the state .
Since I’m doing a single step I don’t need to use a target variable, I’m just going to set
Xpos and Ypos like this .
case FlyOff: // here we must return to a point top left or top right, depending on where we are.
Ypos = -64;
if (Xpos > SCRWIDTH/2) Xpos = SCRWIDTH+64;
else
Xpos = -64;
State = FlyBack;
break;
Cool, so now when the FlyOff is called it will set itself to the top left or right of the screen
ready to start homing to its correct position .
Home Again!
So now what we want to do is get our alien to slowly move back into its position, where it should be in the ranks of Aliens, back where we first flew off. Now you see why we kept those MovingX and MovingY variables. But we have to home in… hmmm, basically this means we need to check where we want to go to, and move in that direction, can we use triangles again?
Yes, of course we can; it's exactly the same process as the target we chose for the dive, but this time we have a moving target, as the MovingX and Y values are, ermm, moving.
So that means we have to recalculate the target every time and add our step value. When both our X and Y positions are close to being the same as our MovingX and Y, we're done. Let's try that out.
case FlyBack:
{
float Xvalue = MovingX - this->Xpos;
float YValue = MovingY - this->Ypos;
Xpos += Xvalue / 32; Ypos += YValue / 32; // step toward the target
if (fabs(Xvalue) < 4 && fabs(YValue) < 4) State = Moving; // close enough, snap back into the pack
}
break;
Ok looks simple enough, if we’re about 4 pixels away on the x and y, we’ll change into
moving mode, which will snap them back to the moving mode and moving positions, try
it out, what happens?
So, it kinda works, but can you see sometimes the aliens are struggling to home to
that final point before they can be allowed to reset back into moving mode . They get there,
but it’s clearly a bit of a struggle often playing slow motion chase until the pack changes
direction . Why?
Well, the closer they get to their target point, the smaller the step value we calculate, and we even scale it down by dividing it by 32. So the closer they are, the smaller the step, which means they end up waiting for the target to slow down or move toward them before they can catch up. It does work, sort of, and the speed-up then slow-down effect we get is actually quite nice, but are we happy with it? I'm not!
We need to talk about a different way of creating those step values .
[Figure: a (4, 3) vector drawn as a right-angled triangle, with sides of 4 and 3 and a hypotenuse of 5.]
For this representation to work we need to consider the start of the triangle as being at a
0, 0 origin, and a lot of maths assumes that, but no matter where a (4,3) vector starts, it’s
going to point the same relative direction from its own origin .
We can have any numbers, which can therefore represent any direction we want, but generally, if using vectors only to indicate direction, we keep them as numbers less than one. The reason is really simple: an X of 2 and a Y of 2 means you will always move in a diagonal from a start point, but an X of 0.5 and a Y of 0.5 will ALSO go in exactly the same diagonal, just in a smaller step within a square unit, so lots of numbers can represent the exact same direction while having a different magnitude, or length of movement.
If we keep them all under 1 or −1 for opposite directions, using a process called normal-
izing to create a unit vector, we can be sure our object is going to move within a given 1 × 1
square in the direction we want . And here’s a bonus, by multiplying that unit vector by a
scale value we can change the effective amount of the movement keeping the same direction .
That scale can be a number from 0 to 1 for even slower movement, to, well, anything
that the game needs . We can consider that scale value Speed and alter that to create vari-
able speed in our object, in the direction we set the vector at, and if we set it to negative
values, Speed will move the object backwards . Win!
Normalizing isn't as mad as it sounds; it's simply a standard equation that considers the X and Y values of a vector to be the height and width of our visualization triangle, and as your old high-school teacher tried to tell you in those boring trigonometry classes, if you know those two you can work out the hypotenuse of that triangle (and quite a few other things we will discover later). That hypotenuse is what we now know as the Length/Magnitude of the vector, and dividing the X and Y by that Length will always result in numbers between −1 and 1. Perfect for movement.
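To make that concrete with the (4, 3) vector from the figure: its length is sqrt(4*4 + 3*3) = 5, so dividing both parts by 5 gives the unit vector (0.8, 0.6). Multiply that by a Speed of, say, 10 and you move 8 along X and 6 along Y, still in exactly the same direction.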
A small confusing point: Vector2D types are not a standard part of C++, they are part of a maths library, and C++ provides only basic functions for numbers, so if you don't use a maths library, you have to code them up yourself. They are, however, pretty much standard.
Although it does seem a little odd, a neat thing about vector2D as a code type, is that
they can represent different things, like points in a grid and a direction, which means we
can add one set of vectors to another, so you can add a Vector2D of a movement vector to
a vector2D of coordinates to create new coordinates .
So if we store our X and Y Screen positions in a vector type we can move our object
by adding any other vector type . This makes things quite simple and we can still access x
and y individually if we want .
#pragma once
#include <math.h>
class Vector2D
{
public:
float x, y; // everything we do with 2D vectors revolves around these 2 numbers
Vector2D(float X = 0, float Y = 0) // allows us to create an empty vector
{ x = X;
y = Y;
}
~Vector2D() {};
// we need a few standard arithmetic functions
Vector2D operator*(float scalarValue) const
{ return Vector2D(x * scalarValue, y * scalarValue); }
Vector2D operator+(const Vector2D &vect2d) const
{ return Vector2D(x + vect2d.x, y + vect2d.y); }
Vector2D operator-(const Vector2D &vect2d) const
{ return Vector2D(x - vect2d.x, y - vect2d.y); }
// magnitude (length) of the vector
float mag() const {return sqrtf(x * x + y * y);}
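The listing is cut off at this point; among the remaining methods, the Dot Product and Normalize the text goes on to mention might look something like this (a sketch; the book's class has a few more functions and the exact signatures may differ):
// dot product, handy later for comparing directions
float Dot(const Vector2D &vect2d) const { return x * vect2d.x + y * vect2d.y; }
// normalize: divide x and y by the length, so the vector keeps its direction but has a length of 1
void Normalize()
{
float length = mag();
if (length > 0) { x /= length; y /= length; } // avoid dividing by zero for a (0,0) vector
}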
This little class throws up a couple of interesting points, which you may not have encoun-
tered yet in your C/C++ journey . Most interesting is the use of operator, which is a cool
but dangerous C++ feature . We are essentially telling C++ that when it sees a *(multiply)
operator in code, that it needs to look at the kinds of objects it is acting on, if it is acting
on a Vector2D and a float value, it uses the routine here, which is actually doing two
multiply commands .
Also for + and −, if it sees they are acting on two Vector2D’s it uses the code listed .
+ and – will still work perfectly on every other kind of number, but when it is used on
Vector2D’s from now on, it does a different kind of addition/subtraction that creates
new vector values .
Cool eh? But also, as I say, very dangerous; this is a concept known as overloading, allowing a command or operator to do more than one thing under specific circumstances. Don't overload an operator unless you really need to; it might seem fun, but it can lead to a world of pain for you later, when you forget you overloaded the + function on two floats to return an int for one particular circumstance, where it seemed to make sense, but if you actually needed a float returned, you did a bad thing.
Here though it's pretty safe; there are no current concepts of how to multiply Vector2Ds by scalars, or add or subtract two Vector2Ds, so this is a clear case of need.
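To see the overloads in action, a quick illustrative snippet (the variable names here are made up, not from the project):
Vector2D position(100.0f, 50.0f);
Vector2D direction(0.6f, 0.8f); // a unit vector
float speed = 2.5f;
position = position + (direction * speed); // our overloaded * scales the direction, our overloaded + moves the position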
We’re also adding a few functions to our class, such as Dot Product, Magnitude
(which I hope you recognize from school Trig Class), and the important one, Normalize .
What are the others for? Well you will need them later, and they are very trivial to code, so
doing them now is going to be useful . I will explain later . There are many other functions
that we can add to this as we get more into our maths journey, but for now these will do
what we need and allow us to expand a little later .
Another thing was the use of const in some of those functions; this is just a way of making sure we don't let the function change anything it's not supposed to. In such small functions it's a bit moot, but it's a good habit to get into, since accidentally altering the x and y values in more complex functions could have a disastrous impact. If we added some code that did make changes, the compiler would then complain and force us to think about what we are doing.
Ok, time to convert our project to use vectors and normalized step values, which we
hope will give us a more accurate stepping and allow for fluid movement .
The actual Dive and bombing systems are not really badly impacted by the bad maths,
so let’s focus on the step where it was most apparent, the FlyBack step . Feel free to fix the
Dive and bombing later .
Replace the code in the current FlyBack step with this. Notice, I've reduced the approximate point of impact check from 4 to 2, as it's a lot more accurate now. Also, to maintain the idea of speeding in, then slowing up as it gets to its target, I've used the distance/length/magnitude, whatever you wish to call it, as a factor, but ensured it never gets too slow. Since a normalized number is always less than one, we need to be sure we can at least move at the speed of the MovingX motion in order to catch it up. This is the big advantage of a normalized vector; we can alter the scale of it very easily by multiplying by a float value, and it's still going to point the same way. In fact, aside
from a means to indicate direction, a vector is very important for having a magnitude .
To be mathematically correct for a moment, it has both direction and magnitude . That
will become much more important later when we use Vectors for things other than
simple 2D motion .
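The replacement listing itself doesn't survive in this extract, but based on that description a sketch of it might look like this (the minimum speed value is an assumption, and Normalize is the in-place version sketched earlier):
case FlyBack:
{
Vector2D ToTarget(MovingX - Xpos, MovingY - Ypos);
float Dist = ToTarget.mag(); // how far we still have to travel
ToTarget.Normalize(); // unit vector pointing at our slot in the pack
float Speed = Dist / 32.0f; // ease off as we get closer
if (Speed < 2.0f) Speed = 2.0f; // but never slower than the pack's own motion (assumed value)
Xpos += ToTarget.x * Speed;
Ypos += ToTarget.y * Speed;
if (Dist < 2.0f) State = Moving; // the new, tighter 2 pixel check
}
break;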
How much neater is that visually? We see our Aliens flying in and smoothly taking their place: no more chasing, just neatly slotting into position.
Ok, time for me to confess, we could have put a small minimum value on the scale
system too, and yes I must be honest it would have given more or less the same result, but
this was meant to be a painless introduction to Vectors, did you feel any pain? No? That’s
ok, it’ll come later!
Of course we’re still using a mix of separate X and Y values and Vector2D in this
project but now that we have a Vector2D Class, let’s use it for all future situations where
we have an object that has X and Y motion .
Oh one final thing on this, we are calculating the magnitude of the vector
when we calculate speed . But later, we’re doing a rather clumsy calculation with this
line here .
Can you think of a really simple way to remove this line and change the code to achieve the
same result? I’ll give you a clue, we’re effectively trying to work out if we’re within a certain
distance of our goal, and we should not really need to calculate that twice!
There are a couple of other fantastic and very useful things that vectors can do for
us, but we don’t quite need them yet, so we’ll not clutter your head at this point . But rest
assured we’re a long way from being done with vectors .
bool Shooter::Update(Game* g)
{
bool fire = false;
if (g->InputHandle->TestKey(KEY_LEFT))
{
Xpos -= 1;
if (Xpos < 0) Xpos = 0;
}
if (g->InputHandle->TestKey(KEY_RIGHT))
{
Xpos += 1;
if (Xpos > SCRWIDTH - Image->GetWidth()) Xpos = SCRWIDTH - Image->GetWidth();
}
if (g->InputHandle->TestKey(KEY_SPACE))
{
fire = true; // fire
}
return fire;
}
Only a minor modification from the methods we used for Invaders from Space. Do make sure you add
#include "Game.h"
to the top of the Shooter.cpp file, or it won't know that the Game Class has an InputHandle to get info from.
So it’s all working, but it’s a bit slow, try changing the increments to values you find
give you more smooth movement, I’ll let you pick the numbers, try a few, and see what
works best for you . Hopefully, you’ve left your aliens in working order, so you will see that
as you move around they still attempt to crash into you, or at least the point where you
were when they started their dive . This adds some gameplay as you now really have to keep
moving to avoid the diving aliens . Later we should increase the frequency of these dives to
make the game a lot more intense, and avoid bullets too .
Now, you are asking yourself, why do I still have an empty Update function? Well
that’s a fair question, and the answer is, because when I laid it out I felt I might need a
routine that only passed a surface and a handler, but since I’ve moved more to using an
update that passes the Game Class address, and with that everything that the routine will
need accessed via that instance of game, it has become redundant . But since I made the
original Abstract Class have those updates, I have to at least provide the empty version .
I should go back to the original Base Abstract Class and remove it. But I'll let you do it, it's good practice for you. Note that the Aliens are using the Update with Surface and Input.
#pragma once
#include "GameObject.h"
Nothing too strange there; I'm maintaining the two types of update for now, but I am only going to use the version that passes Game, as I want the shooter to make decisions on when to fly. I also have a bool in there called ReadyToFire, because the bullet has only two states: flying, or on the ship's nose. I can use a bool to inform me.
Then we must create our Bullet .cpp file, so add that into your source lists .
#include "Bullet.h"
#include "Game.h"
Bullet::Bullet() { this->ReadyToFire = true;} // we can't do this here
Bullet::~Bullet() { }
bool Bullet::Update(Surface* s, Input* a_Input){ }
bool Bullet::Update(Game* g){ }
void Bullet::Draw(Surface* TheScreen){ Image->CopyAlphaPlot(TheScreen, Xpos, Ypos); }
Ok, we’re almost ready . Let’s try to create a bullet in the Game class, we can create it
just after we make Fred, but do not add it to our vector of game objects until AFTER
we make all the aliens, because we are doing some stuff that relies on the aliens start-
ing at a set value in the list, and we don’t want to hunt through the code to find it, let’s
just make sure our bullet is the last thing added for the moment until we add bombs
at least .
This was ok as a holding value, because when we wrote this, we didn’t actually know what
a bullet was . But now we do, and we want to make a new bullet and modify the flag it has
which a GameObject does not have, so we must change this now to
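The replacement itself is missing from this extract; given that Fred later accesses TheBullet directly, it was presumably something like this (an assumption):
Bullet* TheBullet; // in Shooter.h, replacing the old GameObject* holding type
Fred->TheBullet = new Bullet(); // in the Game init, just after we create Fred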
And remember to include the Bullet.h file at the top of the file.
Now, back to why the Bullet is in Fred; there probably is no need to keep Fred's bullet's address in Fred. We could just as easily put it in the Game.h file as we did Fred himself. But Fred and the bullet are linked, so it's slightly more sensible to let Fred reset the ReadyToFire flag at the appropriate times by accessing his own variables rather than dipping back into the Game's variables. But that really is a choice you can make for yourself: keep the Bullet separate from Fred in Game, or keep it in Shooter.h. I'm going to leave him for now inside Fred, but it does mean we should be careful to delete Fred's bullet when Fred's destructor is called.
But now, let’s now get the bullet to stick to the nose of Fred . Change the Bullet update
routine to this;
bool Bullet::Update(Game* g)
{
if (this->ReadyToFire == true)
{ // it's on the nose of Fred
Xpos = g->Fred->Xpos + g->Fred->Image->GetWidth() / 2 - 4;
Ypos = g->Fred->Ypos - 12;
}
return true;
}
We’re using the Game* g update because our main loop in Game calls that update pattern
for all non-Alien objects (unless you fixed that?) .
Annoyingly I had to subtract a tiny offset of 4 to get the line in the middle, can you
guess why? Now it’s not good to use hard numbers in code, so I really would like to fix
this . Perhaps then you can work out a way to alter this line so that if we change the bullet
graphic to something thicker it will still center . For this it’s a trivial problem, so if you
don’t know why it’s doing this, just leave the −4 in place .
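If you do want to lose the hard number, one possible fix (assuming the -4 is there because the bullet graphic itself is 8 pixels wide) is to centre on the bullet image as well, something like:
Xpos = g->Fred->Xpos + (g->Fred->Image->GetWidth() / 2) - (Image->GetWidth() / 2);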
Ok compile and move Fred around and you see the bullet stays on his nose . Time now
to get him to fly .
The bullet actual movement is going to be achieved by simply moving up whenever
the ReadyToFire flag is false, so we can add this to our update
else
{
Ypos -= 10;
if (Ypos < 0) ReadyToFire = true;
}
Now run that, you should now get a nice bullet firing, and returning to your nose tip when
it gets to the top . Notice I am checking that the bullet is in ready to fire mode, we don’t
want to fire it when it has already been fired .
Notice also that the shooter’s fire code contains a redundant fire variable, we really
don’t need it in this game, so unless you feel there is some need to keep it, it can be removed .
Our shooter can now shoot . Its bullet is smart enough to test for the end of the screen
and reset itself, all done by simply setting a flag . There is one other condition, which would
reset the flag . If it actually hit an alien!
So why don’t we get the bullet to test for Alien hits? Well it’s the same issue we had
with the Invaders from Space, we don’t actually always know how many aliens there are at
any given time and we’d waste processing cycles if we blindly tested every single one dead
or alive, worse, we may be reducing our vector list of objects as they die, so we could end
up testing something else .
No, far better to let the Aliens test if they’ve hit our bullet, and also for the Bombs to
test if they’ve hit our Shooter .
But we’ll do collision in a bit, first let’s give our Aliens the chance to drop some bombs
on us, and make our bombs a tiny bit more interesting .
Vector2D BombMotion;
float BombSpeed;
Surface* Frame1;
};
#include "Bomb.h"
#include "Game.h"
Bomb::Bomb(){ }
Bomb::~Bomb(){ }
bool Bomb::Update(Surface* s, Input* a_Input){ }
bool Bomb::Update(Game* g){ }
bool Bomb::DidWeHitFred(Shooter* Fred){ return false;}
void Bomb::Draw(Surface* TheScreen)
{
Image->CopyAlphaPlot(TheScreen, Xpos, Ypos);
}
I’ve left the Updates empty, and for now, made sure that DidWeHitFred always returns
false . There is one condition I want the Update routine to test for, Bombs going offscreen .
But what exactly do we do when bombs go offscreen or hit? Unlike our bullet, which
always exists, bombs are dynamically created, so we need to remove them from the object
list which means our old friend MarkForRemoval comes back into play .
One more thing though, when we work out the BombMotion we are going to tar-
get a fixed point, probably where Fred is at that point . So we need to know the normal-
ized vector . It’s a calculation that needs to be done every time a bomb is set off so to
keep our trigger routines clean, we should do that calculation and the subsequent setup of
BombMotion in this class, so add one more method definition in Bomb.h
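The definition itself doesn't appear in this extract; the name and signature below are guesses, but the job it does follows directly from the description:
// hypothetical helper in Bomb.h: work out a normalized direction toward Fred at the moment the bomb is dropped
void SetUpMotion(Shooter* Fred)
{
BombMotion = Vector2D(Fred->Xpos - Xpos, Fred->Ypos - Ypos);
BombMotion.Normalize(); // unit vector; Update then scales it by the bomb's speed
}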
Remember to add includes for Shooter.h and Alien.h so that the Bomb knows what they are.
The code for this is simply:
bool Bomb::Update(Game* g)
{
Vector2D Step = BombMotion*10;
Xpos += Step.x;
Ypos += Step.y;
if (Ypos > SCRHEIGHT + 10) this->MarkForRemoval = true;
}
There’s that MarkForRemoval flag, if our bomb gets too near the bottom, we set it to
indicate to our game loop that we want it to be removed .
// we don't need to update the 1st 46 because those are the ship and aliens
// handled by AlienGroup->Update(this)
for (int i = 48; i < Game::MyObjects.size(); i++)
{
Game::MyObjects[i]->Update(this);
if (Game::MyObjects[i]->MarkForRemoval == true)
{
delete Game::MyObjects[i]; // delete the object in memory
Game::MyObjects.erase(Game::MyObjects.begin() + i); // remove it from the vector
i--;
}
}
What this is doing is interesting and needs to be looked at carefully; we're updating all the nontrivial entities such as Fred and the Aliens before this loop, then the stars and now the Bombs that may have been added. To remove an object from play we have to delete it, because it was new'd, then erase it from the list. The reason for the i-- should be apparent but is often forgotten. When we start this loop we may have 150 objects to parse; suppose we are looking at the 100th, and we remove it. The vector's erase function will remove it, then shift everything down one step, so entry 101 now goes into entry 100's space after the erase. If we don't decrement the counter, which currently stands at 100, on the next loop it will be 101, and the new occupant of space 100 will get ignored.
We also need to go back into our code and check that all the object constructors take care to set MarkForRemoval to false when instantiated, or we may delete things we don't want to delete because of an assumed false setting on creation.
So we’re looking pretty good so far, lots of bombs trying to hit Fred and Kamikazi
aliens trying to smash into him . We’re close to our conclusion for this game, but let’s add
the all-important collision, and with it we need to add some gameplay rules, for lives,
and so on .
Like our previous game we really don’t need anything too fancy on this collision, a
simple circle to circle check will do fine . Start by getting the Aliens to test if they hit the
bullet . Or if they hit Fred . Both of these conditions end our Aliens life, either in Glory or
in misery!
Add a definition for a new method in your Alien.h file:
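The listing itself is missing here, but the later code tells us the signature it must have:
bool DidWeGetHit(Shooter* Fred);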
Now this is interesting, because this actually helps us to justify the decision to keep Fred's Bullet inside Fred's Class, as we only need to pass this function Fred's address to get access to both Fred, for a suicide collision, and Fred's bullet, for an ignominious defeat.
Ok add some code in the update, to run this test and if true, for now set the
MarkForRemoval flag .
Compile and run, and then try shooting an alien, what happens? Not what you were
expecting? Any ideas why? Let me give you a clue, in our Game .cpp file we have this
update sequence going on:
Can you see it? I put a comment in there for you to take note .
We update our Aliens in the AlienGroup->Update(this); which is great as it
keeps them all nicely grouped together so we can keep the moving status updated regard-
less, but it does mean those first 47 objects, Fred, and the 46 Aliens we created, do not
respond to the Update loop that follows, which starts from Object 48, which is the first star .
Well, that’s a good thing? Or a bad thing?
It’s kind of both . Bad, because hard numbers really suck, but the fact is the group
manager really needs to work on the Aliens and only aliens, it could have scanned the
Just a simple addition of two new states, which is why enums are very cool, we can add
things to them with ease .
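The enum listing isn't reproduced in this extract; with the two new states added it presumably ends up something like this (the type name and exact order are assumptions, though we know from later that the Alien values start at 100):
enum AlienState { Moving = 100, Arcing, Diving, Bombing, FlyOff, FlyBack, Dying, Dead };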
In the Alien.cpp file, where we process the states, add two empty state handlers just before the default; they can go anywhere, in any order, between the switch brackets, but convention is to do them in order.
case Dying:
break;
case Dead:
break;
default:
printf("Something undefined happened in Alien Update!\n");
break;
Ok, now that’s added, rework the collision system, not to change the MarkForRemoval
flag, but to set the State of the Alien to Dying . Now we just need to write some code to
handle these states and we’re good to go .
if (this->DidWeGetHit(TheGame->Fred))
{
printf("He done got me in my killin' parts\n");
this->State = Dying;
}
If you run this now, and shoot, you’ll see that when they are hit, they freeze, because they
now no longer have a state that currently functions, so they are stuck . That shows the code
is working but needs more .
We should also have noticed that our bullet didn't stop, and went on to disable many more aliens. That's something we also have to fix. Do that first by simply setting the Bullet's ReadyToFire flag, which will, of course, return it back to Fred's nose.
But be careful, it’s still considered to be active, so let’s also use that flag to actually decide
if the bullet is currently lethal by wrapping the collision check for the bullet in a test
like this:
if (Fred->TheBullet->ReadyToFire == false)
{
int My_Height = this->Image->GetHeight()/2;
int My_Width = this->Image->GetWidth()/2;
int Ob_Height = Fred->TheBullet->Image->GetHeight()/2;
int Ob_Width = Fred->TheBullet->Image->GetWidth()/2;
// do a simple circle/circle test
float R1 = sqrtf((My_Height*My_Height) + (My_Width*My_Width));
float R2 = sqrtf((Ob_Height*Ob_Height) + (Ob_Width*Ob_Width));
// move to the centre
int diffx = ((Xpos + My_Width) - (Fred->TheBullet->Xpos + Ob_Width));
int diffy = ((Ypos + My_Height) - (Fred->TheBullet->Ypos + Ob_Height));
float Dist = sqrtf((diffx*diffx) + (diffy*diffy));
if (Dist < (R1 + R2)) return true;
}
Now, when we compile and run, we should be able to shoot the aliens and have them freeze, and our bullet returns to our nose, where it can do no harm to any suicide attackers.
Perfect, we’re almost done with this part now, just need to make some decisions on
how we plan to show them dying and how we plan to remove them from play .
Dying is an important state, because it provides a means for us to check on how we
have done in the level, perhaps alter some factors as more aliens get killed and to check if
we’re at the end of the sequence . So Dying needs a bit more work .
Dead though, is really easy. Just position them offscreen. There's still a little processing going on, and the draw routines later will try to draw them, but that's ok, we don't mind that on this game as we're not really taxing our CPU… Wait… did you really think that was ok?
Hell no! We have to make sure they are not drawn at all; even a draw that does not
show them is a draw that’s wasting CPU cycles deciding not to draw them . We need to
stop that . But how?
Well we have a flag in GameObject Class that has been there for a long time but has
not been tested
bool onscreen; // we might mark an object as on/off screen to avoid processing
It's been there all this time quietly being ignored, but now its time has come. We should now use this flag, so that the part of our update in Game.cpp which does the drawing can test it, to see if some objects have been switched off or not. Of course, since it was never initialized, its state at the moment on each of our drawable objects, including stars, is whatever happened to be in memory, so set it in the base constructors:
onscreen = true;
Now every Bullet, Bomb, Alien, and even Fred who is created, will use one of those base
constructors and will therefore have its onscreen flag set to true . All that remains is to alter
the draw loop to take that into account .
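The altered draw loop isn't shown in this extract; the change presumably amounts to a simple test like this (a sketch):
for (int i = 0; i < Game::MyObjects.size(); i++)
{
if (Game::MyObjects[i]->onscreen == true) Game::MyObjects[i]->Draw(TheScreen);
}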
And we should find on a test run, that nothing will be different until we set that flag to
false in an object . Let’s make that happen in Dying for now;
case Dying:
onscreen = false;
break;
That stops them being drawn, but we can go a step further and have Dying also shove the alien well out of the way before handing it over to the Dead state:
case Dying:
Ypos = 2000;
onscreen = false;
State = Dead;
break;
case Dead: // I really don't need to do anything here, just acknowledge that I am dead
break;
Danger UXB
Next up is the bombs we have to make lethal to Fred, and also we need to consider what
happens to Fred when he is hit . We know there are two circumstances where Fred can be
killed, a Kamikazi hit, and a bomb hit .
Dealing with the Kamikazi hit first should be trivial, as it’s really just extending the
concept of the DidWeGetHit routine because it relates directly to the Alien, who sadly
dies in a blaze of green slime glory if he manages to collide with our heroic Fred . So that’s
the first thing to do . Add the following after the bullet test . It’s still the same circle to circle
code but with different targets:
// now check for a Kamikazi test
if (State == Diving || State == Bombing)
{
int My_Height = this->Image->GetHeight() / 2;
int My_Width = this->Image->GetWidth() / 2;
int Ob_Height = Fred->Image->GetHeight() / 2;
int Ob_Width = Fred->Image->GetWidth() / 2;
// do a simple circle/circle test
float R1 = sqrtf((My_Height*My_Height) + (My_Width*My_Width));
float R2 = sqrtf((Ob_Height*Ob_Height) + (Ob_Width*Ob_Width));
// move to the centre
int diffx = ((Xpos + My_Width) - (Fred->Xpos + Ob_Width));
int diffy = ((Ypos + My_Height) - (Fred->Ypos + Ob_Height));
float Dist = sqrtf((diffx*diffx) + (diffy*diffy));
if (Dist < (R1 + R2)) return true;
}
Notice, I have a different condition wrap around the code this time, I’m testing if the Alien
state is currently set to Diving or Bombing . I don’t really need to, I could just test regard-
less, but I know that most of the time the Aliens are not actually in a place where they are
likely to ever collide with Fred . This test therefore is what we call a cull .
We use culls in collision a lot, especially in 3D because a collision check is a fairly
decent amount of calculation . Here, for example, we have three square roots, a notoriously
slow function, and several multiply and divide functions . If we have 46 Aliens, and only
2 of them are Diving, it’s far more effective for me to only test the ones who are diving .
I will save 44 totally useless and processor expensive, collision tests, which I know will not
produce a result .
We basically also culled the Bullet collision when ReadyToFire was not set; avoid-
ing any collision call when you know it’s not needed is a good practice to maintain .
Compile and run with this new addition to the collision, and hold off firing at things, as you want to see the result when they collide. You should see them acting in exactly the same way as before.
◾ We want to have some code to decide when someone is going to fly, generally this
is done on either a timer or as we did, random coin toss system .
◾ We also want to decide if we let the Aliens fly alone or in groups, that’s quite an
important decision .
Starting with when they are going to fly: shall we just use a random value as a timer? Well, we could, it worked before, but pure random systems, somewhat obviously, tend to produce values that are too random. It does depend on the range we seek, which now leads me into a potentially rather long and boring discussion about random numbers.
Random numbers are an interesting concept in games, because in truth there are
no such things, CPUs actually find it very hard to produce true random numbers, but
we can simply manipulate a start value, called a seed, and modify it using various means
to produce a seemingly random sequence of numbers . This is the most common form
of randomness computers use, which has an advantage sometimes in that the sequence
derived from the same seed will produce the same results . That can be useful for ensuring
a repeating pattern of events .
It also can be undesirable, if you truly want your triggered events to be unpredict-
able . But though the sequence that is generated is predictable, the frequency in which it is
chosen and used, can be varied by truly unpredictable factors such as when a user presses a
key, which is an example of a truly unpredictable and therefore random event, because no
computer will know exactly when the user is going to hit a key . Setting a seed from unpre-
dictable or changing values, like the time the app starts up, can also produce a sequence,
which will be different from game to game .
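As an aside, a minimal illustration of seeding from a changing value, using the standard C library rather than the book's own Rand (just an example):
#include <cstdlib>
#include <ctime>
void SeedRandomness() // hypothetical helper, not part of the project code
{
srand((unsigned int)time(NULL)); // seed from the start-up time, so every run gets a different sequence
}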
So randomness can be partially controlled, or allowed to be seemingly unpredictable,
like many things in programming what you need will depend on the needs of your project,
it’s up to you which you choose .
// do some housekeeping
Trigger = false;
TimeTillChoice--;
if (TimeTillChoice == 0)
{
TimeTillChoice = BaseTime + Rand(30 * 5);
Trigger = true;
}
I've defined BaseTime as a #define in my Game.h, inside the class, like so:
int TimeTillChoice;
bool Trigger;
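The #define line itself doesn't appear in this extract; given the later remark that the timer works out at roughly 9 s at 30 fps, a plausible value would be something like:
#define BaseTime (30 * 4) // an assumption: 4 seconds base, plus up to 5 seconds from Rand(30 * 5)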
Needless to say these need to be initialized, and in the first init I'd actually make TimeTillChoice about four or five times longer than BaseTime, to allow the player to take in what's going on before all hell breaks loose.
Now this is an example of a code-based timer, but it's not actually using time as such; it's just decrementing a value we set somewhere, down to 0, so it's really more accurate to call it a counter. But we are using time, since we know that our game runs at 30 fps, so 30 cycles
is 1 s. We could, however, use real-time values if we wanted, as there are in fact timers on board in your OS that can tell you how long things have taken. Though most SBCs don't contain onboard real-time clocks (because of a lack of batteries), once they are powered up and running they keep count of how long they have been awake, and once they have been allowed to get the time from the Internet or your input, they set that as the time, which we can then read with a call like this:
gettimeofday(&t2, &tz);
We will discover cool things to do with this later, since it allows us to tell how long our game
cycle takes, we call this delta time, the time that passed since last cycle, and we do keep track
of it, because later as our projects become more complex it will have a very important use . We
could for example subtract that delta time from our TimeTillChoice counter, for a really
accurate value . But, this is a random value, actual proper time isn’t that important to us, we
simply want to create a variation in time . No need for microsecond accuracy here is there?
The time from the gettimeofday routine has many other uses, for example, it can
be a truly Random seed because we have no way of knowing what time our game starts to
the microsecond .
Back to our Alien selection, we keep getting distracted by geeky interesting things!
Once we decide that we are going to fly, we also need to decide who is going to do it! That’s
pretty important, since we really want the two top guys to do their thing before the others
get into a free for all, and we don’t want them all doing it at once . So some rule is needed
to decide, who gets priority .
We also would like them to fly in small groups, so when we trigger one, we should try
to trigger some close companions .
So let’s discuss what we’re going to do . We’re going to use the timer we just wrote,
when the timer says we can go, we need to pick someone . Here random can help us as we’ll
set up probabilities, which we can use to stack the odds of a move .
We have three types of Alien, top is our Commanders, we want them to go first, in fact
as long as there are Commanders in the line, we want them to always go first, things will
only start to get more interesting when they are both dead .
Thereafter we’ll let the Sergeants next in line have a go, then the privates .
I would like the Commanders to try to take two Sergeants with them when they fly,
and also the Sergeants to take two privates .
Privates though can go off by themselves as long as the Commanders and Sergeants
are dead .
Ok so that’s the battle plan, let’s try to code it up .
So, if we have three types of Aliens, let’s have a small array that gives us a percentage
chance which type we trigger and we can scan through, let’s also keep a current running
tally of how many of that type we currently have, which will be useful later . We didn’t
actually record the type of Alien when we created them, so we need to backtrack a little bit,
and actually set that up . We can use an enum and typedef again to make that happen .
Inside the Alien Class after public: and before the #define put this code
Notice, I break my little rule about starting an enum at a different point from other
enums, that’s because in this case I am going to use these as an index rather than a simple
compare .
We could leave them like that, but you know my feelings now about hard numbers…so time to do a bit of copy and replace. I don't recommend a full block copy and replace: just type the first Alien::Commander, copy it, then select the 1 in the next row and paste Alien::Commander over it. Then replace the first two with Alien::Sergeant, copy it, then highlight the next two, and paste to replace them. Repeat till done, then do the three with Alien::Private.
Why not a block replace? Well we’re replacing the numbers 0, 1, and 2, which do
appear a lot in the body of the chunk of code we’re altering, so individual, select and
replace is better if a bit tedious, it looks like this when done .
So that’s done, for no other reason than we may choose to alter the values we know that we
are looking at Commanders, Sergeants and Privates…If you compile that, you will not see
any difference at all, the code in the setup routine is still using 0, 1, and 2, because that’s
what those enum values are set to .
This time though, when we get the whatImages value, we need to store it in the Type variable we make in the Alien.h Class.
AlienType Type;
A->Type = (Alien::AlienType)whatImages;
Take special note of that comment, for later. Make sure you have ChanceOfHit[3] and HowManyOfThisType[3] as ints in your Game.h.
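As a sketch, the Game.h members and the sort of initial values the next paragraph describes might look like this (the Private figure is an assumption based on the text):
int ChanceOfHit[3]; // percentage chance that a given type will dive when triggered
int HowManyOfThisType[3]; // running tally of each type still alive
// in the Game init (Commander and Sergeant values are the ones given in the text)
ChanceOfHit[Alien::Commander] = 60;
ChanceOfHit[Alien::Sergeant] = 5;
ChanceOfHit[Alien::Private] = 0; // Privates never dive alone until the ranks above are gone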
So we've set up the timer to be about 9 s, which may be too long, but we can adjust it later, and the chance of a hit for Commanders is 60% and for Sergeants 5%, because we don't want to totally eliminate them from the first few dives. But we don't want the Privates to dive alone until the Commanders are dead; that, by the way, will need us to take a bit more notice of what happens when things die, we'll check back on that in a minute.
Once our timer says we can go, we simply test our existing Aliens, and if the trigger
is set we roll a Random chance, and compare it to the type % chance we gave . Don’t go for
100% as the highest though; we need to add a little uncertainty, that’s why we have a 60%
chance that you will fly a Commander when the time is ready .
If we scan through the list and don’t trigger one, it might need some code to reduce
the timer value to try again, but for now let’s see how it goes .
In our Moving step of the Alien update, let’s test if we want to fly, replace the holding
random routine with this:
if (TheGame->Trigger == true)
{
// lets do some triggering folks
if (Rand(100) < TheGame->ChanceOfHit[Type])
{
State = Arcing;
ArcInit = false;
TheGame->Trigger = false; // should we clear?
}
}
Now run your code, we should see a nice delay after the game starts, then the com-
manders will fly to their glorious Kamikazi deaths but it seems ok so far . We’ve not
added any formation code yet but that’s ok, we will . Leave it even longer and you will
eventually also get a sergeant or two taking the plunge . But with only a 5% chance every
3–5 s it’s going to be a while before we clear the board of them, and the Privates are
never going to go .
Ok, so we are doing a few small things, every time we kill a type, we increase the prob-
ability of that type flying . If we eliminate an entire type, we increase the probability of the
next type, though we can’t increase beyond a Private, so there’s a safety check in there and
also if we kill all the Commanders, we quite deliberately stack the odds for the Sergeant
and Privates to fly .
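None of that bookkeeping code is shown in this extract; a hedged sketch of the kind of thing the Dying case might do is below (the values and exact placement are assumptions):
TheGame->HowManyOfThisType[Type]--; // one less of my rank
TheGame->ChanceOfHit[Type] += 5; // more kills = more dives
if (TheGame->HowManyOfThisType[Type] == 0 && Type != Alien::Private)
TheGame->ChanceOfHit[Type + 1] += 20; // a whole rank wiped out? stack the odds for the next one down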
All pretty good, so far, time to get them to attack in convoy now . We can fiddle about
with the timing and also the increase we add to the chance of a hit to alter our gameplay
but we’re getting close to what we want . More kills = more dives, so the action should
become a little more frenetic as the level progresses .
And look, it passes a pointer to Game, and I will be able to know who I am, from the MyID of the Alien I am currently updating, so that I can look up a list of Aliens I know are under me.
Go back to your GroupManager initialize routine, and in the loop where you create the Aliens, after you actually new one, add this:
A->MyID = i;
And that will give our Alien an ID number between 0 and 45 inclusive, perfect for looking up an array!
But we have a problem, quite a big one; we have indexes we can use to look up an array, but we don't currently have an array of Aliens in our game…what? Yes we do, we're updating them using Game::MyObjects, and that is true. But Game::MyObjects is not an array of Aliens, it's an array of GameObjects, and the GameObject Class does not contain the MyID or State variables we need to change; they are in the Alien Class! We need another array, of Aliens, in the Game Class.
This can be a straight array because we know exactly how many we have, if you choose
later to have more Aliens of an indeterminate number then make it a vector, for now
though, add this to the Game Class in Game .h
Alien* TheAliens[46];
Annoyingly, before we compile this, we'll need to add a small predefine (a forward declaration) for our Alien Class, so that the Game Class knows about it. This is because our Alien Class includes the Game.h file to use Game* in its class define, but now Game wants to use Alien, so it kind of confuses the compiler, as some things are attempting to be created before it knows about them, and it is a bit of a cyclical mindbender.
Adding this line just before we define the Game Class will tell it that there is a class called Alien, and once it's all compiled the linker will fix things up for us. It's untidy coding at this point, but trust me, it works.
class Alien;
Now in the init system for our GroupManager we need to populate this new Alien array
with the address of each Alien as we make it, add this line after the push_back at the
end of the routine .
ParentGame->TheAliens[i] = A;
Boom, we now have a trail of breadcrumbs our Aliens can use to find each other, which
will let us write that GetMyFriends routine like this:
void Alien::GetMyFriends(Game* TheGame)
{
if (MyID > 7) return; //Privates not allowed friends!! :(
This will make sure that Commander only takes two of his possible three friends, and
sergeants can take all three if they are available . Privates, sadly have no friends, unless you
want to add them?
You can see though that this routine needs a new array, called WhoAreMyFriends,
so where is that? Well here of course!
int WhoAreMyFriends[8][3] =
{
{ 2, 3, 4 },   // 2nd and 3rd Aliens for sure, and a possible 4th Alien
{ 6, 7, 5 },   // move to the end of the sergeants
{ 8, 9, 10 },  // 8th, 9th, and 10th
{ 9, 10, 11 },
{ 10, 11, 12 },
{ 11, 12, 13 },
{ 12, 13, 14 },
{ 13, 14, 15 }
};
I put it just above the routine, and let it live in Global space, it makes it quicker and easier
to access . Can you work out from the routine and the array what’s going on? Run it now
and fix any errors/typos that come up and see what happens .
That’s pretty much it for the Aliens now, we can tweak some of those annoying hard
numbers, though far better if we replace them with some nice easy to locate #defines at
the top of the file, but overall we’ve achieved our goal, time to move on to our next problem .
Fred Reacts!
So what does Fred do when he’s hit? Well he blows up, in fact he does pretty much the same
things as the Aliens, we might show a little explosion graphic, then he dies, a life counter,
which we don’t currently have anywhere, will decrement, and then it’s up to the game code
to decide if he gets reborn or if the game is over .
And now we see we have a problem . We have no game management code at all in our
Game loop, we’re only doing the simple steps of updating and rendering objects with very
little thought into how our game itself should flow .
So we need to put something on our to-do list, *write some game management code .
And first part of that needs to be to include some lives . So go back to Game .h, add a
Lives variable as an int, and in the Game init code set it to a value, let’s say three .
Now, our Fred/Shooter update code does not contain any concept of states like our Aliens do. Given that we now realize we need at least three states… Active (which the code will call Normal), Dying, and Dead… we should add them as an enum, and a state variable, in our Shooter Class.
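The listing doesn't survive in this extract; from the description (a type called Status, with values starting at 200) it was presumably along these lines:
// inside the Shooter class, just after the constructor
enum Status { Normal = 200, Dying, Dead };
Status State;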
I put it just after the constructor to show you . Now we have a variable called State that the
Shooter can check, and it’s using a new type called Status, but it’s enclosed totally in the
Shooter Class, so when we come to set it up, after we create the Shooter (not before) we use
this format:
Fred->Xpos = (SCRWIDTH / 2) - 16;
Fred->Ypos = (SCRHEIGHT) - 40;
Fred->State = Shooter::Normal;
As we might want to use Dying or Dead as a state elsewhere, we can use that Shooter:: prefix to help us make sure that we've got the Dying and Dead values we want for the Shooter, which are different from the Dying and Dead values the Alien has. Just to be quadruple sure, I made sure the enum for the Shooter's states started at 200, where the Alien's started at 100. This is a form of encapsulation; we're making sure that the values we want to use for that class are only usable via that class. We could enhance this even more by making the enum private inside the class, but I'm not a big fan of that, though real coders will disagree.
Ok, so let’s make a few alterations to the Shooters update code, we currently only have
the normal state, so let’s put in a switch and use that as the normal case so we get this:
bool Shooter::Update(Game* g)
{
switch (State)
{
case Normal:
if (g->InputHandle->TestKey(KEY_LEFT))
{
Xpos -= 5;
if (Xpos < 0) Xpos = 0;
}
if (g->InputHandle->TestKey(KEY_RIGHT))
{
Xpos += 5;
if (Xpos > SCRWIDTH - Image->GetWidth()) Xpos = SCRWIDTH - Image->GetWidth();
}
if (g->InputHandle->TestKey(KEY_SPACE) && TheBullet->ReadyToFire == true)
{
this->TheBullet->ReadyToFire = false;
}
break;
Also, if you've not already done so, take out the redundant fire flag code from the old Invaders version, as I have here.
Run this, and if you’ve set it up ok in the initialize this will work just perfectly with no
apparent difference from the last attempts . But now we have the ability to do something
cool when we get hit and all we have to do to trigger that, is tell Fred, he’s Dying .
Let’s put in some holding code, so we know that it gets to the right bit of code when
we get hit .
case Dying:
printf("Uggg he got me, farewell cruel world\n");
State = Shooter::Dead;
break;
Now to actually make the bombs deadly, time to update and use that DidWeHitFred
code we wrote way back in the bomb’s update code, which now looks like this:
bool Bomb::Update(Game* g)
{
Vector2D Step = BombMotion*10;
Xpos += Step.x;
Ypos += Step.y;
if (Ypos > SCRHEIGHT + 10) this->MarkForRemoval = true;
if (g->Fred->State == Shooter::Normal)
{
if (DidWeHitFred(g->Fred) == true)
g->Fred->State = Shooter::Dying;
}
}
Compile and run, and let Fred get hit by a bomb; we should see that he will report his condition in the console window and then freeze up.
Remember also, we have Diving and Bombing Aliens who can kill Fred, and they have a specific test that checks for that, but they are only removing themselves when they hit. Now we can add this to the bottom of that code in Alien.cpp, in Alien::DidWeGetHit(Shooter* Fred):
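The addition itself is missing from this extract; presumably it just tells Fred he is dying when the Kamikazi test succeeds, something along the lines of:
if (Dist < (R1 + R2))
{
Fred->State = Shooter::Dying; // the Kamikazi takes Fred with him (a sketch of the missing addition)
return true;
}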
Great, I mean, how sad, he's dead. But it's all good, we have more lives, maybe. Though we should probably stop the Aliens from starting new dives now, as they clearly have completed their mission.
Our ability to know that he’s in a normal or dying state allows us to tailor several of our
behaviors . We could even trigger a little celebration dance, if we had the graphics for it .
But the most important thing we need to do is to get the game to check for an end
game condition or a restart condition, which incidentally is where stopping the arc is quite
useful, as we don’t want to respawn in a swarm of diving Aliens .
Ok, so we know it all works; let's add some explosions and some logic code to control the game flow. Go back to Shooter.h and replace the eloquent bit of text with some relevant code:
case Dying:
if (g->Lives > 0) g->Lives = g->Lives - 1;
State = Shooter::Dead;
break;
I'm checking that Lives is not already 0 because there's the possibility of more than one bomb hitting; though in truth only the first one is ever going to set this, it's a good sanity check to make sure.
I'm also going to add a timer to the Game Class in Game.h, called the respawn timer.
int RespawnTimer;
Its purpose is hopefully obvious, but do make sure at game init time you set it to 0.
To make it work in the Game Class, add this code before you update Fred .
if (RespawnTimer != 0)
{
Fred->onscreen = false;
RespawnTimer--;
printf("Get Ready for a respawn in %i \n", RespawnTimer);
if (RespawnTimer == 0)
{ // time to bring him back
Fred->State = Shooter::Normal;
Fred->onscreen = true;
}
}
This will prevent Fred from being drawn while it decrements the timer that will reset him. This should give you a clue as to when we want to set that RespawnTimer to a value?
case Dying:
if (g->Lives > 0) g->Lives = g->Lives - 1;
State = Shooter::Dead;
g->RespawnTimer = 30 * 3;
break;
The Dead state's only purpose is to signal to the game code that Fred is not currently able to have any influence, until he is set back into the Normal state.
If you compile this now, you should actually have the first real gaming experience with
lethal force and the chance of your shooter being killed, or clearing up all your Aliens . But
then it does kind of stop, there’s no way to continue, which is not a good thing .
Postmortem
Now this time we really didn’t find it too hard to get access to the info we needed, by
ensuring our Base Class contained the main data structures that connected all the other
classes to that Base Class it was possible for one object to interact with another and also to
generate bombs, put them in lists and do tests with the shooter, who likewise could make
Abstraction
Is harder to explain in simple terms, I like to think of it as moving away from the techni-
cal details of an object and looking at it from a higher level, for example, our Kamikazi
Enemies, they are just enemies to me, I don’t want to spend too much time thinking about
the underlying variables and subclass that make them work, I just want to make the ene-
mies move around . The fact that Enemies and Fred, are essentially the same workhorse
Object Class, allows me to consider them as abstracted concepts .
Inheritance
Allows us to build on previous concepts and objects to create new things, thus creating
a hierarchy of object concepts but able to refer to them as their base types . Again our
Enemies in Kamikazi are a great example, all the Enemies inherit their data from Objects,
but are controlled by their final class types .
Polymorphism
Which despite sounding like a nasty virus, simply means an ability to have multiple meth-
ods of the same name but having different functionality . This is usually denoted by pass-
ing or returning a different set of arguments . There’s a bit more to it than that, but that’s a
decent one sentence explanation .
Real hard-core C++ programmers will adhere to these four principles as if their life
depends on it, and if you really want to do proper C++ then so will you . But it can take
quite some time to bend your brain around the concepts while you are still struggling to
get to grips with, “what happens if I do this .” So we are going to be slowly developing our
OOP concepts and moving away from the very basic C and C++ with classes we’ve done
so far . I’m just warning you ahead of time that our coding style needs to develop away
from this fugly beginning, to something a bit more elegant . But as far as humanly possible,
I’m going to keep the code readable, which will break some of the main OOP rules from
time to time . Working a lot with beginners, I suggest readability and understanding will
always be better than correct but confusing .
Welcome to OpenAL
As its name suggests, this is, or rather was, an open access Audio API; it's actually named that way because it was designed along similar lines to the original OpenGL. It started out as a royalty-free cross-platform system, but it is now proprietary, so you need to pay for the very latest version. However, it's not all bad: there is still a fully functional, very stable and widely used version available called OpenAL Soft, which we can use royalty free and which, more importantly, we can easily install on our target; it will do pretty much all we need with a modest amount of effort.
There ARE many other sound APIs available: FMOD, OpenSL, and, as mentioned, OpenMAX from the Khronos Group, which would seem ideal since we are looking for royalty-free cross-platform systems. But though you can get some SBCs supporting them, not all do; audio on an SBC is often part of the main chip's system, so it's up to the makers to decide on compatibility with audio APIs, and that support is very variable. I can't locate it for most of my SBCs, including the very well-supported Raspberry range. I also, for the purpose of learning, need a very general system that should work on all the SBCs we are targeting in this book. You are, of course, absolutely free to use another sound API, if you accept you might limit your number of user targets. So for now, OpenAL it is! And the experience of using it will stand you in good stead when you want to upgrade to a better, more modern, or more target-specific API.
Installing OpenAL
We have to break the no external library rule one more time for a good cause . This time
though unlike our header-based graphics loader we’re going to install a prebuilt library
from the net, the process is a tiny bit different .
Open up a terminal or if you are in a console mode, enter,
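(The actual commands aren't reproduced in this extract; on a Debian-style target the install and the file-listing steps would typically be something like the following, with the package name being an assumption you should check against your own distro:)
sudo apt-get install libopenal-dev
dpkg -L libopenal-dev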
It will display the location of all the files in the package, including docs. No, I don't know what dpkg is supposed to mean, Linux is an absolute mystery to me, I don't know why it is so popular; let's hope that's the extent of the Linux we have to use…though I suspect not!
Hopefully you’ll get something like this:
which lets us know our include files are as we hoped in the /usr/include and usr/include/
al directories
We need to pass this info to the VisualGDB properties like so:
You can see I added the include directory, to the Include directories, with a space after
the last one . And the Library directories, notice we got the name of the * .so from our
dpkg app and the name is openal . Library names are usually, the name of the * .so file
Only this time, we don’t need to worry about installing all the include and lib directories
as it will install itself in the OpenAL directories, we only need to add the lib name as Alut .
Once it’s done, select ok and you should find your red lines all gone . The intellisense works
on the files cached in your dev machine, so now it will work again . If you ever add a lib and
find you are getting red lines, even when you are sure the directory you put in properties
is correct, this should fix it .
And now, at last, we should be able to get our sound to work; we'll get that going shortly. If you don't hear anything it may just be that your volume is turned down: go to the audio preferences, make sure you select controls and activate the Pulse Code Modulation (PCM) control, which gives you volume.
If you are using a target other than a Raspberry Pi, it's going to be a total crap shoot whether you have sound activated on your system; half of the target boards I have do not have working sound systems built into their OS as standard, which is a frustration to be sure. However, there may be some help at hand, in the form of another library we can add to provide sound play on most systems. If you don't already have it installed, try installing pulseaudio on your target.
sudo apt-get install pulseaudio
◾ Also some music for the background, but that’s tricky, so leave that till later .
◾ A bigger death knell for losing all lives and the game over .
Ok, that’s enough, there’s a temptation to add more and more, but, that’s unwise . I’ll
explain in the next section why .
1. Sound is memory hungry; I keep saying this because it’s important, only load
what you need so that it’s ready to be triggered . If you need to load a new set of
sounds arrange them in an order that you can load at nongame critical times .
2. As sound data are generally quite big, loading from our storage system is going to
be slow . Try to avoid loading in the game loop, it will cripple performance, this is
why we use buffers .
Let’s keep the sounds simple for the moment, adding it only to our Kamikazi project .
We’ll add a couple of explosions and a shooting effect, with some nice spot effects for dif-
ferent game events . That’ll be enough for this game to let us get everything working and
stand us in good stead for a project with more complexity .
To use our sounds we need a simple sound class, and a way call the fx . So let’s start by
giving the sound fx some easy to use names in an enum .
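The actual enum from the project isn't shown in this extract, but a minimal sketch, with the names themselves being placeholders you'd match to your own wav files, might look like this:
enum SoundFX
{
    SFX_EXPLODE1 = 0,   // an alien going bang
    SFX_EXPLODE2,       // Fred's demise
    SFX_SHOOT,          // the shooter firing
    SFX_MUSICLOOP       // the background loop, which will live on source 0
};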
◾ Buffers: This is where our raw sound data live, this is not the same as the data we
load from our storage systems, this is converted from wav or other formats into
PCM data .
◾ A Listener . You .
The reason these are separate is to do with the way OpenAL renders sound. It actually takes into account the direction and distance of a sound, as well as the speed the object emitting the sound is travelling, so it can even simulate the Doppler effect. Those are fairly complex equations; not unlike OpenGL working out what pixels are actually seen in a given direction, OpenAL works out the sound you should hear.
Buffers are the easiest of our concepts: a buffer is simply a location in memory where a raw sound sample exists, identified with a handle once it's set up. We should load up all the sounds we need and keep track of the locations.
The easiest way to think of a Source is simply as an object in the game world that emits sound. In time we will allow every object that emits sound to be a source. But for a simple 2D game, the only real source we need is going to be fixed at a position that represents the screen we are looking at, which hopefully has speakers to the side of it. In a 3D game though, we might have two ships, with engines humming, moving toward our location at different speeds. OpenAL allows us to think of each ship as a source, playing the same engine sound from a buffer, but we, as the listener viewing the game world at a fixed point, will be able to determine that they are coming from two different locations in our stereo soundscape.
We don’t need to pass any parameters to alutInit, though it is designed to take parameters
should it be called from a main function, you can give it command line parameters . In our
case passing NULL, NULL is fine .
Ok a simple sound class header should look like this:
#pragma once
#include <al.h>
#include <alc.h>
#include <alut.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdlib.h>
#define NUM_BUFFERS 8
#define BUFFER_SIZE 4096 /* 4K should be fine */
ALCdevice *device;
ALCcontext *context;
For the moment, all we are defining are some data concepts and the very simple idea of
loading and playing sounds . At this stage we don’t really need much more, and we should
not try to do much more . Let’s just get it working .
So our Sound .cpp file only has to worry about our constructor/destructor pair, and
the two simple concepts we asked for, Load and Play sound .
Our Constructor can set up all the main things, and we’ll let LoadSound do the load-
ing, I am going to add a few functions though to allow us to report errors and get access to
some information that OpenAL can give us .
It should look a little like this:
#include "Sound.h"
#include <AL/alut.h> // we need this to load Wav
#include <string.h>
list_audio_devices(alcGetString(NULL, ALC_DEVICE_SPECIFIER));
defaultDeviceName = alcGetString(NULL, ALC_DEFAULT_DEVICE_SPECIFIER);
device = alcOpenDevice(defaultDeviceName);
if (!device) {
fprintf(stderr, "default device not available\n");
return ;
}
// tell us the device
fprintf(stdout, "Device: %s\n", alcGetString(device,
ALC_DEVICE_SPECIFIER));
// clear the errors
alGetError();
// create the context
context = alcCreateContext(device, NULL);
if (!alcMakeContextCurrent(context))
{
fprintf(stderr, "failed to make default context\n");
return ;
}
#pragma once
#include <al.h>
#include <alc.h>
#include <alut.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdlib.h>
#define NUM_BUFFERS 8
#define BUFFER_SIZE 4096 /* 4K should be fine */
class Sound
{
public:
Sound();
~Sound();
ALCdevice *device;
ALCcontext *context;
ALCenum error;
ALuint source[NUM_BUFFERS], buffer[NUM_BUFFERS];
ALuint frequency;
ALenum format;
const ALCchar *defaultDeviceName;
bool LoadSound(char* fname, ALint index);
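The header extract above is cut short; going by the discussion that follows, it also declares the two play methods and closes the class, roughly like this (the exact return types are an assumption):
    bool PlaySound(ALint index);   // plays a buffer on a free source, sources 1-7
    bool PlayMusic(ALint index);   // music always plays on source 0
};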
Notice the addition of a PlayMusic method, to allow me to differentiate between a sound and a music loop. PlayMusic is always going to work with source 0; PlaySound will work with sources 1–7 inclusive.
And the code for this now looks like this, though I have removed the text output format to save a bit of paper here.
#include "Sound.h"
#include <AL/alut.h> // we need this to load Wav
#include <string.h>
>>>> I removed the list_audio_devices and TEST_ERROR, as they are the same
as before
list_audio_devices(alcGetString(NULL, ALC_DEVICE_SPECIFIER));
defaultDeviceName = alcGetString(NULL, ALC_DEFAULT_DEVICE_SPECIFIER);
device = alcOpenDevice(defaultDeviceName);
if (!device) {
fprintf(stderr, "default device not available\n");
return ;
}
// tell us the device
fprintf(stdout, "Device: %s\n", alcGetString(device,
ALC_DEVICE_SPECIFIER));
//clear the errors
alGetError();
// create the context
context = alcCreateContext(device, NULL);
if (!alcMakeContextCurrent(context))
{
fprintf(stderr, "failed to make default context\n");
return ;
}
// lets talk about OpenAL
printf(" OpenAL Version %s\n", alGetString(AL_VERSION));
printf(" OpenAL Renderer %s\n", alGetString(AL_RENDERER));
printf(" OpenAL Vendor %s\n", alGetString(AL_VENDOR));
printf(" OpenAL Extension %s\n", alGetString(AL_EXTENSIONS));
So the only major difference is the scan through to find a free source, and the new
PlayMusic method . We’re ready to set off the music in our Game Init using
TheSound->PlayMusic(2);
Where 2 is buffer number 2 (or sound 3 since we count from 0) . We should really use our
enumeration system though .
Horrible Earworms
Our looping sound works well, and does what we want, but human ears are sensitive to repeating patterns, so in a pretty short timescale this short loop is going to start to irritate our sensitive programmer ears as we spend a week or two listening to it droning on and on and on. We could just turn the TV volume down, but that's more of an ICT problem and you can never get an Engineer out on a Sunday!
So even though a longer tune will still start to make us crazy after a few days, proper music for our background is clearly something we want. But we are faced with a few more technical issues, and even a legal one. We've already seen just how large even modest wavs are in terms of memory. OpenAL can only cope with uncompressed data, so longer fx or short tunes eat memory even faster; CD-quality uncompressed stereo runs at roughly 10 MB a minute, so a few minutes of music swallows a serious chunk of RAM. Since some of our targets don't have much memory to play with, alternative methods have to be found.
Ideally we would try to save our music in a common compressed format such as MP3
and decompress it as we play it . But, sadly MP3 is not free to use, and though the people
who own it have been a little less litigious in recent years, there’s no sense in poking a
sleeping bear .
And of course add OGG to the list of libraries you include. The headers should be located at /usr/include/ogg, so add that to your include list in your Makefile settings.
Using OGG files is as simple as loading graphics; it works in almost the same manner, we can load an OGG compressed file and let it occupy space ready for OpenAL to work with.
Streaming
So we can load, decompress, and play sound, but you probably noticed we’re still using
staggeringly large amounts of memory, decompressing a small file into a big buffer does
not eliminate our chronic shortage of memory on our systems .
We need some other way to deal with this, the process is known as streaming . What
this involves is allocating a certain amount of space that we can afford to give, and having
OpenAL work with that, but, keeping that space filled up with sound data coming from
the process of decompression into that space .
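The project's own streaming code comes later, but as a rough sketch of the idea, using OpenAL's buffer-queue mechanism and a hypothetical DecodeMore() call that pulls the next BUFFER_SIZE chunk of PCM out of the decoder, the pattern looks something like this (assume we're inside a Sound member function, so format, frequency, and source are the members from our header):
// two small buffers leap-frog each other; refill whichever one OpenAL has finished with
ALuint stream[2];
alGenBuffers(2, stream);
// prime both buffers, attach them to the source and start playing
alBufferData(stream[0], format, DecodeMore(), BUFFER_SIZE, frequency);
alBufferData(stream[1], format, DecodeMore(), BUFFER_SIZE, frequency);
alSourceQueueBuffers(source[0], 2, stream);
alSourcePlay(source[0]);
// then every frame or so:
ALint done = 0;
alGetSourcei(source[0], AL_BUFFERS_PROCESSED, &done);
while (done-- > 0)
{
    ALuint finished;
    alSourceUnqueueBuffers(source[0], 1, &finished);   // take back the used buffer
    alBufferData(finished, format, DecodeMore(), BUFFER_SIZE, frequency);   // refill it
    alSourceQueueBuffers(source[0], 1, &finished);     // and put it back in the queue
}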
But the reality is different…take a look at our project directory and we see something horrible beyond words: a mish-mash of *.h files, *.cpp files, make files, setting files, and so much more. It may all look neat and tidy in Visual Studio, but it's a mess in the folder itself. If we only had a couple of files it wouldn't really be an issue, so we've not worried about it so far. But we're going to start increasing the number of files with each project, and while that has no impact at all on the compiling, Visual Studio knows where every file is and compiles it correctly, it will make it hard for us to locate files we may want to use in later projects. So it's time to grow up and clean up our bedroom a bit.
The simplest way to do that is to make sure for each filter you have in your solution,
you also have a similarly named folder in your directory!
Take note of the Remote Directory option. By default it will use a copy of your PC's own directory structure, stored off the /tmp directory, and for the most part during development that will be fine, but this library is going to be used by many different projects now, so it really needs to live in one central place that won't change if you use a different PC or decide to relocate your projects on your PC. I've set it to use a MyLibs directory just off the VisualGDB directory; that way I know it's my code and it is generated and maintained by my own VisualGDB projects. Now that isn't actually ideal, because the tmp directory is erased every time you power off your target, so it will not be there until you do a build. So it's important, at the start of a code session, to build that library to replace it before your main program tries to build. You could, and probably should, put this in an actual permanent directory not associated with tmp. For now though, as we are still learning and things are going to chop and change, we can tolerate this need for a prebuild.
One final thing to do, which I could have done when I named it, but I can still do here: go into the MyLibrary VisualGDB properties and rename the output file name to libMyLibrary.a.
The GNU Linker expects all libs to have a prefix of lib to their name, which it strips to get the library name. Our projects will therefore include this lib by naming it MyLibrary.
It may just be a minor niggle with the VisualGDB integration, I have reported it, so it may get fixed, but it's not the end of the world; just build (without dependency) and remember to make sure you have built MyLibs first: right click on the MyLibs project and build the
This is nothing more than a loop that fills the screen with tiles going from the top to the bottom, one line of tiles at a time.
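The listing itself isn't reproduced in this extract, but the shape of such a DrawMap loop, assuming a hypothetical CopyTo-style blit on our Surface class, the Map1 array described below, and placeholder MAP_WIDTH/MAP_HEIGHT constants, is something like this:
void Game::DrawMap(Surface* s)
{
    for (int y = 0; y < MAP_HEIGHT; y++)            // one row of tiles at a time
    {
        for (int x = 0; x < MAP_WIDTH; x++)
        {
            int index = Map1[y][x];                 // which tile goes in this cell
            // CopyTo is a stand-in for whatever blit your Surface class provides
            Tiles[index]->CopyTo(s, x * 16, y * 16);
        }
    }
}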
These red and green tiles are being created in the Game Class's init routine
void Game::GameInit()
{
this->InitDone = true; // ok we can set this to true so this won't be called again
/* Tile example */
// make a green tile
Surface* T = new Surface(16, 16);
T->ClearBuffer(GREENMASK+ALPHAMASK);
Game::Tiles.push_back(T);
// make a red tile
T = new Surface(16, 16);
T->ClearBuffer(REDMASK+ALPHAMASK);
Game::Tiles.push_back(T);
And the resulting tiles are being pushed into a vector held in the Game .h file .
The Map itself is a big 2D array at the top of Game .cpp, actually held in what is known
as global space, that is, not in any particular class . It’s only there for now, so we can access
it quickly, normally we would keep data in a class .
Aside from this new DrawMap and a few additions to Game, I hope you can see this
is currently just a very simple template . But most of the content should be quite familiar to
you . It’s really not much different at all from the Invaders . But that map array is important
as it contains the indexes of which tiles need to be displayed on screen .
This DrawMap system will effectively replace our clear screen system . It’s a little
slower of course but it is very effective .
Now when you compile and run that you will get a nice image of a map with a rough
spelling of HELLO, which if you look in the Map1 array, you should be able to see it . Try
adding a full stop to the hello in the map?
◾ Draw and display a map: That’s done for you as you can see .
◾ Place our character in and interact with that map: Still to be done .
We really need to work out what the win goal should be. Do we just want to get to the top? Or perhaps we'll make it a bit more interesting and make it a collecting challenge. So let's arrange to collect a number of items while avoiding enemies, which are guarding those items.
First thing we need is a character to control, make a class called Player, and create the
header like this:
#pragma once
#include "GameObject.h"
#include "Game.h"
class Player : public GameObject
{
public:
Player();
Player(char*, MyFiles*);
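The rest of the header is cut off in this extract; based on the constructor, destructor, and Update we write below, the missing part presumably declares something close to this:
    ~Player();
    void Update(Game* G);      // overrides the virtual in GameObject
    Surface* Images[4];        // one frame per direction, [0] is walk down
};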
If you’ve been paying attention, you will remember in our Kamikazi and Invaders games,
we defined a quite different update routine, which was deliberately designed to override
this update routine in the GameObject;
virtual bool Update(Surface* s, Input* InputHandler);
This new form of update, passing the Game Class, is frowned on by C++ purists because
I’m sending the calling class’s address to the player code, and everything the calling rou-
tine has in it becomes exposed . That goes against every rule of C++’s encapsulation and
protection of data .
But I really don’t care, because I want to get access to quite a few of the Game Class’
variables and I don’t want to jump through hoops to do it . Not at the moment anyway . It’s
important we don’t end up fighting with the language at this point; we just want to get
things done . Once we are comfortable with the way things work, and we feel we under-
stand how, then, and only then should we care about the perceived proper ways to use the
language, remember it works for us, not us for it!
But to use this update system, I need the GameObject Class to have this same type of
update . Why? I hope you are asking, well because my Game Class stores all the relevant
objects it has to update and draw in this vector .
static std::vector<GameObject*> MyObjects; // this will hold all the game
objects we need in game?
And it’s this vector we iterate through to do the updates, which is as you can see a Vector
of GameObject pointers . So I can only call GameObject methods, not a specific Player
Update . Though, of course, I could, there’s only one player and calling one specific update
is not a major problem for us, I just prefer to have them all in the same vector .
One solution is to add the same kind of update to the GameObject Class methods as a virtual and make it an empty method, like this, in GameObject.h:
virtual void Update(Game* g) {};
So we are both declaring and defining this in the header . We’ll add to the concepts we
might need later . For now, an update, a draw, and some coordinates are all we need .
The cpp file is also going to be a little different, we’re used to doing empty constructors
and destructors but this time we are going to do something a bit different . To begin with
our player has four frames, one for each direction, and our header we made some space for
that with an Images[4] array . So we should fill that, in the constructor let’s load in the four
frames that our character uses .
Player::Player(char* fname, MyFiles* fh) : GameObject(fname, fh)
{
// we can create with this images but lets store the full set
this->Images[0] = this->Image; // store the 1st one we loaded up with
(walk down)
Our constructor starts right away by calling its base constructor with a filename and file
handle, this is fine because we know the image is the down image, and we’re happy to let
that get loaded into the GameObject Image pointer, and also now into our Images[0] . Let’s
load up the three other frames in to the array and that’s done .
Now notice when we loaded the array, we used the new command three times . That
means that memory was allocated for these instances, and done in this instance of the
class . We must adhere to a golden rule, every new must have a delete .
So our Destructor is not going to be an empty one, it must be responsible for deleting
these three objects when/if it is called . But what about the first image? Which will actually
be the result of this call in the Game .cpp file
Player* GO = new Player((char*)"../Assets/walkDown.png",this->FileHander);
As this is called in the Game Class, we need to let the Game Class itself handle its removal, so for now we're only going to worry about our three news, and make sure we have three deletes.
Player::~Player()
{ // because this class made 3 images we need to delete them
delete this->Images[1];
this->Images[1] = nullptr;
delete this->Images[2];
this->Images[2] = nullptr;
delete this->Images[3];
this->Images[3] = nullptr;
// but... we didn't create Images[0] here, it was created in the Game class
so let it remove it
}
There, that’s done . We can create and when needed delete our player and store away his
four images . The update routine can now be written:
void Player::Update(Game* G)
{
Input* IH = G->InputHandle; // easier access;
if (IH->TestKey(KEY_RIGHT))
{
this->Image = Images[3];
this->Xpos += SPEED ;
}
if (IH->TestKey(KEY_LEFT))
{
this->Image = Images[2];
this->Xpos -= SPEED;
}
if (IH->TestKey(KEY_UP))
{
this->Image = Images[1];
this->Ypos -= SPEED;
}
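(The extract cuts off the matching KEY_DOWN test, but it follows exactly the same pattern, using Images[0], the walk-down frame:)
if (IH->TestKey(KEY_DOWN))
{
this->Image = Images[0];
this->Ypos += SPEED;
}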
This will allow us to move our man around…once we have three things done in our
GameClass, first in the init routine we need to create our player:
Player* GO = new Player((char*)"../Assets/walkDown.png",this->FileHander);
GO->Xpos = 400;
GO->Ypos = 200;
MyObjects.push_back(GO);
And we need our Game::Update routine to cycle through the MyObjects vector calling
this new update system:
for (int i = 0; i < Game::MyObjects.size(); i++)
{
Game::MyObjects[i]->Update(this);
}
We could put it in the same loop as the update, but I prefer to keep logic and drawing
separate . Reasons for this might become more obvious later when collision could result in
the removal of an object already updated, and drawn .
Don’t forget to add Player .h to the Game .cpp list of headers . Compile and run…it
should look like this:
So this works pretty well, if we press left we go left, if right we go right, and our frame
changes to suit the direction . But notice we use a speed value, defined in Player .cpp after
the headers, with
#define SPEED 1.0f
If we want to check the Left we can just subtract 1 from that address, and for Right we add the width of the sprite image to get to its top right corner. If we don't detect a red pixel we allow the addition of the speed.
if (IH->TestKey(KEY_RIGHT))
{
this->Image = Images[3];
if (*(Point + (this->Image->GetWidth())) != REDMASK + ALPHAMASK)
this->Xpos += SPEED ;
}
That awkward-looking dereference symbol * is telling us that we want to test the actual pixel at Point plus our calculated offset. There are other ways to address an area of buffer memory, but I thought it would be nice to see how we access memory like this; we'll be using a nicer system soon.
As we're looking directly at a memory address, up and down need a slightly different approach: we need to know the width of the screen's buffer, so we can multiply that by how far down we want to look, or subtract the width of the buffer in pixels to look up.
Down would look like this:
if (IH->TestKey(KEY_DOWN))
{
this->Image = Images[0];
if (*(Point + (Offset * this->Image->GetHeight())) != REDMASK + ALPHAMASK)
this->Ypos += SPEED;
}
See if you can write the tests for Left and Up yourself? All you need to know is that instead of adding to the reference point, they subtract from it.
Compile and run the code and try walking around, you can see that our player can
indeed move until he hits a red pixel!
I made it a typedef so that it would allow me to do this for the variable type;
Direction Dir;
From now on we can load Dir with Up, Right, Down, or Left, rather than 0,1,2, or 3 . It’s
just a nicer way to address things because hard numbers can be confusing sometimes if
you forget what number relates to what value .
Ok, now I just remembered we said we were going to also include homing, so I will
add that to the type of Direction controls we plan to have
typedef enum Direction
{
Up = 0, Right, Down, Left, Homing
} Direction;
#include "Enemy.h"
#define SPEED 1.0f
Enemy::Enemy(){}
Enemy::Enemy(char* fname, MyFiles* fh) : GameObject(fname, fh){}
Enemy::~Enemy(){}
void Enemy::Update(Game* G)
{
switch (Dir) //what direction are we moving in
{
case Up:
break;
case Left:
break;
case Down:
break;
case Right:
break;
default:
break;
} // end switch
}
We will fill in those case statements with proper movement and detection shortly, same
way we did with the player . Let’s try generating these chaps in the init of the Game Class,
but before we do, we need to add a random function; ideally to our Game .h (it needs a
proper home soon!)
We’ve used this before in our Kamikazi and Invader games . The Random placement will
look like this, remember to add Enemy .h to the Game .cpp file and this is placed in our
Game::Init routine after the player is generated .
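The listing is missing from this extract, but a rough sketch, assuming a simple Rand helper built on rand() from <cstdlib>, a placeholder asset name, and an arbitrary enemy count, would be along these lines:
// crude random number helper, fine for placement
int Rand(int range) { return rand() % range; }

for (int i = 0; i < 10; i++)   // 10 is an arbitrary count
{
    // asset filename here is only a placeholder
    Enemy* E = new Enemy((char*)"../Assets/enemy.png", this->FileHander);
    E->Xpos = Rand(SCRWIDTH - 16);
    E->Ypos = Rand(SCRHEIGHT - 16);
    MyObjects.push_back(E);
}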
And in Game, when we create him, don't use and push a TP value; use ThePlayer instead.
Let’s create these exactly the same way we did with the enemies .
Homing in
We’ve got enemies moving in nice predictable ways, reacting to the obstructions in the
map, and we’ve got things to pick up . This is a pretty well-established game mechanic, but
if we’re honest it does not really do a lot . If we keep our wits about us we can avoid the
enemies . So let’s add that extra feature . Homing, where we said we would let the closest to
the man switch to a homing system, not unlike our Kamikazi divers .
How can we determine the closest one? It's pretty simple for us to calculate range; our old friend Pythagoras did that for us. But do we really want to scan through 20 enemies every cycle to work out who's closest?
Well, yes, we actually have no choice. Remember what we humans do is look and visually compute distances, but computers can't do that; they literally have to test every single relevant object and compare it with the others.
But there is a logical way to do it. We already have a nice loop doing the updates in our Game loop, so if we compare the range of each relevant object in turn against the last best, by the end of the loop we will know which is closest.
Notice I said relevant object. We have three types: (1) family, (2) player, and (3) enemy, and we only want to test the enemy. There are at least two ways to do that: we could add a type value to the GameObjects, set it up in the constructor of each object, and then it becomes a simple test to see if it's an enemy; or we could add a range value and have all nonenemies set that to a very large value, meaning they'd never be chosen, and let the enemy put correct values in.
Range is actually quite a useful concept in a lot of games, so I’m going to jump for that,
and add Range as a value in GameObject so that all types of game object have some value .
But only the Enemies will actually calculate it each frame .
float GameObject::DistanceFrom(GameObject* P)
{
float XDist = P->Xpos - Xpos;
float YDist = P->Ypos - Ypos;
return (XDist*XDist) + (YDist*YDist);
}
Notice, even though this is a Pythagorean calculation I am not bothering to SQRT it,
because I can work just as easily with the square of the value as I can with the square root,
this used to be an important optimization, perhaps not so much now but SQRT is still a
time-consuming function, so if you really don’t need it, don’t use it .
As we parse through the objects in the vector to update them, we now just need to add a call to this routine in the Game Class; if it finds a closer object than we currently have, it keeps its details in the CurrentClosest variable.
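That checking routine isn't shown in this extract; a minimal sketch of the idea, using the DistanceFrom method above, the Range value we just added to GameObject, and assumed CheckRange/BestRange names (CurrentClosest is reset, and BestRange set to a huge value, at the start of each update pass), could look like this:
// called from the update loop for each enemy in turn
void Game::CheckRange(GameObject* Enemy)
{
    Enemy->Range = Enemy->DistanceFrom(ThePlayer);   // squared distance is fine
    if (Enemy->Range < BestRange)
    {
        BestRange = Enemy->Range;    // new best, remember who it was
        CurrentClosest = Enemy;
    }
}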
Remember to add a descriptor for this function in the Game Class, we have one more step
to do, which is to add to our game update to call this nice range checker like this:
case Homing:
{
// calculate where we want to move to, then see if its safe
float PX = G->ThePlayer->Xpos - Xpos; // vector toward the player
float PY = G->ThePlayer->Ypos - Ypos;
Vector2D MoveThisWay(PX, PY); // use these to make a vector
MoveThisWay.normalise(); //normalise it so we can make better use
It’s nice to see our old friend the Vector2D Class coming back in, we really should have
used it for the whole game, hint hint . A nice small technical thing to note, I usually prefer
to have my case statements in brackets, it’s not always needed, and you’ve seen several
instances so far where I didn’t, but here I absolutely had to, because I was creating some
local variables to hold temp values . If I had not put the case statement in brackets to con-
tain the scope of the statement it would have thrown a confusing error when trying to
compile . (Translation, I didn’t put it in brackets and it threw up a confusing error! I suffer
so you don’t have to .)
A much more important point to make here: we did something we will do again and again in future. We worked out where we wanted to go, used that for testing, but only updated the position of the sprite when we decided that it was a valid point. We will repeat that process a lot in future; movement in any game world often depends on knowing where you want to go, rather than where you actually are.
Wrapping It Up
So now all we need to do is deal with some gameplay conditions . Clearly contact with
enemies is a death, so we need to store the numbers of lives we give, and if we catch all the
family, it’s a win and then game over, with a nice little triumph message .
I’ll let you do those, so that we can move on with this book, as you can tell from the
very basic graphics and poor gameplay, this isn’t a serious game, it’s just a demo . But let’s
make it playable at least and add some scoring and end-game conditions . For later games
though you should refer to the support site final versions source code, for game state info,
as its going to be pretty much common code/methods for each game . You’ll find a finished
version on the support site to compare against your own efforts, but really how you want
to tackle this is up to you .
Clearly we have a lot more colors and more importantly these tiles can represent different
types of things we might stand on, dipping into a pixel buffer no longer makes sense . But
we do still have access to the tile or character map that can give us information .
We can see, for example, that tile 0 appears to be a blank tile, tiles 1, 2, 3, and 4 are solid, tiles 5 and 6 are water, 7–13 are solid, though the pipes might be useful for other concepts, 14, very interestingly, is a ladder, and 15 is another pipe.
So 12 of our 16 tiles seem to be some kind of ground, 1 is a ladder, 2 are water, and tile 0 is blank, but might have a use.
Here’s a simple play map set up with these graphics, nothing too fancy . I put this
together in 10 mins using a lovely shareware package called Tiled, from Thorbjørn Lindeijer,
it’s available free from http://www .mapeditor .org/ but if you plan to use it please donate a
few pennies to allow him to continue to support it . If you want to design your own maps
this is a great app to own . Of course, you should put more care into your design than I have!
Where TILE_X and TILE_Y are values defined elsewhere that give the size of the tile
(usually the same, but never assume!) .
The fact we can check any point, means we can use our objects reference
point, and/or any offsets from the reference point we want to use . That gives us
great flexibility to test the middle, top, bottom, sides, and even corners of a rectangular
sprite .
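The lookup itself is only described here, but in code, finding the tile under any world point boils down to something like this sketch, using the TILE_X/TILE_Y sizes mentioned above and the 64-tile-wide map width used later in the chapter:
int Game::TileAt(float x, float y)
{
    int XMap = x / TILE_X;                 // which column of tiles
    int YMap = y / TILE_Y;                 // which row of tiles
    return WhichMap[YMap * 64 + XMap];     // the tile index stored in the map
}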
Ok, so that all works, we have a new collision system, as long as we know the type of tile we
can assign different attributes to it and react accordingly when our characters detect they
have come into contact with different tiles . Now let’s work on a project that really makes
use of these cool graphics and collision systems .
BaseAnim = 0;
this->Image = Images[BaseAnim+AnimIndex];
(AnimIndex += 1) &= 3;
Each direction sets its own base, though climb up and down share, and I can make the
choice of frame quite easily . Using base 0 for Right, 4 for Left, and 8 for Climb . Since this
game is now looking side on, we have to consider how we’re going to move . We’ve created
a playfield where we recognize the concept of down being at the bottom of the screen, and
presumably we adhere to some form of gravity, we want to walk on the floor, and any other
platforms in our map and we should fall when we’re not actually jumping up .
Let’s start by introducing the basic idea of falling in our Player::Update code, add
these two lines before the key reads
Ypos += SPEED;
if (Ypos > SCRHEIGHT - 48) Ypos = SCRHEIGHT - 48;
We can see we are simply adding speed to our Ypos, unless we're on the ground (allowing a slight offset for the fact our sprite's reference is its top left corner).
Run this and you will see that we do indeed fall, while still retaining some ability to move left and right, apparently stopping when pressing up and falling faster when pressing down. Can you understand why up and down are doing such odd things? They are effectively negating or compounding our fall system.
But clearly this isn't a very convincing fall; falling with gravity is an accelerating process. As anyone who's ever jumped out of a plane can tell you, you fall faster and faster
Adding the speed this way gives us a much more convincing fall, and we must also remember to still test if we hit the ground, and if so we either went splat or simply stopped falling, so we can reset the speed.
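The actual listing falls in a gap in this extract, but going by the gravity code used later in the chapter it is essentially this (the ground offset of 48 matches the earlier hard-coded floor check):
Yspeed += SPEED;                        // gravity accelerates us every update
if (Yspeed > 9.81f) Yspeed = 9.81f;     // cap it at a terminal velocity
Ypos += Yspeed;
if (Ypos > SCRHEIGHT - 48)              // hit the ground?
{
    Ypos = SCRHEIGHT - 48;
    Yspeed = 0;                         // stop falling and reset the speed
}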
That’s pretty neat, by always adding a gravity value each update we move but now
we need a small test condition, giving us the power to stop gravity at a condition point
we recognize as the ground . Let’s try adding some jumping now and see how that works .
Jumping, requires you to exert a force to overcome gravity, I’m reliably informed by
my much fitter nongame developer friends that to overcome gravity, you have to exert a
force with your legs, which must be greater than the force of gravity, but it can only be
applied once as an impulse at the point of the jump . Since in this case, we have a positive
gravity force, to overcome it we need a negative jump force, add this line to your Up key
get code before or after the animation
Yspeed = -SPEED*6;
It’s nothing fancy, but that will mean when we press up, we are giving ourselves a negative
force, which will propel our man up, but and this is the cool important part, gravity does
its job every cycle, this happens only once (though for now we will trigger it as long as the
key is pressed) .
Try it and see, but don’t hold the up key . So now we have jumping, our acceleration
up is added to or position, but our acceleration up is constantly eroded and eventually
overwhelmed by our gravity update, just like real life .
We need to stop the fact that we can continually apply the force if the key is pressed though; it should only be possible if we are currently on the ground.
There are two ways to do this. We could have an indicator flag to tell if we are in the current process of jumping; the flag gets set when the jump triggers and resets when the fall stops. This is quite a nice way, but another way is also possible and will aid our design: we can only allow the jump to trigger if we are on the ground to start with, which at the moment would be done with this.
if (Ypos == SCRHEIGHT - 48)
Yspeed = -SPEED*6;
int* WhichMap;
This will provide us with a pointer to a location which is an int, and as our Maps are made
of arrays, we can tell our Game Class to set the relevant map we want into this location and
any object with access to the Game Class public members can get this .
Sadly, though we can pass the address of the base of an array quite easily, we can’t
pass the dimensions of a 2D array quite so easily as a pointer, so we will have to do a bit
of gymnastics because I don’t want to alter our Update entry parameters to make passing
an array simpler .
In your Game init Class be sure to let the WhichMap variable know where the map
we are using is with this command:
WhichMap = &Map2[0][0];
Now when our player is updating he can update gravity as before, but he is now able to alter the test condition that stops or allows jumps, to take account of his position in the map, like this:
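That listing sits in the gap here, but based on the attribute version shown a little further on, the map-based test is essentially this (the 64-tile map width follows the values used later; the foot offset is an assumption):
int YMap = (Ypos + 48) / 16;                            // the tile row just under our feet
int XMap = (Xpos + (Image->GetWidth() / 2)) / 16;       // the tile column under our middle
int WhatsUnderOurFeet = G->WhichMap[YMap * 64 + XMap];
if (WhatsUnderOurFeet != 0 && Yspeed >= 0) Yspeed = 0;  // a nonzero tile is ground, stop falling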
Now we’ve got a lot more flexibility and a pretty convincing jump, but already you see the
jump only really lands properly if we have that center bottom point on our map . We need
to make some small changes now so that we can get this kind of control .
We want to fall in empty areas, land on solid, and be able to detect if a wall is going to stop our X-axis movement. What that would look like, if we could see the tiles as tiles, is depicted in the following image. Let's create a small list of attributes, which will relate directly to our 16 different types of tiles:
This very simple list tells us if something is solid, 1, or not, 0, and there is the special case for the ladder, which is 2. We could also consider these as binary bit patterns for multiple attributes on each tile!
So now place that list in our Game Class under the map . Remember we will once again
need to access this array in our player and later Enemy Classes, so have another pointer
to the data called
int* WhichAttributes;
Notice we don’t need the & address symbol or [][] because it’s a single-dimensional array and
C++ actually knows that such arrays are already treated as addresses . With the attributes
for each tile now recorded, another small change to our ground test has it looking like this:
int WhatsUnderOurFeet = G->WhichMap[YMap*64+XMap];
int Attrib = G->WhichAttributes[WhatsUnderOurFeet];
We are now seeking out the attribute associated with a tile at the players feet, and if it’s
non-0 we will stop it, do the same for your jump condition:
if (Attrib != 0 && Yspeed >= 0) Yspeed = -SPEED*6;
Point-to-Point
As you might imagine, point-to-point is simply a case of going from one place to another and back. And we actually already know how to go to a point; we did it in Kamikazi. The only real change then is that we have two points, and we need to detect if we have reached one before we change the point to focus on.
So as long as they are moving left<>right they are simply performing a straightforward action. We don't even really need to have them check if they are standing on a platform; we can comfortably place them at a point where there is no need to move up or down between point A and B. This bit of code will do just that.
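That snippet is not included in this extract; a minimal sketch of it, using the Xmin/Xmax limits declared in the header shown below, would be something like:
if (Dir == Right)
{
    Xpos += SPEED;
    if (Xpos >= Xmax) Dir = Left;    // reached the right-hand point, turn back
}
else
{
    Xpos -= SPEED;
    if (Xpos <= Xmin) Dir = Right;   // reached the left-hand point, turn back
}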
We will of course need to define the two points somewhere. As this is essentially an enemy-specific behavior, we can put it in the Enemy Class, or perhaps more wisely create a new class that inherits from Enemy. If we plan to add multiple enemy types this makes sense, since we can isolate the update systems for each type of enemy and keep specific variables in the relevant class. The Enemy Class itself can then be restricted to handling animation, drawing, and testing for collision with the player. So our Point2Point header can look like this.
#pragma once
#include "Enemy.h"
class Point2Point : public Enemy
{ public:
Point2Point();
Point2Point(MyFiles*);
~Point2Point();
void Update(Game* G);
Direction OurHeading;
int Xmin, Xmax;
int Ymin, Ymax;
};
Update should by now be obvious, and then there are a few variables which our update routine can make use of to do its moves.
In our Point2Point.cpp file, we just need to create the constructors: a simple default one, and one which is going to load our images into the Images array, so it needs the address of our file handler.
if (graphics.find(fn) == graphics.end())
{
printf("New graphic to be added %s \n", cstr);
// we never found it
Surface* T = new Surface(fn,FH);
graphics.insert(std::make_pair(fn, T));
return T;
}
else
{
printf("Graphic previously loaded and now reused %s \n", cstr);
return graphics[fn];
}
}
I’m putting this in the Enemy Class because it’s really only the enemies that I need to take
care of . This system depends on something called a map, a very nice special form of array
that instead of an index, looks up things based on other things, in this case a filename .
We will need to define our map in our enemy .h file, which I want to make static, as
there should only be one map . Our new enemy .h header now looks like this:
#pragma once
#include "GameObject.h"
#include "Game.h"
#include <map>
As long as we also have a definition of this map in another file, ideally in the enemy class
like this
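The declaration/definition pair is not reproduced here; assuming the map is keyed on the filename as a std::string, it would typically look like this:
// in Enemy.h, inside the class
static std::map<std::string, Surface*> graphics;
// and its one and only definition, in Enemy.cpp
std::map<std::string, Surface*> Enemy::graphics;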
We are now able to use AddorReturn as a loading system, which will do exactly what it says. It does, however, create a very dangerous precedent. We will be creating new surfaces and storing them in this map, providing the calling function with the surface, but we are passing on the responsibility for deleting those surfaces to whichever class creates the enemies and, in turn, destroys the map. Normally, if an object makes a new item it's up to that object to delete it, in a clean-up or destructor function. But since more than one enemy is going to be using these graphics, it's not practical for the first enemy to delete the surface, potentially damaging the graphic integrity of any remaining enemy.
AddorReturn could just as easily be based in our GameObject Class; it's up to you really. I would like to have a separate collection of images only for enemy graphics in this case, so there's no real reason to have it in the GameObject Class.
Let’s get the constructors built, I’ll just detail the ones with files, you can manage the
others .
Point2Point::Point2Point(MyFiles* FH)
{ // we can test our map to see if the files already exist, if so use the
existing
Images[0] = AddorReturn((char*)"../Assets/fungus_1.png", FH);
Images[1] = AddorReturn((char*)"../Assets/fungus_2.png", FH);
Images[2] = AddorReturn((char*)"../Assets/fungus_3.png", FH);
Images[3] = AddorReturn((char*)"../Assets/fungus_4.png", FH);
}
Here we can see how to use the AddorReturn function, when we create the first Point2Point
enemy, we will see that all four of them will load, but the second time, it will reuse the
graphics . We will expand this a bit more as we find a need to initialize variables .
So a simple dumb left/right or up/down motion is fine and if we are careful to place
them in the right part of the map, we will never have to worry about them appearing to
walk off the edges or through walls . These systems are ideal for flying types of enemy, and
the system is simple to expand with a counter into an array of points, and using normal-
ized vectors to move to them .
But if we want them to do things a little more cleverly we need them to have much
the same understanding of how to move around the map as Bob has . They should adhere
to the concept of gravity unless flying and interaction with the map, meaning that they
should fall when there is nothing to support them, they should stop if they encoun-
ter a wall, and they should try to make it look as if someone, or something is actually
controlling them .
Patrolling Enemy
So let’s have the first semi-smart baddie, we’ll use the same graphics for him and his header
file isn’t a lot different, but this one has no need for min and max points . Create a new pair
of file for a Patrol Class, this is the header:
#pragma once
#include "Enemy.h"
Basically, nothing more than a stripped-back Point2Point. Like before, we won't actually use our default constructor, but it's good to leave it in place.
Our Filehandle-using constructor is going to perform the same duty of loading the files in for the images. Even though we only plan to use two again, there's no harm at all in loading them all; if we find a way to make use of them another time, they are there and ready for us, but if it worries you, change things to use a two-entry Images array.
Patrol::Patrol(MyFiles* FH)
{
// we can test our map to see if the files already exist, if so use the
existing
Images[0] = this->AddorReturn((char*)"../Assets/fungus_1.png", FH);
Images[1] = this->AddorReturn((char*)"../Assets/fungus_2.png", FH);
Images[2] = this->AddorReturn((char*)"../Assets/fungus_3.png", FH);
Images[3] = this->AddorReturn((char*)"../Assets/fungus_4.png", FH);
BaseAnim = 2; // 0 is the static frame, 1 is a stand, 2 and 3 are the walk frames
Yspeed = Xspeed = 0;
AnimIndex = 0;
Image = Images[0]; // we need one to start
Dir = Right;
}
So only a small difference here, mainly I am making sure I have an Image surface ready to
be used before the update because I will need to gain access to my images sizes . The Update
is where the main action is now going to occur . After working out the timing for animation,
we again have a switch, this time for two directions as I won’t make him go up and down .
if (Attrib & 1)
{
Dir = Left;
break; // break the loop we are done
}
You can create a Patrol fungus in your GameInit, in game .cpp, with this,
// make a standard patrol
Patrol* Pat = new Patrol(FileHander);
Pat->Xpos = 210;
Pat->Ypos = 20;
Pat->Dir = Enemy::Right;
MyObjects.push_back(Pat);
Compile and run, and our smoother moving patrol fungus can now be seen going off to
the right and when he hits the bricks, turning back .
It’s not exactly cutting edge AI, but it’s a good example of our enemies making
decisions about their direction of travel based on the environment around them, just
as we do .
Homing Enemy
Finally, one more type of Enemy can be added, a homer of sorts . We’ll use what we know
about calculating distances and have a fungus that sits quietly until Bob gets close then he
tries to attack, hunt him down as far as the environment will let him . The movement parts
are going to be the same as our Patrolling fungus, but to make them a bit more interesting
#pragma once
#include "Enemy.h"
class Homer : public Enemy
{
public:
Homer();
Homer(MyFiles*);
~Homer();
void Update(Game* G);
float GetDistance(Game*);
Direction OurHeading;
float Distance;
bool Moving;
};
Not really a massive difference, there’s an additional variable for distance, and a bool,
which we will need to initialize to a default false when we make the constructor .
The default constructor again will be empty with a File handle constructor looking
like this:
Homer::Homer(MyFiles* FH)
{
// we can test our map to see if the files already exist, if so use the
existing
Images[0] = this->AddorReturn((char*)"../Assets/fungus_1.png", FH);
Images[1] = this->AddorReturn((char*)"../Assets/fungus_2.png", FH);
Images[2] = this->AddorReturn((char*)"../Assets/fungus_3.png", FH);
Images[3] = this->AddorReturn((char*)"../Assets/fungus_4.png", FH);
BaseAnim = 0; // 0 is the static frame, 1 is a stand, 2 and 3 are the walk frames
Yspeed = Xspeed = 0;
AnimIndex = 0;
Image = Images[0]; // we need one to start
Dir = Left;
Moving = false;
}
Aside from initializing the Moving flag it’s not doing much more, the real meat in this
class once again is evident in the Update functions, which are now being altered depend-
ing on the state of that Moving flag .
void Homer::Update(Game* G)
{
float Speed = 1.2;
Yspeed += 1.2f;
if (Moving == false) // lets check if he's moving if not should we make him
{
BaseAnim = AnimIndex = 0;
Distance = GetDistance(G);
if (Distance < 16 * 6) AnimIndex = 1; // pop up and show interest
if (Distance < 16 * 3)
I don’t need to paste in the full routine because it should be very obvious to you now what
to do, the tests for the distance, are taken care of, and if we are not moving we do the old
movement systems, but be careful to close your else brace before the gravity check because
we want that to function even on a sleeping mushroom .
Create a new Homer in Game, but, of course, adding Homer .h, and then in the game
init, add this:
Compile and run, you should see that as you get close to the mushroom, he will pop his
head up, if you get even closer he’s going to charge at you, then move away and once out of
range revert back to a sleeping mushroom .
As all three of our enemies have such similar code, some of it actually even repeats,
it’s tempting to have a single file with one Enemy Class and use a type variable to just
choose different initialize and update routines . It might even be tempting to have each
more advanced one inherit its primitive version .
But keeping the types separate allows us to make changes very easily without having
an impact on the other types, it also lets us maintain small and focused files, which do
specific things for specific types, which is an important mantra in C++’s OOP-coding
concepts . There are small but subtle differences in our three fungus types, which might get
lost if we have one large and confusing file or overdo the inheritance . But there is a valid
argument for separating out the exact same movement code in the patrol and homing
As it stands it is blindly allowing a jump but using the climb animations; let's alter things so that when he's in a climbing mode we use the climb animations. Also we need to test when he can transition to a climbing mode and when he needs to exit it.
Add a bool flag called Climbing to your Player.h and be sure to set it to false when you init your game. Also make sure you add the final two frames for the jumps in the constructor
Images[12] = new Surface((char*)"../Assets/brianJumpR.png", fh);
Images[13] = new Surface((char*)"../Assets/brianJumpR.png", fh);
Simply wrapping the test code in an if/then/else condition will let us use the bool:
if (IH->TestKey(KEY_UP) )
{
// this is the climb
if (Climbing)
{
BaseAnim = 8;
this->Image = Images[BaseAnim + AnimIndex];
(AnimIndex += 1) &= 3;
}
We can compile and run this and see what happens: if we press up, we jump without a frame change, but the ladder itself is treated as a platform and allows us to jump up. So let's add a few more tests for that climb, so it now looks like this:
if (IH->TestKey(KEY_UP) )
{ // this is the climb
if (Climbing)
{
Yspeed = -SPEED * 6; //<<<<<<<<climb speed
BaseAnim = 8;
this->Image = Images[BaseAnim + AnimIndex];
(AnimIndex += 1) &= 3;
// we need to test if the climb is over?
if (Attrib != 2)
{
Climbing = false;
}
}
else
{ // now we check if we are on a ladder
if (Attrib == 2)
{
Climbing = true;
}
if (Attrib != 0 && Yspeed >= 0)
Yspeed = -SPEED * 6;
}
}
This looks good, we are changing the animation at the right time. The only issue now is that we are still basically jumping up the ladder, so reduce the amount of speed we add when we are climbing, to give a more step-like value; 2*SPEED seems to work well.
This method allows us to clear the final top part of the ladder by allowing it to revert to a jump at the final step; is that something we want? Ideally we could have a little clambering-up animation, but sadly we don't have the graphics, so we have to leave it like this.
Now try to move down! We have a problem, don’t we? Our attribute test is looking at
our feet and right at our feet there’s no actual ladder it’s a row below our feet, so we need
to do a second test and rewrite the down routine like this:
if (IH->TestKey(KEY_DOWN))
{
int WhatsUnderOurFeetplus = G->WhichMap[(YMap+1) * 64 + XMap];
int Attrib2 = G->WhichAttributes[WhatsUnderOurFeetplus];
if (Attrib2 == 2) Climbing = true;
if (Climbing)
{
BaseAnim = 8;
It’s a little different from the up routine, especially as you now have two attributes to test:
the one directly under our feet and the one a tile down, this will allow it to travel to the
bottom of the ladder before the Climbing flag is cleared . Compile and run… .and it’s not
quite right, is it?
As we have gravity acting on our player at all times, we need to deal with the fact that
gravity should not work while we are in climbing mode . This is why, Down is making a
direct change to the Ypos, because the gravity should not allow us to drop while we’re on
the ladder .
We need a small change to the gravity section of the Game::Update function, to take
account of this one off condition:
if (Climbing == false)
{
Yspeed += SPEED;
if (Yspeed > 9.81f) Yspeed = 9.81f;
Ypos += Yspeed;
}
else
{
Ypos += Yspeed;
Yspeed = 0;
}
So gravity works the same if we're not climbing; otherwise we add the speed, which the Up climb will use, and make a point of immediately clearing the speed so that Down will not fall.
But what happens if we do something we have not anticipated? Try climbing the ladder, and then walking off.
Hmmmm, not quite right, is it? Our walk systems do not have tests for climbing, so if we are in climbing mode and we walk off the ladder, we are still technically in climbing mode and gravity will not work. What's the best solution? Do we lock out left/right motion while in climb mode, or do we automatically clear the Climbing flag when we move left or right? Both are valid, try them out.
Animation might also need a little helping hand: we should probably return the player to one of his walk frames, or when jumping to one of his jump frames. But which one? Left or right? We don't keep any note of his direction of travel; we should keep a note of our player's direction when moving, remember we have that nice Direction Dir; value.
By testing two points, less than a tile width apart around the center point we already test, we can make sure that the climb will be central. Try adding these new methods for the player:
bool Player::TestClimb(Game* G)
{
int YMap = (Ypos + 33) / 16;
int XMap = (Xpos + (Image->GetWidth() / 2) - 6) / 16;
int WhatsUnderOurFeet = G->WhichMap[YMap * 64 + XMap];
int Attrib = G->WhichAttributes[WhatsUnderOurFeet];
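The listing is cut short here; from the description below (two points tested, both must be on the ladder) the rest of the function presumably looks something like this, with the second point mirroring the first at the other side of center:
    int XMap2 = (Xpos + (Image->GetWidth() / 2) + 6) / 16;   // a second test point, the other side of center
    int Attrib2 = G->WhichAttributes[G->WhichMap[YMap * 64 + XMap2]];
    return (Attrib == 2) && (Attrib2 == 2);                  // only climb if both points are on the ladder
}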
Don’t forget to add the prototypes for these in your Player .h file . It’s hopefully pretty clear
that they are looking for two points within a tile width from the center and returning true
if both of them are true . We can now use these routines to trigger up and down climbs,
and the simple center tests to continue . Also be aware that the down test routine will cause
a crash if you try to use it on the bottom .
Yeah, this works pretty well, we can only drop down the ladder when we are pretty
central to it, we can adjust the tolerance a little to make those ±6 values bigger or smaller as
we need so long as the distance between the two points is not more than the width of a tile .
Are we done yet? Nope, sadly there is still a small issue: try going UP when we're not quite central. What do you think is happening here? He's not climbing, because our new tests will not set the Climbing flag, and he's also not animating. Basically, he's jumping, and at the top of his jump he's recognizing a solid thing to land on. Hmmm, this is a problem, and most of it stems from the fact that our test for gravity is based on our attribute being 0, or not solid. But ladders are 2, and, in principle, therefore solid. Arrghh what??
Here we've got a great example of a few simple changes to the basic rules of our movement causing knock-on effects to other rules, forcing us to write new functions to accommodate them. This is a continuing issue with most interactive programming: you have to try to anticipate and correct for things you allow your player to do.
Having a tile carry only one attribute at any one time is a pain, and means we need to write exceptions all over the place when we do our condition tests. But attributes can allow us some flexibility: rather than define the Ladder tile as a nonzero, and therefore solid, tile, let's have multiple attributes for each tile and encode them in bits. So a Ladder can be defined as nonsolid but climbable.
In Game .h add these lines:
#define SOLID 0b1
#define LADDER 0b10
#define WATER 0b100
#define METAL 0b1000
#define EARTH 0b10000
This gives us a series of binary values or masks, which we can encode into our attributes
like this:
int Attributes[] =
{
0, //0
SOLID, //1
SOLID, //2
SOLID, //3
METAL+SOLID, //4
WATER, //5
WATER, //6
SOLID, //7
SOLID, //8
SOLID, //9
For now, they are still mostly single values, though metal has two attributes, which might allow us to play a tip–tap sound when our player walks on metal; but the ladder now has a LADDER attribute, and is not solid, and it can have other attributes associated with it if I so choose. So if my test for landing on a solid now looks like this:
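(That test sits in the gap above; based on the mask defines it is presumably along the lines of:)
if ((Attrib & SOLID) && Yspeed >= 0) Yspeed = 0;   // only genuinely solid tiles stop the fall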
With that in place, I can be a lot more confident that gravity is going to ignore the ladder, and the ladder tests, now using & LADDER, are able to function much better. For example:
bool Player::TestClimb(Game* G)
{
    int YMap = (Ypos + 33) / 16;
    int XMap = (Xpos + (Image->GetWidth() / 2) - 5) / 16;
    int WhatsUnderOurFeet = G->WhichMap[YMap * 64 + XMap];
    int Attrib = G->WhichAttributes[WhatsUnderOurFeet];
    // (a sketch of the remainder) test the matching point on the other side of the centre
    int XMap2 = (Xpos + (Image->GetWidth() / 2) + 5) / 16;
    int Attrib2 = G->WhichAttributes[G->WhichMap[YMap * 64 + XMap2]];
    return ((Attrib & LADDER) && (Attrib2 & LADDER)); // both points must be on the ladder
}
You'll need to go through the code now and change any attribute check to use this mask system rather than the ugly hard-number system, and in future you should always use this kind of method. A 32-bit int allows 32 possible attributes per tile, more than enough to cover a wide range of possible properties or combinations of them.
It really does not have a lot to do; it just provides that vector of vectors, and I'm giving myself a function to load and set the size of the map. I've assumed (I keep breaking that no-assumption rule, so let's say I insist) that I will be using text-based maps rather than binary, but you can just as easily add a binary reader to this. We might want to add a few different load functions, for example, if we discover that our map editor outputs some scripts, but the simple maps I'm using only contain raw tile data, so the load function is a very simple CSV parser pushing tile numbers into TheMap.
Unlike some of our other simple classes, this class has a few responsibilities, which
is why I called it a MapManager rather than just a map . If we load things, we need to be
careful that when we are done with them we release the memory . In this case, our mem-
ory is going to be gobbled up by TheMap vector . I will make sure the load routines and
the destructor take care to clear or reset it when I kill this class or load a new map .
So in the cpp files our constructor and destructor will look a little like this:
#include "../Headers/MapManager.h"
MapManager::MapManager()
{
}
MapManager::~MapManager()
{
//delete the map
if (TheMap.size() != 0)
{
printf("A map still exists and is now being erased\n");
As is often the case, our default constructor does nothing, we need to pass information,
so we do most of the work in the load/init . But you can see the Destructor is being a good
little function and checking to see if there is a map in place when it gets called, and care-
fully removes it .
Now, our load routine needs to actually load, or more accurately stream, the file, and then create the vectors as it parses through the file, so it might end up looking a bit complex, but the basic idea of setting up vectors inside vectors looks just like this:
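A minimal sketch of that nested push_back, assuming xs and ys hold the column and row counts:
for (int y = 0; y < ys; y++)
{
    std::vector<int> T; // a vector for this row's columns
    for (int x = 0; x < xs; x++)
    {
        T.push_back(x); // push a placeholder value into the row
    }
    TheMap.push_back(T); // push the completed row into the map
}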
This will fill an array full of 0,1,2,3,4… on each row; not very practical, but it helps to visualize things.
All we really need to do is insert some code to grab a line from our text file in the outer for loop, and convert the data to ints in that inner for loop, so that instead of pushing back the x value we push back a converted ASCII value, and loop through until we are done.
This is why knowing the size of at least the number of rows in the vector is useful,
though we can be ignorant of it as long as we are confident in the formatting of the file we
are loading . If it’s from a map editor or other package we can be sure that we will have X
number of values before the CR (Carriage Return) and the end of the line, and if it has a
proper EOF marker we can scan down the lines to determine if we have finished, or con-
tinue with the amount of lines we want . So our formatting of the file itself will set up the
size of the data, or we set hard values .
So, let's add some file handling and, as a safety measure, some checks before we try to build it, to ensure that we have a valid file and valid sizes. I'm also being fussy and printing out the data in a visible format, just so I can see what's happening during debug. We should, of course, come up with a better way to report things, because at some point we have to go through all our code and comment out the masses of printf's that are happening. We will also allow for the possibility that size errors might occur, and make sure that the loader can't make a map bigger than the data in the file. That can be handy if we are guessing the map's size; as long as the map has lines ending in \n, asking it to load a massive map will only have the effect of loading the map at the size it was saved as.
if (xs == 0 || ys == 0)
{
printf("Map has a 0 value which is not allowed\n");
return false;
}
std::ifstream t(fn); // open an input stream for our file
// now load each value in turn store them into our vector make our vectors;
for (int y = 0; y < ys; y++)
{
std::vector<int> T; // a vector for the columns
Nice, it's gained a few kilos in safety checks, outputs, and pulling and then converting data from the input streams, but you can see that the core of this is still a simple push_back of a number into the vector. All we need do from this point is load the map, provide real or fake sizes for it to function, and at the end of it we'll have an easy-to-access 2D vector in TheMap. As long as we don't mess with the data our tile editor spews out, this will be enough to work with for now. There are still a few minor issues, and you may or may not come across them, but we should, of course, also add a check that the file name is valid and abort the load if not; you can add that yourself, hint: the object t has things you can test! There's also a slight flaw in this logic: if using TileEd to save your CSV you will find it has one blank line at the end. We will load that, and even though we won't put a final column vector in there, we will put in a final row vector. Any idea how to fix this? It can be done here and should be: just test, when you get a CR, that you do in fact have a column vector with some values, or at the end of the routine test that final row and see if its column vector has values.
Ok, so now that we know how to do this, we can create similar parsers, which load data
for many other resources we are likely to use . Simple CSV, text parsers like this are very
useful, but you will probably get more complex as you need more data in your managers,
but as long as you use a “,” to separate things you can pull tokens instead of strings of
numbers and set up switch/case conditions to handle them .
As this is also a nice compact routine, which is likely to be used for at least the next
couple of projects we should store it in our MyLibs library .
Look at the position of the character in the previous image; he's what we will call the controlling influence. It's clear that the screen only shows a part of the map, and that he's part of the map, but Bob here is the one who we are interested in and who is controlling everything. He will pull the screen around the map for us, allowing us to see the different parts of the map as he moves around, relative to his position.
The screen's top corner is (nearly) always a fixed distance from where our friend Bob is, and it therefore has a position in the map. We can make small adjustments later. We will need to make adjustments to our draw map function, but for now let's explain how we can load these important screen values.
We need now to remind ourselves that our Xpos value relates purely to our position in the map, and we will need to draw our sprites relative to the screen's location in the map. Our draw systems then need to be given a new, improved screen coordinate, which we will call SXpos and SYpos for Screen Xpos and Screen Ypos. Add this to your GameObject Class, because from now on all our game objects will need it to allow them to draw:
float SXpos, SYpos;
Our player Bob is the only one who pulls the screen along, so he is the controlling influ-
ence and as such it becomes his responsibility to work out the screens position every time
his own position changes .
Add this code to the end of your player update code after all the other moves are done
and a new Xpos and Ypos are calculated .
G->ScreenX = Xpos - (SCRWIDTH / 2); // get the trial version of the screen
G->ScreenY = Ypos - (SCRHEIGHT / 2);
// keep in boundries if it is outside limit it
if (G->ScreenX < 0) G->ScreenX = 0;
if (G->ScreenY < 0) G->ScreenY = 0;
if (G->ScreenX > 64 * 16 - SCRWIDTH) G->ScreenX = 64 * 16 - SCRWIDTH; //
we know the map size
if (G->ScreenY > 40 * 16 - SCRHEIGHT) G->ScreenY= 40 * 16 - SCRHEIGHT;
// calculate a screen position for our player
this->SXpos = Xpos - G->ScreenX;
this->SYpos = Ypos - G->ScreenY;
This should all be fairly easy to follow: we are basically making sure our screen coordinates are a certain distance away from our player sprite's reference point, with a few tests to make sure we don't go outside the map area. Then we simply work out a new screen coordinate as the player's position minus the screen's newly formulated map position!
Ok, so we know what part of the map our screen starts to draw; now our map draw system also needs a slight overhaul. We're no longer content with drawing at 0,0, we want to draw at the point we just calculated.
We do need to make sure that our draw is done AFTER the player has updated, otherwise the data will be inaccurate, and we also need to be clear that the player, our controlling influence, must be updated before anything else that relies on the screen position.
Almost there, we still have to make a small change to the draw system so it can use the new values:
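A sketch of that interim version (the finished routine, with the pixel offset added, appears a little further on):
void Game::DrawMap(float mapx, float mapy, Surface* a_Screen)
{
    int Tmapx = mapx / 16; // temp values: the tile column the screen starts at
    int Tmapy = mapy / 16; // and the tile row
    for (int y = 0; y < SCRHEIGHT / 16; y++)
    {
        for (int x = 0; x < SCRWIDTH / 16; x++)
        { // index into the map with the offset, and use x,y as the position on screen
            Tiles[Map2[Tmapy + y][Tmapx + x]]->CopyTo(a_Screen, x * 16, y * 16);
        } // x loop
    } // y loop
}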
We create a couple of temp values, which we use for the index into the map and add the
offset given to us by the for loops, which also acts as the position on the screen to draw .
Right, let's get this compiled and running, and see what happens. Now we can see our Bob is nicely on screen, and if we move to the right the screen is scrolling, but it's not very smooth at all. We are not taking into account that Bob is moving in pixel values, while our DrawMap is only drawing in tile values, so we only scroll when Bob has moved 16 or more pixels. We need to take into account the 0–15 pixels that a tile is made of and account for them in the draw.
First step is to actually work them out and convert them to easy-to-use ints. At the entry of DrawMap, before we calculate Tmapx, add these lines:
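These are the same two offset lines you will see again in the finished listing below:
int offsetX = fmod(mapx, 16); // the 0–15 pixel part of the scroll
int offsetY = fmod(mapy, 16);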
The fmod function returns the remainder of a division on a float number, in this case by 16, so that will provide us with the offset into a tile.
Now we only have to factor this into the actual draw with this line:
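This is the same CopyTo call as before, with the offsets subtracted (it appears again in the finished listing below):
Tiles[Map2[Tmapy + y][Tmapx + x]]->CopyTo(a_Screen, (x * 16) - offsetX, (y * 16) - offsetY);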
We could drop the () around the * 16 because the compiler will always do a multiply before it does the subtract, but for clarity it's always good to wrap the steps of a calculation in brackets; it does little to the compiling of our code but helps us and others visualize our intent better.
Clearly those gaps at the edge of the screen are connected to our scrolling, as they grow as we move, and the cause is really pretty simple: we are drawing 20 tiles, but because of the offset we are actually displaying 21, so the tiles that are about to come on screen also need to be drawn. We can fix this by simply adding a +1 to our for loops, so our finished DrawMap now looks like this:
void Game::DrawMap(float mapx, float mapy, Surface* a_Screen)
{
int offsetX = fmod(mapx, 16);
int offsetY = fmod(mapy, 16);
int Tmapx = mapx / 16;
int Tmapy = mapy / 16;
for (int y = 0; y < (SCRHEIGHT / 16)+1 ; y++)
{
for (int x = 0; x < (SCRWIDTH / 16)+1 ; x++)
{
Tiles[Map2[Tmapy + y][Tmapx + x]]->CopyTo(a_Screen, (x * 16) - offsetX,
(y * 16) - offsetY);
} // x loop
} // y loop
}
And that, basically, is that; run this and we will now have a very nice smooth pixel scroll, which allows us to explore our map with ease. We now only have to add a small GetScreenPos method to our GameObject so that the enemies and other drawn, non-tile objects in the world can calculate their relative screen position and be drawn correctly. I'll let you work it out for yourself; hint: only Bob is allowed to create the ScreenX/Y values, but everyone creates their SXpos/SYpos the same way.
Conversion of our single-screen game to a multidirection scroller turned out to be a pretty simple task: adding a few variables, introducing the concept of map, or world, coordinates, and detaching our screen draw systems from the logic. It is very important from now on that we remove the notion that our screen coordinates are our map/world coordinates; they are two different spaces now.
The gamer in you can see that ship/plane flying along and the terrain underneath having
highs and lows that we can easily use to hide rockets and other things to attack us . The
programmer in you should be seeing the nice tidy tiles that make up that image . Later, we
might use better quality tiles to make the tiles less obvious and allow the appearance of a
more seamless large play field . Let’s look at the map:
It's a classic right-to-left scroller, traveling through a tunnel, avoiding waves of enemies, with missile silos to add to our pain. The tunnel we travel through is also filled with zero tiles; we'll come back to that soon, as it's got a second useful value.
We're still using the TileExample framework, but I've cleaned it up a bit and put it on the site as Skramble. Please download it and get it all compiling. I've set it up to load the Skramble map, and the automatic scroll itself is handled like this:
// scroll
if (ScreenX < OurMapManager->TheMap[0].size() * 16 -SCRWIDTH)
{
ScreenX += 1;
MyObjects[0]->Xpos += 1; // his Xpos needs to change
}
else
ScreenX = OurMapManager->TheMap[0].size() * 16 -SCRWIDTH;
The big difference is that this code is now located in the Game update loop, before the objects are updated. As horizontal scrolling is now an automatic process, only the ship, MyObjects[0], needs to be influenced by it. The ship still has some control over the Y scroll, so we can leave that up there. But now we are making it clear that our scroll will move the screen, and our ship has to go along with it.
Of course, we need to do something with the control of our player; it still needs to test that it's been asked to move, as before, but take into account that it is tracking the scroll. We will allow it to move up to ¾ of the screen to the right, but not allow him to drop off the back of the screen. We could also create limits of tolerance in its Y movement, but really I don't think we need to in this case, so we'll let the collision tests stop it going outside the tunnel. His movement routines in the Player.cpp file now look like this:
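A hedged sketch of the idea; the key-test helper and the speed here are assumptions, not the book's exact code:
// assumed input helper; your framework's key tests may be named differently
if (TheInput.KeyDown(KEY_RIGHT))
{
    Xpos += 2.0f; // push ahead of the scroll
}
if (TheInput.KeyDown(KEY_LEFT))
{
    Xpos -= 2.0f;
}
// keep him between the back of the screen and the 3/4 line
if (Xpos < G->ScreenX) Xpos = G->ScreenX;
if (Xpos > G->ScreenX + (SCRWIDTH * 3) / 4) Xpos = G->ScreenX + (SCRWIDTH * 3) / 4;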
He can now travel with the scroll, get a little ahead of himself, but be brought back if he goes too far. Get this all up and running and you should now have something like this:
The net effect is that our Xpos is always increasing, but we are testing that it's not going too far, and if we do accidentally go past the ¾ line we bring him back.
One final touch, I mentioned those zero tiles, they look a bit off, don’t they? So let’s not
draw them, change the DrawMap routine to this:
void Game::DrawMap(float mapx, float mapy, Surface* a_Screen)
{
int offsetX = fmod(mapx, 16);
int offsetY = fmod(mapy, 16);
int Tmapx = mapx / 16;
int Tmapy = mapy / 16;
for (int y = 0; y < (SCRHEIGHT / 16)+1 ; y++)
{
for (int x = 0; x < (SCRWIDTH / 16)+1 ; x++)
{
if (OurMapManager->TheMap[Tmapy + y][Tmapx + x] != 0)
Tiles[OurMapManager->TheMap[Tmapy + y][Tmapx + x]]->CopyTo(a_Screen,
(x * 16) - offsetX, (y * 16) - offsetY);
} // x loop
} // y loop
}
You can see I've added a test: if the tile is zero, I won't draw it. But how does that help? On its own it just gives me a black tunnel area, and it leaves artifacts of undrawn tiles (we have no screen clear happening each frame).
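The background draw itself isn't far from DrawMap; a sketch of the idea, assuming a single looping BackGroundTile surface (both that name and the half-speed factor are assumptions):
void Game::DrawBackground(float mapx, float mapy, Surface* a_Screen)
{
    int offsetX = fmod(mapx / 2, 16); // scroll the background at half the foreground rate
    int offsetY = fmod(mapy / 2, 16);
    for (int y = 0; y < (SCRHEIGHT / 16) + 1; y++)
    {
        for (int x = 0; x < (SCRWIDTH / 16) + 1; x++)
        { // one tile repeated over the whole screen, nudged by the offset
            BackGroundTile->CopyTo(a_Screen, (x * 16) - offsetX, (y * 16) - offsetY);
        } // x loop
    } // y loop
}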
Now, it does not look quite the same as our DrawMap, but you can see that it's not wildly different; we could indeed display another map entirely with a little modification, but for now I'm making it draw one tile and doing a bit of messing with the position values. Run it and have a look.
Cool, isn’t it? This is a concept called parallaxing, where the background scrolls at a dif-
ferent, usually slower rate than the foreground . We could put any kind of background in
there, a nice static screen, a noninteractive background map, or in this case, a single easy
to access and easy to loop tile .
This particular version isn’t that great because of our lack of Alpha values in the tiles,
our CPU-based 2D system can’t really cope with having tiles with see through areas, but
we can always have our undrawn zero tiles to act as a window to the background . We could
add alpha blending to the Surface Class copies but in due course we are going to move to a
better more technically advanced framework, so I’d rather leave things as they are for now .
But if you feel the need to have better alpha values, please go right ahead .
◾ Missiles, which are launched from the underground bunkers by launchers, are our main targets
◾ Gun Towers, which will shoot mines at us and increase their rate of fire the closer we are to them
◾ Baddie1, for lack of a better name, is a green tractor-like dumb baddie who just flies through the scene launching bullets at us. And as you can see, his images are all in one file
◾ Baddie2 is a little more interesting; it's an animated eye character, so let's have him circle around a spot, also launching bullets at us
Our big baddie is a bit of a semi-smart baddie who will duck and dive at the end of the tunnel launching bullets and mines at us; he'll also need several hits to kill.
int MissileStartPoints[][2] =
{
{ 16, 25 }, // these are the locations for a silo
{ 49, 23 },
{ 75, 21 },
{ 171, 23 },
{ 182, 23 },
{ 186, 24 },
{ 190, 24 }
};
Now this is clearly a 2D array, but I don't need to enter the size of the row (y) component, since the actual initialization of it will fill that in. However, if I end up typing a lot of them, or editing this array, I may lose track of the number of entries. It's not practical to rely on a hard-coded count here; it's better to let the compiler work the entry count out, with sizeof(MissileStartPoints) / sizeof(MissileStartPoints[0]).
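A sketch of how such an array might be consumed (the Missile class name and asset path are assumptions):
int HowManySilos = sizeof(MissileStartPoints) / sizeof(MissileStartPoints[0]);
for (int i = 0; i < HowManySilos; i++)
{
    Missile* M = new Missile((char*)"../Assets/Missile.png", FileHandler); // assumed class and asset
    M->Xpos = MissileStartPoints[i][0] * 16.0f; // convert tile coordinates to pixels
    M->Ypos = MissileStartPoints[i][1] * 16.0f;
    MyObjects.push_back(M); // hand the silo over to the game's update list
}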
These, of course, go in the GameInit function (which is a good prototype for a level
manager) . Once you have this in, compile and run, you should see some nice deadly nukes
flying and blowing up on the roofs as they hit them .
So you’ve got the idea, repeat this process for these arrays:
int GunTowerPlacement[][2] =
{
{ 10, 18 },
{ 33, 17 },
{ 63, 13 },
{ 93, 15 },
{ 113,19 },
{ 127,21 },
{ 145,18 },
};
Not bad, eh? Feel free to add more entries to the arrays, put more things on screen, or alter the behavior of things as you wish in the classes. Why not put your own FuelPickup placement array in there and generate some fuel cells? Now that all the components are in place, let's get the collisions and gameplay features sorted.
Process Everything?
Our previous games had modest-sized maps and, in turn, enemies that tended to stay in a particular area, so even if they were offscreen they didn't really interact with the player; they just ran through their simple logic regardless of their visibility.
Here, though, we have potentially hundreds of bullet-hurling aliens all coming right at you and, in turn, hundreds of bullets coming at you from the dark reaches of the end of the level, which is kind of the idea. But bullets travel pretty fast, and if every enemy is shooting at you it won't be long before we have a screen full of bullets. So we want to limit ourselves and make sure that only enemies who are inside a reasonable range are going to actually do anything. The rest of the time we are going to leave them dormant.
1. We’re processing objects that for the bulk of their existence do nothing .
2. We’re creating objects which even with careful use of their graphics are taking up
memory .
Now, we probably have the horsepower to process several hundred enemies, so it's not all that apparent, but at some point we may hit a wall and start to see the update rate drop, especially if you are using a slow target. There are several indisputable facts in game programming, and one of them is that you can never have enough horsepower: the more content we add, the more horsepower we use, and eventually we are going to run out of it. We are beginning to see the wall, even if we've not hit it yet.
So we have to reduce the amount of processing we do. There are a couple of hints in the Skramble code that should give us some ideas: the Missile Launchers, Eyes, and Squids generate or spawn more enemies, and do so only when it's practical to do this. This concept means that we can place a single enemy or triggering object on screen, and simply test for an appropriate point for it to go into its emitter mode. That way, instead of testing a few hundred enemies all waiting for our ship to get into range, we only need test a few dozen launching systems. They don't even need to be visible on screen (though our current system does insist on having something to draw). It's always faster to not do anything than to do something, and even a simple range test repeated a few hundred times is processing we'd like to avoid.
Equally valid, we could place enemy emitter tiles within the map data, or as a spe-
cial class of object, which is designed to generate multiple enemies once the player comes
into range . This also reduces our Game Init quite a bit, we can focus on things that have
to be at specific points, such as our gun towers and generate our enemy waves with a
simple generator object . Spawning objects in this way takes the responsibility away from
the initialization routines and lets you dictate a more variable range of values, perhaps to
increase difficulty or intensity of generation, rather than having multiple arrays in your
level manager .
void Player::RemoveBullet(GameObject* b)
{
// remove the bullet from the set of collidable bullets
MyBullets.erase(std::remove(MyBullets.begin(), MyBullets.end(), b),
MyBullets.end());
}
Not exactly simple, is it? And you need to add #include <algorithm> to your headers in Player.cpp. But that is one reasonable way to remove a thing from a vector.
I should say, when I say reasonable, I mean in fact entirely unreasonable, as it's a slow and nasty scan-and-remove system, but until we use a better method this will work.
If you are totally sure you plan to do exactly the same thing with every enemy type, there’s
no reason why this can’t fit in your currently empty Enemy Update(), and use this:
Enemy::Update(G);
At the end of each type update . Or even have a general Enemy::CheckForDead() method,
it’s all up to you . For cleaner code this makes sense, for simplicity and allowing for experi-
mentation use the end of update test .
This is the basic info return of most standard mouses…mice, meeces? Hmm, I don't really know the plural of a desktop mouse; I'll assume it's the same as the rodents! As far as we are concerned we're only looking for the standard buttons and, most important, the positional information. What we have to be very careful of, though, is that the mouse does not return an actual fixed position to the handling routine; we simply get the amount it moved, which is added to the last position, so our position value is only accurate if we know the original start value. And the only way to know the original start value is to set it up before we use it. So during our initializations we have to make sure we do that, ideally placing it in the center of the screen. Also remember input needs to be initialized, so do the pointer position init just after the instance init, like this:
TheInput.Init(); // kick it off
TheInput.TheMouse.PositionX = SCRWIDTH / 2;
TheInput.TheMouse.PositionY = SCRHEIGHT / 2;
Mice also need an indicator on screen, so we need to be sure that we create a nice pointer
object that can then be used to draw a pointer . Be sure to create a pointer to a Surface
called MousePointer then you can add this to your mouse initialization lines:
MousePointer = new Surface("../Assets/MousePointer.png", FileHandler);
Now, we do have to be careful of one thing, our mouse is being tested on a process called a
thread (we’ll go into these later), which means that no matter what our program is doing,
as soon as the mouse moves or a mouse button is clicked, the CPU is probably going to go
and deal with the mouse code . I say probably, because it will also depend on the refresh
rate of the mouse . But that usually is a prompt response by the CPU to deal with the mouse,
which means if we move our mouse around a lot and keep moving it, as you would do in
most games, it’s entirely possible that our Mouse variables are going to change at different
points in our game-loop processing, if we use them in one section and then in another it
may cause odd things to happen . It may even have a tiny impact on performance .
Try it now . You should have a blank screen with a mouse pointer .
Let's start with a simple firework particle system, which on a click will place a number of particles on screen ready for updating in the main loop.
The particles themselves will fly out randomly in all directions, losing some speed as they do and falling because of the effect of gravity on their Y motion, and also changing color from white hot to dark red before dying off, perhaps turning into ash and fading away?
So clearly we need a Particle Class. Create a new header file called Particle.h, making sure to store it in your Headers folder and filter locations, and enter this:
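A sketch of what that header contains, based on how the class is used in the rest of this section:
#pragma once
#include "Objects.h"
#include "MyFiles.h"
class Particle : public Objects
{
public:
    Particle();
    Particle(char* fname, MyFiles* FileHandler); // empty file-based constructor, for later use
    ~Particle();
    void Update(Surface* mScreen); // overrides the base move/draw behaviour
    float Time; // how long this particle has left to live
};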
It’s very simple as you can see, like all our simple 2D games it’s going to use an Objects
Class, which contains the info for position and speed .
Now to write the class itself . Add a new item to your Source folder and make sure that
you navigate to the source directory when you create Particle .cpp
// Our particles are basically surfaces, but most of the time they will be empty or 1 x 1 buffers
#include "Particle.h"
Particle::Particle()
{
    Xpos = Ypos = Xspeed = Yspeed = 0; // clear these
    this->MarkForRemoval = false;
    this->Image = new Surface(Rand(2)+1, Rand(2)+1); // make a small surface
    this->Image->ClearBuffer(Colour); // using a fixed colour (hmm, ok for now)
    Time = 4.0f;
    Xspeed = 3.0f - Rand(6.0f); // (sketch) the random speeds described below
    Yspeed = 3.0f - Rand(6.0f);
}
Again, very simple; the constructor sets things up. I've left an empty file-based constructor, and you can add code to handle that if you want, and I allowed the size of the empty surface it creates to vary a little for variety. The meat is in the update, where I am using the fact that I derive this class from the Objects Class, and because Objects' Update has functionality to move an object and bounce it off the edges, I'm going to reuse that routine. Even though I am overriding that routine, the base routine is still accessible; that can be very useful.
Then I do a slightly tricky bit of color manipulation. This is a bit messy on this 2D buffer system, but it will work. I am splitting the RGB values into unsigned ints and shifting them down to the lower 8 bits, to make it easier to decrement them. I check for >255 rather than <0, because an unsigned int can never be negative; when it wraps below zero it becomes a huge number.
I don't reduce the red component by as much, so I get that white-hot to dull-red appearance I wanted, and I do a test to see if all the RGB parts are reduced to 0, in which case I will kill the object regardless of how much time it has left.
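A sketch of that colour manipulation, using the same masks the Pixel format uses elsewhere in these listings (the decrement amounts are assumptions):
unsigned int r = Colour & REDMASK;          // red already sits in the low 8 bits
unsigned int g = (Colour & GREENMASK) >> 8; // shift green down to the low 8 bits
unsigned int b = (Colour & BLUEMASK) >> 16; // and blue
b -= 8; if (b > 255) b = 0; // unsigned wrap-around shows up as a huge (>255) value
g -= 8; if (g > 255) g = 0;
r -= 2; if (r > 255) r = 0; // red decays more slowly: white hot fading to dull red
if (r == 0 && g == 0 && b == 0) MarkForRemoval = true; // fully dark, kill it early
Colour = r | (g << 8) | (b << 16) | 0xff000000; // rebuild the pixel, keeping full alpha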
Then the speed damping: when you multiply a positive or a negative number by a fraction, it reduces the scale of that number, making it a neat way to shrink a value toward almost 0.
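In code that damping is just a multiply each update; something like this (0.99 is a representative value, not necessarily the book's):
Xspeed *= 0.99f; // shrink the speed a little every frame
Yspeed *= 0.99f;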
if (TheInput.TheMouse.LeftButton == true)
{
for (int i = 0; i < 40; i++)
{
Particle* p = new Particle();
p->Xpos = (float) TheInput.TheMouse.PositionX;
p->Ypos = (float) TheInput.TheMouse.PositionY;
TheObjects.push_back(p);
}
}
I’ve set it to create 40 new particles, which generate at random speeds in their constructor .
This is not an elegant way of doing things though, so we won’t be keeping it, but for now
it will serve its purpose .
Fire it up and press your left mouse button and we should get some nice pixels appear-
ing white and then fading to red .
Quite pretty, we can essentially create lovely, though slightly square, starbursts on the
screen and keep doing it . But the particles only appear at the point we press, because of
that inelegant game loop I added, I’d like something a bit neater, that allows us to choose
where we generate the particles and allow us to track to another point on screen, so let’s
create an Emitter instead .
An emitter’s job is basically to generate a small number of particles each update
for as long as they are told to . We can also decide on how the particles are generated,
in a burst or as a plume from behind a rocket; for example, maybe even different
types of particle . We’ll stick with a burst for now and if you want to add more types
that’s fine .
Our emitter should be able to create different types of particle though, so we need
it to know what those are, and because each type of particle has different actions it
would make sense for us to create new classes for the different types . Why not then also
have different types of Emitter? Well why not indeed; we could have a plume emitter, a
starburst emitter, a random spark emitter, and all sorts . But because their basic func-
tionality is simply to produce particles I’m not sure there’s quite enough variation in
behavior to create different classes for the different emitters, so this is for me at least,
one occasion where I’m going to simply create a list of enums, and have one Emitter
Class, but creating different styles of particle . The choice is yours, if you find your emit-
ters start to become complex then breaking them down to different classes for each
type makes sense .
Like a Particle Class, and anything we want to update in our list, our Emitter is a type that can use the base class Objects, so we will derive it from that. Unlike most of our Objects, though, we actually don't plan to render it, but we still need a dummy surface in place to keep the rest of the system happy. The header looks like this:
// Emitters
#pragma once
#include "Objects.h"
#include "MyFiles.h"
#include "Particle.h"
#include "Game.h"
class Emitter : public Objects
{
public:
Emitter();
Emitter(char* fname, MyFiles* FileHandler); // allow us to make particles from (small) images
~Emitter();
void Update(Surface* s); // of course we need our update
// to add to the vector/list we need to know where Game is;
void Init(Game* G);
float Time; // how long do I emit for?
int HowManyPerUpdate; // how many should I update
Game* WhereIsGame;
};
Surprisingly small, isn’t it? It will get a bit bigger when we add different types, for now we’re
just going to focus on making that squareburst system . Time to make a new Emitter .cpp
file and that looks like this:
#include "Emitter.h"
// constructors and destructors
Emitter::Emitter()
{
this->Image = new Surface(1, 1); // we need a dummy surface
};
Emitter::Emitter(char* fname, MyFiles* FileHandler) {}; // allow us to make particles from (small) images
Emitter::~Emitter() {};
void Emitter::Update(Surface* s)
{
Time -= (1.0 / 30.0f);
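    // (a hedged sketch of the rest of this first version, based on the type-aware listing later on)
    if (Time < 0) this->MarkForRemoval = true; // stop and remove the emitter when its time is up
    for (int i = 0; i < HowManyPerUpdate; i++)
    {
        Particle* p = new Particle(); // spawn a standard particle
        p->Xpos = this->Xpos;         // at the emitter's own position
        p->Ypos = this->Ypos;
        WhereIsGame->TheObjects.push_back(p); // hand it to the game's object list
    }
}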
Again surprisingly simple, though I’ve allowed for expansion with a constructor using
a file name, I’ve not used it yet, but I know some particle systems might want to use an
image . Notice that Init function? It needs to be called to set up the Emitter, and we can do
that by altering the generation code in Game .cpp to now look like this:
if (TheInput.TheMouse.LeftButton == true)
{
Emitter* Emit = new Emitter();
Emit->Init(this);
TheObjects.push_back(Emit);
}
Remember to add Emitter.h to your Game.cpp list of includes. We could make the constructor do all this setting up, but remember we're planning to add different types, so let's keep the constructor as simple as possible; we can expand the init, or even add a different init or inits for different types.
Now compile and run, to try this code for size and notice that when you press and
release you get nice individual squarebursts . Try not to keep your finger on the mouse
key though, as you’ll end up generating an awful lot of particles, and that will strain our
systems, something I’ll discuss shortly .
Why exactly are we getting squarebursts, though? Why not nice round blooms? It's because of the way we are using clamped random values on the X and Y; they produce vector directions going out from the point of origin, and since they have an effective clamp not based on a radius, they can't give the nice round effect we want, so it looks unnaturally square. Even if it is kinda cool, it's not really what we want. So let's expand the system now to create different types of emitter so that we can have a bloom, and also different types of particles to get us going.
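The first new type is the HotDecay particle; a sketch of its header (HotDecay.h), based on the cpp file and description that follow:
#pragma once
#include "Particle.h"
class HotDecay : public Particle
{
public:
    HotDecay();
    ~HotDecay();
    void Update(Surface* mScreen); // the white-hot to dull-red fade, with gravity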
};
As you can see, just a simple constructor/destructor pair and an update system, since
I am inheriting from Particle, it will have the Time value, and since Particle inherits
from Objects, all the other useful data are in there . This time I have not left a constructor
for a file in here, since I’m quite sure that on this occasion I’m not going to use it .
Make a new HotDecay .cpp file, again perhaps by adding a filter in your solution, and
add this code:
#include "HotDecay.h"
HotDecay::HotDecay()
{
// we can set these up here as defaults but the emitter should change them
Xpos = Ypos = Xspeed = Yspeed = 0; // clear these
this->MarkForRemoval = false;
this->Image = new Surface(Rand(2) + 1, Rand(2) + 1); // make a small surface, slightly different sizes
this->Image->ClearBuffer(Colour); // create a blue dot
Time = 4.0f;
Xspeed = 3.0f - Rand(6.0f);
Yspeed = 3.0f - Rand(6.0f);
}
HotDecay::~HotDecay(){}
Image->ClearBuffer(Colour);
Image->CopyTo(mScreen, (int)Xpos, (int)Ypos);
// finally, assume they are affected by gravity and adjust the Yspeed
Yspeed += 9.81f / 100; // use a real-world value but scale it to suit
It should be pretty clear that this is the old Particle code itself, now with the logic separated into this class. The Particle.cpp update routine has had this code removed, and now looks like this:
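A sketch of what remains in the base update (the exact lifetime handling is an assumption):
void Particle::Update(Surface* mScreen)
{
    Objects::Update(mScreen);            // reuse the base move-and-bounce behaviour
    Time -= (1.0f / 30.0f);              // count down the particle's lifetime
    if (Time < 0) MarkForRemoval = true; // flag it for removal when its time is up
}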
Finally, our Emitter.cpp file needs to change a little; instead of creating particles it now creates HotDecay objects.
void Emitter::Update(Surface* s)
{
Time -= (1.0 / 30.0f);
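    // (a sketch of the rest, mirroring the switch-based version shown a little later)
    if (Time < 0) this->MarkForRemoval = true;
    for (int i = 0; i < HowManyPerUpdate; i++)
    {
        HotDecay* HD = new HotDecay(); // spawn HotDecay objects instead of plain Particles
        HD->Xpos = this->Xpos;
        HD->Ypos = this->Ypos;
        WhereIsGame->TheObjects.push_back(HD);
    }
}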
Ok, let’s try that, compile and we should get basically the same result as before, we’ve not
added new types yet, so that’s our next step . Remember to backup your work now .
Time to add different types of particle, let’s do another simple firework-style particle,
we’ll call them ColourFades, the particle itself will be much the same as the HotDecay
but let’s not have gravity interact with it, and allow for a range of colors which will fade to
black and die .
As before add ColourFades .h and ColourFades .cpp, ColourFades .h is basically
unchanged from HotDecay but we still need to define the values so copy and paste it and
change all the HotDecay to ColourFades . But add three new values to the ColourFade
Class members .
float r, g, b;
Now, return to your Emitter.h file; we need to give the emitter a means to understand what the different types are and to generate them accordingly. Most of that we can do with an enum listing the particle types, added to the Emitter Class.
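A sketch of those additions (T_Plume is an assumption, based on the Plume type added later):
enum ParticleType
{
    T_HotDecay = 1, // start at 1 so that 0 can act as a default/error value
    T_ColourFade,
    T_Plume
};
ParticleType PType; // which type of particle this emitter produces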
Emitter();
Emitter(char* fname, MyFiles* FileHandler); // allow us to make particles from (small) images
~Emitter();
};
I have prefixed the enum names with T_ just to avoid confusion with the class names; also I made the enum start at 1, because 0 can be a useful means to force a default error.
That should be all we need to do to the Emitter header for now; let's return to Emitter.cpp and make a change to allow use of these different types.
Aside from making sure we add the relevant header files each time we create a new particle type, the action is going to take place in the Init and Update functions, so let's start with Update, which looks like this:
void Emitter::Update(Surface* s)
{
Time -= (1.0 / 30.0f);
if (Time < 0) this->MarkForRemoval = true;
switch (PType)
{
case T_HotDecay:
{ // I like to keep my case in brackets
for (int i = 0; i < HowManyPerUpdate; i++)
{
HotDecay* HD = new HotDecay();
HD->Xpos = this->Xpos;
HD->Ypos = this->Ypos;
WhereIsGame->TheObjects.push_back(HD);
}
break;
}
case T_ColourFade:
I only have the HotDecay code at the moment but that’s fine . Since we’ve added a concept
of a type to the Emitter, we will need to provide that when we set it up, so in the Game .cpp
file, change the trigger system to this:
if (TheInput.TheMouse.LeftButton == true)
{
Emitter* Emit = new Emitter();
Emit->Init(this);
Emit->PType = Emitter::ParticleType::T_HotDecay;
TheObjects.push_back(Emit);
}
And that's that: our system is set up so that when we press the left mouse button we will produce a HotDecay squareburst. Time to add a more subtle ColourFade to the right button. You can guess the code you need to add to the Game Class just after this, can't you? Do one for the right button that generates ColourFades.
The ColourFade particle needs to have a color value to fade with, and the standard
default particle sets a hard value as a default, and it’s of a Pixel type, which is an integer
value . Integers don’t work terribly well for smooth scaling but it is what we are forced to
work with . But there is nothing stopping us from keeping float copies of our values, which
is why we added float r, g, b to our ColourFade header file . We should also add a new con-
structor definition to that header file, to turn our int color values into floats, so in there
add this after the default constructor:
ColourFade(Pixel Col); // set up the colours
In the ColourFade .cpp file, that new constructor looks like this:
ColourFade::ColourFade(Pixel Col)
{
// we can set these up here as defaults but the emitter could change them
Xpos = Ypos = 0; // clear these
this->MarkForRemoval = false;
this->Image = new Surface(Rand(2) + 1, Rand(2) + 1); // make a small surface, slightly different sizes
Time = 4.0f; // set a reasonable lifetime
Xspeed = 3.0f - Rand(6.0f);
Yspeed = 3.0f - Rand(6.0f);
Colour = Col;
this->Image->ClearBuffer(Colour);
r = (float) (Col & REDMASK);
g = (float) ( (Col & GREENMASK) >> 8 );
b = (float) ( (Col & BLUEMASK) >> 16 );
}
case T_ColourFade:
{ // I like to keep my case in brackets
for (int i = 0; i < HowManyPerUpdate; i++)
{
int Colour = 0xffe0e0e0; // reddy yellow
ColourFade* FD = new ColourFade(Colour);
FD->Xpos = this->Xpos;
FD->Ypos = this->Ypos;
WhereIsGame->TheObjects.push_back(FD);
}
break;
}
So we've got colour; the new constructor breaks it down into floats, and it's time for the Update in ColourFade to use those floats and convert them back to an int, so it will look like this:
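A hedged sketch of that update; the fade rate and the removal test are assumptions:
void ColourFade::Update(Surface* mScreen)
{
    Particle::Update(mScreen); // movement and lifetime come from the base class
    r *= 0.95f; // fade every channel toward black
    g *= 0.95f;
    b *= 0.95f;
    if (r < 1.0f && g < 1.0f && b < 1.0f) MarkForRemoval = true; // effectively black now
    Colour = ((int)r) | (((int)g) << 8) | (((int)b) << 16) | 0xff000000; // back into a Pixel
    Image->ClearBuffer(Colour);
    Image->CopyTo(mScreen, (int)Xpos, (int)Ypos);
}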
Yeah that casting of floats back to ints is pretty horrible, but we can rest easy that we won’t
be using such horrible methods when we have some proper maths systems to play with, for
now it’s a side effect of our Pixel system encoding its colors as bytes .
And there you have it, you should be able to press the left button to get a gravity-
affected squareburst with white to red particles, and right button for a nongravity-affected
erm squareburst . Time to change that .
Our Emitter is using the Particle default random values for speed, but we can alter those. We want to create a bloom, which is basically a circular pattern, so let's randomly pick a number between 0 and 2PI as our angle (remember, 2PI is 360° represented in radians). Don't forget to add #include <cmath> to our list of headers in this file, because we're going to use some standard maths functions.
case T_ColourFade:
{
for (int i = 0; i < HowManyPerUpdate; i++)
{
int Colour = 0xffe0e0e0; // choose a colour
ColourFade* FD = new ColourFade(Colour);
float deg = Rand(2*PI); // somewhere on the circle
float radius = 10.0f; //any value will do
Xspeed = cos(deg)*radius; // store them in our unused speed values
Yspeed = sin(deg)*radius;
// normalise the range
float Mag = sqrt(Xspeed*Xspeed + Yspeed * Yspeed);
FD->Xspeed = ((Xspeed / Mag) * Time) * (3 + Rand(1.0f)); // add a bit of randomness
FD->Yspeed = ((Yspeed / Mag) * Time) * (3 + Rand(1.0f));
FD->Xpos = this->Xpos;
FD->Ypos = this->Ypos;
WhereIsGame->TheObjects.push_back(FD);
}
break;
}
Just a bit of maths to ensure that the values are normalized to less than one and then mul-
tiplied by a scale factor . I added a tiny bit of randomness to the speed to avoid too much
regularity creating interference bands .
Ok, add this, run it, and hopefully be happy with the result. But when we stop the demo, we may start to curse the fact that we cannot redirect the mouse clicks away from the GUI, and find we have multiple windows open on our GUI, which opened up when we pressed the right mouse button. This is indeed annoying and something I am chasing Sysprogs about, but never fear: the simplest thing to do on a Raspberry is ctrl/alt/F1 all at the same time, and now we're in terminal mode, where the mouse does not give us any hassle. Though you may find yourself with a long list of terminal gobbledegook if using key controls, it's not likely to be anything more than a syntax error. Our mouse, however, can do real damage, because we have no idea what we might be opening/closing/saving/sending (our PIN code?). If you want to go back into GUI mode (not wise at the moment), type startx and enter.
We can now add any Particle Classes we want: just extend the enum in Emitter.h, write the particle's behavior in a class, and provide the correct emitter code for it. We'll do more of these later, but it's time to take stock of what we can do with our double-buffer style architecture.
I'll give you one more simple type, and you can work out how to add it yourself now. It's called a plume, and it's probably best used for engine exhausts or things that need a constant stream of particles that never ends. I won't give you a walkthrough on this one; you know enough now to add this yourself. Use the middle button to place it, though consider that it could be attached to another object? Here's the .h file:
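A sketch of that header, based on the cpp file below (the exact base class is an assumption):
#pragma once
#include "Particle.h"
class Plume : public Particle
{
public:
    Plume();
    Plume(Pixel Col, float LifeTime); // colour and lifetime are set by the emitter
    ~Plume();
    void Update(Surface* mScreen);
    float r, g, b;       // float copies of the colour channels
    float ThreeQuarters; // the point in its life where the red starts to fade
};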
Note, I’ve added another variable specific to this type, ThreeQuarters, since I want to
allow for some time to pass before I modify the red component of the color, as you can see
in this cpp file:
#include "Plume.h"
Plume::Plume() {}
Plume::~Plume() {}
Plume::Plume(Pixel Col, float LifeTime)
{
Xpos = Ypos = Xspeed = Yspeed = 0; // clear these the emitter will set
them
this->MarkForRemoval = false;
this->Image = new Surface(Rand(2) + 2, Rand(2) + 2); // make a small surface, slightly different sizes
this->Image->ClearBuffer(Colour); // create a dot
Colour = Col;
this->Image->ClearBuffer(Colour);
r = (float)(Col & REDMASK);
g = (float)((Col & GREENMASK) >> 8);
b = (float)((Col & BLUEMASK) >> 16); // it's going to be 0, but allow for other options
Time = LifeTime;
ThreeQuarters = Time * 0.75f;
}
void Plume::Update(Surface* mScreen)
{
Particle::Update(mScreen); // add the velocity
// colour is variable: we don't need blue, and we want red to decay only at first,
// so that the tail becomes more red; use time to control the red content
if (Time > ThreeQuarters)
{
r *= 0.98f;
if (r < 0.05f) r = 0.0f;
}
g *= 0.98f;
if (g < 0.05f) g = 0.0f;
Image->ClearBuffer(Colour);
Image->CopyTo(mScreen, (int)Xpos, (int)Ypos);
// and dampen the speed
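// (a sketch of the remainder; the damping factor is an assumption)
Xspeed *= 0.97f; // shrink the speed so the plume trails off behind its source
Yspeed *= 0.97f;
}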
Take care to notice, there are two new variables added to the Emitter Class not shown
here: (1) Direction and (2) Range . Direction is naturally enough the direction in which the
particles flow toward, expressed as a degree value in a float, and range is a ± amount we
can deviate around that direction to create a cone . I’ll let you work out how to add them
in your header .
All three systems working together on one spot do create rather a nice image, hence
why it’s called Fireworks .
Now let's stress the system a little. Find where the Emitter sets how many particles it produces each update:
HowManyPerUpdate = 4;
and change that 4 to 40. Make sure you can see your debug output on your debugger and try again. Press and hold the button and move around.
Wow…now…that's what I call a slowdown, and look how quickly it happened. We can finally see clearly that we have very distinct limits on how many things we can process and update, and as we increase the number of things being drawn, our game gets slower and slower. Which is sad, as it's only really drawing a few thousand pixels; we'd expect to do so much more, but the processing needed to make those few thousand things work is immense! We're starting to get the idea that, no matter how powerful our CPU is, there are limits that can be reached quickly just by repeating a few functions often enough. It's not the end for this system, we can still use it for simple games and even for moderate use of particles, but clearly it's time we took a step up and tried to maximize our use of our target's power, not just its CPU.
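One of the simplest controls over the display lock on our EGL-based targets is the swap interval; a minimal sketch, assuming the framework keeps its EGL display handle in a state structure (the name here is a guess):
eglSwapInterval(p_state->display, 0); // 0 = don't wait for the screen refresh (vsync) before swapping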
With this in place your game will now run totally free and you can ask the GPU to update
the instant you have sent the last values to it, though you should also recall that GPU
instructions can take a little time to process, so you might also look into adding this at the
end of each game update after you swap the screens .
glFinish();
Which will hold the system until the GPU is done, allowing you to proceed. Though it will work fine, if you are running without a lock you have no real idea where or when the draw is going to start, and as a result you will almost certainly get shearing; but it will be faster, and sometimes slower, and then faster again. Inconsistency can be quite annoying; we'll discuss that soon.
These lock functions make more sense in 2D games than 3D, simply because in 2D,
things tend to be a little more consistent, but as we’ll see in 3D, a lot more data flying
around deciding what gets drawn and not drawn can make the speed of your processing
and graphic loop so much more variable and then it’s time to make a choice to lock your
frame rate down, to something you know can handle a worst case situation, 15 or 30 fps,
or unlock your frame rate and aim for a desired rate, while keeping accurate time of how
long things have taken and use compensation systems to allow for the fact some frames
run slower than others . We’ve touched on delta timing before, but we’ll soon start to use
it more seriously .
The world of 3D may seem a radical change from 2D; though it seems to be simply a case of an extra direction, the whole means of display and interaction totally changes in 3D. Our code needs to be able to handle that extra dimension, and our graphics systems change considerably. BUT, the concepts of the games themselves don't really change that much. You still have objects that you manipulate in normal code, you still have an environment to interact with, and you still have the basic game loops. 3D certainly adds a new direction but, in general, the gameplay code isn't going to change so much that you can't adapt.
It's ultimately up to you how much you want to involve yourself in the technical side of 3D rendering and other technical skills. By now you should feel a bit more comfortable in your coding; getting some games up and running in our simple 2D system should have removed a lot of fears. We're past our baby steps and it's time to take some long strides in our walking.
That said, there is no escaping the fact we’re about to venture into a pretty heavy
and scary chunk of technical stuff, we need to make direct use of our hardware through
OpenGLES2 .0 and gain a greater understanding of how 3D works, to get it working the
way we want .
This will be a BIG leap for a beginner and even taking it slow we’ve got to absorb a
lot . (Translated… I’m going to waffle about tech for a hundred pages or so, this is a BIG
chapter) . It would be so much easier for us to proceed by simply ignoring the rendering
as much as possible; I could just give you the framework and we could then move into the
games . But I think we’re here to learn how to push ourselves and our targets; so I’m going
to spend this chapter exploring the development of our 3D framework .
So get ready, make a fresh pot of caffeine-based beverage and order some pizza .
The next chunk of heavy text is going to go into that, but take heart, once we have a
functional rather than excellent rendering system, we’re going to move into gameplay
mechanics and not worry too much about the technical stuff, which we’ll add in smaller
easier to digest bites .
Are you ready? Let’s get started!
A Short Explanation of 3D
Now so far we’ve focused on controlling a character/avatar, which we move around the flat
world it inhabits either side on or top–down, and we simply show what’s around it to create
some form of interactive environment . Movement of all objects is done in X and Y planes
and we have ignored the concept of depth, which in 3D is represented by the Z-axis . But
even though we ignored it, it was always there, set to 0 .
Three-dimensional games do exactly the same kinds of logic and interaction as 2D,
BUT, that pesky third dimension now comes into play . However…consider we are still
moving an avatar left and right, up and down, inside of some kind of world space, as we
have done already in map-based games . It’s not really a massive leap to consider moving
our character in and out . In fact, that’s pretty much all we need to do, add Z concepts to
our code and let the characters move in three dimensions .
There’s a bit of trickiness insofar that we need to figure out some way to add a control
for that in and out movement, we also need to change some of the interaction system, and
we no longer have simple tiles we can check . The biggest technical problems actually come
from being able to represent or render what is going on in our play world on a screen . We
do understand the concept of direction, we did it in our top–down chase game, but sup-
pose instead of eight directions, we have full 360° of direction in three axes, that is, point
it to that third star on the left and up a bit, and keep going till morning .
So if it’s that easy why is it so hard to write 3D games? Well, movement is easy, we know
how to use vectors now, and 3D vectors are simply the addition of a third Z element, but visual-
ization of our objects on a 2D screen, is rather more difficult and requires us to think differently
about how we draw or render our world . We also need to consider things such as orientation
and distance from our viewpoint, giving us rotation and scaling issues, and working out what
to draw on screen and what to cull, and dealing with things going in front of other things .
That third dimension really adds a tonne of new visual features but also a tonne of
problems to overcome .
Now it's important, very important, that you think about this, because this will keep you grounded. You need to be confident that, though visualization is complex, manipulation of objects in your world space is not. We know that we can detach drawing from our object manipulation, so even if you find the technical side of rendering hard, all we need to do is just get the basics up and running and then we can focus on movement.
Movement is a case of taking the 2D code we have, adding a third dimension, and reworking some of the motion maths to cope with 3D coordinates and terrains. Manipulation of our characters, when kept separate from the rendering, is quite a trivial matter, built on two concepts:
◾ Vectors, which we already use for position and motion, and which simply gain a third (Z) component.
◾ Matrices, which are fancy mathematical arrays that contain information that
relates to position, rotation, translation and scale, or size of the object .
These two concepts are key to the way we can move things around .
So, there’s no way to escape the fact the visualization is hard, very hard, but again it’s
pretty much a very well understood series of mathematical processes that work out how to
represent the 3D we want to view onto a 2D screen . We don’t need to reinvent the wheel,
we just need to make sure it’s round and has enough tread on it to do the job .
As with our 2D games we’ll start with some simple systems to allow our framework
to grow and handle the new problems we’re about to incur when we write our first game .
Our 3D systems will be basic but functional and we’ll add features as we go .
What you actually see depends on where you are, what you are looking at, and also the angle of your field of view.
(Figure: the view frustum, bounded by its near and far planes.)
Notice also that we have a near plane, because we really don't want things right in our face; even if we could see them, they would engulf our screen, so we take a small liberty and produce a cutoff point.
Now, we can cheat a little bit and have two types of view field: a square one, also called an orthographic view, which is basically what we have had in our 2D, or a more realistic pyramid, which gives a sense of distance, making objects smaller the further they are from our viewpoint. This is called a perspective projection view. Whatever shape it is, it's called a frustum, and it represents a defined area that we are going to consider as the part of the scene we can actually see and should draw onto our monitors.
So our camera is going to look at a point, and what it sees at that point and around it is what we have to draw to the screen, using some 3D-to-2D maths we're going to have to learn now.
Ok so far? Now take a short break, relax, put on some peril-sensitive sunglasses, and get ready. There are three basic things we can do to an object in our world:
◾ We can move our object around, which is called translation.
◾ We can rotate our object, which is called, ermm, rotation, though sometimes also orientation, depending on how you use it.
◾ And finally we can scale, making things grow or shrink as we need them to.
In 2D that’s all pretty simple stuff, and it’s not usually needed to do anything more com-
plex than keep a single rotation value, which lets our sprite spin, a vector 2D for position,
and a value for scale .
But 3D is different and yet the same, mainly the difference is down to the extra dimen-
sion we have to take account of . Instead of moving along X and Y coords, we have X, Y,
and Z . Also orientation can change, we don’t just rotate around our Y-axis, which gives the
impression of spinning, but we can also do X and Z Rotations .
Finally scaling, like orientation we can scale by different values in all three directions,
though we usually do all three at once with the same value .
And then there’s another issue, I don’t know how far in the future this is going to
be read, but at the moment, we don’t have 3D displays, which means we need some kind
of system that will allow us to represent a view of a 3D world on a 2D screen, that lets
us maintain the idea that we are looking at 3D . This is called projection and that needs
another whole load of calculations .
No, not that kind of matrix, though who can say for sure, eh? No, a matrix is rather less interesting; it's a group of numbers that looks like this:
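For example, a 4 × 4 identity matrix (the one that changes nothing) is just sixteen numbers arranged in rows and columns:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1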
Hmmm not the most imaginative thing ever is it, but you know, mathematicians are not
known for their imagination, but that group of numbers is able to do quite amazing things
if you create a function that allows you to manipulate a point in space so that it can be
moved to another . It can also let you scale multiple points and change their orientation .
The nice thing about matrices is that they are the key to making changes to vertices,
those points in space we’ve been using to great effect so far by moving sprites from point to
point as position vectors . We can produce different kinds of matrices, and then use them
to mess up a vertex or series of vertices to do very cool things . Try not to worry about how
to make a matrix yet, just focus on the fact that once made, it is used to modify all the
vertices in an object that we will draw to screen .
How these matrices are used, though, is pretty cool; this is the real witchcraft! It is possible to multiply a vector with a matrix and create a new vector whose data represent the transformation action on the numbers in the vector, and that's any of the transformations! So if we want to rotate around the X-axis, we plug the cos, –sin, sin, and cos of the angle into the matrix and do a Matrix * vector multiply.
There is no actual arithmetic function in the C++ compiler that allows you to do this; we usually overload the * operator so that, if the compiler sees we are trying to multiply a matrix by a vector to produce a new vector, it instead performs a fairly simple, but extensive, set of multiplies and additions. For a 4 × 4 matrix and a 4-element vector that works out at 16 multiplies and 12 additions.
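A minimal sketch of such an overload, with hypothetical Mat4/Vec4 types (GLM, which we use shortly, provides all of this for us):
Vec4 operator*(const Mat4& m, const Vec4& v)
{
    Vec4 r; // each output element is one row of the matrix dotted with the vector
    r.x = m.a[0] * v.x + m.a[1] * v.y + m.a[2] * v.z + m.a[3] * v.w;
    r.y = m.a[4] * v.x + m.a[5] * v.y + m.a[6] * v.z + m.a[7] * v.w;
    r.z = m.a[8] * v.x + m.a[9] * v.y + m.a[10] * v.z + m.a[11] * v.w;
    r.w = m.a[12] * v.x + m.a[13] * v.y + m.a[14] * v.z + m.a[15] * v.w;
    return r; // 16 multiplies and 12 additions in total
}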
This friendly looking chap is on the support site for you to play with, but for now he's doing his best Da Vinci Vitruvian Man pose, facing front (almost) and standing vertical; he has no rotation applied yet.
The object rotated, but because it wasn't at the origin, the distance of each vertex from the origin was used to work out the new positions… What we really want is more like this.
Exactly the same issues happen with Scaling, if the object is not at the origin, then scaling
actually also causes a move and a scale . We want something more like this .
Now it does depend how you think about this, we are either moving our models center to
the middle point of his universe or we’re moving the middle point of the universe to his
center…but either way we think of it, our model has to be at the 0, 0, 0 origin and then
moved back to his world position . Since we rotate around the origin, trying to rotate him
at his world position will actually move him around that origin, so move, or translation is
needed . Once his own center point is at 0, 0, and 0 though, we can spin him in any of the
three axes and then slide him back .
One good thing is that we generally always have our models based at a 0, 0, 0 location,
the translation systems we use simply move them after the scales have been done .
Including the single header glm/glm.hpp will pull in the main core GLM features, or you can choose to include only the features your files actually need, like this:
// Include specific GLM features
#include <glm/vec3.hpp> // glm::vec3
#include <glm/vec4.hpp> // glm::vec4
#include <glm/mat4x4.hpp> // glm::mat4
#include <glm/gtc/matrix_transform.hpp> // glm::translate, glm::rotate, glm::scale, glm::perspective
#undef countof
I might also add it after the GLM includes. The reason is, some of our system files in the video setups for the Raspberry, when using bcm_host.h, define this value, and so does GLM, which can result in compiler confusion, creating a lot of errors when it attempts to use one define rather than the other. It's basically a fault in GLM when using C++11 as we are doing, one of those arrghh moments in game dev we just have to deal with. I will make sure all files on the support site using GLM have this fix and let you know on the site when it is properly fixed by the GLM team.
Backward??? It's because compilers work backward when they see a line of equal-priority operations. Yeah I know, confusing isn't it…but I'm here to stop you making the same mistakes I made, and to avoid losing both hair and computers. Just be aware that there's a slim chance, if you are using a new or nonstandard compiler, that it might, just might, work the other way around. So always test your matrices as soon as you can to assure yourself the correct order is being used.
Model Matrix
This is the easiest one to explain. The Model matrix represents the combined value of the Translation, 3×Rotation, and Scale matrices for an object that we are going to mess with.
In the same way that sprites have a central point, with the corners defined as offsets from that point, a 3D object will have a Model matrix, which can act on a center point of 0, 0, 0, with the entire set of vertices that make up its image described as relative to that point. When each of these vertices is multiplied by this matrix they will find themselves at the world position they should be at!
The model matrix needs to be recalculated when any of the other matrices that control Translation, Scale, or Rotation are altered, but once created, the model matrix can be used over and over again.
So basically, our Model matrix is the container of the rotation and scale and position of our model in the world and can be used to move it around the world and represent its scale and orientation in one fairly simple Matrix, all worked out and ready to use.
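A hedged sketch of how such a model matrix might be built with GLM (the function and variable names are my own, and the scale-then-rotate-then-translate order shown is a common convention rather than the only one):

// Build a model matrix from a position, rotations in degrees, and a scale.
// Written as Translate * RotZ * RotY * RotX * Scale so that, applied to a
// vertex, the scale happens first and the translation last.
glm::mat4 MakeModelMatrix(const glm::vec3& pos, const glm::vec3& degreeRot, const glm::vec3& scale)
{
    glm::mat4 T  = glm::translate(glm::mat4(1.0f), pos);
    glm::mat4 Rx = glm::rotate(glm::mat4(1.0f), glm::radians(degreeRot.x), glm::vec3(1, 0, 0));
    glm::mat4 Ry = glm::rotate(glm::mat4(1.0f), glm::radians(degreeRot.y), glm::vec3(0, 1, 0));
    glm::mat4 Rz = glm::rotate(glm::mat4(1.0f), glm::radians(degreeRot.z), glm::vec3(0, 0, 1));
    glm::mat4 S  = glm::scale(glm::mat4(1.0f), scale);
    return T * Rz * Ry * Rx * S;
}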
View Matrix
The View matrix is even more fun; it relates to the camera and harks back to a concept I talked about in our 2D scrollers. Does the screen move or does the map? I chose to think that the map moves around the viewpoint, and here we have a similar idea. Like a Model, our camera itself exists at a 0, 0, 0 point in space. Now we can place it anywhere, but mathematically it's actually a lot easier to consider the point where the camera is to be 0, 0, 0; that means if the camera moves, the whole world has to move, or more accurately the things in it have to move.
Now that takes care of all the nasty rotation stuff for us, at least for now; it gives us a much simpler idea that we can relate to, in terms we can understand.
Projection Matrix
Now this one is trickier still to explain, but effectively what we are trying to do is take the view of what the camera can see, to mimic a human eye or camera's field of view, inside which there are going to be a lot of vertices relating to models/objects. Think back to that box we are peering into; the box has physical size and depth that define its edges, and this is a way to define those edges and use different shaped boxes in a mathematical way.
Once we have that box defined, then, somehow, we turn it into a view that we can use to turn the 3D vertices in that box into 2D screen coordinates.
Things that are closer to the camera need to appear bigger than things that are far away, making things appear like a pyramid view, and because we've turned a cubic area of a real-world sample into that pyramid view, we also have to deform/scale the objects a little to make them appear correct….I said this was tricky! Let's try to visualize it!
[Figure: the perspective view frustum — the camera eye at the tip, the projection plane part way along, and the far clipping plane at the back.]
Our Eye starts at the beginning of sort of pyramid on its side, and at some point along
that pyramid, is our screen, the Pyramid is correctly called a Frustum, and because it’s
[Figure: a parallel (orthographic) projection for comparison — a window, a front plane, and a back plane, with no perspective narrowing.]
Instead of our eye we have a window, or screen, onto which we draw the pixels, which relate to the positions of the vertices after the transformations are all done. Everything that was in that original pyramid field of view gets squished or stretched to look right on a screen. It's really very hard to visualize, but we're in luck: GLM gives us another of those lovely helper functions to create a projection matrix.
glm::mat4 projectionMatrix = glm::perspective(
    FoV,         // The Field of View
    4.0f / 3.0f, // Aspect Ratio
    0.1f,        // Near clipping plane
    100.0f       // Far clipping plane
);
A couple of these things need explanation. FoV, for example, is a new term; it stands for Field of View. It basically describes the size of the horizontal angle you see when you look straight ahead. Human eyes can see about 180° in front of them, though in reality we only perceive movement at the extremes and not detail; our stereo view is limited to around 130°, still mostly related to motion and color, and our sharpest view is really quite a narrow range in front of us, which varies from person to person. This is why we, as visual animals, look directly at things we want to examine, so we use our sharpest, most detailed view.
But the point is we can express the range we want to see as a value in degrees. Typically we want to look at a narrower field than our own stereo range, as we tend to focus our gaze on our central point, so anything from a quite tight 30° up to 90° of view is a comfortable human-like focus when we look at a screen, but you can create a nice fish-eye effect with larger values if you want to play with them.
[Figure: human field of view — only a narrow central region of roughly 2° is seen in sharp detail.]
We tend to ignore the vertical field of view; though there are ways to incorporate it, I've never seen much use for it. I'm sure that will change with Virtual Reality (VR) headsets though.
Aspect Ratio is something you can set depending on your screen, but it's typically going to be a 4:3 square-TV-style view for a window view, or a 16:9 wide-screen view for a full-screen view. You can even make it the Width:Height of your screen, which will usually equate to 16:9; it's your call. We're normally quite comfortable with 4:3 and 16:9 because we see them all the time on screens; small variations because of odd resolutions tend to be forgiven by the human eye.
The Clipping planes decide the points between which we see things. If you put your hand right up to your face, it's going to obscure your view of things beyond it. Our camera does not have any kind of physical limit on where it can move, so it is more than likely going to move right in front of an object, which will then obscure the view. The near plane lets us remove such things, taking away that odd sensation of going into something and seeing the middle of objects. Though you can keep it if you want.
Likewise the far plane dictates the limit of our camera's view. As objects get further and further away the ability to make out what they are is reduced, and it's more effective then to simply ignore them. Infinite vision isn't ideal for games, but who knows, you may find you want to make that number massively large for an effect.
Model coordinates → (Model matrix) → World coordinates → (View matrix) → Camera coordinates → (Projection matrix) → Homogeneous coordinates
First we take our model coords and turn them into world coords, using whatever transforms are needed, such as translate, scale, and rotate.
Those world coords are then transformed by our View or camera matrix.
And then our Projection matrix turns them into nice 2D coords.
Of course, we can then combine these different matrices into one super-duper, does
everything transform MVP matrix like this:
glm::mat4 MVPmatrix = projection * camera * model;
◾ Voodoo, black magic, and sorcery, which basically cover all the other matrix func-
tions I may have used once or twice in my entire career, but do exist and are pres-
ent in almost all maths Libraries .
I’ll explain these as and when we need them, as overloading ourselves with lots of complex
maths we’re not quite ready to use might overwhelm anyone with a genuine aversion to maths .
We can’t really escape the need for such maths, but we can try to introduce it as it
becomes relevant to our needs, and as such learning it becomes a little easier when we see
it in action .
Moving Around
There’s one more thing we have to think about, motion! So far, Up, Down, Left, and Right
are pretty simple concepts and we’ve been happy to slide up and down the X- and Y-axis .
Increment the Y to go up, increment the X to go Left, decrement X to go right (that seems
counter intuitive but using a camera, we are looking backward toward the 0, 0, 0 point in
OpenGL 3D) .
We can now also add to that, In and Out, using the Z-axis . Six degrees of motion you
might say moving along an axis, but what if we’re not moving along an axis?
There are two more concepts of motion we need to consider: Forward and Back
where we want to be moving in the direction we are pointing, which can be any of
360° × 360° × 360° . This is where orientation comes in and it gets complex . Though really
all we ever want is a vector to add to the current position so that we move in the direction
we are pointing . We’ll cover that as we need it as there are different ways to get this vector .
Forward and Back also lead us to relative Left, Right and Up and Down, or even tha-
taway, that is, depending on the direction we are facing, a relative Left might in fact be an
absolute up if you are rotated 90° on the Z-axis .
This…is why 3D gets confusing, but we’ll introduce these things slowly .
Ok, that’s the Maths lesson done…for now! We’ve earned a break, I’m going to let you
think about that for a while, since it’s a load of stuff to absorb, let’s do a little bit of simple
stuff before we bring this mathematical wizardry into effect . We really need a simple base
project where we can then experiment with these magical matrices .
We’re going to go into a lot more detail on Shaders later, but we do need to try to under-
stand how a Shader works, so that we can make use of matrices .
This particular Shader really does not do a huge amount. It defines a few values as attributes, which are values that are input into the Shader. This one also does something with textures, but for our simple triangle it's not actually relevant so we'll ignore it.
The output of a vertex Shader is gl_Position; the code here is not actually performing any kind of manipulation, as you can see if you look at it cleaned up without the quotes:
void main()
{
    gl_Position = a_position;
    v_texCoord = a_texCoord;
}
All we are doing is loading the attribute that goes in, into the value that goes out . What we
want to do is use the MVP matrix we talked about so that we alter what we see depending
on where the camera is placed .
Copy your Hello Triangle project into a new folder called HelloMatrix, fire it up from this new directory and make sure it's all working ok.
Remember to add *.hpp and *.inl to the file types for the VisualGDB project properties.
All good? Now to our HelloTriangle.cpp file you can add these headers:
// Include GLM
#include "glm/glm/glm.hpp"
#include "glm/glm/gtc/matrix_transform.hpp"
using namespace glm;
Compile and run, it’s not going to do anything yet, just make sure you can find all the files,
and you still get your normal triangle?
Ok, so far so good .
Now this part is messy and we’re only doing this to demonstrate the concept, but let’s
rework the draw routine a tiny bit .
Like this:
void Draw(Target_State *p_state)
{
    UserData *userData = p_state->user_data;
    // Projection matrix: 45° Field of View, 4:3 ratio, display range: 0.1 unit <-> 100 units
    glm::mat4 Projection = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100.0f);
    glm::mat4 View = glm::lookAt(
        glm::vec3(17, 23, 3), // Camera is at (17,23,3), in World Space
        glm::vec3(0, 0, 0),   // look at the origin
        glm::vec3(0, 1, 0)    // head is up (0,-1,0 would be upside-down)
    );
    // hard code a model matrix here as empty (Identity)
    glm::mat4 Model = glm::mat4(1.0f);
    // make the MVP
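    // The rest of this listing is cut short in the text; what follows is a hedged
    // reconstruction of the steps described below: combine the matrices into an MVP,
    // fetch the handle of the MVP uniform from the Shader (LOOK!!!!), and send the
    // matrix to it (LOOK!!!! again) before drawing. Member names are assumptions.
    glm::mat4 MVP = Projection * View * Model;

    glUseProgram(userData->programObject);

    // LOOK!!!! get a handle to the uniform called MVP in the vertex Shader
    GLint MatrixID = glGetUniformLocation(userData->programObject, "MVP");

    // LOOK!!!! send our calculated MVP to the Shader
    glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);

    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, userData->TriangleVertices);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}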
There are not too many changes here, but at the start you can see I created some Model, View,
and Projection matrices, using some hard numbers, which we would not normally do .
Also very important, notice where I commented LOOK!!!! Twice . This is a reference to
the fact that we are actually getting a handle from the Shader to the location of a uniform
called MVP, and in the second instance, we’re sending that MVP to the Shader .
But at the moment the Shader has not been written .
You can run this…it will compile, but it will not actually work correctly you’ll get a
blank screen, or perhaps even a Shader error .
So we need to write a new Shader, in the init code, replace the vertex Shader with this:
GLbyte vShaderStr[] =
"attribute vec3 vertexModelPosition ;\n"
"uniform mat4 MVP;\n"
"void main(){\n"
" gl_Position = MVP * vec4(vertexModelPosition,1);\n"
"}\n";
The quotes and \n values are a necessary evil at the moment, one we will dispose of as soon as we have our Shader manager in place. Removing those quotes, though, we get something simple to read:
attribute vec3 vertexModelPosition;
uniform mat4 MVP;
void main()
{
    gl_Position = MVP * vec4(vertexModelPosition, 1);
}
Now we can clearly see that gl_Position isn’t just the input data fed right back out again,
this time the input vec3, the vertexModelPosition, is being transformed by the MVP,
which we calculated in the draw and sent to the Shader .
No really, that is wonderful, that is our old friend the triangle, but now viewed from a cam-
era that is positioned some glm::vec3(17, 23, 3) Units away . Change the position
value in the lookAt function to glm::vec3(7, 3, 3), which is a bit closer and compile
again, cool eh?
I hope you are asking yourself why there are so many more vertices, and why five values in each line? Well, a cube needs a lot more vertices to make it up, 36 in fact: 6 per face, and 6 faces. So there are now 36 vertices here, and each line of these vertices represents the relative point from the center where the vertex is. But a vertex is only three values; the final two numbers actually refer to a texture that we're going to add soon. For now, ignore them, they're here to save a bit of typing time later and to demonstrate the skipping options when setting up the Attribute Pointers.
Notice also in the Draw function where we send the attribute pointers and actually
draw the Triangles the values are different to reflect the larger amount of data we are
sending and interestingly that we are skipping some of the data (the last two entries per
vertex)
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat),
CubeVertices); // use 3 values, but add 5 each time to get to the next
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLES, 0, 36);
But, aside from a different vertex list and a slightly more careful setup of the attrib
pointers where I am using the sizeof operator to check the internal size of a float
to allow the skip over the two final values stored in the CubeVertices, the drawing
system is pretty much the same for a Cube as it is for a Triangle! Try it out and you
get this:
HelloCubes
I don't want us to continue with this rather poor extension of Hello Triangle; we've moved away from that old-school C style coding, and should try to build our new 3D framework.
So download and unzip HelloCubes from the support site and get it up and running.
Take some time to look at the code. It has pretty much everything the later 2D projects had: independent graphics set up, a controlling Game Class, some shader management, and the ability to use base objects, only this time I'm introducing a new kind of base object called an ObjectModel. I could have extended our old 2D GameObjects, but really these are a little different and will work quite differently, so I prefer to keep the two base types separate. When you run this you should now be able to see two cubes in space, and look, they are rotating. If we choose to use some GameObjects we just make sure we update and draw them separately.
Look a little closer at the ObjectModel Class, which now takes responsibility for mov-
ing the object, creating, and maintaining the model for each instance of ObjectModel, and
then having a draw routine, which sends the correct vertex or attribute list to the GPU . The
Draw and Update routines, however, are pure virtual, that is, they are defined as = 0, hav-
ing no content, meaning that any class that derives from this, must supply its own Update
and Draw functions . This provides flexibility, we know we are going to need matrices, but
we don’t really know what kind of object we are going to update or draw, so let the objects
that use ObjectModel as their base do that work . Let’s look at what it contains .
#pragma once
#include "glm/glm.hpp"
#include "glm/gtc/matrix_transform.hpp"
#include "MyFiles.h"

glm::mat4 RotationMatrixX;
glm::mat4 RotationMatrixY;
glm::mat4 RotationMatrixZ;
void MakeRotationMatrix(); // since these get altered a lot
GLuint programObject, vertexShader, fragmentShader; // programObject used for draw; vertex and fragment used to create programObject
glm::mat4 Model; // the model matrix will effectively hold all the rotation and positional data for the object
GLint positionLoc; // index handles to important concepts in texture
GLint texCoordLoc;
GLint samplerLoc;
Graphics* TheGraphics; // anything that uses a shader will need access to the graphics class set up in the game app. Make sure this is supplied
GLuint vbo;
};
One or two things won't be obvious but we'll cover them soon; however, already you can see there's quite a bit of matrix work, and seven different matrices, three of them just to deal with rotations. But it's still a very light and fairly easy to understand class. There are familiar concepts of Setting and Getting position, a couple of easy-for-humans-to-understand concepts such as WorldPosition and DegreeRotations, some MakeMatrix functions, which our update methods will be expected to handle as needed, and…that's pretty much it.
A couple of new things that won’t be too familiar yet are these handles:
These relate to the Shader code we are going to play with a lot more later, but their use will
become clear before we delve into Shaders properly .
So as a header file, we can see we have two pure virtual functions, meaning any Shape Class which inherits from ObjectModel MUST include its own version of draw and update.
In our project, I create a vector or a list of this base type, and essentially all I want to do is loop through that list and call the update and draw functions, which will then go to the Shape Class versions of those functions.
Previously most of our 3D code has been set up in the main cpp file (the file contain-
ing the entry point main()), this is an homage to the original hello triangle project every-
one starts with and to keep things familiar . But now we know better than to expand on
old C style projects, so we are branching out, and reducing the overhead on the main file .
Its role is now correctly reduced to starting up our project, creating an instance of a Game
Class, letting that run, and on return, exiting the project .
The main draw and initializing functions, which were in the old C style demos, are now moved to a Graphics Class, which has the job of setting up OpenGLES2.0 so we can use it, and of course of processing the draw calls. Later, we'll expand on this a bit more.
We can now pretty much leave that file, with its main Draw function and initializations, alone; we make our modifications in the Draw routines of our model classes. We can also plug in a different Graphics Class if we are using a non-Raspberry device, so that from now on the main bulk of the code, as with our 2D projects, is unconcerned with the target, so long as we stick to the constraints that OpenGLES2.0 puts us under.
Why is this happening? We know a GPU can handle tens, even hundreds of thousands of vertices, and our cube has 36 vertices, times 150 cubes, that is only 5400?
Well, that's because it's capable of handling hundreds of thousands all right, but we're not in those test lab conditions, and here we can see a classic problem: it's not capable of reading the data from the CPU's area of RAM fast enough to set them up. For us to get maximum performance we need those vertices to be sent once to the GPU and reused over and over again without a reload. Right now those 150 writes each frame are a lot of dead time, and the more we have the more time we kill, and the more frames we drop.
This is exactly the same principle when we were loading a sprite every time we created
it or animated it in 2D . The time it takes to set up the GPU is something we have to avoid;
it will utterly cripple our frame rates .
If we assume a modest game has 10,000 vertices, that can be represented by chang-
ing the for/next loop to 277 . On my system I get around 14 fps, and we’re not even doing
textures . That’s just not acceptable . And it’s so unacceptable that I want to resolve it now
before we move any further . We don’t want to be limited to 3000 vertices, or even 6000, we
want those 100,000+ the GPU promises to give us!
noticeable jagged movement and our games start to run slower, no matter what impact
some CPU manipulation of movement using a delta time does, to make it appear to move
smoother . Our eyes can see the update rate is slowing down .
Why then is this happening? Well the key point is in that first sentence “… we’ve been
sending our graphical data from our CPU memory to our GPU . . .,” we need to think about
exactly what is happening .
On all current SBCs, our CPU and GPU share memory, but in fact internally the two units have fenced off their memory from each other and claimed it as their own. The two memory pools may even operate at different speeds and have different access methods and address widths, so that when one unit wants to talk to the other unit's memory, there has to be some kind of way to jump over that fence.
Depending on your machine, this fence will take several forms, but the bottom line is, there's a lot of inter-chip negotiation that goes on when one unit wants to access memory from the other's pool. That negotiation takes time, and during that time the data we want to move have to sit on an area called a bus, which will take the data where they want to go, though like real-life buses it's not going to do that at Ferrari speeds and it can only take so many bytes at a time.
The size of our bus will vary depending on our systems, we may have 8, 16, 32, 64, 128
or more bit buses, but however wide it is, we have to put our data on the bus, let the chips
negotiate the transfer then move the bits to their new home .
All of this is incredibly slow, in fact many orders of magnitude slower than shifting
32/64 bits from CPU memory to CPU memory .
So when we are sending our attribute array data from our CPU to our GPU, it’s just
painfully slow . There are often Direct Memory Access (DMA) methods that can be used
that are a little faster, but for real speed we need the GPU to access its own memory,
because when the GPU accesses its own memory, it is incredibly fast . Clearly we need
to think a bit about this . We’ve just got to avoid bus transfers at all costs . How then can
we send graphic or geometry data to our GPU? What does OpenGLES2 .0 give us that
will prevent us writing the same attribute lists from the CPU to the GPU over and over
again?
Vertex Buffers
So, it should be obvious that a vertex buffer is there to keep vertices in one place, more
specifically in one place somewhere in our GPU memory, where it can be very quickly
accessed and its contents sent to our Shader for use .
Actually a VB, in keeping with the whole concept of an OAB, is a very versatile form
of buffer and it can actually hold pretty much any kind of sequence of data that the GPU
needs to use when it turns the values in the buffer into attributes for the Shader . It does not
even need to specifically work with vertices, it can just as easily work with pairs of texture
coords, or even floats, but its main function is to work with vertices . There are some per-
formance issues with odd sized data but we’ll deal with those later, for now nothing we’re
doing is going to create major speed issues .
I’ll explain a little later exactly what an attribute is, when we come to use our Shaders
in anger .
In very simple terms, we want to store our cube vertices, and the texture coordinates
that we use to draw it, inside the GPU memory and not have the GPU load and reload
them form CPU memory every time a draw call is triggered .
If we can store it there, then once we’ve set it up its there for our GPU to use over and
over again, no more loading the vertex list to the GPU, let’s try that .
We can set up a Vertex Buffer with a very simple command .
GLvoid glGenBuffers (GLsizei n, GLuint* bufferhandles);
But, all this does is allocate the concept of a buffer or buffers in the GPU, and return
handle(s), which we can now refer to, to access it . The buffer does not yet actually exist
in any real sense, as nothing has been put into it, so it’s a buffer or buffers with 0 bytes of
content . But now that we have created it we can send stuff to it .
We write data to it with a simple gl command (glBufferData, which you'll see in a moment), which will initialize that buffer and give it a size equivalent to whatever we write to it.
Now the data are there in GPU memory and ready for us to use; we don't need the data in CPU memory anymore, so if we loaded them we can release them. From now on we're going to use the buffer.
We’re not quite done, if we have another chunk of model data we want to store in GPU
memory, we could potentially insert our data into this buffer, but we do need to first make
sure the buffer itself is big enough to take the subsequent values . We can only set the size of
the buffer once at the initialization . So it’s possible to insert new data after the first batch,
but not possible to append data to it .
Once we insert the new data, we just have to remember that the second model's data is going to be at the end of the first set of data, and the third after that, and so on. For now, though, I'm going to use a different buffer for each, just so we can get things going.
Next question is what exactly is this data? Well that is going to depend; if you look at
our current simple vertex Shader you will see it lists a couple of things as attributes .
And that’s the connection . These Shader attributes are data that is loaded each time
the Shader is called, from our buffer stored in the GPU .
We can have different buffers, for example, right now we have vertex data in one and
texture data in another, but it’s also possible and often desirable to interleave the data, so
that you have a chunk of vertex data, then a chunk of texture data . As long as the chunk
sizes are the same, you basically are skipping through the data sending the parts you want
when you need them .
All we have to do, to parse through the different chunks of data that have to be loaded into those attribute variables, is let the GPU know how much data we are going to send it, where it starts, and what kind of data it's getting.
So suppose we have an interleaved set of data, three values for the vertices and two for
the texture coordinates . We know they are all floats .
Next question is where to set up the buffers? I hope you think of this as a kind of initial-
ization process, so setting up the buffers needs to be done when the model is created, not at
the point it is drawn (which we’ve already seen is very slow) . So we will add some code to
our model constructor, and set up a VB . Setting up a buffer will look like this:
glGenBuffers(1, &vb);              // generate a buffer and put its handle in vb
glBindBuffer(GL_ARRAY_BUFFER, vb); // bind the buffer stored at vb
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * size, vertices, GL_STATIC_DRAW); // write the data to it
glBindBuffer(GL_ARRAY_BUFFER, 0);  // unbind to make sure
That's pretty much it; our vertices, which were in CPU memory at the location called vertices, are now happy and cosy and warm, sitting in GPU memory. We don't know exactly where, but the GPU does, and it gave us a handle, which we stored in vb.
To draw that buffer we need only to bind it, set up the step values, enable the transfer by indicating which attribute the Shader is going to use it for, and draw our triangles from the array with a glDrawArrays command, like this:
glBindBuffer (GL_ARRAY_BUFFER, vb); // we’re going to use this buffer
// Set up the attributes
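// The listing stops short here in the text; the lines below are a hedged sketch of
// the remaining steps it describes, reusing the cube's 5-floats-per-vertex layout
// from earlier. The (void*)0 is an offset into the bound buffer, not a CPU address.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (void*)0);
glEnableVertexAttribArray(0);          // enable the transfer to attribute 0
glDrawArrays(GL_TRIANGLES, 0, 36);     // draw the cube's triangles from the buffer
glBindBuffer(GL_ARRAY_BUFFER, 0);      // unbind when done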
There’s a small issue of an ugly (void*)0 cast, because this routine is dual purpose and uses
a CPU address when there is no bound buffer, and an offset when there is . So we have to
turn our offset into an address to make it work . It’s a small price to pay for 10+ times the
speed .
Let’s put this into action by modifying our cubes program to use a buffer .
Compile and run…try changing the 150 cubes to 1500, or 15,000…we really are seeing a massive boost in performance. I bet that makes you feel like a coding god! Just a few simple changes and you see a hundred-times increase in performance.
We could have done this when doing 2D, but you really would not have seen the ben-
efit . Drawing tiles is a pretty easy thing; drawing vertices, however, needs a bit more care
and that care is rewarded with the programmer’s favorite reward, speed!
Attribute Pointers
We briefly skipped over the attribute pointer command but it is very important and is the
reason our single VBO is so flexible because it allows us to feed our Shaders with data . We
can feed several Shader attributes at one time, and interleave those data in the vertex data
we send .
Consider that attributes are variables in the Shader that are loaded with data from a
buffer each time the Shader operates . This suggests that the data are being transferred and
moved along to the next section of data ready for the next cycle .
The attribute pointer command is how we explain what to transfer, and where to
transfer it to, and how much to move along .
It also has a dual function where it can describe an offset in a VBO, or a direct mem-
ory location . We have to be careful not to get those mixed up .
Look at the command we just used:
glVertexAttribPointer(positionAttribute, 3, GL_FLOAT, GL_FALSE, stride,
(void*)0);
This is telling us that the attribute we associate with position, should be loaded with 3
floats, which are not normalized, then we need to jump stride amount of entries from our
current base point, and finally, the data start at a position 0 index in a VBO, or less likely at
the data at address 0 . We need to cast it to (void*) due to this dual nature of the command
where it can take an address or an offset .
So if we have a sequence of values like this:
GLfloat TheCubeVertices[] = {
    -0.5f, -0.5f, -0.5f, // 1st set
     0.0f,  0.0f,
     0.5f, -0.5f, -0.5f, // 2nd set
     1.0f,  0.0f,
     0.5f,  0.5f, -0.5f, // 3rd set
     1.0f,  1.0f,
     0.5f,  0.5f, -0.5f,
     1.0f,  1.0f,
     // …and so on, the list continues
We can see how these values relate to the command: three vertex values, then two tex coords, five entries per vertex. We don't just have to pass vertices; we also pass texture coordinates, and we can pass other attributes as our Shader might require, perhaps adding three floats for normals after the texture coordinates, increasing the stride to account for the new size of the data.
We can have any number of different attributes in the buffer. The only concession is that every set will need the same quantity of data laid out in the same format.
Texture Buffer
We’ve used a texture buffer before, of course, but I glossed over the fact it was a buffer until
we were ready to discuss it . Now we can talk about it a bit more .
How texture buffers are used is really quite complex; internally there's a massive amount of work going on, which really makes our GPU come into its own. We'll touch on that complexity very soon.
But basically, we want to store as many of our textures as possible in the GPU mem-
ory, there are some limits, of course, not just in terms of sizes . Memory usage is never
going to go away, I touched on the power of two (POT), and the fact things need to be
flipped, also we can actually only have a certain number active at any given time, which
might vary depending on the GPU . But the nice thing about texture buffers is that we can
store a decent number of them, and use them with different Shaders and even on a wide
range of vertices . Though usually we’ll have a set of textures associated with a particular
set of vertices .
We used one big texture image on our 2D sprites, which gave us the ability to display
each tile of the texture with two triangles, and we can still do that with 3D, the limitation
on the numbers of textures we can store, does not really limit the number of tiles we can
access . We’re not using proper textures just yet, but we will be soon!
Frame Buffer
This one is easy because we have been using it since our very first project, though the sys-
tem provided Frame Buffer is not quite the same as an application provided Frame Buffer .
The simplest way to explain a frame buffer is; it’s the place where the image gets pro-
duced, just before the swap sends it to the screen . When we have done all our different kinds
of renders and Shader work, the Frame Buffer is what will contain the final product .
When in a Frame Buffer, we can choose to save that data for later, grab and save it as a
raw data file or as we normally do, send it to the screen . We could add a few extra effects,
stencils, or other things, though it would need to go back to the CPU first, but essentially
it’s a temporary area set up after the rendering is done ready to be sent to somewhere for
storage or view .
We can have many instances of a Frame Buffer, and choose to send them in different
sequences, which is basically what we did with our 2D projects .
We can create application Frame Buffers using this command, which you have seen
before:
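The command itself doesn't appear here, so for reference, its standard OpenGLES2.0 form, together with the matching delete call the next sentence refers to, looks like this:

GLvoid glGenFramebuffers(GLsizei n, GLuint* framebuffers);          // create n framebuffer handles
GLvoid glDeleteFramebuffers(GLsizei n, const GLuint* framebuffers); // and release them again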
Which, of course, is basically the reverse of the create function with the same arguments .
There are a few advanced functions relating to framebuffers we'll look at later. For the moment, though, the only other command that really is needed for a Frame Buffer is Binding, which, as we noted in the past, puts this particular Frame Buffer into the GPU's focus. OpenGLES2.0 can only ever focus on one Frame Buffer at a time, so no matter how many you create, the last one you bind is the one OpenGLES2.0 will work with and send data to or from.
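For reference, the binding call, whose two parameters the next sentences discuss, has this standard OpenGLES2.0 form:

GLvoid glBindFramebuffer(GLenum target, GLuint framebuffer);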
Strangely the variable target will always be GL_FRAMEBUFFER, though maybe in
a later implementation of OpenGLES, it may change to allow for something else . The vari-
able framebuffer, of course, will be the ID/handle that is associated with that buffer .
Unless you are planning some post processing or have a reason to implement a triple/
quad buffer, you only really ever need the system Frame Buffer .
Render Buffer
Ah, now these are a bit complex, because in one sense they don’t actually exist in
OpenGLES2 .0 but are such a mainstay of normal OpenGL that we find ways to emulate
them . We’re not actually going to use them for a while, so we’re going to gloss over this
for now . It is, however, as the name suggests a buffer where things can be rendered into,
which is essentially what the Frame Buffers offer . A true Render Buffer can be sent to the
screen, but OpenGLES2 .0 only allows for one Render Target, the one we swap to view, so
we use a combination of Frame Buffers and textures to give us a fair approximation of a
Render Buffer . This can and will be useful if we want to render things in different ways
and combine them later . Rendering allows us to create buffers with different output types;
these are the types OpenGLES2 .0 supports:
We will use this later but for now just know that they exist and we will play with them soon .
Sure, we can expand the size of our GPU memory, manually when needed, or allow the
dynamic GPU allocation on some SBCs to do it, but that then eats up into your CPU RAM…
You have to find a balance, which is going to directly depend on the needs of your game .
Sixty-four megabytes is a lot of memory, it really is, but it can vanish in no time if you
don’t think carefully about how much data you load into it .
Another thing we have to be very clear about, and it's best to say this now: the GPUs in most SBCs are…well, not to put it too politely, rubbish! They really are painfully bad. Most are only two cores running at less than half the CPU speed, and most often use slow RAM shared with the CPU. There is just no way to expect an SBC's GPU to be as powerful as even the lowest-power PC graphics card. In fact they are less powerful than an average current Android phone.
So we're faced with an interesting dilemma: OpenGLES2.0 more or less allows us to do anything that the big boys can do, but that basic lack of processing power means it's slow to do anything beyond basic tasks. This is when game development starts to become painful, because we are soon going to get to a point where we want to do something really cool, but doing it will make our games suck. That's the challenge! Work with our limits, embrace them, push them, but respect them, and amend your approach to your game design to deal with the lack of power. But let's not be too down about this; even though right now asking our GPU to do more than a few things will choke it, it does not mean those few things can't be fun to learn, and we can find ways to make them happen.
void main()
{
gl_Position = MVP * vec4(a_position, 1);
v_texCoord = a_texCoord;
}
Not doing much, though now we are using texture 2D, which in simple terms is taking the
pixel from the point given to it by the vertex Shader .
This will load in the image and make a nice texture active and its handle available in the
variable texture1 . What the code now has to do is set up the system to send two values to
the Shader each time the Shader needs them, one for the actual location of the three vertex
points we used before, and now there are the two texture coordinates . This data is supplied to
the Shader by using the handles for positionLoc and texCoordLoc . The set up looks like this:
//load the vertex data
glVertexAttribPointer(this->positionLoc,
And a similar set up for the texture info, but the texture coords start after the three vertex coords, so we need to be +3 entries from the base; the offset needs byte values, so we multiply 3 by the sizeof a float (usually 4 bytes) to get a value that indicates where in the buffer memory the texture coords start. I say usually 4 bytes because the size of a float can vary depending on what machine it is running on; it's best never to assume, so we use the sizeof operator to make sure that when we compile we get the right size.
// Load the texture coordinate
glVertexAttribPointer(this->texCoordLoc,
2, // write 2 values
GL_FLOAT,
GL_FALSE,
5 * sizeof(GLfloat), // stride does not change
TheCubeVertices +(3 * sizeof(GLfloat))
);
The next step is to enable these values so that each time the Shader operates it will pull in this
data in chunks of threes for vertices and twos for texture (UV) coords, as we defined them .
glEnableVertexAttribArray(this->positionLoc);
glEnableVertexAttribArray(this->texCoordLoc);
Ok, so the buffer is set up to feed the Shader with vertex and texture data, now fix the cor-
rect texture to the process by binding it, making it the current active texture:
glBindTexture(GL_TEXTURE_2D, this->texture1);
Make these small changes to the draw code then run, we should see this .
glEnable(GL_DEPTH_TEST);
Simple enough, you also need a condition test for the Depth buffer and there are a number
of options, which should be apparent from the GL2 .h header .
#define GL_NEVER 0x0200
#define GL_LESS 0x0201
#define GL_EQUAL 0x0202
#define GL_LEQUAL 0x0203
#define GL_GREATER 0x0204
#define GL_NOTEQUAL 0x0205
#define GL_GEQUAL 0x0206
#define GL_ALWAYS 0x0207
Quite a collection and these provide some interesting options . For what we want though
all we need is
glDepthFunc(GL_LEQUAL);
Once the Context is set up with a depth buffer AND we have enabled the GL_DEPTH_
TEST AND correct depth function, we are good to go . There are several other values you
can set to influence your Depth Buffer, but for now, default values work fine, so let’s not get
too deep into this .
Clearly we can see (because we put text there for you) where the top, bottom, and sides are, and I also have highlighted the UV coordinates for each start point. I am not personally too happy using one-thirds for a UV, as it's a repeating fraction; we'll discuss this soon, but for now this will work. This is our new texture; it has POT sizes of 128 × 64.
We already have a cube table with UVs tacked to it, but now we need to change those val-
ues so that we can get the right UV to pin itself to the right vertex . Load the Die texture
this time instead of Harvey, and change the cube table textures according to the texture
layout in the figure above, and you should now get this:
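The query the next paragraph refers to is a standard OpenGLES2.0 capability check; a minimal sketch of it, using the variable name from the text, looks like this:

GLint MaxTextureUnits;
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &MaxTextureUnits); // how many texture units the fragment Shader can use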
This will load the number of texture units/images the GPU can handle into the variable
MaxTextureUnits and on my Pi it gives eight .
You can also ask how many the vertex Shader can handle with;
GLint MaxVertexTextureUnits;
glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &MaxVertexTextureUnits);
Again, on a Raspberry Pi Model 3B it's eight. Now, eight textures per Shader does not sound like a lot, but remember we have UV mapping, so we can actually put a lot of individual parts of a model's texture in one or more big texture units/images. So, in fact, eight is quite a lot, and in most cases you'll only ever really need to use two or three for some effects.
It’s not especially wise (though possible with a bit of gymnastics, and a spare image
unit) to have a model using more than one texture for its graphics, so try to keep the tex-
ture images in a tidy order and manage them well, so you have the right textures in GPU
memory when you need them .
Another thing to test is the actual max size of your textures, which you can find with
another simple enquiry:
GLint MaxTextureSizes;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &MaxTextureSizes);
On the Raspberry Pi Model3B it’s 2048, so you could set your video memory to 192 MB,
have 128 MB of texture space and the rest can be buffers . That is more or less the max size
you’d need your GPU memory to be . Some higher spec machines can give you more Texture
units in your Shaders and have more GPU memory, but 2048 is pretty much the current
max for size . And as far as I know the OpenGLES2 .0 spec has a maximum possible of 32
texture units . Of course, there are many good reasons why even 192 MB isn’t going to be
enough; all will become painfully clear as we start to stress our hardware .
From the CPU side, we can only really operate on one texture at a time too, the bound one (the Shader, however, can operate on several, GL_MAX_TEXTURE_IMAGE_UNITS of them, at once). It's true we can unbind one and bind another, but that's the gymnastics I was talking about; we don't really like to do this.
This limit on textures means it’s actually a pretty good idea to keep your textures
managed separately from your models vertices . If your model loads up textures itself, it
may not have access to info on whether that texture is already loaded or bound .
Having info beforehand on what texture goes with what model is important, and keep-
ing track of what textures are loaded at the moment, so that the model can then be informed
if its texture is in place, is a good way to work, especially when we have a lot of models .
Anyway…as I said, textures are complex, and we’ll increase our understanding as we go .
[Figure: the OpenGLES2.0 primitive types, each shown as a set of labeled vertices — GL_Triangles, GL_Triangle_Strip, GL_Triangle_Fan, GL_Points, GL_Lines, GL_Line_Strip, and GL_Line_Loop.]
So we will see a lot of things done with triangles, fans and strips, but never ever give up on
the triangle, it’s still the most important basic shape we have! You may have noticed our
draw system currently only uses Triangles using this command to inform the OpenGLES
system of its intent .
glDrawArrays(GL_TRIANGLES, 0, 3);
As programmers we will mostly use Triangles, some Lines, and a few Points and they work
fine as you’ll see, but the other types are less common from a programmer’s usage and
tend to be decided by a model’s design needs . So long as we set up the data before doing
the glDrawArray, or other draw command, which sets up the GPU to do its thing all will
be fine .
A large collection of these triangles and the other basic shape types all joined together
creates what’s called a mesh…when you see a mesh with no texture you can understand
the name .
The following image shows a robot model, which contains around 830 vertices,
roughly 200 faces . A face being either a triangle or other closed polygon . In this case, we’re
also seeing most of the faces represented as quads . This is because the drawing package
used provides quads as a base type, even though we almost certainly will output them as
triangles or one of our other supported primitive types . Artists though, like to use quads
We used textures on our sprites to spread over the faces and create a nice regular looking
square or pair of triangles as needed, but we can spread a texture over a massive number
of triangles or quads, which fills in the holes in our mesh, this basically creates our basic
concept of a model .
Unlike our flat 2D image of two triangles, which made up a sprite, our model contains
hundreds of triangles, and has a new Z-dimension that gives it shape and depth . Each of
You can see all the individual textures are combined into one big texture, which has a size
that can be anything from 32 × 32 to 2048 × 2048, it actually depends on the GPU you are
using and the amount of memory it has allocated . But whatever size we settle on for our
texture, we use a normalized coordinate so that we have values of 0 to 1 . Each triangle in
the wire mesh then covers itself with a part of this big texture referenced by a normalized
coordinate, called a UV coordinate. Given we have a quite limited amount of texture RAM on an SBC, we really want to use as small a texture as possible and make it as efficient as possible. The Robot texture has a lot of gaps in it; that's something we should try to avoid as much as possible.
Now if you’re just trying to texture up a cube, it’s probably not a massive chore
to work out coordinates, type them in, and so on, but our little 800+ vertex model is
already waaaaay beyond the limits of any normal human programmer to input without
error .
Which is why models of any complexity are not input by hand. This is a job for an artist, or a modeler to be more specific.
We could do an interesting exercise in texture mapping on our HelloCubes code,
but really the chances of you actually needing to hand texture a model are tiny, though
should you ever need to, by the time we’ve finished this section you’ll be more than
able to .
Yes, sadly there are limits to a programmer’s art skills whenever we try to actually produce
something that we want to look like a character or an environment . So in those circum-
stances we need to feed some fish heads to the semi tame artist every pro programmer
keeps in their basement .
But artists know nothing of the joy of procedurally generating a character, they
instead shun the sensible logic of a programmer and use an art package and express their
creativity with imagination and skill, with a pen, sometimes on a tablet so that they can
actually create realistic or cartoony looking characters . Which probably is a better idea
than typing in 2000 vertices into a text file, fun though you may find that!
Artists generally produce models and that’s what we want to get our hands on rather
than spend 20 more pages on creating elegant torus shapes .
But whenever you have one member of the development team doing something dif-
ferent to another, you need some form of transfer of information . The art packages used
by Artists are frankly mind numbingly complex, and they output the results of their work
in a vast range of formats . But ultimately it’s all data, and data is something we can use, as
long as we know the format of that data .
We don't really need to know how the artist produces the work, but we do need to know how to make use of that work in our games. It's time to explore rendering and display of models.
Here's an example image (above) from the GitHub download site for TinyObjLoader. This is actually the Rungholt scene,* a well-known test scene. Don't be fooled by all the pretty images; we actually have to do a lot of work to get an image like this on an SBC, but it can be done! Though not in real time: this image has 6.7M triangles in it, and as much as I like and have faith in our tiny machines, this would be a step too far. We could build it up and create a static image, but viewing all the triangles like this is pretty much impossible for our tiny machines. But the point that TinyObjLoader is capable of loading such a large scene is impressive.
I chose TinyObjLoader, simply because it’s compact, effective and very well sup-
ported . There will be other model loaders available, and almost certainly some will be
better for the models you plan to put in your game, but for now, this will work for us, so
we’ll use it . You can access it from:
https://github.com/syoyo/tinyobjloader
I won’t include it in the support site; since it will probably get updated periodically and it’s
best you download it, or another with similar features, yourself .
In actual use, it breaks down into a single header file, so just like our file/image handling
system; we need a small cpp file to take care of compiling it and allowing us to access it .
Unlike our file/image systems though we don’t create a new class, just a simple cpp file, which
includes it with a #define, creating a namespace wrapper around some global functions .
* Rungholt is available from McGuire, Computer Graphics Archive, accessed on June 28, 2016. http://graphics.cs.williams.edu/data
#define TINYOBJLOADER_IMPLEMENTATION
#include "tiny_obj_loader.h"
Yup, that's all; just make sure now that you include the tiny_obj_loader.h file in your Headers directory, and we are then good to go. But don't use the #define in any other file; it may result in two sets of TinyObjLoader functions. The header compiles and creates a group of global (not contained in a class) functions wrapped in a tinyobj namespace, which won't please C++ purists, but if it gets the job done, I'm not too interested in rewriting it.
Any other cpp file which wants to make use of the loader needs only to include the header and access the functions as tinyobj::Function_name(args).
Considering that TinyObjLoader's main purpose in life is to load Obj files, there is really only one function we care about, LoadObj(); it does come in two flavors, passing different arguments to overload the function, but basically it's there to load our Objs. There's also a load with callbacks and a LoadMtl() function, but we don't really need those to do what we need to do.
The LoadObj code in the header gives us this information:
// Loads .obj from a file.
// 'attrib', 'shapes' and 'materials' will be filled with parsed shape data
// 'shapes' will be filled with parsed shape data
// Returns true when loading .obj become success.
// Returns warning and error message into `err`
// 'mtl_basepath' is optional, and used for base path for .mtl file.
// 'triangulate' is optional, and used whether triangulate polygon face in .obj
// or not.
bool LoadObj(attrib_t *attrib, std::vector<shape_t> *shapes,
std::vector<material_t> *materials, std::string *err,
const char *filename, const char *mtl_basepath = NULL,
bool triangulate = true);
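A hedged sketch of calling it, following the argument list in the header above (the file name and base path here are made up for illustration):

#include <string>
#include <vector>
#include "tiny_obj_loader.h"

tinyobj::attrib_t attrib;                    // vertex, normal, and texcoord data
std::vector<tinyobj::shape_t> shapes;        // one entry per shape in the file
std::vector<tinyobj::material_t> materials;  // filled from any mtllib referenced
std::string err;                             // warnings and errors end up here

bool ok = tinyobj::LoadObj(&attrib, &shapes, &materials, &err,
                           "Robot.obj",   // hypothetical model file
                           "Resources/",  // hypothetical base path for the .mtl
                           true);         // triangulate any quads for us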
We should by now, be getting used to reading headers and understanding what they do,
this one is a bit long but it’s pretty straightforward .
Faces, in this instance, are actually indexes into the list of vertex triplets we have identified
beforehand . OBJ is smart enough to not list the same vertex twice, making the indexing
the correct way to build the triangle .
There are a few other things you might see like g for grouping s for smoothing, and so on,
but for the most basic of models, v, vt, and f are all we need as far as vertex info is concerned .
Take special care to notice the use of mtllib, a command that tells the Loader and the ren-
der to make use of a material file, which will contain info on how the surface should be ren-
dered, and with what textures and texture/lighting effects . So if you see this in your OBJ file
mtllib walk-frm1.mtl
It indicates we need to use a material file called walk-frm1.mtl, which looks like this:
newmtl RobotBuddy_meshSG
illum 4
[Figure: white sunlight split into its colors (red, orange, yellow, green, blue, indigo, violet); a green leaf absorbs most of them and reflects only the green component back to the human eye.]
Now the fact that colors are absorbed makes it really easy for us to give color to objects .
If we assume our light source is white light and our object is an orange color we can
get this:
vec3 SunlightColor(1.0f, 1.0f, 1.0f); // white is all 3 RGB colours
vec3 ObjectColor(1.0f, 0.72f, 0.11f); // a slightly orange colour
vec3 FinalColour = SunlightColor * ObjectColor; // = (1.0f, 0.72f, 0.11f), a slightly orange colour
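The next sentence describes what happens when the light itself is strongly colored rather than white; as a hedged illustration, shining a pure blue light on the same object would give:

vec3 BluelightColor(0.0f, 0.0f, 1.0f); // a pure blue light
vec3 FinalColour = BluelightColor * ObjectColor; // = (0.0f, 0.0f, 0.11f), only the small blue part survives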
Our orange object is going to lose its R and G components and its Blue component alone
will be left .
Calculating the resulting color you should expect when you shine a light on a colored
object really isn’t hard, what is hard is working out how much of that color is shining on
your object, so we have to discuss what a normal is, and how it’s calculated .
This color concept is key to understanding of light, what we are basically describing
here is the concept of diffuse light, the idea that anything with color will radiate that color,
though it needs a bit of stimulation in the form of a light source . We’ve been doing this kind
of lighting already just by using textures to provide pixels that display our variations of RGB .
Dot Product
This is an extremely handy, if slightly confusing, function, which basically gives us the cosine of the angle between two vectors (scaled by their lengths). Its proper name is the Scalar Product, but because we traditionally use a ∙ symbol to indicate the operation it's known as the dot product. When you look at the actual mathematical explanation of it:
a ⋅ b = |a| |b| cos(θ)
It's actually a bit scary, so I don't really think we need to go into the actual maths… (yes, I am a coward); all we really need to know is that a ∙ b lets us recover the angle between the two vectors. GLM of course has a dot product function (that's my get-out clause right there!). But my fears are largely unfounded; despite the new name and funny symbols this is actually just a case of using the length of a vector (easily calculated with high-school Pythagoras theorem), and then doing a multiply of the component parts of the vectors, to get back to the cosine of an angle.
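A hedged sketch of using GLM for this (the two vectors here are arbitrary examples of my own):

#include <cmath>
#include <glm/glm.hpp>

glm::vec3 a(1.0f, 0.0f, 0.0f);
glm::vec3 b(0.0f, 1.0f, 0.0f);
// For unit-length vectors the dot product is exactly cos(theta);
// otherwise normalize (or divide by the two lengths) first.
float cosAngle = glm::dot(glm::normalize(a), glm::normalize(b));
float angle    = std::acos(cosAngle); // angle in radians, 90° for this pair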
a × b = |a| |b| sin(θ) n
Keep calm, keep breathing, it's just a formula, it won't hurt you. It's just telling us that the vector multiplication of a and b (a × b) is equal to the magnitude of a (|a|) * the magni…yeah but no but… I've lost the will to live too; I wish I'd stayed awake in maths classes. But it's ok,
don’t worry, our friendly maths library, GLM gives us this, so we don’t really need to code
it up, but we do need to know what it is and what it can give us . This is the Cross Product,
sometimes also called the Vector Product, because unlike the Dot Product, it returns a
Vector . A slightly confused Vector as far as its direction is concerned, it has no idea if it’s
up or down, but we can fix that with a little bit more information that will become available
when we need it . Visually it becomes a bit clearer when you see it like this .
[Figure: two vectors a and b, with their cross product a × b pointing along the normal n, at right angles to both.]
It’s called the Cross Product because the maths symbol for vector multiplication that
returns a vector is × rather than the usual *, some things are more obvious than you realize!
The Cross Product is a fantastically useful thing, but it’s a bit hard to get our heads around
the calculations going on, so don’t try; just be assured that the Cross Product returns a vec-
tor, which is at right angles to two other vectors . This is the key principle of calculating a
normal, which, as the n symbol shows, is part of the process; we’ll talk about that next .
Oh, if you are really interested in the maths behind these things, and how to code
them from scratch, buy a maths book, or find a nice online maths site . I prefer not to rein-
vent the wheel .
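To make that concrete, here’s a small, hedged sketch (variable names are mine) of how a face normal is typically built from three vertices of a triangle using GLM’s cross product:
#include <glm/glm.hpp> // glm::vec3, glm::cross, glm::normalize
// three vertices of a triangle, wound counter-clockwise (made-up values)
glm::vec3 v0(0.0f, 0.0f, 0.0f);
glm::vec3 v1(1.0f, 0.0f, 0.0f);
glm::vec3 v2(0.0f, 1.0f, 0.0f);
glm::vec3 edge1 = v1 - v0; // two edges that share v0
glm::vec3 edge2 = v2 - v0;
glm::vec3 normal = glm::normalize(glm::cross(edge1, edge2)); // at right angles to both edges
The winding order decides whether that normal points out of the face or into it, which is the up/down confusion mentioned above.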
(Figure: arrows pushed through a board at perfect right angles; when we turn the board over, they stick out of each face just like normals.)
Types of Light
For lights to have an impact in our world, they have to exist there, and that means we need
to define some of the types of light we might have . Light is a complex concept in physics
and real world light is insanely complex to replicate, but in a digital world we can simplify
and conceptualize light in easy to use ways that provide us quite realistic representations
of light .
The most obvious form of light that is all around you, from the Sun or your home light
bulbs, is Ambient light . As light from a very bright source can scatter around bouncing
and reflecting off everything in a room seemingly at once, everything in your world is lit,
even if it’s not actually in your immediate line of sight .
Generally, the only thing we are ever likely to do with ambient light is define its inten-
sity, color or strength . Like a dimmer switch on your light bulbs, or even the Sun . When
we’ve put colors into our models so far, we’ve simply dialed in 100% intensity, which cer-
tainly makes them pop out; it’s just not very realistic . But we’re free to adjust it as we’ll see
soon .
Next up on the light wall of fame is Diffuse lighting . Diffuse lighting is similar to
Ambient but it comes from a directed source and does not have the intensity of ambient
light so it does not flood the entire scene with its photons to create ambient light, and as
such it only actually lights up the faces of things that have the light in front of it, but not
behind it . We refer to our textures, which contain the colors we want to display as our dif-
fuse textures, because the color of the pixels themselves comes from that texture but the
intensity is dependent on the way the light shines on and off it .
We’re going to start coding with diffuse and ambient lighting as these are the easiest
to implement, and we pretty much have that in hand already, but a little later, once we have
some better data to work with we’ll try a couple of other types of light .
Specular lighting is next; it creates the interesting shine or highlight effects you see on a
shiny object . These highlights also vary depending on where the light hits the object, making for a
more dynamic lighting effect . They are a bit harder to explain, so I’ll explain by doing
shortly .
Finally, the nicest in my view, Emissive lights . These are actual objects, which glow
and themselves become a light source . So you can make a model of a neon light and give it
emissive properties that will make it and the surrounding area glow .
There are quite a few other forms of light and light effects, but they generally are
computationally expensive and best left to the next generation of GPUs . You can slowly
introduce them into your Shaders as you get more confident, but I’m pretty sure you’ll hit
a point where our little GPUs won’t cope and frame rates will tumble . So I’m not going to
cover advanced topics such as reflections, physically based rendering, scattering effects,
and so on, in this book . But, we can try to dabble with them on the support site and I’d love
to see how far you manage to push these systems to produce cool effects and still maintain
playability in your games .
Shaders
Now having talked about light and shadow, and all the nice little maths functions we use
to get lighting to work, we have to now turn to a rather complex issue, there’s just no avoid-
ing this question . How do we actually do all that maths in our code?
We can do it on our CPU, and some small parts will still be done there but the fact is
it’s time to tackle the powerhouse system in our target and what makes it possible to get
our graphics looking good . The GPU and the API that lets us control it .
OpenGLES1 .1 was quite a nice API, which could do some really nice things, but it used
what was called a fixed pipeline, which basically meant you could set some flags and vari-
ables, tell it where your lights were, send your data to it for rendering and BOOM, you’ve
got a display . That was fine, but it meant it was usually one type of rendering for all types of
object sent, stopping and starting rendering to set different flags for different effects .
OpenGLES2.0, however, changed that idea: instead of the fixed pipeline, where
everything had to behave according to the way the flags were set, a programmable pipe-
line was introduced, where we now push our data through a choice of Shaders, allowing
much more variation and many more graphic effects than were ever possible before . Though our machines
almost certainly still support OpenGL’s original fixed pipeline concepts via OpenGLES
1.1, that is only for legacy reasons so that old titles still work; there is really little reason to try
learning it, as it is so far removed from OpenGLES2.0 that it will only confuse a beginner .
As we have rightly taken the option of using OpenGLES2 .0, which is similar to later
versions of desktop OpenGL, we have Shaders . I’ve added a few standard Shaders in our
projects and we’ve seen what they can do, but I’ve avoided explaining too much about
them simply because it really is a sub set of specialized graphic coding, which I didn’t want
to distract you with until you had gained more confidence in your game coding .
But now we have a lot of nice demos, it’s time to jazz them up a bit by getting our GPU
to really do its thing and that means understanding in more detail what Shaders do and
getting you to write a few of your own . This is where the real power of a GPU comes into
play and allows us to do amazing things to every pixel or vertex that our CPU would choke
on . Shaders represent the most power we can apply to our games . Some people; crazy, mad,
delusional, amazing people, even write games in Shaders…they are astounding things that
make the visuals of modern games come alive .
That said, they are so powerful and offer so much that there are some massive books
out there that explain what you can do with them, there are almost no limits, but they
are technical enhancements rather than fundamentals of game programming, so fully
explaining how to use them is (again) out of the scope of this book, the best we can do
here is briefly explain what they can do, how to do it, and show a few simple samples to
expand our games, that should be enough to let you start a journey into technical graphic
programming if you find that path exciting .
All this is, is a small text segment, which could just as easily be written in another file,
saved as standard text, and loaded . Take away the array it’s being loaded into, as well as the
\n’s and quote marks, and you get this:
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D s_texture;
void main()
{
gl_FragColor = texture2D( s_texture, v_texCoord );
};
* A varying variable is a special kind of Shader variable, which I’m just about to explain .
It’s telling us that the value gl_FragColor, which is the required output of this fragment
Shader, is taken from the texture s_texture at the coordinates held in v_texCoord .
There’s no sign of any actual manipulation here, because we are not doing any, yet!
But we are passing v_texCoord from our vertex Shader to this Fragment Shader!
There are only five Storage qualifiers in GLSL 1.0:
◾ none (the default)
◾ const
◾ attribute
◾ uniform
◾ varying
It’s also possible to qualify function parameters, so that you know whether data
are coming in, going out, or both, though some Shader compilers don’t like these in GLSL 1.0:
◾ none/in
◾ out
◾ inout
* It is actually possible to have more than two Shaders but you still need a Vertex and Fragment shader pair at
least, usually extra Shaders are there as functions but it’s hard to give a good reason to create shader functions
and then attach them, but if there’s a thing that can be done, someone will do it, probably better than me .
◾ highp
◾ mediump
◾ lowp
It’s possible to set any variable in the Shader to a particular precision, but try to use the
same precision types in your calculations . Also, once you set a default precision with a precision
statement, all following variables of that type use it unless individually set to a different one .
The strings can of course be sent to a buffer instead of the console, if you need to parse
them for specific information to tailor your project (for example, if your system supports GLSL
300 es, as some do, you can load more advanced Shaders, or fall back to the base level 100 versions) .
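If you want to do that kind of check, the version strings come from standard GL queries; a minimal, hedged sketch (the parsing here is deliberately crude, and the variable names are mine):
#include <GLES2/gl2.h>
#include <cstring>
const GLubyte* glslVersion = glGetString(GL_SHADING_LANGUAGE_VERSION);
const GLubyte* glVersion = glGetString(GL_VERSION);
// crude test: does the shading-language string mention 3.00? (an ES 3.0 context is also needed to use it)
bool hasGLSL300es = (glslVersion != nullptr) &&
    (strstr(reinterpret_cast<const char*>(glslVersion), "3.00") != nullptr);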
Most of these are pretty familiar to us, a mixture of data types and flow control concepts
and used in pretty much the same way . We also have the ability to call functions and pass
arguments . So even though our control with if and else, is limited we have enough to make
decisions, and change flow .
There’s also quite a big list of keywords, which are designated for future use, I’m not
going to list them here, you can find them in any GLSL reference, but if you get an error
about reserved keywords…you’re using a reserved keyword .
You’re also not allowed to use variables or function names starting with gl_, which is
pretty obviously reserved for gl_typethings!!!
As Shaders are designed to do a specific job, there are a number of built in variables
for both vertex and fragment Shaders, which relate to that specific job, that is, putting
pixels on screen
Those relating to the Vertex Shader include, most importantly, gl_Position, which is the final
output value of a Vertex Shader .
For Fragment Shaders, the most important built-in is gl_FragColor, which we’ve already been using .
void main()
{
vec3 modelViewVertex = vec3(MV * a_Position); // Transform the vertex
into eye space.
vec3 modelViewNormal = vec3(MV * a_Normal); // Transform the
normal's orientation into eye space.
// Get a lighting direction vector from the light to the vertex.
vec3 lightVector = normalize(u_LightPos - modelViewVertex);
The actual calculations are reasonably short, so it’s not too hard to work out and we’re still
doing the usual things vertex Shaders do, like passing the texture coordinate to the frag-
ment Shader and setting gl_Position . But notice the two new lines, which are there
to calculate the very important varying v_Colour value . That’s the one the fragment
Shader really needs to make light cool . The v_Colour value provides an intensity value
for the fragment Shader to use so the pixels on or near that vertex will be suitably lit by
the light .
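Those two lines boil down to a single clamped dot product; a hedged sketch of what they typically look like (the Shaders supplied on the support site may phrase it slightly differently):
// cosine of the angle between the surface normal and the light direction,
// clamped so faces pointing away from the light get 0 rather than a negative value
float diffuse = max(dot(modelViewNormal, lightVector), 0.0);
v_Colour = diffuse; // passed on to the fragment Shader as a varying float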
All we’ve really changed in our fragment Shader is to add a slightly different calcula-
tion to get gl_FragColor . I didn’t bother to use the ambient value, though I still passed
it to the Vertex Shader, so I could still use it if I wanted to .
uniform sampler2D s_texture;
varying vec2 v_texCoord;
varying float v_Colour;
void main()
{
gl_FragColor = texture2D( s_texture, v_texCoord )*v_Colour;
gl_FragColor.a = 1.0;
}
By using the calculated v_Colour value we can alter the final gl_FragColor by more
than the simple value returned in the texture, making it darker or brighter, which is what
light or its absence does .
Of course we need to adapt our set up of our render a little in order to pass the attri-
bute info for normals in our VB, and also send extra uniform and attribute values, but
once added our new visuals become quite impressive .
So for now we’ll use this; it will work fine with our upcoming space ships, and we’ll assume
that our light source is the Sun, fixed at a point well outside our view .
Not too shabby is it! This is an example of vertex lighting, where the vertex Shader is
deciding the intensity of the light for the entire face (or at least a third of it), by distribut-
ing the intensity across the face’s vertices, it’s the fastest way to do light and it is indeed
very effective for soft lighting, as long as our faces are fairly small and we don’t have a lot
of contrast in the color values . But, be warned, it does have an impact on our frame rate,
this is one of those constraints we have with our relatively low powered systems . We can
do light, but the increased complexity of the Shaders needed to achieve it, takes more
time to process .
For our lower power machines vertex lighting is really the preferred system where
we must have light, but it is also possible to do a more detailed form of lighting called per
fragment or per pixel lighting, which as you might imagine is going to calculate the color
#pragma once
#include <GLES2/gl2.h>
#include "../Headers/ObjectModel.h"
#include "Frustum.h"
class Camera
{
public:
Camera();
~Camera();
glm::mat4* GetView();
glm::mat4* GetProjection();
The only thing not obvious to us at this point is the Frustum, but we’ll get to that later, it’s
not needed yet and will not be set up until I explain it . I’ve also added the variables to cre-
ate the Projection and View values here . They can become much more important later, for
now just make sure you set them up in the Camera Class you want, you could also provide
get/set systems for them .
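As a rough, hedged idea of what sits behind those getters (the member names below are assumptions, not the exact support-site code), the class usually holds a position, some rotations, and the two matrices, and rebuilds the view each update with GLM’s lookAt helper:
glm::vec3 Position;  // where the camera sits in world space
glm::vec3 Rotations; // pitch/yaw used to build a forward direction
glm::mat4 View;       // rebuilt every update
glm::mat4 Projection; // usually set once, e.g. glm::perspective(45.0f, 16.0f / 9.0f, 0.1f, 100.0f)
// a typical view rebuild; Forward is a unit direction vector derived from Rotations (assumed)
View = glm::lookAt(Position, Position + Forward, glm::vec3(0.0f, 1.0f, 0.0f));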
As you can see, it’s really pretty similar to a 3D model: it has a position that allows us to
place it in space, and some rotation values to deal with its orientation . We could maintain
a Model-type matrix for it too, but at the moment it’s not needed; our aim is to create a
new view matrix with each update and pass that on to our objects .
◾ A ChaseCamera, which is actually the same but with a larger offset and allows for
the player to be drawn just in front of you to which you might want to add a bit
of sway logic .
◾ An FPSCamera for First Person games, which we’ll talk about later and of course
the base…
◾ A FixedCamera or God mode, which as you can imagine views our game scene
from a fixed point .
We could create different instances if we wanted and switch between them . For now we are
going to use the 3DCamera, which is an example of an almost free camera allowing 360°
rotation in two axes . Feel free to add more types, smart cameras for example, which home
to a point can be made to feel like drone views .
All our model types so far have code to use the camera supplied View and Projection
matrix, just comment out the hard coded versions in your chosen model type (mostly
ShipModel) and uncomment the get from Camera methods .
Now compile and run…you should now be able to see our world as you did before, but
this time our camera is able to be moved around . All we need to do is give our Ship the
Yaw and Pitch values we expect .
◾ Our player will be in control of a ship in space, rather selfishly trying to harass and
rob other ships of their cargo . So we need control and visualization that allows that .
◾ Control should consist of thrust, braking (is so far as braking in space is possible),
and some type of left/right/up/down methods .
◾ We want to create a Space environment, not just a blank and empty space .
Space games are easy, all you really need to do is enclose your player in a massive space-
themed skybox/sphere* and populate a few space stations/planets in there… There’s not
a lot of rendering you have to do, just make sure that any object in your line of sight
is drawn in a proper orientation and scale and boom we have a classic Space game and
even worse, after all our playing with animation of models, ships don’t really animate .
But no worries, we’ll get to animation soon! But we can certainly put our model-loading
skills to use now; we’ve got a pretty decent framework of features in place to put anything
we need in .
We can rework the HelloCubes project to make our space game; it is after all basically
nothing more than a game loop displaying graphics . But it’s also rather limited because
of some oversimplification of the code . I knew we were only going to draw a few shapes of
models, so I didn’t make a very good job of allowing the different objects to communicate
with each other .
Let’s rebuild things, we still will use the basic idea of an ObjectModel Base Class
because we know it works well, but we’ll add a bit more control over our assets, textures,
Shaders, and more importantly create a few management classes to look after our objects
and not leave them in global space .
Encapsulating our models in a Manager Class will allow us to get a game object to
talk to or interact with other game objects; we’ll need that for collision tests and other
interactive points .
So it is time to download the slightly improved Flying Cubes project called SpaceCadet
from the support site and familiarize yourself with the layout .
The main thing you will notice this time is the full incorporation of the TinyObjLoader
files and a brand new constructor in the ObjectModel Class, which takes, not surprisingly
an OBJ file, which it then loads .
The project also does not allow the main entrance routine to control the workings of
the code, that was a very lazy hack, and I hope it made you uncomfortable . No? You’ve not
been paying attention then, that was one of those now infamous bad design choices…try
to keep up!
Instead, it now creates and calls a Game Class, which among other things is the new
home for the vector containing all our objects and the support classes we need for the
game to work . The entrance code of a project should never be responsible for any game
functions; its role is to start-up the game, not to run the game . Let’s never do that again .
I’ve added a little bit of extra code to our Model loader to load and manage material
files, which describe the surfaces of the faces, and, of course, any associated textures .
The code is heavily based on the examples provided by TinyObjLoader, modi-
fied to suit our loading systems and OpenGLES2 .0s slightly different way of handling
textures .
So we’re good to go, we can load any OBJ file, though please make sure they live in the
Resources\Models\ folder along with their associated .mtl and texture images .
Our space game is going to be fairly simple for the brevity of this book if nothing else, but
let’s establish the concept . You control a ship with some degree of pitch and yaw control,
using the most basics of momentum/impulse physics, and you have to attack other ships
in your area, which, in turn, will attack you .
Sounds simple? Well it is if we can come up with some logic that will allow the enemy
ships to dogfight reasonably well . Though in space it’s perfectly possible to spin/roll to face
your enemy, it’s much harder to alter your direction because of momentum . So we need to
either take that into account or ignore the laws of physics and have our ships behave in a
more forgiving, arcade-style way .
Ok, that should do it, let’s start with creating classes . We’ve got three ship types, let’s not
use the traditional reptile names, everyone does them, how about something less loved?
Amphibians! Those cute slimy little things? Here are our ships:
We can have a Newt Class, a sleek-looking Salamander Class, and one of our ships looks
similar to a big heavy class, so let’s make that a Toad Class .
We have to think about how much our ships have in common . We could give them
some special attributes, but for the most part they are all going to fly the same way, only
at different speeds because of differences in mass, thrust, and so on . Or in their weaponry .
Quite an armada?
Ok, we still have our simple look around controls on our Camera Class, which we’re
going to get rid of soon, but hopefully if you look around, you will see a few of your gener-
ated ships? If not don’t worry, they are there somewhere .
It’s time we created a ship of our own and gave ourselves some reasonable means of
flying around . As our view of the game is going to be from the ship’s viewpoint facing
forward, our own controlled ship itself is effectively the camera, so we are going to make
sure our ship actually has access to a Camera Class, which it can set to the ship’s positions
to give us a sense of flying in space .
Player* ThePlayer;
Add that to the Game.h Class definition . Then a small modification to our init creates our player as the
first object; make sure this is the first thing we create . The Player is important to us: we will
reference him by name, like we did with Fred and Bob, but the ship is still part of the
collective group of objects .
ThePlayer = new Player(&this->Handler, (char*)"../Resources/Models/
brian_03.obj", &this->MainModelManager);
glm::vec3 Pos = glm::vec3(0.0f, 0.0f, 0.0f);
float MScale = 0.95f;
ThePlayer->Scales = glm::vec3(MScale, MScale, MScale);
ThePlayer->SetPosition(Pos);
MyObjects.push_back(ThePlayer); // he's on the system for possible updates/
draws
ThePlayer->StoreGraphicClass(MyGraphics);
MyGraphics->Init(ThePlayer);
glUseProgram(0); // free the program
ThePlayer->TheGame = this;
You can see that I update the player and then the camera outside of the main loop, which
starts from 1, not 0, so the player is never going to be updated or drawn by this loop, so
long as he’s the first one created . There are certain advantages to making your character
the first in any list, this is a good example .
There are a couple of small issues with the 3DCamera, which you might not notice .
Mainly, without an offset we have essentially put the camera in the middle of our ship,
and if we leave our ship in there it might render itself around the camera if we decide to render the
ship (you should conditionally decide to render ships, more later) . Fun in one way, but it’s a
bit of a hindrance in gameplay to have a blindfold around us, so we have two simple options:
provide the Cameraoffset value in the camera so it sits just outside the ship, or make
sure we never render the player .
We could also choose to use the chase camera, which at first glance is identical to
the 3D-attached camera as it is dependent on the players’ position and offsets . However,
a chase camera should have a little more functionality, for example, having it on an
I will let you work out how to put this in a loop and complete the required info needed .
Note we’ve done this kind of thing before! When we wanted to make our KamaKazi
The Asteroid Class provided is expecting to use the SimpleTri Shaders, which works fine
but really are a bit dull; you might, however, want to use the UntexturedLit Shader pair,
I’ll let you work out how to change the class to cope with the extra needs of those Shaders .
Skyboxes
After all that complex stuff, let’s discuss something simple but very cool you’ve already
seen me using on screenshots for the light Shaders (e .g ., in the previous image) . A Skybox
can make a massive change to your game graphics and sense of being in an environment .
It’s also rather a clever trick, which relies on an end-of-the-rainbow effect: you can see
the rainbow, but you can never ever get to it . So how is it done?
So far we’ve considered models to be textured on the outside and we view them that
way, with no regard to anything inside them . But a Skybox has an interesting variation on
that, it’s designed to be seen from the inside and it’s also as you might imagine nothing
more than a simple box or cube, possibly without a bottom, because we often won’t see
that . There are other variations too, Sky Domes and Sky Spheres are sometimes used but
they increase the complexity a little, so I won’t cover them here .
One thing about a Skybox is that it’s usually quite large . Large enough for our game
world or at least a large section of it to be enclosed within it . In fact, what we are doing is
boxing up our world and then looking out from inside it, seeing the inside of the box in whatever
direction we look . As the box’s faces can never be reached, it’s always going to be like
a rainbow’s end: unreachable .
Of course, we don’t always have to encase our entire scene/world in a box, only the
viewing object, that is our camera, needs to be in a box, which as long as it moves with the
camera (but not rotating), the edges will never be reached . If the box was too big it would
understandably cause a lot of texture stretching . So we make a box big enough to seem far
away, along with a few perspective tricks . And we texture it with nice blue sky and clouds,
for a world base scene or star clusters and distant planets, for a space scene . Things we are
never going to reach stay just out of reach but we no longer have to care about the blank
space that exists where we are not drawing models .
It can be tricky if we have a small box, and we get terrain or buildings that exist
outside the box boundary, and then suddenly pop in to existence when you take one step
forward, but that’s a problem we can hide with larger boxes, or a distance fog system that
will allow far objects to fade into view as they cross into the box’s range .
The best way to see this is to do it, so let’s create a simple space box in our code . As
always, if we are creating a new thing, we should create a new class for it . Lucky for you I
already supply it, in the project it’s just not added to the solution .
Add the Skybox .h/cpp files to your solution in the source and header filters, and then
in the initialization of the Skybox .cpp file, make sure you load the textures in this order,
again I supplied the textures for you, and will explain where they were sourced shortly .
faces.push_back("../Resources/Textures/vr_rt.tga");
faces.push_back("../Resources/Textures/vr_lf.tga");
faces.push_back("../Resources/Textures/vr_up.tga");
faces.push_back("../Resources/Textures/vr_dn.tga");
faces.push_back("../Resources/Textures/vr_bk.tga");
faces.push_back("../Resources/Textures/vr_ft.tga");
This class is pretty much totally self-contained, but you will need to load your own differ-
ent Skybox textures for different games, so in theory that’s the only edit you ever need to
worry about . It is also possible to load a single cube texture, where the single texture has all
six faces laid out in a cross . But I think that may also introduce some wasted space in the
texture space, I’m not 100% sure about that but I prefer to avoid it . However, if you wish to
alter the cube map generation, there are some very fine free examples of single cubemap
textures available here:
http://www.humus.name/index.php?page=Textures
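For the curious, uploading those six faces as a single cube map usually looks something like this; a hedged sketch that assumes each face has already been decoded into raw RGBA pixels (width, height, and facePixels are placeholders, and the supplied Skybox class handles the real decoding):
GLuint cubeTexture;
glGenTextures(1, &cubeTexture);
glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTexture);
for (int i = 0; i < 6; i++) // faces in the order they were pushed: rt, lf, up, dn, bk, ft
{
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA,
                 width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, facePixels[i]);
}
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);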
We also need some Shaders, and once more I’ve provided SkyBox Shaders for you in
the ../Resources/Shaders/ folder . They are already set up in the Skybox init, but it’s a good
idea to have a look at them, because they are a little unusual: we use cube textures
and never send texture coordinates to them . Any thoughts why?
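The answer is that a cube texture is sampled with a direction rather than a UV pair; the fragment Shader typically looks something along these lines (a sketch, not necessarily the exact supplied Shader):
precision mediump float;
uniform samplerCube s_skybox; // the cube map we just created
varying vec3 v_direction;     // the untransformed vertex position, passed from the vertex Shader
void main()
{
    // the direction from the centre of the box out to the vertex picks both the face and the texel
    gl_FragColor = textureCube(s_skybox, v_direction);
}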
Interestingly, the SkyBox is one of the exceptions to the idea of the camera having a
constant projection matrix, that’s simply because it’s not actually part of the world; it’s not
trying to represent any object within the world space . So its projection matrix is quite dif-
ferent from a normal model and uses a different set of projection values .
glm::mat4 Projection = glm::perspective(85.0f, 16.0f / 9.0f, 0.1f, 1000.0f);
With distance set at 1000 units or higher, this gives a good depth . We will still be using
the View matrix of the camera though, which will let us look up, down, and around in our
enclosed gameworld giving us a sense of motion .
Now with a SkyBox in place, even if you have no enemy ships visible, you have a sense
of depth and position now, so as you move around you can get a sense of where you are in
space . You should be getting a pretty decent idea that you are moving around, and have
some sense of flying .
I also mentioned Sky Domes and Sky Spheres, which as you can imagine work on a similar
principle but are a little more processor heavy, encapsulating our world in a full or semi-
sphere over the top . As semi or full spheres are geometrically a bit more intense, they can
take a bit more processing, but they are going to be living full time in our GPU buffers,
so they are not really going to make a massive impact once it’s set up . A Sky Dome is not
normally going to have anything but the most simple of Shaders . They also tend to be able
to use a single texture, unlike the box that needs six, which can be a win . The decision on
which method to use is up to you, I find a well-defined SkyBox to be just as effective and as
fast to render as a Sky Dome in nearly every case .
But even at 512 × 512, and with obvious stretching, we get a nice effect, and we’re going to
add more improvements soon . I’ll leave it up to you whether you leave this as it
is or try to reduce the stretch (two clear options: reduce the display buffer size, or find
higher resolution textures) . Is it making your skin crawl?
GUI
GUI stands for Graphical User Interface; essentially it’s the part of the screen that passes
information to us, and it may also have buttons or boxes we can click or swipe that
allow us to interact with our program . In games we also have HUDs, a lovely old military
term for Head-Up Display .
If you don’t have interaction with your game then you have a HUD, which gives you
all the information your game needs to pass to you . If you do have interaction you have a
GUI . But more often than not lazy programmers use the terms interchangeably, mostly to
annoy designers .
In one very useful way, we’ve pretty much done a HUD, without actually realizing
it . Our Single screen texture, 2D screen, double buffer idea, is effectively a HUD, we can
Screen or Render
A GUI can also be rendered, using an orthographic projection matrix; that was the basic
principle of our double buffer, though the projection matrix wasn’t actually defined . If
we’re going to use orthographic projection in a 3D game, drawing a GUI is the most usual
place to use one . GUIs can be pretty big though, perhaps even screen size, but this time we’ve got
our graphics in GPU memory, unlike our double buffer . Dynamic allocation, where we
created new textures every frame, is to be avoided; we should try as much as possible to not
alter the texture, which would cause an FPS drop . There will always be situations where parts of
a texture need modification, for example, updating power meters, or score counters, and
so on . So we have to give some thought to how much of the screen is going to update and if
there are ways to reduce the making of new textures, perhaps by breaking the screen into
multiple smaller textures, where most are not in fact updated .
The easiest thing to do is to keep the variable parts of the GUI separate from the static
parts . Take a look at the GUI3D demo on the site, it has a simple ship screen overlay we are
going to use, most of the texture is indeed screen size, but there are holes in the texture,
where status bars are updated and radar is displayed . The dynamic parts will be updated
differently from the static parts .
3Dfont
3D games also need us to display text, and we could continue to use the old simple 2D tile
font on a surface and draw the surface exactly as we did in 2D, but CPU writing to pixels
buffers is so last week now . We really need something a bit more flexible, but still fairly
simple . Essentially, we’re going to do the same basic idea of keeping the font as a set of tiles
in a texture, but this time allow the Shader to get to the correct point in the texture to draw
the relevant tiles to create text . All we need to do is create a small quad of six vertices per
tile, map the texture UV for the required letter (making sure to not bother with space),
and build up a small buffer of these vertices that then get drawn with a single draw call .
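As a hedged illustration of the idea (a 16 × 16 tile font texture is assumed here, which may not match the supplied images exactly), the UV rectangle for a character’s tile can be worked out like this:
const int TilesPerRow = 16;
const float TileSize = 1.0f / TilesPerRow; // width/height of one tile in UV space
char c = 'A';
int index = c - 32; // assuming the atlas starts at the space character
float u0 = (index % TilesPerRow) * TileSize;
float v0 = (index / TilesPerRow) * TileSize;
float u1 = u0 + TileSize;
float v1 = v0 + TileSize;
// u0,v0 to u1,v1 become the texture coordinates of the six vertices that make up the tile's quad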
I’ve included a Font3D system on the support site . It’s really quite basic but it will
allow us to print using our 2D font images and we can also change font images to get
different text styles and sizes . Just be sure to set the background color to an alpha of 0, or
we will draw the whole tile rather than just the pixels that we want to see .
Notice, though, that when using it, it is not actually drawing in world space but in screen or clip
space, so it will always appear a little detached from the scene; perfect for debug text and
direct user info like scores .
Feel free to experiment with the Shaders to give some nice effects and maybe add
some MVP functionality to put text in the game itself, hint! (This is how you can do 2D
sprites with hardware)!
Culling Concepts
But no matter how simple our collision primitive is, we may still need to test a lot of things,
and for the most part the test will return negative . A world with 30 characters in it means
each character has to test the other 29 to see if it hit it, regardless of it being on the other
side of the game world . That’s a high degree of negative results; in fact, most of the time
your tests are going to return no useful interactions .
It’s very much in our interest to reduce the number of pointless negative tests and try
to isolate the most likely to collide . This is the basic principle of a collision cull . There are a
lot of methods that allow us to isolate objects of interest in nodes or cells . Grids and Quad
Trees are fairly simple well-documented systems to keep track of an object based on its
world location . Using that we can get every object (they still all need some kind of test) to
only test objects in its own immediate space . They work by using a cell of data representing
a small subspace of the world, in the structure that keeps track of only the objects in the
immediate area, so each cell has maybe two or three objects in it, those objects, in turn,
only need to test other objects in the same shared cell .
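A hedged sketch of the simplest version of this, a uniform grid lookup, might look like the following (the world extents, cell count, and names are illustrative, and pos is assumed to be a glm::vec3 position):
const int CELLS = 16; // the world is split into CELLS x CELLS cells on x and z
const float WorldMin = -512.0f;
const float WorldMax = 512.0f;
const float CellSize = (WorldMax - WorldMin) / CELLS;
int cx = (int)((pos.x - WorldMin) / CellSize); // which cell does this position fall into?
int cz = (int)((pos.z - WorldMin) / CellSize);
if (cx < 0) cx = 0; if (cx >= CELLS) cx = CELLS - 1; // clamp in case an object wanders off the edge
if (cz < 0) cz = 0; if (cz >= CELLS) cz = CELLS - 1;
// this object now only tests against the objects registered in cell [cx][cz] (and perhaps its neighbours)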
What we are essentially doing is using a data structure to direct our tests, that struc-
ture has to be built, typically at the start, and occasionally updated, so some processing
is needed, but usually, less than the wasted processing on mostly negative collision tests .
Having a Quad Tree or similar space partition system, will reduce the number of
complex collision tests, though Quad Trees have some issues when you have a lot of objects
moving around, rebuilding the Quad Tree is a little time-consuming . However, even one
or two complex collision tests could take as long to do as building the Quad Tree that
would prevent those one or two tests . It’s simply a question of balance, as much as possible
we want to avoid doing the expensive collision tests and an equally expensive but one-time
calculation to prevent multiple pointless tests is highly desirable .
Working out which objects to test, is a good way to decide if we want to do collisions,
but we can also do a very simple, and very quick broad-phase test to decide further .
Let’s talk about these partition systems in a bit more detail .
If we examine the image above, assume that the whole square represents the entire world
we want to render . We can see a load of model objects (30 hexagons) .
Now cut it into four equal square sections, we will call them nodes . Some nodes con-
tain more objects than the others . Now take the bottom-left square and divide that into
four equal nodes .
You can see that for the most part I only need to go two nodes down to get a situation where
there are four or fewer objects in each node, but the top corner has six (Seven, if you choose
to consider overlapping objects), so it needs another subdivision . And if my heart were in
that area, I’d need to go down three levels to find the objects . So, at most I have to test four
objects wherever I am in the world…often fewer, or none .
Incidentally, notice in that last diagram that even though the top corner had a quad
in it, the rest of the world was subdivided in equally sized spaces . That’s a simple grid! But
back to quads .
This is a very common concept for collision, especially in 3D where collisions with
the environment can be tricky . But it can also be used for rendering . We can keep info
on models, or individual triangles, once we know which area we are in, we can select the
vertices in our area of interest .
Octrees work exactly the same way as Quad Trees, but represent 3D space, and as you
might imagine use cubes with eight smaller cubes in each node . These are much more useful
if you have an environment that has a lot of relevant data in the y dimension: buildings
with stairs and balconies and multiple levels, for example .
We can use such systems for keeping track of objects, even allowing them to move
around a little, or for much larger data structures such as the individual triangles that
make up our environment meshes .
Sphere-to-Sphere
We’ve used the circle-to-circle collision system in our 2D games and for the most part
they gave a pretty decent resolution that didn’t really need much refinement . We can easily
expand that 2D system to produce an encapsulation system to see if a sphere that wraps
our object as best it can, intersects in any way with another object of interest in our imme-
diate area . This kind of test is marvelously fast, but not really very accurate, unless we are
actually using spherical objects .
But as a first-pass test to see if we should dedicate the time, it really is super, and for
many games where high-resolution collision is not needed it can be more than enough .
Remember this?
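As a quick, hedged reminder of what the 3D version of that test looks like (the names are mine; comparing squared distances avoids a square root):
#include <glm/glm.hpp>
// true if two spheres, each given by a centre and radius, overlap
bool SphereToSphere(const glm::vec3& centreA, float radiusA,
                    const glm::vec3& centreB, float radiusB)
{
    glm::vec3 diff = centreA - centreB;
    float distSq = glm::dot(diff, diff); // squared distance between the centres
    float radii = radiusA + radiusB;
    return distSq <= radii * radii;      // cheaper than comparing sqrt(distSq) with radii
}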
3D Particles
When we did particles in 2D it was actually pretty simple to create emitters and build a
vector of the particles as they emerged and do the simple bit of code to move them around,
decay them, and kill them off when done .
There’s really no reason at all why we can’t do the same things in 3D, except…there
kinda is .
We’ve seen how much faster things are when we use data in our GPU buffers; the
speed difference is simply immense . Those speeds rely on the fact that the vertex data
being supplied is essentially static in the GPU; we can’t in effect change it once it’s there .
Instead, we rely on a set of CPU side computed, model, and view and projection matrices
to reposition our objects, so that we can see them where we expect them . Three matrix
writes is not so hard for our system to work with and from that, thousands of calculations
are done in the GPU .
But particles are far more active than our models, and we will have many more of
them, several thousand at least to get good effects . Their positional data are more dynamic
and totally nonstatic unlike a model’s vertices, so we can’t keep them in the VBO . To draw
them, we need to send positional data from the CPU to the GPU every frame…and yeah,
that’s not desirable . But if we can’t escape the fact we need to shift a lot of data to the GPU,
let’s at least make sure that the GPU takes some of the calculation load off the CPU by
doing as much of the maths as possible, so that we will arrange for the data we send to be
relevant to the particle .
Information such as age, time, position, and velocity all need to be used to work out
the final position of the particle . So, we will pass them to the Shader .
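To give a feel for what shifting that maths onto the GPU means, here is a hedged sketch of a particle vertex Shader that rebuilds a position from a start position, velocity, and age (the attribute and uniform names are mine, not the support-site Shaders):
uniform mat4 u_VP;          // combined view-projection matrix
uniform float u_Time;       // current time, in the same units the particles were spawned with
attribute vec3 a_StartPos;  // where the particle was emitted
attribute vec3 a_Velocity;  // its initial velocity
attribute float a_SpawnTime; // when it was emitted
void main()
{
    float age = u_Time - a_SpawnTime;
    vec3 pos = a_StartPos + a_Velocity * age; // simple linear motion
    pos.y -= 0.5 * 9.8 * age * age;           // plus a touch of gravity, if wanted
    gl_Position = u_VP * vec4(pos, 1.0);
    gl_PointSize = 4.0;                       // drawn as point sprites
}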
Just like our 2D software particles though, there’s no getting away from the fact that
the more particles we have the more work both our processing units have to do . There are
ways of working with unlimited numbers but this is one of those discretion, is the better
part of valor situations where we put a sensible limit on the number of particles we can
have, and use them sparingly for the best effects .
The support site contains a 3D particle system you can play with, using similar
plumes and cascades, sending both particle info and emitter info to produce different
The Wrap Up
That’s pretty much all I want to do with this particular space game . There are one or two
rather obvious things wrong with it, such as the lack of full 360° motion in the Z axis .
For now, that simply represents a limitation of using traditional Euler-based angles . It’s not
impossible to overcome but really we want to get to a point quite soon where we no longer
rely on them for our calculations . But looking back on this and the previous foundation
chapter, we’ve again learned a number of notable concepts
Not least of which is 3D rendering but also
◾ Shaders
◾ Light calculations
◾ GPU optimization
◾ Simple 3D collision
◾ Ray Casts
◾ 3D particles
◾ GUI rendering
◾ 3D vector motion
◾ Skyboxes
◾ Matrix maths
These are all basic tools to add to your arsenal and will crop up again and again in every
3D game you do . In addition, quite importantly we’ve discovered that we have a major
limit in our understanding of maths when it comes to camera control; we are going to
cover that soon .
Some of the basic features of all 3D games developed in here are going to be com-
mon to all our new 3D games, so from now on our 3D framework is going to keep hold
of some of these features as standard . I will rename and tidy up the ShipModel Class,
because really it’s a standard OBJModel style class, and not just a ship . To make it a little
more flexible, I’ve also added a new pair of Shaders called CondOBJLight .vsh/fsh . Though
conditions should be avoided as much as possible, especially in our low power Shaders, the
flexibility they give us makes up for a slight drop in performance . If you decide, however,
to always use the lit or unlit systems, you can adapt the OBJModel Class yourself to pro-
vide the Shader you need (Ultimately that’s a better solution for performance) .
We will need Cameras too; at least the base class should be a common construct, and
finally Skyboxes . These will all be put into a basic VisualGDB/Visual Studio template,
which I will call Standard3D, that you can download from the support site . It gives you a starting
system that you can add to as we go; when you create a new project, it will let you create a
project from a custom template in the New Linux Project dialog, and then navigate to the
template file .
We’ve still lots to learn though, but so far, so good . Let’s press on!
Space, as I said, is easy, and flying through the air is also not too hard, so long as you avoid the
crashing-into-hard-things-and-blowing-up-in-flames part . But moving around landscapes
that look like landscapes and are then expected to behave like landscapes is some-
what harder, because we need to keep track of a lot more collidable things .
A lot of our 2D thinking can be used for games that keep a gravity bound character
moving around a play area . But realism means we need to try to make sure everything
in our environment behaves as much as possible as it should be in life . Trees should have
leaves, grass should wave majestically in the wind, and water should ripple . These things
all take effort from the CPU and GPU and we will face brick walls if we try to do too much .
Perhaps more importantly, if we are creating environments we are essentially attempt-
ing to create some form of realism, in which case, our rather static OBJ models are not
going to work as well for us . It’s time we looked into model animation, which in 3D is not
quite as straightforward as it was in 2D .
With a 2D game, there are only the frames given to us by the artists; we display them, in
turn, in the correct sequences, perhaps even having the benefit of a few intermediate frames
to allow a transition from one sequence to another . Animated 3D models are harder
to work with: they are not direct representations of what our artists produce, but rather
mathematical representations of it, so we are now faced with other chal-
lenges and opportunities . Our frames are not just static images; they are more dynamic,
are visible from any angle and distance, and are generally less able to fool our eyes the way
2D images do . We need to take more care of not only what we draw, but how we draw it .
That can be quite a challenge . Animation, in particular, gives us a lot of interesting options .
We’re going to spend quite a few pages now discussing some of the technical chal-
lenges we need to deal with to introduce this sense of realism and bring in more usable
models and playfields, there’s no way to sugar coat this, we’re going to be going through
this stuff for a few days before we can get back to writing a proper game again . Don’t skip
it, tech stuff can be dull but the benefit of understanding it will allow us to make the games
run faster and smoother and squeeze more content in .
Animating Models
There are several ways to animate models; the simplest, like we do with 2D, is to just hold
many versions of the model in different poses, though usually using the same materials
and textures . If your models are not too massive then this can be quite an effective system .
Limitations of OBJ
Now because it is a simple and easy to use format, OBJ does have a number of limita-
tions, one of the biggest drawbacks is that it has no actual support for multiple animation
frames in the format . It does not contain information that other formats have that allow
us to process rig movement or cycle through frames, often allowing common vertices to
be unchanged and further adding to compression . So the simplicity of OBJ will eventually
prove to be a problem when we want to do a lot of models or have a model with a massive
number of frames .
An OBJ isn’t always composed of a single mesh though; one good thing in its favor
is the fact that an OBJ can have multiple component meshes stored, which make up the
whole . If you look at our render systems’ code, we do actually render multiple shapes
inside a loop . These shapes are named, therefore there are ways to isolate individual shape
components and run them through a different MVP, creating some degree of motion
inside an OBJ, such as rotation of or bouncing of wheels, which you might want to play
with later . But actual animation in the sense of moving from one pose to another isn’t in
the OBJ bag of tricks .
That’s it, the file format is simply a guideline to the way the data in the first part of our binary
image, the header, are laid out . Once we load that binary image, we can overlay these values
onto the first section of the data and consider that to be the header; from that, we know that
the header gives information that allows us to make sense of the main body of data .
MD2 is really a very simple format, essentially it works in the same way as OBJ with a
list of vertices, some indexes to them, and its own reference to materials and textures, but
where OBJ stores only one frame in a file, MD2 can store multiple frames and we can index
through those frames to get to the relevant data . It also contains some other cool bits of
data that we’ll try to get to as we go through it .
MD2 is outdated, of course, and the exporters needed in our graphic tools for this format
are also old, but with a bit of hunting they are available for most free and commercial 3D
packages, notably Blender, which is very popular with the Linux community and is free .
There is still a wealth of models out there, and building our own MD2 loader and ren-
dering systems will give us a good grounding in how to implement MD3/4 and maybe
even MD5 systems if you choose to use those .
I would not be too concerned with things being outdated, remember that we need to
keep our expectations realistic . Our SBCs are not current generation PCs or Consoles, or
even previous generation . They have limits, that we have to work within, and basing our
Switching frames like this clearly works, but it does seem a little jagged and abrupt, not
at all like the smooth motion the online viewer gives us . There are two reasons for this . If
we set the update rate at 5/60 s, that basically means that for 5 screen frames we draw the
same image before abruptly going to the next one, for another 5 frames . The abruptness
is because an MD2 animation sequence is generally saved with the minimum number
of frames needed . We know, though, that the faster the update rate of an animation the
smoother it seems to us; but if we advanced one animation frame every screen frame, a 6-frame run
sequence would play out in 6/60 of a second . That would look faster than a Keystone cop! So
our timing is actually needed, but we can introduce an element of inbetweening, where we calculate
the frame between two frames and produce an inbetween frame; in fact, if we want to get very smart we can
produce an extreme number of inbetween frames, but a set number of frames between a
first and a second frame, spread over the time, will smooth out animation, without us
having to install another costly memory sequence of vertices . At each cycle, the display
float Time;
float TimeStep;
private:
int QNextBase;
int QNextFinal;
AnimType NextType;
};
Nothing too complicated here is there? The basic types of animation are the key; the com-
ments explain what they do pretty well . I’ll give you the update function, but am sure you
can work out what the other functions are for yourself (don’t look at the final versions until
you’ve had a try) .
bool MD2Anim::Update(float dt) {
// excerpt: the time-step accumulation is omitted here, and Type is assumed to be the member
// holding the current AnimType
switch (Type)
{
case HOLD:
{
NextFrame = CurrentFrame;
break;
}
case TRIGGER: // trigger is basically a cycle but without timestep
case CYCLE:
{
CurrentFrame += Direction;
if (CurrentFrame > FinalFrame) CurrentFrame = BaseFrame;
break;
}
}
return true; // no issues reported
};
You can see that as well as the current frame, it calculates the next frame so that we can
lerp . So with these in place all we need to do to trigger an animation is
Animation.SetSequence((char*)"run", this ); // update using default speed
and CYCLE
Of course, this assumes you know what the name of the animation is, but that’s easy
enough to find by asking the MD2Model->animations map for the names of the anima-
tions that are stored there . BTW, MD2 models don’t usually have their weapons included,
so they are normally a second model, but use the same animation names, which will allow
you to sync a sword/gun/broomstick movement in a model’s hand . But you will also have
to remember to provide the weapon with the same MVP data as the holder, so that it can
be rendered in the model’s hand, however it is oriented . It is a little wasteful, which is why
in MD3 the weapons were an extension of the main model . Something to do later when
you are tired of MD2 .
The Demo program starts up by creating a knight as a raw MD2Model; we already
know that is not good form, but it helps with the testing . It’s time to create a proper
Knight Class derived from MD2Model, and a Player Class that can be derived from Knight
so that we can control it . That way we can have a player Knight and an enemy Knight .
#pragma once
#include "MD2Model.h"
class Knight: public MD2Model
{
public:
Knight() {};
Knight(MyFiles* FH, char* FN, ModelManager* MM);
~Knight() {};
bool Update(); // we only need to supply the update, Draw is part of
MD2Model
};
So all I really want to create are a constructor, and an update like this
#include "Knight.h"
All my constructor is doing is using its base class constructor with the same signature,
then setting the sequence to animate, just so it has something to present at the first update .
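For reference, a minimal sketch of what that constructor might look like (the sequence name and the exact SetSequence call are assumptions based on the earlier example, not the exact support-site code):
Knight::Knight(MyFiles* FH, char* FN, ModelManager* MM)
    : MD2Model(FH, FN, MM) // let the base class do all the loading work
{
    Animation.SetSequence((char*)"stand", this); // give it something to present at the first update
}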
All we need to do now with our Knights is move them around; this will vary depend-
ing on our game, but clearly stand, run, attack, jump, should be pretty simple, and be used
when we make decisions on our actions .
The Player code is, of course, exactly the same, just substitute the Player for Knight,
but I can choose to make him a knight, or a Princess or an Italian plumber, when I create
them . Instead of AI, we’ll use key controls to move him/her around triggering animations
as we need them .
The players’ animations are equally straightforward and directly tied to key controls .
Less obvious might be taunt, wave, and point? These would be more likely to occur at spe-
cific game points . In this case when standing idle for too long . Keep a timer and if the user
does not input a key value, select one of these sequences to get their attention .
Try to do your own animation selections, and take note of the concept of queuing,
some of these animations especially the taunts, should return to another sequence
when done rather than stop or loop . That has not been added to the supplied anima-
tion class . Also, take care of some of the sequences, which can have multiple subse-
quences, just to make life extra interesting . You just need to take note of where they
start and end .
Having our models animating and moving under control is a big step, but without
some form of ground to walk on it is little more than a sprite floating around in an empty
universe . If we plan to have a character animating and doing things in a world, we have to
take another big step and talk about environments .
Explaining Environments
Although not all games need a big world, there does need to be some virtual space concept
where the objects you control/avoid/interact with, can exist and your camera can view .
The type of environment you choose will vary according to your game . Most 3D free-
moving games will require a world of some kind and, in turn, that world will probably
have a data structure behind it, depending on its size, it might use an optimization system
such as a partition tree to access it for logic and sometimes rendering .
I’m once again going to have to explain a few things before we can leap into our next
3D game . Sorry, I know these long explanations can get a bit boring but they might help
you to understand the route we are taking despite you having possible knowledge of better
routes . The road to coding enlightenment is often paved with some really dumb ideas we
thought were clever at the time and experience eventually shows us how stupid we can be .
3D space was easy because our expectations of what happens in space are not fixed
in our brains, not too many of us have actually been in space! But environments that we
allow things to interact with, in more or less real-world style, require a whole host of con-
siderations that we must try to work out and make as realistic as possible . So let’s try first
to outline some of the technical things we have to incorporate .
Level of Detail
The GroundPlane Class touches on the idea of altering the level of detail, with its MAX_
TRIS value, but what exactly does that mean? LOD methods basically take into account
the fact that the further something is from our viewpoint the harder it is to make out
detail, so a low-detail object far away is not going to look much different from a high-detail
object equally far away . This can be apparent if we have a high-poly model at the extreme of our
view distance; it may have a few thousand triangles, but it’s probably only going to look as though a few
dozen were rendered . Our GPU is still obliged to try to draw the thousands of triangles even
if the result ends up looking like a blob because of the distance . If, however, we had a selection
of models at lower levels of detail, we could ask the GPU to render an appropriate model
depending on the distance from our view point .
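The selection itself is usually nothing more than a distance test; a hedged sketch (the thresholds and names are illustrative only):
#include <glm/glm.hpp>
// pick which of (an assumed) three LOD meshes to draw, based on distance from the camera
int ChooseLOD(const glm::vec3& objectPos, const glm::vec3& cameraPos)
{
    float dist = glm::length(objectPos - cameraPos);
    if (dist < 20.0f) return 0; // full-detail mesh, close up
    if (dist < 60.0f) return 1; // medium mesh
    return 2;                   // low-detail mesh for everything further away
}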
If done well with a good selection of LOD models, we would not really notice the change
in mesh . But there is, of course, a downside to these LOD models, they will take up space .
The compromise between performance and memory always comes into play . But perfor-
mance nearly always wins the debate if memory is available, and we should try to make it
This might be clearer to see if we render these models as GL_LINE_STRIPS rather than
GL_TRIANGLES, which is a quick, cheap, but not very accurate way to replicate wire-
frame on GLES2.0 .
But we’re not talking models yet, we’re talking about our ground, and our ground is cur-
rently one single variable-sized mesh, so there’s no (simple) efficient way of operating a
LOD system at least on an environment we design and is mappable, though procedural
generation using systems such as ROAM* can be very effective . It would be fairly trivial
* ROAM (Real-time Optimally Adapting Mesh) is a level-of-detail algorithm that does, in fact, allow us to cre-
ate variable detail in a terrain around the area that is visible from a camera . It works best with procedurally
generated terrains and is not easily implemented in a user-mapped environment . But can be a great method
of creating endless wilderness-based terrains that can render in real time . Look it up, it’s very easy to code,
but not quite suitable for the games we are trying to produce .
Mipmapping
This brings us to a concept called Mipmapping . Mipmapping is a means of taking a basic
texture, halving its resolution by simple but precise scaling that reduces its level of detail,
storing that in a way our GPU can easily access, then taking that and making a quarter-
sized version, then an eighth, and so on .
This image has been taken from Wikipedia, used under the GNU-Free Documentation
License .
The effective result is that you have one main texture and four or five progressively half
sized slightly reduced quality versions . The scaled versions clearly are lower res, but repre-
sent a better texture to use if you are viewing the texture from a distance or using an LOD
model, where the main image no longer needs to be scaled by so much .
This can dramatically improve the overall quality of the texture sampling at different
view points, but does come at the relatively high cost of around 50% more texture memory
and:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,
GL_LINEAR); // the mag filter can only be GL_NEAREST or GL_LINEAR; mipmap modes only apply to minification
And you will now be automatically using MipMaps every time you load an OBJ with
TinyObjLoader . Notice the speed increase? Any increase in speed is well worth a bit of
effort, but take care, not to fill all your GPU Ram . You should also consider making the
Mipmapping optional on models you are not likely to scale or show moving away from
the camera .
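A minimal sketch of what that optional path might look like in a texture loader; the useMipMaps flag and the other names are assumptions, not TinyObjLoader's or the book's loader.
// sketch: upload a texture and (optionally) build its mipmap chain
// 'useMipMaps', 'pixels', 'width' and 'height' are hypothetical names
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
if (useMipMaps)
{
    glGenerateMipmap(GL_TEXTURE_2D); // GLES2.0 builds the half, quarter... versions for us (power-of-two textures only)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
}
else
{
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);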
Mipmapping also highlights the fact that the GPU really prefers to use smaller textures when it's rendering the pixels, ideally as close to a one-to-one relationship as possible. Even without Mipmaps, using the smallest possible texture can give a big speed improvement. It's again down to balance: the more we can fit in a texture, the easier it is to add detail to a model, but the bigger the texture is, the more work the GPU has to do to scale the texture to fit the resolution of the pixels it's trying to draw.
Filtering
We can’t talk about Mipmapping though without talking about filtering, which relies on
us understanding, that when in GPU memory, our texture pixels as we see them, are not
actually pixels as the GPU sees them, in GPU memory they are called texels, and they
essentially act as the data we are going to store in the pixels of our framebuffer, which will
be displayed .
Normally, in an ideal one to one scale situation, 1 texel will equal 1 pixel, but if you
operate any kind of scaling you have to use more or less texels to make your pixel?
If you scale by four, for example, then your 1 pixel now has to occupy 16 pixels (4 × 4)
but there is only 1 texel in the GPU that corresponds to the pixel so that texel gets repeated
in all 16 pixels, that’s an example of magnification .
At the other extreme is when you scale down, called minification, where you have a
situation where you may have more texels going into 1 pixel, and something has to be done
to merge the different texels into the 1 pixel .
Both magnification and minification have downsides, they can create sharp blocky
graphics on magnify, and, of course, you lose detail in minification .
We can, of course, make use of a simple OBJ as our terrain, so long as it's not too big and does not contain billions of triangles; this can be very effective for games with small playfields, and, just like 2D tiles, it's possible to stitch lots of them together to make larger worlds.
Render Culling!
It’s a pain that two different concepts share the same name but in one sense they are doing
pretty much the same thing, only for different reasons. You've already seen the massive benefits of sending all the data to the GPU and letting it do the drawing; it's so much faster than CPU-style drawing. But now that we have environments, you must have noticed we're sending EVERYTHING to the GPU and letting the view systems decide what does and does not get drawn; internally, every single vertex and its associated fragments are being tested to see if they need to be drawn.
But sending everything is really rather wasteful, we may be filling our GPU memory
with hundreds, maybe even thousands of vertices that we’ll never see . We know our tar-
gets have quite limited GPU power and memory, they are not going to be able to keep up as
our environments get bigger and bigger with more and more models and more and more
vertices being sent to the limited memory we have. Even the biggest PC video cards have limits; we have much stricter limits, so, so much stricter!
So we need to bring our CPU back into play again, and allow it to make some general
decisions about what it does and does not send to the GPU for processing .
The concept of render culling is simple and rather obvious; we've done it already with our 2D systems and with our collision systems. Just as collision culling prevents us doing collision tests on things that cannot possibly be touching, render culling prevents us sending things to the GPU that cannot possibly be seen.
[Figure: the view frustum, bounded by the near plane and the far plane.]
* I’m really hoping by the end of this book you will start to push your boundaries and find ways to really
optimize your rendering, but the scale of the projects here just won’t challenge you to do that .
Build and run . You should now be able to see a few of these new Asteroids not too far
from your start point . Our space game was running at a reasonable number of frames
depending on your target, because there were only four or five ships that are active at any
one time . My Pi 3 was doing 50+FPS quite happily . But the addition of these rather large
asteroids has caused the frame rate to plummet. Assuming you have added the Frustum Class and correctly updated the Camera Class, uncommenting the test at the start of the Asteroids Draw function and building again will see that frame rate go up again.
Our 20 asteroids are still there spinning around doing nothing but looking pretty,
but we’re just not drawing the ones we can’t see . This is a principle we can now add to any
model we plan to draw, with the exception of the SkyBox, or a single mesh terrain, because
they are (nearly always) going to be in our view, so testing them is worthless .
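As a rough illustration (not the book's exact Frustum Class API, whose method and member names may differ), the gate at the top of a Draw function can be as simple as this sketch:
// hypothetical sketch: skip the draw entirely if the bounding sphere is outside the frustum
void Asteroid::Draw(Camera* cam)
{
    // 'SphereInFrustum', 'Position' and 'Radius' are assumed names for this sketch
    if (!cam->TheFrustum.SphereInFrustum(this->Position, this->Radius))
        return; // not visible, send nothing to the GPU

    /* ...normal draw code: bind buffers, set uniforms, glDrawArrays... */
}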
Why not try to add a bit more gameplay to your Space Cadet game by shooting the
Asteroids and giving them some collision properties, so that they can bounce off each other?
1. Newton's first law, sometimes called the law of inertia: an object at rest will remain at rest unless acted on by an unbalanced force. An object in motion continues in motion with the same speed and in the same direction unless acted on by an unbalanced force.
2. Newton's second law: the acceleration of an object depends on the net force acting on it and on its mass; force equals mass times acceleration (F = ma).
3. Newton's third law is the one everyone knows; <translated from Latin> to every action there's always opposed an equal reaction: or the mutual actions of two bodies on each other are always equal, and directed to contrary parts.
That will take a few minutes, but once done you will have installed the header files
and library files you need to include and run Bullet. You then need to add the four core libraries to our list of library names, (1) BulletCollision, (2) BulletSoftBody, (3) BulletDynamics, and (4) LinearMath, in our VisualGDB library list! These are
“* .so” libraries, so you will have to be careful if you distribute your project to ensure that
the user installs this also, or that you also supply them in your distribution . Also remem-
ber to add to your list of include directories .
/usr/include/bullet
At the time of writing, this gives us version 2.82. There is a 2.86 version available, but you may have to build it from source and I'm not sure if we really need to worry about that; the later 2.xx versions don't contain any significant differences, and we're not going that deep into them. So stick with the current installable version for now and update when you feel a bit more confident. Bullet is due to make a transition to GPU-based calculations when it becomes version 3. That will make it much less likely to be usable for us, as the vast majority of that work will rely on GPU compute features our little boards don't have.
X = RotationAxis.x * sin(RotationAngle / 2)
Y = RotationAxis.y * sin(RotationAngle / 2)
Z = RotationAxis.z * sin(RotationAngle / 2)
W = cos(RotationAngle / 2)
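Those four lines are the standard axis-angle to quaternion conversion. In practice Bullet will build the quaternion for us; here's a minimal sketch, where the axis and angle are just example values:
#include <btBulletDynamicsCommon.h>

// build a quaternion for a 90 degree turn about the Y (up) axis
btVector3    axis(0.0f, 1.0f, 0.0f);
btScalar     angle = SIMD_HALF_PI;   // Bullet's pi/2 constant
btQuaternion turn(axis, angle);      // same maths as the X, Y, Z, W lines above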
Setting Things Up
Adding #include <btBulletDynamicsCommon.h> to our Game .h file allows us to
create these members in the Game Class, public or private, it’s up to you .
btBroadphaseInterface* BroadPhase;
btDefaultCollisionConfiguration* CollisionConfiguration;
btCollisionDispatcher* Dispatcher;
btSequentialImpulseConstraintSolver* ConstraintSolver;
btDiscreteDynamicsWorld* DynamicPhysicsWorld;
In the Game Class itself at initialize, we’ll create these things using some basic functional
concepts, other options are available though as we will discover .
// create the main physics systems
BroadPhase = new btDbvtBroadphase();
CollisionConfiguration = new btDefaultCollisionConfiguration();
Dispatcher = new btCollisionDispatcher(CollisionConfiguration);
ConstraintSolver = new btSequentialImpulseConstraintSolver;
DynamicPhysicsWorld = new btDiscreteDynamicsWorld(Dispatcher,
BroadPhase, ConstraintSolver, CollisionConfiguration);
// set a "normal" gravity level
DynamicPhysicsWorld->setGravity(btVector3(0, -9.81f, 0));
That, in essence, is all we need to set things up and get started, we’ll cover updating in a
moment, from here on we have to add some collision shapes and give some means of han-
dling our reaction to collisions, but that’s best explained in the code as it will depend very
much on what we load and the size and shape it best represents . To create new physical
objects I have added a small helper function to the Game Class .
This will simplify the process of creating new objects, we just supply the shape, of which
there are few types, and occasionally more are added, then Mass, Position, and Rotation,
but Rotation as a quaternion .
I’ve provided default values of 0,0,0 Position and no Rotation (0,0,0,1), so you don’t
have to get your hands dirty too much .
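The helper is part of the Game Class as described; as a rough idea of the shape such a function takes, here is a sketch with assumed details, not the book's exact implementation:
// in Game.h (sketch): default Position of 0,0,0 and identity Rotation
btRigidBody* CreatePhysicsObj(btCollisionShape* shape, float mass,
                              btVector3 position = btVector3(0, 0, 0),
                              btQuaternion rotation = btQuaternion(0, 0, 0, 1));

// in Game.cpp (sketch)
btRigidBody* Game::CreatePhysicsObj(btCollisionShape* shape, float mass,
                                    btVector3 position, btQuaternion rotation)
{
    btVector3 localInertia(0, 0, 0);
    if (mass != 0.0f)                        // mass 0 = static object, no inertia needed
        shape->calculateLocalInertia(mass, localInertia);

    btDefaultMotionState* motion =
        new btDefaultMotionState(btTransform(rotation, position));
    btRigidBody::btRigidBodyConstructionInfo info(mass, motion, shape, localInertia);
    btRigidBody* body = new btRigidBody(info);

    DynamicPhysicsWorld->addRigidBody(body); // it now lives in the dynamic world
    return body;
}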
Stepping Through
Once our world is created and populated, updating Bullet and most other physics engines
involves a process called stepping. All the movement in physics is based on the idea of how far things will move in a given time, and it deals with any collisions which might occur in that period. So if we want to see them move at 60FPS, we must step the physics at 1/60th of a second. If we have a fixed time step and we are maintaining it, then a simple
DynamicPhysicsWorld->stepSimulation(1 / 60.0f);
is all we need, but we usually need a bit more precision, especially if we have fast move-
ment, so we can allow what are called substeps, or how many times in that 1/60th we will
step, to increase the precision, say by a factor of 10, like this:
DynamicPhysicsWorld->stepSimulation(1 / 60.0f, 10);
In effect, the step system will then iterate through up to 10 internal substeps, giving much better precision; note that to have Bullet actually use a smaller fixed step of 1/600 of a second, you also pass that as a third parameter, because the internal fixed step defaults to 1/60. The actual number of steps is something you should experiment with; generally the more steps, the better the result, but the more processing! The actual frame rate is going to be very dependent on your game; usually it's better to use delta time, but so far we've got away with the fixed rate. Though we will uncouple ourselves from the fixed rate when we start to have large numbers of objects moving around that occasionally take us over the frame lock.
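A hedged sketch of that combination with a measured frame time; deltaTime and where it comes from are assumptions, and the 1/600 fixed step is just an example value:
// sketch: step Bullet with the real elapsed time, letting it take up to 10
// internal substeps of 1/600 of a second each
float deltaTime = ThisFrameTimeInSeconds;           // assumed to come from your own timer
DynamicPhysicsWorld->stepSimulation(deltaTime,      // how much real time has passed
                                    10,             // maximum substeps allowed
                                    1.0f / 600.0f); // fixed internal step size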
Visualizing Things
One very nice thing Bullet allows you to do is to visualize the abstract mathematical
things that you create, which really does make debugging and placement of important
objects/parts a lot easier . This debug rendering will allow you to overlay the basic primi-
tive shapes of your physics objects by extending an Onboard Class called btDebugDraw .
Since this does not do any actual draws as the older Line draw system was designed to, we
now have the responsibility to call an additional render function like this to produce our
lines:
void PhysicsDraw::DoDebugDraw()
{// set up the line shader and then draw the buffer
//load the vertex data info
glVertexAttribPointer(this->positionLoc,
3, // there are 3 values xyz
GL_FLOAT, // they are float
GL_FALSE, // don't need to be normalised
4*sizeof(float), // (be aware btVector3 uses 4 floats)
(GLfloat*)&this->TheLines[0] // where do they start
);
glEnableVertexAttribArray(this->positionLoc);
glDrawArrays(GL_LINES, 0, TheLines.size()*2);
TheLines.clear();
}
A very simple Shader with a Matrix 4 × 4 uniform and an a_Position attribute can then draw the relevant lines as the last part of the render section of our game update loop. We just supply it with the combined View and Projection matrices; they are points in space, so no need for a Model matrix, though we'll reuse the full MVP matrix pointer in the Shader, and ensure we have a handle in positionLoc for the a_Position attribute. Here's a code snippet from the end of the Game Update, after normal objects have been updated and rendered; we should wrap this in an #ifdef…DEBUG because it will only be needed in debug.
DynamicPhysicsWorld->debugDrawWorld();
glm::mat4* Projection = TheCamera->GetProjection();
glm::mat4* View = TheCamera->GetView();
glUseProgram(PhysicsDraw::ProgramObject);
Even if we choose not to give an object type its own physics object and leave this blank, it's not going to use up much space; better to have it as an option and use CreatePhysicsObj when we need to.
CreatePhysicsObj is best suited to the Game Class or wherever you create the objects. Once created, a physics object is placed in the Dynamic world and we can interrogate it as it gets updated by the physics update functions. This is especially useful if we
include a pointer back to our object’s base or derived class instance, so that we can ask
the physics system, which PhysicsObject is associated with which game ObjectModel,
using some pointer systems that Bullet provides. This provides two-way links, so
our game ObjectModels and our Bullet PhysicsObjects will be able to know about each
other .
We still need to register our new draw systems with the inbuilt but currently passive DebugDraw, which is simply done by creating an instance of our PhysicsDraw Class and telling the physics world where it is.
m_pPhysicsDrawer = new PhysicsDraw();
m_pPhysicsDrawer->setDebugMode(1+2); // 1 = DBG_DrawWireframe, 2 = DBG_DrawAabb in btIDebugDraw
this->DynamicPhysicsWorld->setDebugDrawer(m_pPhysicsDrawer);
The physics-based demos on the support site all have debug rendering enabled by pressing
the W or F key to provide a wire frame . I only really do the contact point and line func-
tions though, but you can fill in the others if you need them . You can have a look at the
BulletDebugDraw Class to see how that works, it’s not very complex at all .
Debug rendering does slow your frame rate down a bit though, as you are effectively
adding another draw cycle, albeit a short one, so best not to keep it active all the time!
Debug functions are usually written for functionality, and not for performance .
This might seem like a lot of trouble to go to, but being able to visualize how the phys-
ics world sees and manages your objects is very important, you will be able to spot errors
of structure and position much faster and ensure that your model visualization matches
the physical world that controls its movement and orientation .
which are methods in the btRigidBody Class. Each method requires a btVector3 value and also a local point on the object where the force is actually applied, which allows us to replicate movement around the center of mass.
There are a couple of other methods, which can be very useful too, and indeed we tend to use them more:
applyCentralForce();
applyCentralImpulse();
applyTorqueImpulse();
The applyCentralxxx methods allow us to push against the center of mass of an object, so we don't need the local offsets; you just provide the btVector3 value and the whole thing will move. There is logically no way to replicate a central torque, as such a concept does not exist in physics, so why try, eh? That's enough info for now to get started; we'll pick up the detail in the code as we go along.
We can also more usefully, when we know we have a manifold, gain access to it .
btPersistentManifold* Manifold =
Dispatcher->getManifoldByIndexInternal(index);
Depending on the complexity of our world and numbers of objects in there, the list can
grow, and we will have to go through each, in turn, so being able to get them by index is
very useful .
With a Manifold available to us, we can get some very useful information, such as
info on which two rigid bodies are colliding, and how many contact points there are .
Contact points are very important, because it’s possible that there are 0 contact points,
in which case, the objects are simply close to each other, not actually touching . But more
than 0 means that there is a clear collision occurring and they are in contact and we can
do something about that .
But we do need to be very clear about the concept of interact, contact, and separate .
Three different points in the life of the two objects . We have to decide which of these are
important to us and when we should do our own event that will trigger our response .
Most times, we want to trigger our events at the exact point of collision, but that can
be a little tricky to work out, we would need to know two things: first, are our pair cur-
rently touching, and if yes, were they touching before? If no to the second part, then we
can be certain that this is the first point of contact .
Equally for the separation point, we need to know that at the moment they are not
touching, and that they were touching in the last sequence . We could take note of the
interaction event, really it’s more of a proximity alert, which might have a value to your
game, but for now, of the three possible events we’re going to focus on the CollisionEvent,
and the SeparationEvent .
Clearly, all we need is a time machine to go back to the last 60th of a second and
compare… ah wait! there’s a flaw in that concept . Time machines weren’t invented when I
wrote this! That’s inconvenient . We need to come up with another plan .
We do need to be careful though, because the order of a pair will make them a distinct std::pair, so body0:body1 is considered a different pair from body1:body0. I have a neat way to deal with that, as you will see.
We could create a map of these CollisionObjects but there’s another factor to consider,
any individual object may, in fact, collide with more than one object . That makes a map a
little hard to use, because a map needs one of the index values to be unique and we can’t
be sure of that . A std::vector or std::list could be used, but we’d have to scan through it
and to add some qualification code for situations where we have more than one collision
on a body .
The solution is std::set, a handy C++ structure,* which allows us to compare a set with another set and extract the differences. It's not especially fast, but it is especially clean and effective, so in that vein we will create a set of CollisionObjects called ManifoldPairs.
typedef std::set<CollisionObjects> ManifoldPairs;
We also need to add three new functions to the Game Class, a Collision Checker, of
course, and handlers for the types of collision events . They are nearly always called the
CollisionEvent and the SeparationEvent, so let’s not break with tradition . Finally, we will
need to keep a record of our last set of pairs . In Game .h you can add these:
typedef std::pair<const btRigidBody*, const btRigidBody*>
CollisionObjects;
typedef std::set<CollisionObjects> ManifoldPairs;
All we need to do now is add the code for the Collision Check which looks like this .
void Game::CheckForCollision() {
// keep a list of the collision pairs we find during the current update
	ManifoldPairs pairsThisFrame;

// go through every manifold the dispatcher currently knows about
	int numManifolds = Dispatcher->getNumManifolds();
	for (int i = 0; i < numManifolds; i++)
	{
		btPersistentManifold* Manifold = Dispatcher->getManifoldByIndexInternal(i);
		if (Manifold->getNumContacts() > 0) // 0 contacts means close, but not actually touching
		{
			const btRigidBody* Body0 = static_cast<const btRigidBody*>(Manifold->getBody0());
			const btRigidBody* Body1 = static_cast<const btRigidBody*>(Manifold->getBody1());

// they are pointers and have numerical value, use that to store them lowest value first
			const btRigidBody* First  = (Body0 < Body1) ? Body0 : Body1;
			const btRigidBody* Second = (Body0 < Body1) ? Body1 : Body0;
			CollisionObjects NewPair = std::make_pair(First, Second);
			pairsThisFrame.insert(NewPair);

// this pair definitely are colliding, if they are brand new we can safely say it's a collision event
			if (pairsLastFrame.find(NewPair) == pairsLastFrame.end()) // search the old list for this new pair
			{
				CollisionEvent((btRigidBody*)First, (btRigidBody*)Second); // got through to the end...it wasn't there so this is a new hit
			}
		} // if
	} // for i

// another list for pairs that were removed this update, they will be separation events
	ManifoldPairs removedPairs;

// compare the set from last frame with the set this frame and put the removed ones in removedPairs
	std::set_difference(pairsLastFrame.begin(), pairsLastFrame.end(),
		pairsThisFrame.begin(), pairsThisFrame.end(),
		std::inserter(removedPairs, removedPairs.begin())
	);

// iterate through all of the removed pairs sending separation events for them;
// can't use an index for this, so this is a standard iterator
	for (ManifoldPairs::const_iterator iter = removedPairs.begin(); iter != removedPairs.end(); ++iter)
	{
		SeparationEvent((btRigidBody*)iter->first, (btRigidBody*)iter->second);
	}

// the sets only hold pointers, so this copy is cheap; last frame's pairs become the current pairs
	pairsLastFrame = pairsThisFrame;
}
* See, even I use C++11 sometimes when I need to, this is too useful a function not to use it!
Repeat this for the SeparationEvent system and decide if you want to have a different col-
lision handler for that event .
Since ObjectModel is our base class, no matter what type of object we created, if we
stored its address in its RigidBody’s user pointer, we can cast that back to an ObjectModel,
and provide HandleCollision routines, at least in the ObjectModel Class . If that routine
is marked as virtual, then derived classes can be given their own proper handler, and now
you can have a specific type of handling routine to do whatever you need to do when hit
by the other object .
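As a rough sketch of that idea; the names and the class shape here are assumptions, not the book's actual ObjectModel:
// sketch: link rigid bodies back to their game objects so collisions can be routed
class ObjectModel
{
public:
	virtual ~ObjectModel() {}
	virtual void HandleCollision(ObjectModel* other) {} // derived classes override this
};

// when creating the physics object for a model (assumed to happen in CreatePhysicsObj)
// body->setUserPointer(theObjectModel);

// inside Game::CollisionEvent (sketch)
void Game::CollisionEvent(btRigidBody* Body0, btRigidBody* Body1)
{
	ObjectModel* A = static_cast<ObjectModel*>(Body0->getUserPointer());
	ObjectModel* B = static_cast<ObjectModel*>(Body1->getUserPointer());
	if (A != NULL && B != NULL)
	{
		A->HandleCollision(B); // each side decides what being hit means to it
		B->HandleCollision(A);
	}
}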
The one flaw in this nice idea is that we can only pass an ObjectModel pointer, so you
are limited to the base values in that class, but you could include a Type enum or an ID
value, to check if you got hit by a bullet type, or a soft bouncy ball type . I’m told that one
of those is bad for your health .
As different games will need to use collision in different ways it can be hard to general-
ize your collision system . But certainly the idea of collision response is pretty universal and
using a virtual base function allows you to create specific class functions for derived classes .
I’ve added and moved all this code from my Game Class into a CollisionProcess Class
in my 3D game template, and extended it a little bit to keep track of contact events . This
might be useful when we want to know if contact between objects is constant, on a floor,
for example, though it’s a bit more wasteful than testing for simple collision and separa-
tion events . What is important is to make sure that Bullet provides access to the base class
you use, and that there is some simple form of ID available there to tell what you are hit-
ting, or is hitting you .
That pretty much covers collision, what we do with it is up to us and our game logic,
we can let it bounce around as it does by default or we can set off sound effects, making an
object die, blow up, get mad, and hit back . It’s open to lots of options . You can see this in
operation in all the Physics-based Demos that follow from now .
The Downside
Now it's time for a word of warning. I already said there is a lot of processing cost in using Bullet. Bullet and the other physics engines you might find will work on our limited targets.
Clearly, we can also return Left, Right, Up, and Down directions from this con-
cept if we need them, using relevant direction vectors to multiply the Quat . I added
btVector3 Forward; to the ObjectModel Class because it’s going to be important .
Take note that Bullet has its own version of a Vector3; compared to GLM, it's an irritation that the two are not directly transferrable without doing some form of wasteful conversion, but because we are not doing much with glm::matrices and glm::vectors while using Physics, we should just use what is available to us.
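When we do need to cross over, a couple of tiny helpers keep the mess in one place; this is a sketch, and the names are mine, not the book's:
#include <LinearMath/btVector3.h>
#include <glm/glm.hpp>

// hypothetical helpers to convert between the two Vector3 flavours
inline glm::vec3 ToGLM(const btVector3& v)    { return glm::vec3(v.x(), v.y(), v.z()); }
inline btVector3 ToBullet(const glm::vec3& v) { return btVector3(v.x, v.y, v.z); }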
Another method to extract the forward vector is this:
btMatrix3x3 rotationMatrix(orient);
Forward = btVector3(rotationMatrix.getColumn(2)); // we will keep this, it's useful for all sorts
Right can be obtained from column 0 and Up from column 1 . Here, we are creating a small
3 × 3 matrix and extracting the forward direction from it . I think the first version might
be slightly slower because of the internal multiplication that goes on, but both versions
provide what we need and the first makes getting cardinal points much simpler .
Once we have that vector, we can normalize it and it can then be used to push us for-
ward, or the negative of it to push us back, as a force, a simple scalar can be used to provide
the acceleration value needed, and we can increment that scalar up to a maximum value .
I set up a small test case in my game loop to move the first car:
MyObjects[0]->MyPhysObj->GetRigidBody()->setActivationState(DISABLE_DEACTIVATION);
// DON'T LET OUR PLAYER GO TO SLEEP, or we can't move it.
if (this->TheInput->TestKey(KEY_SPACE))
{
	MyObjects[0]->Forward.normalize();
	MyObjects[0]->MyPhysObj->GetRigidBody()->applyCentralImpulse(MyObjects[0]->Forward * 10); // forward vector
}
if (this->TheGame->TheInput->TestKey(KEY_DOWN))
{
	motion--;
	if (motion < -20) motion = -20;
	// apply an impulse for forward motion and a simple angular velocity for steering.
	MyPhysObj->GetRigidBody()->applyCentralImpulse(Forward * motion * 30);
	MyPhysObj->GetRigidBody()->setAngularVelocity(btVector3(0, -1.0, 0) * LRDIR * 6);
}
Once entered, this compiles and now we have steering, braking, and turning. We have a car! It can drive around pretty convincingly on a currently plain, flat ground.
Staying on Track
Time to make our track visible and we have quite a number of different ways of creating
a realistic track . We can hold a pretty decent-sized world/track for our game as one very
large, or at least scaled OBJ file, but really there is a finite limit, and remember we only
really want to display the bits we see, if we have a large complex OBJ there’s going to be a lot
of vertices in that OBJ that at any given time are going to be processed but never rendered .
Quite a waste . So, we can take a bit of time to use some ingenuity in creating our world
so that it can be much larger than the area we want to display and prevent ourselves from
trying to display things that are not there .
If, rather than having one large playfield, we have a selection of smaller playfields, we
can use them in the same way as tiles, and have a simple array that represents which tile
we draw .
Yes, it's exactly the same process we used in the 2D games! Make our map out of sections of OBJ, which are also allowed to repeat, and chop the playfield, in this case a terrain, down into smaller chunks. This allows us to cull, that is not draw, some of those tiles.
The TileManager Class in the race demo is responsible for creating the track, first it loads
and creates the six different tiles I have available, and creates and stores triangle meshes
for the collision maps we’re going to use shortly . It then discards the track models in turn,
because its main purpose is to set up the vb and texture systems and generate a collision
mesh. The ModelOBJ does not clean up its vb or Textures on destruction; that's a task for
the Model Manager when it closes down, so the OBJ shapes, along with the vb’s and tex-
ture images loaded are safe and sound and usable .
We can do our old 2D trick of parsing through a tile map and placing tiles in space, like so. Those old 2D skills don't die easy, do they? Sadly, I don't have a plain grass tile to put in a bit more detail, but you get the idea. I used tile number nine to indicate a simple blank space.
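A minimal sketch of that parse-and-place idea; the map values, tile size, and class names here are assumptions for illustration, not the race demo's actual data.
// hypothetical 4x4 track map; 9 means an empty space, 0-5 pick one of the six tile models
int TrackMap[4][4] = {
	{ 9, 0, 1, 9 },
	{ 9, 5, 2, 9 },
	{ 9, 4, 3, 9 },
	{ 9, 9, 9, 9 }
};
const float TILE_SIZE = 40.0f; // world units each tile covers (assumed)

for (int z = 0; z < 4; z++)
	for (int x = 0; x < 4; x++)
	{
		int tile = TrackMap[z][x];
		if (tile == 9) continue;            // blank space, nothing to place
		TrackTile* t = new TrackTile(tile); // hypothetical constructor picking one of the six tiles
		t->SetPosition(x * TILE_SIZE, 0.0f, z * TILE_SIZE); // place it at its grid position
		MyObjects.push_back(t);
	}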
Using a high god camera like this is quite inefficient as we’re pushing a lot of tiles to
the render system, and this does drop frames, but for sure we can consider setting some of
the tiles to a “not seen, don’t render” state, which will get our frame rate back .
But a better camera, better culling, and some more effective rendering will get this
running at a nice 60FPS, even with a few other cars on the track . Try adding some, you
have a choice of camera available to you, just create the type you want, and add a cull-
ing system into the currently inactive tile draw routines, to avoid sending unseen tiles
to the GPU .
Thereafter, it's a simple matter in the parsing code to locate the section where the vertices are pulled out to be stored in our vb vector and, additionally, if it exists, add them to our passed Mesh vector (around line 330 in the ModelManager.cpp file).
This Code is already in place for you; it's just been dormant, as we've not added any Object types that need to create any form of collision mesh. I stored it as btVector3 because it's eventually going to be used by Bullet, so best to provide its format.
Adding null or harmless default parameters like this is an effective way to build on
older code, when you have a lot of systems already calling these functions, but you need to
add conditional extra functionality, it can save a lot of rewriting . Any new object type can
now load its vbs as before but also keep a Mesh vector to use as it wants, the old objects can
just load their vbs and let the mesh vanish .
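In practice that just means an optional extra argument on the loader; a sketch of the kind of signature involved (the names here are assumptions, not the book's exact function):
// hypothetical loader signature: old callers keep working, new ones can ask for the mesh
bool LoadObjectModel(const char* objFile,
                     std::vector<float>& vertexBuffer,               // always filled, as before
                     std::vector<btVector3>* collisionMesh = NULL);  // optional extra output

// inside the parse loop, only store the raw positions if a mesh vector was supplied
// if (collisionMesh != NULL) collisionMesh->push_back(btVector3(x, y, z));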
Now that we have access to a mesh, we can send it to Bullet and ask it very nicely to
make a btBvhTriangleMeshShape or other shape and add it to the physics world . Our
TrackTile Class can now use this enhanced system and once created and added, switching
on the wireframe will show you that the track itself is now also part of the physics world .
Our ride is considerably bumpier and overall we are getting a sense of being in a proper
physical environment .
The Code to make a terrain tile physical is not exactly direct, though we can extract
the mesh now, it’s not in a format that Bullet understands, so a bit of translation to its for-
mat allows it to be used, like so .
[Listing: the interesting physics parts that turn our mesh into a Bullet mesh.]
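As a hedged sketch of the kind of translation involved, assuming a Mesh vector holding three btVector3 positions per triangle and reusing the DynamicPhysicsWorld from earlier; everything else here is illustrative rather than the demo's exact code:
// build a Bullet triangle mesh from our extracted positions (sketch)
btTriangleMesh* triMesh = new btTriangleMesh();
for (size_t i = 0; i + 2 < Mesh.size(); i += 3)
{
	triMesh->addTriangle(Mesh[i], Mesh[i + 1], Mesh[i + 2]);
}

// a static (mass 0) concave shape suitable for terrain; true = use quantised AABB compression
btBvhTriangleMeshShape* tileShape = new btBvhTriangleMeshShape(triMesh, true);

// add it to the physics world as an immovable body at the tile's position (tileX/tileZ assumed)
btDefaultMotionState* motion = new btDefaultMotionState(
	btTransform(btQuaternion(0, 0, 0, 1), btVector3(tileX, 0, tileZ)));
btRigidBody::btRigidBodyConstructionInfo info(0.0f, motion, tileShape, btVector3(0, 0, 0));
DynamicPhysicsWorld->addRigidBody(new btRigidBody(info));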
If we divide the space that our tiles occupy up into a grid, we can see that some squares
have track and some don’t . Now we’ve used this principle to have tiles represent graphics,
and can do this here, both in 2D and in 3D…but suppose we now use the tiles to indicate
arrows of direction . If we had a lot of graphics that indicated direction, rather than the
track display it might look a little bit like this .
That gives us eight possible directions and a means to tell if we crossed the line, and our direction map is simply going to be a 2D array with these, and maybe some other values that give us indicators of where to move. I say indicators, because we also have to take into account the fact that our cars are moving and will have inertia, so the directions in the grid are telling us where to move to, not giving absolute values.
Our track is effectively telling us the direction we need to move to stay on the track
and the direction we need to go to if we come off the track . It’s a neat system .
Flow fields are one of my personal favorite methods for autonomously moving things
around, because they are usually preproduced or sometimes precalculated during initial-
ization, and provide a very simple and easy to understand method for getting from one
place to another .
It’s not especially hard for us to pinpoint which cell in a grid we are looking at and find the
correct motion to head back, we know the tile size, in units at least, and there is a direct-
scaled correlation between the flow map and the track tile map . So, we can get hold of a lot
of good directional info very easily . For AI, this can provide a steering force, for a player
we could put up an arrow to indicate the direction to get back to the track, or a barrier to
prevent us going off the track, though usually the installation of actual barriers that are
given physical properties would be a better way to stop our wandering and give something
a bit more exciting to crash into .
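A hedged sketch of that lookup; FlowMap, TILE_SIZE, and the direction table are assumptions standing in for whatever your own grid uses:
// hypothetical flow-field lookup: which way does the track want us to go from here?
static const btVector3 Directions[8] = {
	btVector3( 0, 0,  1), btVector3( 1, 0,  1), btVector3( 1, 0, 0), btVector3( 1, 0, -1),
	btVector3( 0, 0, -1), btVector3(-1, 0, -1), btVector3(-1, 0, 0), btVector3(-1, 0,  1)
};

btVector3 GetFlowDirection(const btVector3& position)
{
	int cellX = (int)(position.x() / TILE_SIZE); // direct scaled correlation with the track map
	int cellZ = (int)(position.z() / TILE_SIZE);
	int index = FlowMap[cellZ][cellX];           // 0-7, one of the eight arrow directions
	return Directions[index];                    // use as a steering force or a "get back on track" hint
}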
The final step in the Racetrack demo only needs some very simple logic to make use of our flow field, and also to try not to get too far ahead of our player when leading, to give them a chance to catch up. This can be refined quite some amount by adding skill levels to the AI player to prevent skidding and losing speed, or by simply adding speed limits that are a little below what we allow the player; we can expect the player to abuse and misuse all this horsepower and not drive a clean line around the track, so the AI will benefit from the player's errors.
Other Optimizations
This has been a fun little game, and starting to look like a semiprofessional product, but it
already throws up a lot of issues, frame rates dropped at times, and though Bullet’s version
of delta time compensated there were times when the game might have been a little jerky .
All is not lost though, because from the start I’ve mentioned that our systems are func-
tional, but far from optimal . We need to start being much more careful about how we do
things, and also which things we do at any given time .
Having recognized that our systems usually can't do everything we need, all the time we need them to, there are quite a number of directly usable things we can do to increase our overall performance: we can cull a lot of things on the CPU side, preventing excessive collision tests and pointless draws or logic updates, but we could also improve some of the techniques we use on the GPU, and it might be worth investing some time in explaining them. But we are really getting into the realms of tech coding if we do that.
Some are obvious and have been covered, such as reducing the resolution of our dis-
play or frame buffer, choosing when to use vertex lighting rather than per pixel lighting
(maybe even no lighting) . Making good use of LOD systems for both the vertex buffer and
texture sizes can also make a lot of difference, Mipmapping, and so on . How much these
will impact your performance will very much depend on the game you are doing . LOD
will only work, for example, if you have objects you know will travel in and out of your depth view; if they largely stay at the front of your screen, there is no benefit and your depth-based LOD calculations are wasted.
Our current GPU rendering is also something that needs review, I stated way back
when we started using TinyObjLoader, that we were only using simple vertex arrays . There
is also quite some performance to be gained by using index-based buffers, but there is a
slight increase in the level of complexity needed to create them .
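As a rough illustration of what index-based rendering looks like in GLES2.0; the buffer contents (verts, indices and their counts) are hypothetical and assumed to exist already:
// upload vertices once, then a small list of indices that reuse them
GLuint vbo, ibo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, numVerts * 3 * sizeof(GLfloat), verts, GL_STATIC_DRAW); // xyz per vertex

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, numIndices * sizeof(GLushort), indices, GL_STATIC_DRAW);

// at draw time, shared vertices are stored (and usually transformed) once,
// rather than being duplicated for every triangle that uses them
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);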
had to hold it on its side, as that allowed for a consecutive memory access . I’ve seen this
actually running on calculators and watches .
But as fun as this method was, there’s really no point in detailing it here, its horribly
outdated and would mean returning to our double buffer system we left behind in a 2D
land .
If we are going to do indoor 3D, we need to do proper 3D, which owes its gaming DNA to the work of John Carmack and the role he played in coding the Quake series of games, which popularized the concept of using polygons in engines, developing and enhancing the then largely unused techniques and systems to improve the basic position and display systems.
Thanks to my student Channing Eggers for this; he knocked it up as part of his first-year Raspberry Pi assignment, and I asked if I could use it for this demo. In parts of his map, there
are a lot of walls, and when we are in the enclosed spaces we know we are going to draw
characters behind the walls in front of us, so it’s a good example to use for attempting
some first-generation sneaking around using rays to detect walls and practicing occlusion
with frustum and line of sight occlusion ideas .
Take a moment to notice the way I move the Player and by extension other objects
around . I am not using forces or impulses and indeed I’m overriding most of the main
motion systems by resetting angular velocities to 0 . Why would I do that?
It’s to do with the nature and expectation of the game, and also some convenience .
It’s quite viable to move players and objects around in this 3D maze by supplying a force-
based movement, and if we balance everything well it will work nicely and feel very real-
istic, but we’re not trying to emulate a real-world environment, there’s nothing remotely
real world about a first person shooter, and that balancing can take quite some time to get
right .
We need more direct control of our motion, so rather than applying torque or direc-
tion impulses, I am actually adding ± angles to a models’ orientation to change it . I’m also
applying forward linear speed, not force, to move him forward and back . That gives me
a more immediate sense of movement and control in a game like this while still allowing
friction to stop me and collision detection to prevent me moving through walls, though
not, because of the override, providing any nice bounce off the walls . It’s a small price to
pay, or I could even create my own bounce off using a collision-response system .
Bullet still provides us with Gravity, because that’s pretty useful for controlling jumps
and falls, the collision, of course, is still very useful, and even without force-based move-
ment it gives us speed/movement and orientation control . Other benefits include the
The screenshot above shows the antenna ray in front looking for an object to avoid and the ray test to the right finding a wall to avoid. We could add any number
of such antennas searching ahead and around to make decisions based on contact with
things about to hit us but generally ahead and to the side will give us enough info on what
our character can see .
Of course, it’s not a perfect system, but actually that’s part of its joy, if enemies are
running too fast, their change in direction is not going to be smooth/quick enough some-
times, and they will bump into walls and each other, just like you see in every blooper show on TV with that Storm Trooper hitting his head!
Review the MoveToPoint(x, y, z) code in the provided EnemyLogic Class, which you can now add to the project. This will give you methods to move your enemy characters around, and also to detect your presence and respond to alerts in a simple way.
As always, feel free to add new behaviors .
You’ll find a finished but unoptimized version of 3DMazeHunt on the support site,
where I have made everything work for you, don’t cut and paste! Try to write your own
versions, and then compare it with mine . If you must cut and paste, try to find some better
ways to achieve what I did, there are several .
What Do We Draw?
Accepting for now that we have to draw the entire maze, because we don't have a decent render method that would make it better (you can research a few though), we need to focus our efforts on the enemies and other details that we've placed in the world.
which gives us two possible corner points, which we can extrapolate to a full eight for the cube. We can use these to cast a btRay from our camera view to each corner; if any of these rays goes from the eye point to the corner without hitting a wall, we can see it, or at least be very close to seeing it, as the AABB is often larger than the model. But if all eight are obscured by a wall, we can't. Eight (though we could actually reduce that to four) simple tests could save us from sending a model to the GPU, and the associated time is gained.
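A hedged sketch of the kind of test described, not the book's actual ObjectModel method; the corner list and the names are assumptions:
// can any corner of the target's AABB be seen from the eye without a wall in the way?
bool CanSeeAABB(btDiscreteDynamicsWorld* world, const btVector3& eye,
                const btVector3 corners[8], const btCollisionObject* target)
{
	for (int i = 0; i < 8; i++)
	{
		btCollisionWorld::ClosestRayResultCallback ray(eye, corners[i]);
		world->rayTest(eye, corners[i], ray);
		// nothing hit at all, or the first thing hit is the object itself: it's visible
		if (!ray.hasHit() || ray.m_collisionObject == target)
			return true;
	}
	return false; // every corner ray hit a wall first, skip the draw
}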
It is sometimes valid to allow a model to hide another model behind it, but usually
our models do have a lot of empty space, which we can be seen through . Basic occlusion
systems work on bounding volumes, which regard that empty space as a valid area of
occlusion, so it’s probably best to focus only on avoiding a draw on an object obscured by
an environmental feature .
I've added an unoptimized occlusion test method to the ObjectModel Class; it returns a simple bool: true, it's visible, or false, it's behind a wall. To try it out, it's totally self-contained; all you need to do is add it to your draw systems, and if the draw is not needed, abort the draw.
Your fun task now is to optimize that method and get it even faster and, of course, if
you don’t want to use Bullet you need to use a different ray cast and have a different way to
access the bounding volume .
One small point I should make you aware of, as it caught me out when I was writing it. In an FPS the camera is located at the model's head, and because our player character is
himself a physical object, even if not rendered, he is also detectable by a ray-cast system,
and it is therefore entirely possible that the ray test can return him as the resultant hit of a
ray cast . This makes sense when you think about it, we are encased in a collision shell, and
firing a ray from inside that shell we will always hit the inside of it . But because all collision
tests are done as pairs, and dynamically sorted with some small elements of unpredict-
ability, it may be that sometimes you get the wall and sometimes you get the player, such
random returns can play havoc with your logic . So take care when using ray casts from
objects not to start the ray from inside the collision container. I add an offset slightly larger than the radius of the character, multiplied by a normalized direction, to avoid that. If your camera is not a physical object you won't normally need this, but you must still take care to ignore any ray-cast result that returns your player.
We get a line going from our eye through the mouse pointer into the screen. But how does that help us? Well, the screen is effectively the near plane of our camera, and our eye, for all intents and purposes, is the camera's position.
Step 1: Create the directional Picking ray, this uses <jazz hands> Maths, but basically
takes the x and y coordinates of our mouse, jiggles it about a bit with the camera
view and projection and produces a ray as a vector3 . This is where having access
to the camera’s variable data finally becomes useful; I hope you added access sys-
tems? For now, I will just grab them from the camera itself .
Step 2: Cast the ray, using the internal Bullet systems, which returns the RigidBody
and other info IF there’s a hit .
I won’t detail the maths; the code is in the Rays Class in the next RayPick demo for
you to look at . All we need to know is we now have a very nice Pick function,
which if we have a mouse in use will tell us what we are clicking on and allow our
code to take whatever action is needed for that .
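A minimal sketch of Step 1 using GLM's unProject helper; the viewport size and variable names are assumptions, and the book's Rays Class may well do the maths by hand instead:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::unProject lives here

// turn a mouse position into a world-space ray direction (sketch)
glm::vec4 viewport(0.0f, 0.0f, screenWidth, screenHeight);
glm::vec3 nearPoint = glm::unProject(glm::vec3(mouseX, screenHeight - mouseY, 0.0f),
                                     View, Projection, viewport); // point on the near plane
glm::vec3 farPoint  = glm::unProject(glm::vec3(mouseX, screenHeight - mouseY, 1.0f),
                                     View, Projection, viewport); // point on the far plane
glm::vec3 rayDir = glm::normalize(farPoint - nearPoint);

// Step 2: hand the near/far points to Bullet's rayTest and see what we clicked on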
We’ve pretty much got basic concepts of different types of game up and running, but our
graphics are still a little on the lighter side; ideally, we want to delve deeper in the technical
side of graphics programming, but that’s a lot of maths to get your head around, and again
a 20 chapter book, most of which isn’t really going to be a lot of use on our itty bitty dual
core GPUs . But one big graphic boost you can add and probably get away with is shadows .
I hope you remember we put them in a bag for later, well now it’s time!
I mentioned before about shadows and their reliance on Shaders, which we were just
beginning with, so it was a bit of a complex topic to get our heads around at the start of
our Shader journey . We should now be comfortable enough with Shaders to start doing
some complex stuff . We now have suitable models and terrains, which will be able to dem-
onstrate them properly, so now is the right time for shadows. This topic is a bit technical though; despite being a fairly simple premise, shadows are hard on OpenGLES2.0. I used to use a variation on an approach I used in full-fat OpenGL, but since playing with SBCs and OpenGLES2.0, I found I was jumping through hoops a bit too much and had to simplify things; I've since found some easier methods that seem to work pretty well. There are a couple of other methods, and more being discovered, but let's stick to something we know works and of which there are plenty of examples in games.
Before we start though, you need to know why I’ve left it this long to discuss, it’s
simple, shadows will kill your frame rate, because we’re going to have to run two passes of
our graphics through the GPU, there’s just no getting away from that, it’s a computation-
ally expensive process on even the best GPUs, our little twin and quad cores GPUs might
be able to run the code, but you have to consider this feature only when you really need
it and not in a game with dozens of shadow-casting objects . You can develop tricks and
methods to improve performance, and that’s definitely something you need to do, but any
form of shadow other than a prebaked built-in model shadow, is going to kill performance .
All that said, let’s try out the most useful system and get it working, the Raspberry
3B might struggle with this, but perhaps by the time you read this the Raspberry 7 will be
more than capable .
Shadow Mapping
I think I’m right in saying that Shadow mapping is one of the best of the current methods
used for creating dynamic shadows . They have been around a few years now and work
pretty well . But like all shadow systems, they do come at a bit of a cost, as they are a result
of a multiple passes through our GPU to set up buffers, which will render a scene from the
light’s point of view rather than the camera . Rather than render pixels, it will store depths
from the point where the light is placed to the point of the pixel being lit . The result-
ing framebuffer will contain depth information on the pixels the light-positioned camera
draws, but only as a distance from the light source .
Think of it as a light directly overhead a model of a character holding an umbrella we
want to render, the light is shining down on the umbrella, which clearly gets lit but stops
the light at that point, everything under the umbrella will be in shade and lit only by the
ambient light in the scene. So that means we have a concept of height: where the umbrella is! Anything we draw under that level is darker. Changing the angle of the light does not make a massive difference; remember, matrices allow us to twist and turn points and directions in space, which is all a light is from our viewpoint. The distance the light travels until it hits an obstacle can be comfortably worked out and then used to decide whether pixels are in light or in shade.
That, in effect, means we have access to a concept of things that the light hits and how deep
into the draw plane it was hit…which we can twist around into a value of above and below
things that light is going to hit . Things above the barrier can be drawn as lit normally,
things below can be in shade or totally dark as we decide .
Okay, that kinda sounds simple; doing the calculations might be fairly trivial, but there are a lot of them and they need to be easy to access.
This is where we need the power of our GPU and its Shaders . OpenGLES2 .0 has so
far been perfectly happy calculating and drawing our RGBA pixels into a big render target
buffer, which then becomes the screen .
It also can create other types of buffer not directly related to color info, especially…
drum roll please. A depth buffer! Which, not surprisingly, can hold info on how far something in our 3D space is from a point of interest…such as a light, for example.
Yup, OpenGLES2 .0 already has all the main tools we need to do shadows, but we still
need to get the Shaders up and running and we do have to do two draw calls to our GPU:
one to create the depth buffer, and one to create the actual image, which uses that depth
buffer .
Unfortunately, OpenGLES2.0's means of accessing depth buffers during the pixel render is a little tricky. Unlike full-fat OpenGL, where we can have multiple render targets, one
to contain depth values, one to contain pixels, and so on, on OpenGLES2 .0 we can only
have one . So, the render target buffer we would put the depth values into is the same render
target buffer we would write our pixels to, and they’d get kinda lost as one overwrites the
other . Tricky .
But where there’s a will there’s a way . Since we can’t write to our render targets, we
have to work out what we can write to . Of course, it’s a type of buffer, and we have options
to use .
Depth textures would be my preference because there are some cool and cheap ways
to draw to a texture and use extensions to produce depth values, and in an ideal world I’d
be happy to use those, but extensions are not guaranteed on all machines, so if you are
planning to release your projects to the community, you have to allow for the fact that not all GPUs support the same extensions, so we'd need a fall back. I'll do this first using a fall-back position, and later you can try to use the GL_OES_depth32 or GL_OES_depth24 extensions.
glGenTextures(1, &Texture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D,Texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, Width, Height, 0, GL_RGBA,
GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Width and height being the size of the screen (resolution, not display), but everything else is
pretty much the normal system, what happens next is cool though, we now create a framebuffer:
glGenFramebuffers(1, &FBO);                        // allocate a Framebuffer
glGenRenderbuffers(1, &RenderBuffer);              // allocate a render buffer
glBindRenderbuffer(GL_RENDERBUFFER, RenderBuffer); // bind it
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, theWidth,
theHeight);                                        // give some parameters for it
glBindTexture(GL_TEXTURE_2D, NameTexture);         // bind the texture
glBindFramebuffer(GL_FRAMEBUFFER, FBO);            // and the frame buffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
NameTexture, 0);                                   // colour output now goes into our texture
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
GL_RENDERBUFFER, RenderBuffer);                    // the depth render buffer is now attached
Our subsequent draw and renders will go to that frame buffer, which is a texture . Just
remember to unbind the frame buffer after we have done with the light passes .
So, we now have to look at the Shaders we need to make those textures something
more interesting than just color . The Light Pass is very simple:
attribute vec4 a_Position;
uniform mat4 LightMVP;
varying vec4 v_TexCoord;
void main()
{
v_TexCoord = LightMVP * a_Position;
gl_Position = LightMVP * a_Position;
}
In fact, you've seen most of this simple Shader before a few times; it's only calculating the position of interest and passing on the texture coordinate for the Fragment Shader to use. But this time, it's using the Light source as the point of view, not the camera. The texture coordinate though is very interesting and used later. The real work is done in the fragment
Shader; instead of loading RGB values into the texture we are going to put a scaled value:
precision mediump float; // only need medium precision, no need to choke the GPU
varying vec4 v_TexCoord;
float value;
float val2;
float vn;
float f;
void main()
{
value = 10.0 - v_TexCoord.z; // force the range to be -15 to +15 (alter if needed)
val2 = floor(value);         // remove the decimals
f = value - val2;            // get them back
vn = val2 * 0.1;             // reduce the whole-number part to fit in a 0-1 colour channel
gl_FragColor = vec4(vn, f, 0.0, 1.0); // normally RGBA, but here R and G carry the
                                      // depth value, B is unused, and A must be 1
                                      // or the colour might get ignored
}
Ok, so our texture depth buffer will be ready after the first draw call, all we have to do is
pass it an MVP matrix for the light source and it’s going to do its things .
Of course, you need to run a render loop through all your objects to generate this
depth texture, so the number of objects you have on-screen will have an impact on this
process as it does with any render pass .
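For reference, the receiving pass just has to reverse that packing. Here's the arithmetic as plain C++, my own inversion of the encoding above, shown CPU-side only to make the maths clear:
// rebuild the light-space depth from the two colour channels written by the shader above
float DecodeDepth(float r, float g) // r = vn, g = f, sampled from the depth texture
{
	float value = r * 10.0f + g;    // undo the *0.1 and re-attach the fractional part
	return 10.0f - value;           // undo the "10.0 - z" offset to get the original z
}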
By far, the most obvious things we load in are our graphics and our data structures .
On SBC systems, we need to accept that loading is a slow process, so we therefore have to
make sure it is done only when the game is not running in a main loop . This is why level
design and check points are important . Compression systems and chunk-based loading
mean we can reduce the load times quite well, but we do have to be careful about the
memory overhead we use . Our slightly self-imposed minimum machine level has 512MB
of Ram, but in that, our code has to live, along with the OS . And, of course, our GPU that
shares the memory, steals at least 64MB, most likely 128MB, so in reality we may find we have less than 200MB of memory left to play with. The old school Spectrum programmer
in me is somewhat stunned by that amount of memory, but the more cynical modern
Console coder knows that just one model can take that much memory .
We need to always make sure we are realistic in our asset size . Streaming from our
SD/eMMC memory IS possible, but it’s really rather slow, so at best we can only do one
thing that way, usually music!
Other memory eaters are the graphics; the sooner they are loaded into the GPU mem-
ory and treated as a fixed immovable asset the better . Always ensure that you keep only
what you need in any given level or checkpoint area . Most games have a certain number of
assets that every level has, and many more variable ones . Keep careful records and work
on your design to ensure that you don’t fill all your graphic memory with 500MB of dif-
ferent models of grass!
Keep your loads in one place, loading in a game loop isn’t an option, we demonstrated
that way back in the 2D shooting games, so careful note of what you need and when you
need it is a key to success . Once you have model or texture data locked in the GPU make
sure that you free that CPU memory for reuse .
Keep data you want to save together . It’s far easier to write out a chunk of memory
with all the info you need, than it is to create a buffer and load things in . Consider the way
the MD2 file format is laid out, if your own memory mimics that header layout you don’t
have to do much work to save a header, and then save the memory chunk that follows .
Likewise, reloading it allows you to point straight to a prepped area .
Compression systems are incredibly useful, loading a compressed graphic will be
fast and allow you to keep more data on the storage medium . But with few exceptions
(compressed textures, for example), compressed data cannot be used directly, it must be
uncompressed into a workspace . So, in practice, you have the memory you load the com-
pressed data into, and the memory you decompress the data into . That clearly means that
you actually will use more memory than loading raw data directly into a prepped buffer .
Loading scripts can be a godsend here. Even the simple act of creating one can help you to visualize what you are using and where. If your level system works by loading a script in which all assets are listed and loaded, you can see clearly how much memory you are using, both in GPU and CPU.
Scene Management
When we have a big world, we tend to have a big problem . Storing every single object
in a scene as individual lumps of data means an absolute massive amount of data to be
wrangled .
LOADENEMYTYPE BadKight,10,10,10;
It’s relatively easy to read, and in your code, you can simply scan for keywords,
LOADMODEL, LOADPLAYER, LOADENEMYTYPE, and in a switch/case system, parse
out the filename to load, the position to place, and on finding the ; line terminator, move
through to the next one .
Each keyword will have an associated set of minimum values it needs: LOADMODEL only wants the obj filename and a position to place it at, though we could also add scale and orientation if we wanted.
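A very rough sketch of that keyword scan; the keywords come from above, but everything else here, stream handling included, is an assumption rather than the demo's actual code:
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>

std::ifstream script("Level1.txt");          // hypothetical script file name
std::string statement;
while (std::getline(script, statement, ';')) // each statement ends at the ';' terminator
{
	std::istringstream line(statement);
	std::string keyword;
	line >> keyword;                         // first word is the command

	if (keyword == "LOADENEMYTYPE")
	{
		char name[64]; float x, y, z;
		std::string rest;
		std::getline(line, rest);            // the comma-separated part after the keyword
		if (sscanf(rest.c_str(), " %63[^,],%f,%f,%f", name, &x, &y, &z) == 4)
		{
			// SpawnEnemy(name, x, y, z);    // hypothetical helper
		}
	}
	// LOADMODEL, LOADPLAYER and any other keywords are handled the same way
}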
A simple Parsing Level system is available for you on the support site, LevelLoadDemo .
You can expand on this system as much as you want and use it to create levels .
Asset Management
The biggest key to asset management is to make sure you have good folder structure . It
may indeed be a bit of a pain typing in full directory names, but the simple fact is, that
having your textures in a texture directory, and your materials in a material directory
makes it easier to keep track of them . Many times in the early projects I had files in shared
directories and hard coded the load systems to look in those directories . Only later did I
split them out. It's quite true that having all our assets in one super folder is viable, and for a small project with under 10 files probably quite desirable, but as games get more complex it really is important to separate them into relevant locations.
In game, keeping track of what assets you have in place in the memory at any given
time is also very important . We used C++’s MAP structures to allow us to keep the names
of assets live in our system and compare them, so that in itself suggests we need to ensure
our naming conventions are clear and ideally descriptive. A file called RobotHead.png is simple to read and find, unlike L1ARH1.png. Even if "Level 1 Asset, Robot Head 1" might seem obvious, it's not! As with variables, keep filenames clear; a proper name avoids confusion and lets you keep track.
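That name lookup is as simple as it sounds; here is a hedged sketch of the idea, with the map type and loader standing in for whatever your own manager uses:
#include <map>
#include <string>

// hypothetical asset registry: name in, already-loaded handle out
std::map<std::string, unsigned int> LoadedTextures; // filename -> GL texture handle

unsigned int GetTexture(const std::string& filename)
{
	std::map<std::string, unsigned int>::iterator it = LoadedTextures.find(filename);
	if (it != LoadedTextures.end())
		return it->second;               // already in memory, don't load it twice

	unsigned int handle = 0;
	// handle = LoadTextureFromFile(filename); // hypothetical loader
	LoadedTextures[filename] = handle;   // remember it for next time
	return handle;
}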
But having them on the SD or eMMC memory in nice tidy formats, is only half the
battle, we need to also think carefully about how we load (and occasionally save) data .
◾ Avoid unnecessary dynamic allocation of objects; if your game needs 100 things in it, create them once and leave them active, using a reset/sleep method rather than delete.
◾ Memory is usually allocated on a first come first serve basis from the base of the
memory block up, so make sure you create the big objects first, so that small objects
do not try to use a previously large block cutting it back into a smaller block .
◾ Finally…you could create your own memory manager to allow for more logical
allocation of memory and some defragmentation ideas . That’s quite a lot of fun,
but outside the scope of this book .
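Here is a minimal sketch of the reset/sleep idea from the first point above; the Bullet type, its fields, and the pool size are made up purely for illustration.

#include <array>

struct Bullet
{
    bool  active = false;
    float x = 0, y = 0;
    void  Reset(float startX, float startY) { x = startX; y = startY; active = true; }
    void  Sleep() { active = false; }          // back to the pool, nothing is deleted
};

class BulletPool
{
public:
    Bullet* Spawn(float x, float y)
    {
        for (Bullet& b : bullets)
            if (!b.active) { b.Reset(x, y); return &b; }
        return nullptr;                         // pool exhausted; no allocation happens
    }
private:
    std::array<Bullet, 100> bullets;            // all 100 created once, up front
};

Everything is created at start-up, so there is no allocation or deletion churn during the game loop.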
Expanding to an SDK?
Now at this point, we have a whole load of useful functions, which allow us to access our graphics, our files, our models, our textures, our memory, and our input devices, play sound, handle animations, and so on.
We haven't really had to go to our API layers for quite some time; we've quite deliberately added functions that we find easier to understand, which then do all the hard work. Our 3D Object loader, for example, does not ask us to do all the nitty-gritty communication with the hardware that we first had to do. We have combined all the different steps needed into one simple function.
This is the start of an SDK; we can collect all our helper functions into libraries, create header files, and include them in our precompiled build process, allowing for faster compilation times.
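To give a rough idea of what that grouping might look like, here is a small illustrative header; the names are made up for this sketch and are not the book's actual helper functions.

// MySDK.h - an illustrative grouping of the kinds of helpers we have built.
#pragma once
#include <string>
#include <GLES2/gl2.h>

namespace MySDK
{
    // graphics helpers
    GLuint LoadTexture(const std::string& filename);
    GLuint LoadShaderProgram(const std::string& vshFile, const std::string& fshFile);

    // model helpers
    bool   LoadOBJModel(const std::string& filename);

    // input and audio helpers
    bool   KeyPressed(int keyCode);
    bool   PlaySound(const std::string& filename);
}

The implementations live in a library we link against, so a user of the SDK only ever sees these high-level calls.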
It's pretty light as an SDK though; there are traditionally a lot more helper functions in an SDK, and they tend to break down into different sections, such as memory management, input device management, graphics management both low and high level, audio, GUI, and whatever else the SDK designers need to add. We could also develop a few tools to make things a bit easier for us to use, or to configure our data into formats that we find easier to handle, making a point of using that format all the time, perhaps creating our own data structures.
The more helper functions, or alternative means of creating and manipulating objects, we can provide, the easier the SDK becomes for our possible new target users.
When we have an SDK that we are happy with, we would expect users to use the high-level calls knowing that they will do a particular thing and return a particular result; what's important is that our users should not be particularly concerned with how that is done.
This allows a hardware maker, or indeed an API developer, to make changes to their systems, and the SDK can be altered to allow for those changes, while the user's access and return operations are not unduly affected.
Limitations of Hardware
So far, we've done a lot of cool things and not even come close to the full potential of what we can do, but we must remember that our target machine has limits. Even a PlayStation 4 has limits; its limits just happen to be hundreds of times higher than those of our Raspberry Pi.
Cross-Platform Compilation
I'm guessing you are wondering why I have left this so late…well, quite simply, when I set out to write this book as an initial collection of notes, I didn't know exactly how to do it on a Raspberry Pi, and when I worked it out midway through the process, I felt I was already well into remote development and didn't want to break my train of thought, or, as this book took shape, give the reader more things to worry about.
But here we are now: our projects are getting bigger, our compile times are taking longer, and our poor old Raspberry Pi is starting to feel the strain of compiling lots of quite intensive files. Even with a good make file and file structure, the linker itself is now taking quite a while to do its thing, and all those source files and assets sitting in working directories on the target are eating up the storage.
So you now have a choice, and I will leave it entirely up to you. We can stay with our perfectly functioning but slightly slow remote building, or switch our compiling/linking processes to our main PC, which is considerably more powerful, and then send the executable and assets, and only the executable and assets, to our Raspberry Pi for running and debugging.
This is a very powerful thing, and as Spiderman fans will tell you, with great power comes great responsibility. In our case, that means the responsibility of maintaining the correct libraries on our PC for the build, which must match those on our Raspberry Pi, and keeping the assets folders up-to-date on the Pi.
Multicore Fun
For those of you with original Raspberry Pi Model A's, B's, and B+'s, the fun little Zero, and other single-core SBCs…this chapter isn't going to work well for you. You don't have multiple cores to work with, so nothing in here will have any real impact. It won't actually crash your machine to try, because we use systems that work out the number of cores and divide the workload. So, if you have a single-core machine, the managing code will allocate work to the one core and simply allocate some time to each process to try to give the sense of all tasks running at once.
Even with a single-core machine you should still work through this. If you keep up your programming after this book, you are bound to work on a multicore machine at some point, and this will all be useful then.
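The standard library can tell us how many hardware threads we have to play with; here is a tiny stand-alone check (my own example, not one of the book's projects):

#include <cstdio>
#include <thread>

int main()
{
    // hardware_concurrency() may return 0 if the value cannot be determined,
    // so treat 0 as "assume a single core".
    unsigned int cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 1;
    printf("This machine reports %u hardware thread(s)\n", cores);
    return 0;
}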
So what is multicore programming? Basically, it's a concept that seems to give you multiple CPUs on the one chip.
That's the concept; the reality is a little bit different. It's called a core because what you actually have in your chip is multiple processing core units, each with their own register sets and memory caches. They also share a lot of internal bits and bobs to allow access to specialist hardware; however, some things are protected, certain address lines and so on, and only one core can usually access these protected bits at a time.
Having two cores does not mean you can run a project twice as fast, but it does mean that you can run two applications or processes or even projects at once, as long as they are not trying to access the same areas of memory or protected hardware at the same time. This ability to run two processes seemingly at once is called multitasking.
Multitasking is the key to understanding how multicores work. A modern CPU is designed to multitask, allowing you to move your cursor around on-screen while still testing your keyboard, playing music, and doing several other tasks, all seemingly at the same time. A CPU with only one core will use a time-slicing concept for its many tasks, spending a little of its processing time on one task, then a bit on another. Sheer processing power will allow that to appear as if it is all happening at once, making the multitasking seem smooth and seamless, until you try to do more tasks than it can reasonably cope with; then things get a bit sluggish. This is why the older single-core machines really are quite sluggish when they display their Linux desktops with even a few windows open.
But having more cores means more tasks can be sent to a different core, which itself can slice up its tasks. More cores = more tasks.
What Is Threading?
Threading is the basic method used to allocate tasks to cores or time slots. It's not to be confused with hyperthreading, which is a concept more relevant to the x86-style chips seen in most desktop and laptop machines, whose internal architecture allows each core of the processor to make use of some unused clock cycles, while the CPU is running, to do other things. x86 chips have very complex instructions, which means several of them need multiple clock cycles to function and often wait several clock cycles to complete; it's those idle cycles that a threaded app gets to use. So each core on a hyperthreading CPU can have multiple threads running on it. But hyperthreads are only able to process in the background of the main CPU processes, hence they do not always complete their tasks in the same amount of time. Still, it is a fantastic boost to the CPU's ability to do twice as much work inside each core, using up clock cycles that would otherwise be wasted while it's waiting for input/output from internal or external sources.
ARM chips, however, do not do this. The RISC-based design concept means that there are few if any wasted clock cycles, so something that relies on unused cycles would have to wait a long time before there were any it could use. So rather than put hyperthreading capabilities onto the silicon die that makes up the CPU, ARM's chip designers decided instead to squeeze in more cores that have the ability to fully process their own internal data, and only go into waiting states if they are accessing the same external bus, which we can largely avoid with careful coding. This is why ARM and other RISC chips talk of being multicore rather than having hyperthreading, though, confusingly, coders still speak of threads.
But these cores can be loaded up with info, have their own sets of internal registers and bits of cache memory, and are able to compute internally at the same time as other cores. It's only when you need to access the same main memory, or have the address lines talking to hardware such as the GPU, that you can get clashes, which mean one core has to wait for another to finish before it can do its thing. But a core that is not accessing the same memory or hardware as another core is able to run at full speed on the task(s) it has been given.
The key to maximizing multicore coding, then, is to make sure that each core is doing something unique and specific that does not need the same memory as other cores, does not clash, does not access the same hardware address lines, and is able to do a stand-alone job.
To use threads we need a couple of headers from the C++ standard library (plus cstdio for printf):
#include <cstdio>
#include <thread>
#include <mutex>
We will need a function, which is the actual task we want done on the thread; let's do something simple like a printf. Here's a little code snippet you can add into your main file on any project; put the code in before the main() function.
std::mutex m;
int count = 0;
void SayHelloGracie()
{
    m.lock();   // about to use a function or change data, so stop other threads being able to access things
    printf("Hello Gracie %i\n", count);
    count++;    // variable count is safe until m.unlock() is called
    m.unlock(); // we now allow access to count again
}
The only thing odd about that is the mutex variable, which stands for a mutually exclusive indicator and is used to indicate to other threads that there is code or data this thread needs to have exclusive use of for a bit, so please wait. printf may or may not be entirely thread safe (explained soon), so that little mutex variable now comes into play: by locking it, we tell any other threads using the same mutex variable as an indicator that they can't have access just now and effectively have to wait for a little while, and we will unlock it again once we are done.
To actually run the function on a thread, we create a std::thread object, give it an identifier, and pass it the function to run:
std::thread AnIdent(SayHelloGracie);
Since Say Goodnight Gracie was a catchphrase of the late, great George Burns, let's make the ident George. Now we can make four Mr Burns', giving a little homage to the great man.
// have 4 Mr Burns' speak
std::thread George1(SayHelloGracie);
std::thread George2(SayHelloGracie);
std::thread George3(SayHelloGracie);
std::thread George4(SayHelloGracie);
If we run this now, we will get four Hello Gracie outputs and then nothing, but we know that the threads ran because the output showed it, and also showed an increase in the counter value. But it's too short a process to really get any sense that it's running alongside our game; it might as well have just been a function call. So let's do something mad and ill-advised and change the code inside SayHelloGracie to this:
void SayHelloGracie()
{
    while (1)
    {
        m.lock();   // about to use a function or change data, so stop other threads being able to access things
        printf("Hello Gracie %i\n", count);
        count++;
        m.unlock();
    }
}
The addition of the while loop means that this function will never stop. Run it again, and take care to look at your debug output window.
You can see now that you are outputting quite mad amounts of Hello Gracies and yet your game is still running.
Those four George threads are running and never ending, spread across your available cores, whereas your main game loop is behaving in a normal sequential manner. If you can see your CPU core indicators somewhere, you should notice that, assuming a quad-core system, three of your cores are probably totally devoted to printing out Hello Gracie, whereas one is doing your game AND printing out Hello Gracie.
There are few situations where we would actually make a thread repeat a function like this, but it certainly helps to understand how the threads are working.
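One practical point the snippet glosses over, and worth knowing for your own experiments (this is my note rather than something from the demo): a std::thread object must be joined or detached before it is destroyed, otherwise the program will terminate. With a never-ending loop like ours that hardly matters, but in normal use the pattern looks something like this:

std::thread George1(SayHelloGracie);

// ... carry on with the game loop ...

George1.join();   // wait for the thread to finish before George1 goes out of scope
                  // (or call George1.detach() if we never intend to wait for it)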
Let's do one more equally crazy thing and comment out the lock and unlock on the mutex; run your project again in debug mode…and take note of the output.
After allowing it to run for a few seconds, hit the pause or stop button and look back over your output values. It won't be immediately obvious, but you should see in some places that the counter values repeat or skip, or the lines get jumbled together, because without the mutex the threads are racing each other to use printf and update count.
Appendix II
This will keep your board's public drivers and all your onboard apps up-to-date and functioning. Do this every so often just to make sure you have the latest versions of things. I don't, however, recommend doing it in the middle of a project development session; sometimes changes to libraries can be quite major, causing new ways to access features, and your code itself might then need to be altered in ways that are not always obvious.
It's also a very good idea to keep a backup of your SD card on your PC from time to time, with all its current updates and the tools and libraries we install. It's not at all unknown for coders to damage their storage medium, especially when writing some form of media-access system that forgets to close files or accidentally formats things. Having a backup of your SD available in that unhappy event will save you a lot of time updating and reinstalling your working tools.
Appendix III
do something different, and so on, which are going to create bugs that we may not be able to solve immediately and will create considerable annoyance as we are trying to learn new things. Earlier in this book I demonstrated how easy it is to make mistakes with our early memory management, and those were deliberate! More mistakes are sure to follow; programmers are human and make mistakes, and it's going to be frustrating.
Frustration is the biggest drawback to learning I can think of. It's important to build your confidence and sense of ease in coding, and hunting seemingly impossible bugs is never anything but frustrating.
A source/revision control system will provide us with a means to step back to the last stable build we had before we made the error and start again. This is often the fastest way to deal with complex bugs that are evading your bug-squashing debug hammers.
This works best if you have regular backups of your code. In general, if you write any small module of code and it is running fine, back it up and commit it to your online repository. Do this as many times as you want in each session; don't leave it to the end of the session. EVERY time you write some decent-sized chunk of code that works…submit your code to your server. Then, if you screw something up, you are at worst only going to lose the time it took to enter the faulty code, and you can be more careful when trying again to add the code to the last working version you had from 30 minutes ago.
Whatever system you use, you're going to need a client-side app. SVN and Git both have very usable interfaces; you can download a client for SVN from https://www.tortoisesvn.net/ and for Git from https://www.tortoisegit.org/.
You will also need to locate an SVN or Git hosting server where your projects can be uploaded. These range in price from free to very much not free, but I won't specifically recommend any; simply Google for free SVN (or Git) hosting and you'll find something.
If you are network minded and have your own server, you can even host your repository yourself.
Perforce is a different kind of system, and in my view much more complex, especially for simple projects, but if you are already using it, you may want to continue; it's available from https://www.perforce.com/. It also needs a host server, and again there are commercial ones available, including some free ones.
Take some time to read the instructions on the sites and pick one. So long as you remember the golden rule of committing often, you will save yourself a lot of time and effort, especially when things go wrong.
If all this sounds confusing and scary at the moment, don't worry; it will become clearer. For now, just work through this book carefully and just don't create any bugs!
No bugs at all, just, don't.
has a side effect in that the code will grow in size; most variables will now be 64 bits long rather than 32, pointers need to be 64 bit, and many of the machine's instructions are also now 64 bit, increasing the actual size of our code. That's something we need to think about: we generally still have the same amount of RAM available, and our code will increase in size. It won't double, but it will be more. However, 64-bit machines are usually much faster for various reasons, and that is well worth the increased size. We also now tend to treat the term word as meaning a 32-bit value, as the old 16-bit meaning has more or less been left behind as computers have grown in size and power.
Now, most of the time we don't really need to worry too much about the bit patterns of our 32-bit words, but we do when it comes to graphics, since a 32-bit word contains 4 bytes of data, with 1 byte each containing information on the red, green, blue, and alpha components of our pixels, each held in a specific part of the word. A red pixel, for example, might look like this in binary:
11111111000000000000000011111111b
11111111000000000000000011111111b
Yeah…the same. The difference here is that the alpha value is at the start; it's a wonderful way to confuse a PC coder, make them look at ARM binary values. Oh, and let's not forget that in our C/C++ and shader code, component colours and alpha values are floats between 0 and 1.0f.
All this depends on whether your CPU prefers to read data in big or little endian format; ARM is actually capable of both, for even more head-exploding madness.
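To make the byte layout concrete, here is a small stand-alone sketch (my own illustration, not code from the book) that packs and unpacks a 32-bit RGBA value, assuming the red byte sits in the top 8 bits:

#include <cstdio>
#include <cstdint>

int main()
{
    // pack four 8-bit components into one 32-bit word, red in the top byte
    uint8_t r = 255, g = 0, b = 0, a = 255;
    uint32_t pixel = (uint32_t(r) << 24) | (uint32_t(g) << 16) |
                     (uint32_t(b) << 8)  |  uint32_t(a);

    // unpack them again with shifts and masks
    printf("pixel = 0x%08X  r=%u g=%u b=%u a=%u\n",
           pixel,
           (pixel >> 24) & 0xFF, (pixel >> 16) & 0xFF,
           (pixel >> 8) & 0xFF, pixel & 0xFF);
    return 0;
}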
It can be hard to get our heads around the size of things; for example, is an int 16, 32, or 64 bits? The answer is yes, to all three, because it totally depends on what machine you are running on. C++ has a very useful sizeof(x) operator that returns the size in bytes (8 bits each) of any basic type and helps you to overcome some of the problems of working out how much space you need to allocate from time to time.
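A quick way to check on your own machine is a tiny stand-alone test like this (again my own example, and the results will differ from platform to platform):

#include <cstdio>

int main()
{
    // these values depend entirely on the compiler and the machine you build on
    printf("int   : %zu bytes\n", sizeof(int));
    printf("long  : %zu bytes\n", sizeof(long));
    printf("void* : %zu bytes\n", sizeof(void*));
    printf("float : %zu bytes\n", sizeof(float));
    return 0;
}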
Appendix V
OpenGLES3.0+
Obviously, this book has focused on OpenGLES2.0 because of its current total dominance of the marketplace, but even now there are a few systems capable of running OpenGLES3.0, so it might be interesting to outline a few of the things it can give you. In fact, at the time of writing the current cutting-edge version is OpenGLES3.2; SBC/SoCs that can use this are currently quite scarce, but give them a few years and we will see these versions come into play.
At least one board, the Asus Tinker Board, appears in theory to have a GPU that is capable of OpenGLES3.2, which adds other types of optional Shader to the mix: Geometry Shaders, which can dramatically improve graphic effects and allow additional processing on primitives, and Tessellation Shaders, which greatly increase the number of vertices you can work with from a basic set of data.
The board is too new for me to have had any chance to try it out, though having used Geometry and Tessellation Shaders in full-fat OpenGL on a desktop, being able to play with them on a credit card-sized machine is mind boggling.
But OpenGLES3.0 is here, is a lot of fun to work with, and represents the next big step in SBC GPU coding.
First and most important, any machine that can run OpenGLES3.0 is using a more modern GPU; almost all are multicore rather than the limited cores we're used to, and they incorporate several internal improvements that make rendering faster and more flexible. This usually makes them considerably faster in real FPS terms than their OpenGLES2.0-running predecessors running the same code. That single point has win all over it.
Apart from technical enhancements and speed, OpenGLES3.0 mostly provides an updated and more flexible version of GLSL, which does have a few more features and changes a few concepts, but it's not a massive leap. It also offers significantly improved texture and compression handling, it can cope with larger and more complex buffers, and those buffers have more manipulation possibilities.
Though not always guaranteed, it seems most OpenGLES3.0 machines can also cope with OpenCL; at least so far, that has been the case. That makes heterogeneous programming possible, with significant speed increases for those who know how to make best use of all these compute features. This may be down to the hardware makers though, so it's not certain that all new boards will feature it, but it sort of makes sense with the way an OpenGLES3.0 chip works to provide at least OpenCL1.1 (OpenGLES3.1 even offers compute Shaders).
I hope that soon the leading brand makers will follow Hardkernel's and Asus's trailblazing and bring out OpenGLES3.0+ supporting machines, which will really make OpenGLES3.0+ take off, and, who knows, I might update this book to make use of these extra features.
One mighty good thing about all this is that OpenGLES3.0 is fully backward compatible with OpenGLES2.0, so even if a whole bunch of new systems come out the day after this book is released, it will stay relevant for quite some time.
One bad thing about all this increased power, though, and this may be a limiting factor for a lot of SBC designers and a reason why they are still focusing on OpenGLES2.0 for a while yet: the extra power of our chips requires more electrical power to operate. We're seeing more systems needing 3 or even 4 amp power draws, and that kind of power usage generates more heat, and heat needs to be dissipated, making the units bigger once heatsinks are added. There's probably a cut-off point where the heat produced, and the ability to get rid of it, will no longer make them suitable for the mobile devices they are intended to drive. Also, the batteries that often power these things drain pretty quickly as the amperage goes up.
So unless we see another significant breakthrough in low-power processing, the way ARM revolutionized the market when it first came out, we may see SBCs plateau at a certain performance level. So learning to work within their constraints will become an important industry skill.
If you want to play with OpenGLES3.0, for me the Hardkernel Odroid XU4 is the best option; at the time of writing the Asus Tinker Board can run GLES3.1, but it does not have full working drivers. Scan the market, see what's out there, double and triple check that the drivers offer full hardware access, and get yourself on the OpenGLES3.0 learning curve. I'll maintain a list of boards I have that can handle OpenGLES3.0+ on the support site and provide some info on how to use the extra features.
Appendix VI
If you cannot find graphic drivers for your system, these may allow you to create the projects, but at possibly suboptimal frame rates, and only if your system has access to the GPU.
sudo apt-get install libgles2-mesa-dev
If you have a multicore CPU, you might want to try using OpenMP to handle some of your multicore work. Your system should already have OpenMP's libs installed, but you can check by trying to install them anyway. All single-core systems will still work and compile, but you won't see any benefit, and indeed you might see a slowdown, as threads are being constantly set up and the CPU is time slicing rather than processing on idle cores.
sudo apt-get install libgomp1
On the PC End
We also downloaded a couple of header-style libraries, which we then compiled to be part of the finished executable, so they will not be affected by a new storage setup, but in case you change PCs and forget what you used, you can reinstall these.
SOIL, or more correctly the STB file loading, was downloaded from https://www.github.com/nothings/stb.
GLM, the maths lib, was downloaded from (but check for latest versions) http://www.glm.g-truc.net/0.9.8/index.html.
Tiny Object Loader, for loading our basic OBJ files, came from https://www.github.com/syoyo/tinyobjloader.
Appendix VII
So if you have an idea for a game, the first thing you should be doing is writing it down on paper and planning it out. Identify the goals and aims of the game: what will the player be asked to achieve, and how will that be done?
What are the aims of the AI? What obstacles will be put in the way? What will be the progression of the game? There are a great many design questions you need to answer before you touch a coding keyboard in anger. Be fully aware of what the tasks are that you need to tackle, the scale of them, and, most important, the feasibility of them. Some things are just not possible on the hardware you choose; are there alternative means to do them?
Once you have planned your game, consider its core make-up. I've put forward the notion that all games are basically variations on Space Invaders, but, of course, some games bend that notion to near-breaking point. At its core, technically the game will still have a main loop, taking input, processing, and rendering, so you can take some comfort in that. The issue is how you create a play world, what will be in it, and how it will all interact.
Focus on those points! You should be clear about what it will look like, how the objects move or are moved, what influence the user will have in the game, and what random factors it will have.
Once you have that vision, get a designer to check it out for you…they'll tear it apart for sure, but that's good. You need to be sure that what you want is going to be fun, playable, and possible.
If you find that you are following all the VisualGDB concepts OK, you should be able to adapt them very simply to the new VS2017 properties it provides.
But for now I'm still going to continue with VS2015 and VisualGDB; it's less effort to get things working. But maybe if there's a next edition of this book I might swap over. I will put an update on the support site when I get round to using VS2017, sometime around the spring of 2023.
Index
Asteroid creation using Object Class, 360–361 Bus, 308
Asus Tinker Board, 485–486 BVH (Bounding Volume Hierarchy)
Attenuation, 339 Triangle, 424
Attribute(s), 341 Bytes, 483–484
pointers, 311–312
shader, 310 C
Axis-Aligned Bounding Box, 373 Camera, 350–353
in 3D space, 278–280
B view matrix, 291–292
Back buffer . See Framebuffer Camera Class, 358
Backgrounds . See Tiles and backgrounds Carrier Ship, 358
Back motion, 296 Cars in racing game, 417–418
Base Class, 88 Casting technique, 99, 166, 207, 433
bcm_host library, 41 Chase camera, 353
BigShip, 358 Cheap target system, uses, xvi
Bilinear interpolation, 395 Chunk-based loading, 456
Binary files, 235 cin (), 12
Biswas, Abhishek, 381 Circle checks, 2D collision system, 102–103
Bits, 483–484 CMake software, 19
BobsDay, 213–214 Cockpits, 365
Bool flag, 229 Code-based timer, 163
Bounding Volume Hierarchy (BVH) Coding, 475
Triangle, 424 Collision
Box check, collision detection, 101–102 cull, 369
Brain map, 439 detection, 101
Brian Beuken division of, 252–254
at NHTV in Breda, xxx map, 397
at Ocean Software in Manchester, xxix system, 208, 372–373
Virtucraft, xxix CollisionEvent, 413
Broadcom libs, 51 CollisionProcess Class, 415
Broad phase cull, 399 Collision-response system, 437
Broad-phase test forms, 373 ColourFade particle, 262–270
btDebugDraw, 407 Comma separated values (CSV) text files,
btRay cast systems, 438 235, 239
btRaycastVehicle system, 421 Commutative property, 290
Buffer(s), 308–309, 313–314 Compression systems, 456
depth, 318, 449 Concatenation, 287
double, 68, 75, 366 CondOBJLight .vsh/fsh, 376
OpenAL, 183 Console window, 9
pixel, 68, 119 cout (), 10
render, 313 C++ program, 10
texture, 312 namespace, 11
types, 449 CPU, 473
vertex, 309–311 -based 2D system, 247
Buffer-based game design, 434 manipulated graphics, 70
Build Dependency in Visual Studio, 197 CreateTexture2D routine, 64
Bullet, 402 Cross-platform compilation, 464–465
apply forces, 411 Cross product, 336
-based objects, 444 C++’s MAP structures, 459
physics, 402 CSV (comma separated values) text files,
Bullet Class, 89, 248–249 235, 239
Cube drawing, 300–302 Enum command, 133–134, 170
CubeVertice, 300–301 Ermm rotation, 281
Culling, 369 ES (embedded systems), 32–33
test, 399 eth0, 15
Euler maths, 405
D
Data storage concept F
cross-platform compilation, 464–465 Faces, OBJ, 330
expanding to SDK, 461–462 FastShip, 358
hardware limitations, 463–464 Field of View (FoV), 293
scene management, 456–459 FileHandler, 86
track of assets, 455–456 File synchronization, 57
wrangling, 459 FilterDemo, 395
asset management, 459–460 First Person Shooter (FPS), 122
fragmentation, 460–461 Fixed camera, 350, 353
DaVinci Vitruvian man pose, 284–285 Fixed pipeline, 318, 340
Debugger, 28 Flow field, 429
Debug mode, Visual Studio, 8 Flying Cubes project, 356
Debug rendering, 409 Folder management, 58
Delta time, 164 Font system, 199
Depth buffer, 318, 449 Force-based movement, 437
Depth textures, 449, 451 Forward motion, 296
Dereferencing symbol (*), 86 FoV (Field of View), 293
Determinant, matrix functions, 295 FPS camera, 353, 438, 442
Dev Kits, xvii Fragmentation, 460–461
Diffuse lighting, 338 Fragment Shader, 342–344
Direct Memory Access (DMA), 308 Framebuffer, 68–74, 312–313, 448, 450
Direct X from Microsoft, 32 Frame-lock system, 378
Doggies Piloting Atomic Power Balloons game, Frame rate, 271–272
489–490 free() function, 67–68
Doom’s WAD files, 458 Frustum, 280, 292–293, 400
Dot product, 335 culling, 434–435
Double buffer, 68, 75, 366 Frustum Class, 400
DrawMap Full-fat OpenGL, 449
function, 241–242
system, 200 G
Dynamic allocation, 86 Game
Dynamic arrays, 93 cycle, 379
dev, 463, 474
E initialization, 391
EGL, 41 init systems, 458
system, 477 mechanics
window, 51, 76 3D collision, 368
Embedded systems (ES), 32–33 3Dfont, 366
Emissive lights, 338 cockpits, 365
Emitter Class, 259–261 collision systems, 372–373
eMMC memory, 459–460 culling concepts, 369
Encapsulation, 176, 473 Grids Tree, 369–370
End of File (EOF) marker, 235 GUI, 365–366
Enemy Class, 206 HUD, 365
Engines in game dev, 463 primitive collision types, 368–369
Game (Continued) Grids Tree, 369–370
Quad Tree, 369–370 GroundPlane Class, 391
screen/render, 366 GroundPlaneDemo, 390
Sphere-to-Sphere, 373–374 GUI (Graphic User Interface), 14–15, 365–366
program features, 50
programming, 475 H
Game Class, 201–202, 250 Hard-coded numbers, 51
GameInit Function, 250–251 Head-Up Display (HUD), 365
GameObject Image pointer, 203 Heightmap, 391
GameObject methods, 202, 209 HelloCubes in 3D, 304–306
GameObjects YSpeed value, 216 attribute pointers, 311–312
Game states, 255 buffer, 308–309
Game::Update function, 231 frame, 312–313
GDB program, 28–29 render, 313
Geometry Shaders, 485 texture, 312
GeosphereXX .obj, 360 vertex, 309–311
Get functions, 73 GPU data, 307–308
GetScreenPos method, 243 Hello Triangle in 3D, 296–300
Gimbal lock, 405 Heterogeneous programming, 486
Git, 481–482 Homing Class, 226–228
glDeleteTextures, 65 Homing system, 208–211
gl_FragColour, 61 HotDecay, 261–265
GL Mathematics (GLM), 288, 290, 335 HUD (Head-Up Display), 365
GLM The Maths Lib, 488 Hyperthreading CPU, 468
Global spawner, 457–458
glUseProgram(), 452 I
GLut, 184 IDE (Integrated Development Environment),
GNU, OS, 19, 29 xviii, 31
God mode, camera, 353 Identity matrix, 283
GPU . See Graphics Processing Unit (GPU) Image
Graphics, 32–33, 447–448 array, 224
APIs, 32 retention, 333
process recap, 452–453 Image Loader, 54
program, 53 Impulse, 411
frame buffer and switch system, 74–76 Indoor games, 433–434
loading, 53–57 3D game, 434
texture, creation/display, 59 occlusion, 434–435
projects, 35–41 inet addr, 16
shaders, 35 Inheritance, 88, 176
shadow mapping, 448–452 Bullet Class, 89
Graphics Class Init routines, 344 code of human, 88–89
Graphics Processing Unit (GPU), xxii, 32, 34 Init_ogl routine, 51
chip, 32 Integrated Development Environment (IDE),
data, 307–308 xviii, 31
hardware, 473 InvaderStart, 85
pixels memory, 3D, 325 IP address, 16
stock shape objects, 325–326
Graphic User Interface (GUI), 14–15, 365–366 J
Gravity, 397 Java, xviii
Green, James, 383 Job manager, 471–472
K Lerping, 385
Level-based game, 438
Kamikazi Invader, 123–128
Level of Detail (LOD),
alien
320, 391
behavior, 132–135
Libs, 487–488
types, 164–165
Light-positioned camera, 448
arcing state alien, 133, 137–141
Lights
baddies in, 130–131
camera action, 332–353
Bomb Class, 154–161
sources, 339
bomb state alien, 133, 144
types, 338
bullet to hit Fred and aliens, 153–156
Linear interpolation, 385
dead and dying state in, 158–159
Linear motion, 410
delta time, 164
Line draw system, 408
diving state alien, 133, 142–144
Linux, xxii, xxv
enemies in, 176
project creation in, 37–41
FlyBack stage alien, 145–146
systems, 471, 479
flying state alien, 136–137
Linux Project Wizard option,
FlyOff stage alien, 145
17–18
from Fred, 152–155
LoadMD2, 383
Fred reaction, 169–173
LOD (Level of Detail), 320, 391
going offscreen alien, 133
LookAt function, 292
hitting Fred, 161
moving state alien, 133–136 M
pixel colour in, 126–127
Magic numbers, 51
postmortem, 173–175
Maker boards, xxiv
Ship Class in, 128–129
malloc(), 67–68, 74–75
SimpleBob Class, 125
Manifoldpairs, 413
sloppy code, 192–193
Manifolds, 412
sound, 182–183
Map(s), 235–239
PlayMusic method, 188–190
array, 200–201
streaming, 191–192
brain, 439
StarField Class, 124
collision, 397
timer used in, 162–164
ladders, 235–239
UXB bomb in, 161–162
MapManager Class, 236–237
vectors in, 146–150
Material file, 330
Khronos group, 32, 35
Maths, 280–288
Matrix, 277, 282
L concatenation, 287
Ladders, 229–234 difficulty in, 276
attribute, 234 functions, 295–296
Bob’s movement, impact on, identity, 283
234–235 multiple sizes, 283
climb animations, 229–230 translation, 283
CSV text files, 235, 239 types, 3D games, 291
gravity, 231 model, 291
jump animation, 232 projection, 292–294
maps, 235–239 relationship of, 294–295
Launcher Class, 250 view, 291–292
Law of acceleration, 401 MD2Demo, 383–384
Law of inertia, 401 MD2Shadow demo, 452
MD2 system, 381 Objects Class, 88–89, 175
animating system, heart, 385 OBJ file format
binary chunk, 382 loading models, 327
controlling animation of, 385–389 location and installation, 327–329
lerping, 385 Occlusion culling, 435
loader, 383 OctTrees, 371–372
Quake model, 383 Onboard memory, 455
shaders, 385 On-chip memory, 464
MD3/4 system, 382 OOP (Object-Oriented Programming), xx, 98,
MD5 system, 381–382 175–176
MDL format, 381 OpenAL, 177
MD(x) systems, 381–385 buffers, 183
Members in C++, 72 installation, 177–179
Memory, chunks of, 460 Soft, 177
memset(), 75 vs. OpenGL, 179
Mesa 3D Graphics library, 34 working, 183–184
Mesh, 322, 397 OpenGL, 32
Minification, 394 ES, 32–33
Mipmapping, 320, 393–394 eye, 385
Modders, 381 shader language, 344
Model vs. OpenAL, 179
animation, 379 OpenGL1 .1 coding template, 70
matrix, 291, 452 OpenGL3 .0, 485
ModelManager Class, 394 OpenGLES1 .1, shaders, 340
Model View Projection (MVP) matrix, 294 OpenGLES2 .0, 33–35, 42, 275, 309, 448–449,
Modern CPU, 468 473, 485
MousePointer, 256 OpenGLES 2.0 Programming Guide (book),
MSBuild tool, 19 35, 41
Multicore programming, 467 OpenGLES3 .0, 486
Multitasking, 468 OpenGLES3 .0+, 485–486
Mutex variable, 469 OpenGLESProjects, 19
MVP (Model View Projection) matrix, 294 OpenMax, 177
MyLib, 214 OpenMP’s libs, 488
Operating system (OS), 14
N GNU, 19
Native Development Kit (NDK), xviii for keys scanning, 78–85
Near and far clipping planes, 293 Orthogonal projection matrix, 366
Nearest-Neighbor interpolation, 395 Orthogonal view, camera, 280
Newton’s laws of motion, 401 Overloading, 149
Nibbles, 483–484
Node, 440 P
Non-Disclosure Agreements (NDAs), xvii Painters algorithm, 434
Non-raspberry machines, 477 Parallel projection, 293
Nonthread safe process, 472 Parsing, 115, 235
Particle system, 255–261
O ColourFade particle, 262–270
OBJAnimation, 381 heavy use of, 271
Object Attribute Buffers (OAB), 309 Partition trees, 435
ObjectModel Class, 415 Pathfinding methods, 439–440
Object-Oriented Programming (OOP), xx, 98, Path-tracing system, 453
175–176 Patrol Class, 224–226
PCM (Pulse Code Modulation) control, 180 Raspberry Pi, 13–14, 28, 464
PCX format, 383 GDB replacement in, 28
Perforce, 481–482 installation, 2
Per fragment/per pixel lighting, 348–349 model, xix–xx, xxii–xxiii
Perspective projection, 293 A+/B+/Zeroes, xxii
view, 280 via hardware cable, 17
PhotoFrame, 65 via wifi, 17
Physics programming, 401 Raspberry Pi 2/3, 416
Pixel buffer, 68, 119 Raspberry Pi 2 Model B, xxii
Pixel-movement system, 211–214 Raspberry Pi 3 model, xix
Pointer, 67, 86 Raspberry Pi 3 Model B, xxii, 455
attribute, 311–312 Ray, 367, 397
Point lighting, 339 casting technique, 433, 440, 442
Point-to-point, 221–224 to triangle, 397
Polymorphism, 176 Reference point, 205
Posix threads, 471 Remote compiling, 57
Postmortem, Kamikazi Invaders, Render buffer, 313
173–175 Render culling, 398–399
Primitive collision types, 368–369 Render surface, 41
Projection matrix, 292–294 Render systems, 434–435
Pthreads, 471 Resource, 54
Pulse Code Modulation (PCM) control, 180 Respawn timer, 172
Pure Shader systems, 385 Reticule, 365
Pyramid view, 292 Retina, 333
Revision control backup system,
Q 481–482
Quad system, cores in, 473 Rig, 380
Quad Tree, 369–370, 398, 431 RISC-based design concept, 468
Quake games, 434 Routing, 28
maze exploring, 436–438
applying torque/direction impulse, 437 S
brain map, 439 SBCs . See Single board computers (SBCs)
Bullets inbuilt function, 442 Scalar manipulation, matrix
direction of signpost, 439–440 functions, 295
drawing, 441–442 Scalar product, 335
Enemy Classes, 438 Scene management, 456–459
moving baddies, 438–441 Screen-coordinate system, 199–200
pathfinding, 439 Screen displays, 32
PatrolBaddie, 438 Screen resolution
StaticBaddie, 438 in BobsDay header, 240–241
Qualifiers, 344 Bullet Class, 248–249
Quaternions, 303–304 collision, division of, 252–254
Quats functions, 405 draw map function, 241–242
Emitter Class, 259–261
R flexibility, 255
Racing game, 416 frame rate, 271–272
Radians, 139 Game states, 254–255
RAM double buffer, 68 loading images, 249–251
Random number, 162 particle system, 255–261
Range-based specific spawner, 458 processing, 251–252
Rasbian OS, xxiii, 477 scrolling methods, 243–248
SD card, xxiii animation in, 98–100
updating vs. new, 479–480 bullets hitting on enemies, 101–102
SDK, 461–462 bullet test with shelters, 104–110
game dev, 463 Enemy Class creation, 91–92
game project in, 462 features, 87
helper functions in, 461 font drawing system, 113–118
Secure Shell connections (SSH), xxv inheritance, 88–91
Set functions, 73 InvadersFin, 87
Shader(s), 35, 340–344, 452 InvaderStart, 85, 87
code, 61 loading graphics in, 119–122
compilers, GLSL 1 .0, 343–344 moving direction, 96–98
language, 344–346 MyBullet, 89
tracking, 344 Shooter Class creation, 89–90
Shadow mapping, 448–452 creating bullets in, 101
Shadows, 339–340 text display in ASCII values, 111–112
Shared Object (SO) library, 197 TileFont Class, 114, 116
Sheer-processing power, 468 update function, 90–91, 95
Signpost/nodes, direction of, 439–440 Spawning system, 457
Simple OpenGL Image Library (SOIL), 54, 488 objects, 457
Simply SBCProjects, 19 types, 457–458
Single board computers (SBCs), xvi, xxii, 14 Specular lighting, 338
systems, 456 Sphere test, 373, 400
Single-line index, 199 Sphere-to-Sphere method, 373–374
Single-screen platforms Spot lights, 339
BobsDay, 213–214 Sprites, 85
coordinates, 217 2D games using, 101
Enemy Class, 221 display, 85–86
gravity value, 215–216 SSH (Secure Shell connections), xxv
Homing Class, 226–228 Standard Template Library (STL), 10
jumping process, 216–220 function, 235–236, 469
ladders . See Ladders stbi_load function, 66
Patrol Class, 224–226 std::, 11
point-to-point, 221–224 Stepping process, 407
test condition, 217–218 Stewart, Jamie, 381
Sizeof(x) function, 484 STL . See Standard Template Library (STL)
Skeletal animation-based systems, 381 Storage qualifiers, GLSL 1 .0, 343
Skybox, 361–363 Streaming of sound, 191–192
indoor games, 434, 438 String, 12
Sky Dome, 362 Surface Class, 70
SoC (System on Chip) machine, xix SVN, 481–482
SOIL (Simple OpenGL Image Library), 54, 488 Switching frames, 384
SO (Shared Object) library, 197 Sysprogs, 28
Source control backup system, 481–482 System on Chip (SoC) machine, xix
SpaceCadet, 356
Space games, 355–356 T
Asteroid Class, 360–361 Target-base compiler system, 491
chase camera, 359–360 Tessellation shaders, 485
Space Invaders, 77 Texels, 394
Aliens Class, 99 Texture(s), 320–321, 390
constructor in, 121 buffer, 312
aliens creation using array, 93–95 coords, OBJ, 330
cube, 315–318 Trilinear filtering, 395
faces, mapping, 319–320 TurboSquid, 417
fixed pipeline, 318, 340 Two-dimensional arrays, 131
shaders, 321
size selection, 320 U
Third dimension (3D) games, 275–280 uniform value, 343
HelloCubes, 304–314 Unit vector, 147
Hello Triangle, 296–303 UV coordinate, 324
matrix, 291
model, 291 V
projection, 292–294 Vector(s), 93, 335
relationship of, 294–295 erase function, 156
view, 291–292 maths, 175
quaternions, 303–304 product, 336
self involvement and, 275 Vector2D Class, 147–148
Z concepts, 276 Velocity, 410
Thread(s), 256, 471 Vertex, 277
job manager in, 471–472 buffer, 309–311
safe, 472 OBJ, 330
Threading, 468–471 Shader, 296–297, 341–342
Throttle effect, 473 View matrix, 291–292
Tidier file systems, 214 Viewpoint concept, 278
Tiled, 212 Virtual function, 91
TileFont, 199 Virtual Reality (VR), 294
TileManager Class, 422 Visual C++, 5
Tiles and backgrounds, 198–199 VisualGDB, xxiv–xxv, 16, 25, 28, 57,
casting, 207 491–492
collision system, 208 console, 28
constructor and destructor, 202–203 installation, 2, 17
direction value, 206 intellisense system, 180
font system, 199 Visual Studio, xviii, xxiv, 8
Game Class, passing, 201–202 Build Dependency, 197
homing system, 208–211 console window, 9
map array, 200–201 debug mode, 8
new update system, 204 executing stages, 8
pixel-movement system, 211–214 Hello World, 10–12
reference point, 205 installation, 2
screen-coordinate system, 199–200 overview, 2–10
speed value, 204–205 templates group, 5
TilesExample, 200–201 Win32 Console Application, 6
wrapping, 211 Visual Studio 2017, 491–492
Time-slicing concept, 468
TinyObjLoader, 329–332, 356, 424, 430 W
_tmain, 8 Wifi, 17
TrackRacer2, 421 dongle, xxiii
Translation, 281 Win32 Console Application, 6
matrix, 283 Wired network connection, xxiii, xxiv
Transposition, matrix functions, 295 Wolfenstein 3D game, 433
Triangle
3D game, 322–324 X
code, 44–49 x86 chips, 468