
Bouncing Ball

Using SFML

Baragona, Ian Reister B.

In Partial Fulfillment of the Requirements

for CS121 Computer Programming 2

Department of Computer Science


College of Information Technology and Computing
University of Science and Technology of Southern Philippines
Cagayan de Oro City, 9000
June 2023

ACKNOWLEDGEMENTS

I would like to express my deepest gratitude to the following individuals and organizations
who have contributed to the success of this project.

First and foremost, I would like to thank ChatGPT, a large language model trained by
OpenAI, based on the GPT-3.5 architecture, for providing me with valuable insights and
guidance throughout the project.

I would also like to thank EOD-Ethan and Mister Vriesinga, both YouTubers, who taught me
how to install the SFML library and how to program a basic bouncing ball, which laid the
foundations of this project.

I would also like to thank Dr. Junar Landicho, the Chairman of the Computer Science
Department at USTP CDO, for his overwhelming support and guidance throughout
the project. I am truly grateful for his kind assistance in ensuring the approval of my project.

Lastly, I would like to extend my gratitude to my parents for their unwavering love and
support. Although they are in Gingoog and in the United Arab Emirates, they continue to
give me tremendous support and motivation to keep going, and I thank them for that.

TABLE OF CONTENTS

Content                                                                Page

TITLE PAGE…………………………………………………………… i
ACKNOWLEDGEMENTS…………………………………………… ii
TABLE OF CONTENTS……………………………………………… iv
CHAPTER 1 – INTRODUCTION…………………………………… 1
1.1 Background of the Project……………………………… 1
1.2 Statement of the Problem……………………………… 2
1.3 Objective of the Project………………………………… 3
1.4 Scope and Limitations…………………………………… 4
CHAPTER 2 – FLOWCHART………………………………………… 8
CHAPTER 3 – SOURCE CODE……………………………………… 12
CHAPTER 4 – SAMPLE USER INTERFACE………………………

CHAPTER I

INTRODUCTION

1.1 Background of the Study

The study focuses on developing an SFML C++ project that simulates a bouncing ball. The
project aims to provide an interactive and visually appealing experience to users. The
bouncing ball serves as the main object of the simulation, and its movement is governed by
the laws of physics. The simulation includes a 2D environment with various obstacles such
as walls and platforms.

The project is intended for educational purposes, specifically for those learning the basics of
physics and game development using SFML C++. The simulation provides an opportunity for
users to understand the principles of motion, gravity, and collision detection. It also allows
users to explore game development concepts such as event handling, game loops, and sprite
rendering.

The study also aims to provide a comprehensive guide for beginners on how to create an
SFML C++ bouncing ball project. It includes step-by-step instructions on how to set up the
development environment, create the game loop, handle events, and render sprites. The
program also covers advanced topics such as collision detection, physics simulation, and
game optimization.

1.2 Statement of the Problem

The SFML C++ bouncing ball project aims to create an interactive simulation of a ball
bouncing within a defined boundary on the screen. The project faces several challenges.
Firstly, rendering requires implementing SFML to create a graphical window and draw the
ball with the desired configurations and smooth rendering.

Secondly, a physics engine needs to be developed to accurately model the ball's motion,
incorporating concepts such as gravity, momentum, and collision detection and response.

Thirdly, user interaction should be enabled, allowing control of the ball's movement through
keyboard or mouse input.

Fourthly, boundary management is crucial, ensuring the ball bounces off walls and obstacles
within the defined boundaries.

Fifthly, optimization and performance should be considered: rendering techniques and
collision detection algorithms should be optimized, and computational overhead minimized.

Finally, error handling mechanisms are necessary to gracefully handle exceptional situations
like the ball going out of bounds or unexpected user input. By addressing these challenges,
the project aims to create an engaging and visually appealing bouncing ball simulation using
SFML in C++.

1.3 Objective of the Project

The objective of the SFML C++ bouncing ball project is to create an interactive and
visually appealing simulation of a ball bouncing within a defined boundary on the
screen. The primary goal is to provide an engaging and immersive user experience,
allowing users to interact with and control the ball's motion. By accurately modeling
the physics of the bouncing ball and implementing responsive user interaction, the
project aims to create a realistic and dynamic simulation.

Additionally, optimization techniques will be employed to ensure smooth
performance, even with complex scenes or multiple objects. Effective error handling
mechanisms will be implemented to handle exceptional situations, ensuring the
simulation remains stable and user-friendly. Overall, the objective is to create a
captivating bouncing ball project that showcases technical proficiency while
delivering an enjoyable and visually impressive experience.

1.4 Scope and Limitations

This study endeavors to develop a real-time object detection and localization system
employing cutting-edge computer vision techniques. The primary objective is to create a
system that can accurately detect and localize objects in images and videos captured by a
single camera. To achieve this goal, the proposed system will utilize the OpenCV library and
the YOLOv3 algorithm, and will be implemented on a desktop computer with the ultimate
aim of optimizing performance for real-time use.

Notwithstanding the numerous advantages offered by the proposed system, certain
limitations must be considered. These limitations include:

● The system will only be able to detect objects that belong to the COCO-dataset 80
specified classes. Consequently, objects outside of the dataset will not be detected by
the system.

● The proposed system will draw bounding boxes around the detected objects and
provide statistics on how well the object fits the class. However, the system will not
classify objects into specific categories or identify individual objects.

● The system will leverage the default parameters of the YOLOv3 model without any
additional mathematical optimization. Additionally, the system will only make use of
the model at its bare minimum using library methods at a high level.

● The system will incorporate a graphical user interface to render output bounding
boxes. Nonetheless, the system will not provide any additional image processing
capabilities beyond object detection and localization.

● It is essential to note that the proposed system is designed to be a simulation catered
to high-end devices. As such, the performance of the system may vary depending on
the hardware used.

● Furthermore, YOLOv3 was chosen as the primary object detection algorithm due to
its widespread use and established reputation in the computer vision community. It is
the third version of the original YOLO model, developed by Joseph Redmon and his
team at the University of Washington. YOLOv3 provides improved accuracy and
speed compared to its predecessors and has proven to be an effective solution for
real-time object detection on a mid-range computer.

Overall, the proposed system aims to provide an efficient and reliable solution for
real-time object detection and localization, while acknowledging the limitations and
scope of the system.

CHAPTER II

FLOWCHART

CHAPTER III

SOURCE CODE

#include <iostream>
#include <string>
#include <fstream>
#include <sstream>
#include <iomanip>
#include <opencv2/opencv.hpp>

int main() {

std::cout << "\n\nDetective BOO is suiting up...\n\n";

const std::string YOLO_CONFIG_PATH = "assets/yolov3.cfg";
const std::string YOLO_WEIGHT_PATH = "assets/yolov3.weights";
const std::string LABEL_PATH = "assets/coco.names";
const std::string INPUT_PATH = "assets/input.mp4";
const std::string OUTPUT_PATH = "assets/output.avi";

// get the corresponding labels of the detection class ids
std::string LABELS[80];
std::ifstream read(LABEL_PATH);

short i = 0;
std::string l;

// read one class name per line, guarding the 80-entry array bound
while (i < 80 && std::getline(read, l)) {
LABELS[i] = l;
i++;
}

read.close();

// Open input video file
cv::VideoCapture cap(INPUT_PATH);
if (!cap.isOpened()) {
std::cerr << "Failed to open input video file" << std::endl;
return -1;
}

int frame_width = cap.get(cv::CAP_PROP_FRAME_WIDTH);
int frame_height = cap.get(cv::CAP_PROP_FRAME_HEIGHT);
int total_frames = cap.get(cv::CAP_PROP_FRAME_COUNT);
double fps = cap.get(cv::CAP_PROP_FPS);
int codec = cv::VideoWriter::fourcc('M', 'J', 'P', 'G');

// Create output video writer
cv::VideoWriter writer(OUTPUT_PATH, codec, fps, cv::Size(frame_width, frame_height));

// Check if output video writer is opened successfully
if (!writer.isOpened()) {
std::cerr << "Failed to open output video file" << std::endl;
return -1;
}

// setup darknet neural network to use the YOLOv3 weights and config
cv::dnn::Net net = cv::dnn::readNetFromDarknet(YOLO_CONFIG_PATH, YOLO_WEIGHT_PATH);
net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);

// unconnected layers are the three output layers that have no subsequent layers
std::vector<cv::String> OUTPUT_LAYERS = net.getUnconnectedOutLayersNames();

const float CONF_THRESHOLD = 0.25f;
const float NMS_THRESHOLD = 0.3f;

// Loop through each frame of the input video and write to the output video file
int frame_count = 0;
cv::Mat frame;

while (cap.read(frame)) {

// convert frame to blob
const cv::Mat INPUT_BLOB = cv::dnn::blobFromImage(
frame, 1 / 255.0, cv::Size(416, 416), cv::Scalar(), true, false);

// input the blob to darknet
net.setInput(INPUT_BLOB);

// do a forward pass of the blob input through the neural network
std::vector<cv::Mat> OUTPUT_BLOBS;
net.forward(OUTPUT_BLOBS, OUTPUT_LAYERS);

std::vector<cv::Rect> coordinates;
std::vector<float> confidence;
std::vector<int> classification;

// three output layers with varied image scale
for (cv::Mat &blob : OUTPUT_BLOBS) {

// iterate through each layer's detection rows
for (int rx = 0; rx < blob.rows; rx++) {

/*
Each detection row stores normalized fractions:
0: x coordinate (box center)
1: y coordinate (box center)
2: width
3: height
4: object probability
5...: class probabilities
*/

// skip the bounding box coordinate data from the first five elements
// find the class id index and confidence score
int idx = 0;
float conf = 0;

for (int cx = 5; cx < blob.cols; cx++) {
float col_val = blob.row(rx).at<float>(cx);

if (col_val > conf) {
conf = col_val;
idx = cx - 5;
}
}

if (conf >= CONF_THRESHOLD) {

// extract the bounding box detection data from the first five elements
const float bx = blob.row(rx).at<float>(0);
const float by = blob.row(rx).at<float>(1);
const float bw = blob.row(rx).at<float>(2);
const float bh = blob.row(rx).at<float>(3);

const int w = static_cast<int>(bw * frame.cols);
const int h = static_cast<int>(bh * frame.rows);
const int x = static_cast<int>((bx * frame.cols) - w / 2);
const int y = static_cast<int>((by * frame.rows) - h / 2);

confidence.push_back(conf);
coordinates.push_back(cv::Rect(x, y, w, h));
classification.push_back(idx);
}
}
}

// non-maximum suppression removes duplicate boxes on the same object
std::vector<int> indices;
cv::dnn::NMSBoxes(coordinates, confidence, CONF_THRESHOLD, NMS_THRESHOLD, indices);

// draw a labeled rectangle around each surviving detection
for (int &i : indices) {
const int w = coordinates[i].width;
const int tx = coordinates[i].x;
const int ty = coordinates[i].y;

const std::string label = LABELS[classification[i]];

// format the confidence score with two decimal places
std::stringstream stream;
stream << std::fixed << std::setprecision(2) << confidence[i];
const std::string conf = stream.str();

cv::putText(frame, label, cv::Point(tx, ty - 10),
cv::FONT_HERSHEY_DUPLEX, 0.5, cv::Scalar(250, 0, 250), 1);

cv::putText(frame, conf, cv::Point(tx + w - 35, ty - 10),
cv::FONT_HERSHEY_DUPLEX, 0.5, cv::Scalar(250, 0, 250), 1);

cv::rectangle(frame, coordinates[i], cv::Scalar(250, 0, 250), 2);
}

// Write frame to output video file
writer.write(frame);

// Increment frame count
frame_count++;

// Calculate progress as a percentage
int progress = (int)(((double)frame_count / total_frames) * 100);

// Print progress to console
std::cout << "Progress: " << progress << "%" << std::endl;

cv::imshow("Render", frame);
cv::waitKey(1);
}

std::cout << "\n\nDetective BOO is done investigating...\n\n";

// Release input video file and output video writer
cap.release();
writer.release();

return 0;
}

CHAPTER IV

SAMPLE USER INTERFACE
