Dissertation Proposal
1. Aims and objectives.
1.1. Aims:
1. Create Engaging Visual Experiences: Develop a multi-camera bullet time rig that captures
dynamic and immersive frozen moments, providing users with visually stunning and engaging
experiences.
4. User-Friendly Operation: Develop a user-friendly interface to control and monitor the system,
ensuring accessibility for both enthusiasts and professionals.
1.2. Objectives:
1. Hardware Security: Implement secure boot mechanisms and hardware-based encryption to
safeguard the Raspberry Pi boards from unauthorized access and tampering.
2. Network Security: Employ secure communication protocols and encryption techniques to
protect data transmitted between the cameras and the central processing unit, preventing
eavesdropping or data manipulation.
3. Access Control: Develop robust access control mechanisms to restrict unauthorized access to
the system's user interface and configurations.
4. Data Encryption: Apply end-to-end encryption to the captured images during storage and
transmission to prevent unauthorized access to sensitive visual data.
5. Hardware Setup: Design and build a robust hardware setup using Raspberry Pi boards and
multiple cameras, ensuring stability and reliability during operation.
6. Synchronization Mechanism: Develop a precise synchronization mechanism to ensure
simultaneous image capture across all cameras, minimizing timing discrepancies.
7. Image Processing Algorithms: Implement efficient image processing algorithms, such as
stitching, to seamlessly combine captured frames and create the desired bullet time effect.
8. User Interface Development: Create an intuitive user interface for controlling and monitoring
the entire system, making it accessible for users with varying technical expertise.
9. Testing and Optimization: Rigorously test the system under different shooting scenarios,
identify potential issues, and optimize the performance for reliable and consistent results.
10. Documentation: Provide comprehensive documentation detailing the project architecture,
hardware setup, software implementation, and user instructions for seamless replication and
understanding.
11. Demonstration: Showcase the capabilities of the bullet time rig through demonstration
videos, highlighting its effectiveness in capturing captivating moments from multiple
perspectives.
12. Future Enhancements: Explore possibilities for future enhancements, such as expanding the
system with more cameras, integrating additional sensors, or implementing wireless
communication for increased flexibility.
13. Budget Management: Allocate and manage the budget effectively, ensuring that the project
remains cost-effective while meeting its objectives.
14. Risk Mitigation: Identify potential risks, including hardware limitations, synchronization
challenges, and software bugs. Develop strategies to mitigate these risks and ensure the
successful completion of the project.
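Objective 7's frame-assembly step can be sketched in miniature: once each camera has contributed one frame of the frozen instant, the bullet time sweep is simply those frames ordered by each camera's mounting angle on the rig. The camera identifiers, angles, and frame payloads below are illustrative assumptions, not the actual rig layout:

```python
# Minimal sketch of bullet-time sequence assembly: one frame per camera,
# ordered by the camera's angular position on the rig to form the sweep.
# Camera IDs and angles are illustrative, not the actual rig layout.

def assemble_sweep(frames_by_camera, rig_angles):
    """Order captured frames by camera mounting angle to build the sweep.

    frames_by_camera: dict mapping camera id -> captured frame (any payload)
    rig_angles: dict mapping camera id -> mounting angle in degrees
    """
    ordered_ids = sorted(frames_by_camera, key=lambda cam: rig_angles[cam])
    return [frames_by_camera[cam] for cam in ordered_ids]

if __name__ == "__main__":
    # Three cameras mounted at 0, 45, and 90 degrees around the subject.
    frames = {"cam_b": "frame_45", "cam_c": "frame_90", "cam_a": "frame_0"}
    angles = {"cam_a": 0, "cam_b": 45, "cam_c": 90}
    print(assemble_sweep(frames, angles))
```

In the real rig the payloads would be image arrays rather than strings, but the ordering logic is the same.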
2. Introduction:
The concept of capturing frozen moments in time has always intrigued creative minds, and the bullet
time effect, popularized by iconic films like "The Matrix," has become synonymous with visually
striking and dynamic imagery. This project aims to delve into the realm of creating a multi-camera
bullet time rig, utilizing the versatile capabilities of Raspberry Pi technology. The motivation behind
this endeavor lies in the desire to provide enthusiasts and professionals alike with an accessible and
cost-effective solution for producing captivating visual experiences.
2.1. Overview:
The project involves the design and development of a synchronized system consisting of multiple
Raspberry Pi boards and cameras. These cameras will act in unison, capturing images simultaneously
from different angles to create a seamless bullet time effect. The system's versatility opens doors for
applications in diverse creative fields, including photography, art installations, and interactive
experiences. This overview sets the stage for an exploration into the technical intricacies of hardware
setup, synchronization mechanisms, image processing algorithms, and user interface development.
Another crucial element of legal concern is data protection legislation. Compliance with data
protection laws like the General Data Protection Regulation (GDPR) or other regional equivalents may
be required, depending on the country where the system is being implemented. Legal compliance may
be achieved by ensuring clear data handling procedures, getting informed consent before collecting
data, and putting strong security measures in place to safeguard the data that has been obtained.
Finally, liability concerns should be addressed. Users should be given explicit disclaimers and conditions of use that specify the obligations and restrictions placed on both the creators and users of the bullet time rig programme. This legal forethought can lessen the likelihood of disputes and liabilities.
accommodate users with different technical backgrounds. In line with social justice ideals, this
promotes inclusion and democratises access to creative technology.
It is also important to consider how society could be affected and to address any unforeseen repercussions. When using technology for artistic or creative expression, one should be mindful of differing viewpoints and cultural sensitivities. A crucial component of responsible technology innovation is ethical storytelling and the avoidance of content that might harm particular populations or reinforce stereotypes.
Gaining the trust of users requires transparency. The system must give users explicit information about data collection practices, the rationale for image capture, and how the technology functions. Open information about the system's capabilities ensures that users can make informed judgements about their involvement. Discouraging any use of the bullet time rig that might compromise security, privacy, or ethical standards is one way to practise responsible technology usage; encouraging moral behaviour among users and offering rules for appropriate usage can have a beneficial effect on society.
3. Background:
Bullet time, popularised by films such as "The Matrix," is the idea of concurrently recording a moment
from several perspectives to produce an amazing frozen-in-time effect. This project uses Raspberry Pi
boards to create a multi-camera bullet time setup, with the goal of making this cinematic technique
accessible to pros and hobbyists alike. The Raspberry Pi was selected because of its community
support, cost, and accessibility, which make it the perfect platform for democratising creative
technology.
Historically, bullet time has been a resource-intensive technique that frequently called for expensive equipment and specialised training. By harnessing the capabilities of Raspberry Pi boards and multiple synchronised cameras, this project aims to provide a cost-effective and adaptable method for achieving the bullet time effect in various creative applications.
4. Related Research:
Surveillance has grown in importance over time as a crucial part of many organisations' domestic safety and protection priorities: it provides real-time supervision of belongings and lets owners monitor their private residence and property remotely. The first study reviewed here presents the design of an Embedded Real-Time Automated Door Lock Safety Device. It uses a Raspberry Pi 3 as its primary platform and a Pi camera for intruder detection, extending surveillance technology to provide homes with essential safety and associated control. An integrated web server establishes a streamlined process for monitoring and automatically locking the device. The Raspberry Pi-based automatic door unlocking system makes it possible to watch a specific region from a remote location and detect an unfamiliar face attempting to enter. The door unlocks, and the house lights activate as part of the home application, only when the system recognises a familiar face. When it detects a face that is unknown or not authorised, the device sends the owner an SMS and emails a photo of the unauthorised individual. The authors also consider the current COVID-19 scenario: given the critical role that temperature screening plays in controlling the spread of COVID, the proposed technology includes an additional feature that measures the temperature of everyone attempting to enter the home. An LCD and temperature sensor prompt the user to take their temperature and display the reading, which is also sent to the cloud so the owner may monitor it. [1]
Applications for image and video processing are crucial in a variety of fields, including industry, medicine, and automotive technology. Using image processing techniques, Driving Assistance Systems (DAS) should consider the driver's behaviour as well as objects and road conditions in order to ensure safe driving. Because these applications require processors with high speed and power, this investigation utilises a Raspberry Pi platform that satisfies those requirements. Road line tracking, object detection, motion detection, face and eye detection, and the Canny edge detector, one of the crucial edge detection techniques, are all addressed together; whereas studies in the literature examine these methods in isolation, here they are all based on real-time Raspberry Pi operation and rely on video images captured by cameras. To this end, Python software and libraries with widely used computer vision methods, such as OpenCV and TensorFlow, are employed. The outcomes of all the image and video processing applications are reported to be successful, and as a consequence the suggested techniques enable safe and pleasurable driving. [2]
Yocto is a lightweight, customisable Linux distribution suitable for embedded devices with any sort of hardware architecture. Using Yocto, it is possible to quickly assemble an operating system with the same features for several hardware types. To create a high-speed Internet of Things (IoT) hardware/software solution with only the characteristics they want, this article suggests combining Yocto with the Raspberry Pi. Many capabilities included in the popular Linux distribution for Raspberry Pis are superfluous for certain specialised applications, such as IoT apps that only require a limited range of functions. In this project, the authors used two distinct Raspberry Pi models and an embedded Linux operating system built with Yocto to develop a prototype of a smart home equipment control system. All of the household appliances may be managed by this system via the control interface display or a dedicated global website. [3]
In the context of the information era, image processing technology is widely utilised in many spheres of life, and image rectification can be enhanced by processing pictures with computer vision algorithms. This paper discusses image processing technology based on computer vision algorithms, covering two parts, a computer vision display system and image distortion correction, both of which show good effects and high application value. It also introduces the image categories and technical characteristics of computer vision algorithms and image processing technology. Due to a lack of technological advancement in the past, objects in traditional two-dimensional environments could only display a single side projection. As society advanced, however, technology continued to develop, and at a pivotal moment a new display technology emerged, allowing for the creation of three-dimensional pictures. Technicians must employ computer vision algorithms to apply image processing technology, express object coordinates in three dimensions using three-dimensional voxels, and rectify projection-induced distortion in order to achieve the desired visual appearance. The image processing technique based on computer vision algorithms clearly offers more benefits and greater accuracy than the standard BP neural network. [4]
The focus of this work is the creation of an adaptable multi-camera object detection system that can be further tuned for processing needs. The entire project is first simulated in a virtual environment to follow the intended object and gather as much data as possible about its attributes in order to produce early findings. Unity software is used to create the virtual scene, and a virtual camera controlled by a mouse or other external device takes the pictures. A further expected result of the project is a model that can be trained on various classes of 3D objects to classify them and calculate the optimal distance between the object and the camera position, making it possible to extract the maximum information from the real-time environment, which is implemented as a physical model using a camera and a Raspberry Pi 4 microcontroller. The preliminary work's initial results accurately reflect the progress made thus far: machine learning methods have been employed to identify the item in the camera's field of view, objects have been classified into various categories, and object detection has been implemented with OpenCV and Python. [5]
Voice-activated digital assistants provide a multitude of advantages by helping users control various elements of Internet of Things systems. These assistants are, however, susceptible to well-known cyberthreats such as Dolphin attacks, in which attackers conceal malicious commands at high frequencies that the assistants can hear but that are undetectable to human ears. As a result, audio communications must be secured and made resistant to Dolphin attacks. In this study, a machine learning-based method for voice-activated Android application authentication for home IoT systems is developed. Specifically, the authors create a speaker identification model that verifies voice instructions in home IoT systems using Convolutional Neural Networks (CNNs). Their method is general and user-aware, meaning it may be used with a variety of voice samples from different assistants. An analysis of the suggested system reveals that their methodology attains an 88% accuracy rate. [6]
The Internet of Things (IoT), an emerging technology, has transformed the worldwide network made up of people, data, information, smart objects, and smart gadgets. IoT development is still in its early stages, and there are still many problems that need to be resolved. IoT unifies the idea of integrating everything and provides a fantastic opportunity to improve accessibility, integrity, availability, scalability, secrecy, and interoperability on a global scale. Nonetheless, securing IoT is a difficult undertaking, and system security is the foundation of IoT development. This article thoroughly reviews IoT cybersecurity. Information and communication technologies (ICT) and the integration and protection of heterogeneous smart devices are the main factors taken into account. Researchers and practitioners interested in IoT cybersecurity will find this review helpful for its information and insights on a variety of topics, such as current research on IoT cybersecurity, IoT cybersecurity architecture and taxonomy, important strategies and countermeasures that enable cybersecurity, significant industry applications, research trends, and challenges. [7]
Owning a dependable security system that can safeguard our belongings and preserve our privacy has
become increasingly crucial in recent years. In a standard security system, entrance to places like
homes and workplaces requires the use of a key, identification (ID) card, or password. On the other
hand, the current security system is rife with flaws that may be easily exploited. The majority of doors
are operated by individuals who use keys, security cards, countersigns, or patterns. This study uses
facial detection and identification technology to help users improve the door security of critical
facilities. The primary subsystems of the suggested system are picture capture, email notification, face detection and identification, and automated door access control. Because eigenfaces are used and face photographs are downscaled without losing essential features, the OpenCV-based face recognition allows facial images of several people to be kept in the database. Using the Telegram Android app, the door lock may easily be accessed remotely from anywhere around the globe. For security, the authorised person will receive the image taken by the Pi camera by email. [8]
We are now living in the Information Age, which is distinguished by enormous technical advancements and rapid change. Two of the most important advances of this century have arguably been in the domains of Automation and the Internet of Things. We live in a world where there are gadgets
everywhere that can connect to the internet and exchange data with other devices of the same kind.
The authors of this paper present a way to combine the Internet of Things' capabilities with the power
of automation to control peripherals and hardware using a specially designed web server. A Raspberry
Pi 3b+ is the embedded controller utilised in the suggested setup. The goal of this project is to simplify
the development, implementation, automation, and monitoring of home automation and associated
automated processes over the Internet. By tracking the units consumed by the various system peripherals, the suggested system helps the end user reduce power expenditure while also benefiting the environment. Flask is a micro framework that is
utilised in the back-end implementation process. Building and deploying smooth Python-based web
apps is made possible with Flask. The authors have constructed a small-scale electrical system that is
controlled by the Raspberry Pi via a web server that is hosted on the Local Area Network in order to
illustrate how the suggested system would operate in real time. Additionally, the software is designed
to provide an extensive data sheet with a variety of information, including the devices' running times,
rates of consumption, and ultimate unit consumption. [9]
The camera sends the real-time photos it collects to the processing unit, a crucial component of the mobile robot platform's ability to sense its surroundings. After the image data are analysed and processed, several functionalities are implemented based on the actual requirements. Image processing is one popular form of technology that has emerged from the ongoing advancement of computer technology. Applying it to fruit detection and grading can raise the standard and efficiency of the process, control the expense of manual inspection, and help guarantee fruit business profits. The human visual system is
significantly more capable than any computer or information processing system now in use for the
recognition, processing, and manipulation of pictures. The human brain is currently the most efficient
biological intelligence system known to science. The image processing method utilised in this study is
based on a computer vision algorithm. It can rectify distorted pictures generated by projection and
describe the real coordinates of objects in 3D space using 3D voxels. Image processing based on
computer vision algorithms offers greater benefits and better accuracy than typical BP neural
networks. [10]
5. Methodology
For this comprehensive project, the methodology follows a systematic approach to ensure a thorough
understanding of each component and the successful integration of various elements. Initially, a
detailed literature review is conducted to establish a strong theoretical foundation, exploring existing
research on bullet time techniques, Raspberry Pi applications, synchronization methods, and
cybersecurity considerations. With a clear project scope and learning objectives in mind, the hands-
on phase begins with the acquisition and setup of hardware components, including Raspberry Pi
boards, cameras, and synchronization hardware. The Raspberry Pi environment is established through
the installation of the operating system and relevant development tools, guided by online tutorials
and documentation.
The project progresses with a focus on practical experimentation, involving the implementation of
synchronization mechanisms using GPIO pins and external triggers, as well as the application of image
processing algorithms for stitching captured frames. During this stage, the development of a simple
prototype serves as a foundational step, allowing for iterative improvements and optimizations to
enhance synchronization accuracy and image processing quality. Simultaneously, the principles of
user experience (UX) design are explored, leading to the creation of a basic user interface prototype
that facilitates control over camera settings, image capture, and system monitoring.
As the project integrates cybersecurity considerations, educational resources are consulted to grasp
basic cybersecurity concepts. The implementation of fundamental cybersecurity measures, such as
secure boot mechanisms, encryption, and access control, aligns with the project's emphasis on
responsible and secure technology development. Extensive testing under various shooting scenarios
helps identify potential issues, and optimization efforts are directed towards achieving better
performance within the constraints of the Raspberry Pi environment.
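As a minimal illustration of the kind of measure mentioned above, tamper detection on a frame sent from a camera node to the controller can be sketched with an HMAC from the Python standard library. The shared key and payload here are placeholders; a real deployment would rely on TLS or authenticated encryption rather than this sketch:

```python
import hashlib
import hmac

# Sketch of tamper detection for frames in transit: the sending node appends
# an HMAC-SHA256 tag computed with a pre-shared key; the controller recomputes
# the tag and compares. Key and payload below are placeholders only.

SHARED_KEY = b"demo-key-not-for-production"

def tag_frame(frame_bytes):
    """Compute the authentication tag for a frame's raw bytes."""
    return hmac.new(SHARED_KEY, frame_bytes, hashlib.sha256).digest()

def verify_frame(frame_bytes, tag):
    """Return True only if the frame was not modified in transit."""
    # compare_digest avoids leaking information via comparison timing
    return hmac.compare_digest(tag, tag_frame(frame_bytes))

if __name__ == "__main__":
    frame = b"\x89PNG...fake image payload"
    tag = tag_frame(frame)
    print(verify_frame(frame, tag))         # untampered frame verifies
    print(verify_frame(frame + b"x", tag))  # modified frame is rejected
```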
Documentation plays a pivotal role throughout the project, encompassing the logging of progress,
challenges faced, and solutions found. Code documentation ensures clarity and facilitates future
reference. The culmination of the project involves the creation of presentation materials and a
demonstration video showcasing the operational bullet time rig. Reflective analysis provides insights
into the lessons learned, challenges overcome, and avenues for future exploration. The final project
presentation aims to effectively communicate technical details, challenges, and accomplishments to
an audience of peers or mentors. Additionally, discussion on potential future enhancements, such as
the integration of additional cameras or sensors, encourages a forward-looking approach and
demonstrates a commitment to ongoing learning and improvement.
6. Expected outcomes / deliverables
Upon completion of this research, a number of significant results and demonstrations are anticipated. First and foremost, there should be a functional multi-camera bullet time rig that enables me to simultaneously record striking events from various viewpoints. I'll make sure it functions properly by testing it in various scenarios.
I'm currently working on creating an intuitive user interface. Users will be able to operate the cameras
and view the action thanks to this. To improve it, I'll ask a few individuals to give it a try and let me
know what they think.
Achieving flawless camera synchronisation and assembling the images seamlessly are significant
objectives as well. I'll check that everything fits together perfectly by examining the photographs to
see how well everything works.
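One concrete way to check the synchronisation described above is to compare the capture timestamp each camera records. The sketch below reports the worst skew across the rig and flags cameras outside a tolerance; the timestamps and the 5 ms tolerance are made-up illustrative values:

```python
# Sketch of a post-shoot synchronisation check: given the capture time each
# camera recorded (seconds), report the worst skew and flag cameras whose
# timestamp sits outside a tolerance of the median. Values are hypothetical.

def sync_report(timestamps, tolerance=0.005):
    """timestamps: dict mapping camera id -> capture time in seconds."""
    median = sorted(timestamps.values())[len(timestamps) // 2]
    skew = max(timestamps.values()) - min(timestamps.values())
    outliers = [cam for cam, t in timestamps.items()
                if abs(t - median) > tolerance]
    return skew, outliers

if __name__ == "__main__":
    stamps = {"cam0": 10.0000, "cam1": 10.0012, "cam2": 10.0009,
              "cam3": 10.0431}  # cam3 fired about 43 ms late
    skew, late = sync_report(stamps)
    print(f"skew = {skew * 1000:.1f} ms, outliers = {late}")
```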
To keep everything safe, I'm also implementing certain security mechanisms, including making sure
nobody tampers with the cameras or the photos. I'll put these security precautions to the test to make
sure they genuinely keep everything safe.
It's crucial to record what I accomplished in clear, comprehensive documentation. I would like to make sure that others can understand how I assembled the rig and operated the cameras. I will review these materials to ensure that they are easily comprehensible.
I'll create a few videos to demonstrate how the bullet time rig works. These videos will make it easier
for others to see its capabilities and effectiveness. I'll invite others to see the videos and respond with
their thoughts.
Ultimately, I will prepare a presentation to discuss the project. I'll describe my goals, my process, and the lessons I learned. Towards the conclusion, I'll also consider how to improve the project going forward, perhaps by introducing more features or cameras. To demonstrate that I'm planning ahead, I'll offer these thoughts. In general, I'll be reviewing everything I've created and accomplished to determine whether it's functioning effectively and whether there is any room for improvement.
7. Plan of work
8. References
[1] M. S. P. J and S. C. B, “Automatic Door Un-locking and Security System,” in 2022 IEEE
International Conference on Distributed Computing and Electrical Circuits and Electronics
(ICDCECE), 2022.
[2] M. Yildirim, O. Karaduman and H. Kurum, “Real-Time Image and Video Processing Applications
using Raspberry Pi,” in 2022 IEEE 1st Industrial Electronics Society Annual On-Line Conference
(ONCON), Elazig, Turkey, 2022.
[3] B. A, B. D, C. SS and B. A, “Smart Home Equipment Control System with Raspberry Pi and Yocto,”
in 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability
(WorldS4), 2020.
[4] F. M and L. Y, “Image processing technology based on computer vision algorithm,” in 2022 4th
International Conference on Artificial Intelligence and Advanced Manufacturing (AIAM), 2022.
[5] V. Dwivedi, M. Bhatnagar, J. Venjarski, G. Rozinaj and Š. Tibenský, “Multiple-camera System for
3D Object Detection in Virtual Environment using Intelligent Approach,” in Proc. 29th
International Conference on Systems, Signals and Image Processing “IWSSIP 2022”, June 01 - 03,
2022, Sofia, Bulgaria, 2022.
[6] S. S and P. MR, “Security over Voice Controlled Android Applications for Home IoT Systems,” in
2019 9th International Symposium on Embedded Computing and System Design (ISED), 2019.
[7] L. Y and X. L, “Internet of things (IoT) cybersecurity research: A review of current research
topics,” IEEE Internet Things J., vol. 6, 2019.
[8] N. A, N. JN and K. M, “IOT based door access control using face recognition,” in 2018 3rd
International Conference for Convergence in Technology (I2CT), 2018.
[9] A. Abhishek, M. Bhasker and A. Ponraj, “IoT based control system for home automation,” in 2021
IEEE 2nd International Conference on Technology, Engineering, Management for Societal impact
using Marketing, Entrepreneurship and Talent (TEMSMET), 2021.
[10] L. Shao, “Construction of image processing model based on computer vision algorithm,” in 2023
IEEE International Conference on Image Processing and Computer Applications (ICIPCA), Wuhan,
Hubei, 2023.