Smart Parking System Report
Project Report
Smart Parking System
Team 2
FutureWill
Abstract:
Contents:
  Abstract
  Introduction
  Background
  Related Work
  Experimental Section
  Design
    Schematics
    Flowchart
    Targeted performance
    Challenges
  Implementation
    Website
    Python code
    Bash Scripts
    OS configuration
  Test Results
    Performance
    Product datasheet
    TRL Level
    Improvements
  Discussion
  Conclusion
  References
Introduction:
Background:
Related Work:
Experimental Section:
Design:
Schematics:
Resistor values were chosen for a voltage divider circuit on the ultrasonic sensor, dropping the voltage going to the GPIO pin from 5 V down to 3.3 V. The ultrasonic sensor has a range of 2 cm up to a maximum of 4 m, which covers the appropriate distance for the camera to take a clean picture. The measuring angle is 15 degrees.
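The divider output can be sanity-checked with a quick calculation. The resistor values below are illustrative assumptions (the report does not state which values were chosen); any pair with R2 / (R1 + R2) of roughly 0.66 gives a GPIO-safe level:

```python
# Voltage divider dropping the sensor's 5 V echo signal to a GPIO-safe level.
# R1 and R2 are assumed values, not necessarily the ones used in the build.
V_IN = 5.0
R1 = 1000.0  # series resistor between the echo pin and the GPIO pin (ohms)
R2 = 2000.0  # resistor from the GPIO pin to ground (ohms)

v_out = V_IN * R2 / (R1 + R2)
print("GPIO pin sees %.2f V" % v_out)
```

This is the 1 kΩ / 2 kΩ arrangement commonly used with hobbyist ultrasonic range sensors, giving about 3.33 V at the pin.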
The main feature of a camera is sensor quality, but the selection of camera type was restricted by the amount of investment the team had. Going with basic functionality was the only option to have the project working within the price range. The cameras have a 300K CMOS sensor, 24-bit true color, and 8-megapixel output.
Flowchart:
The flow chart for the project starts with registering a plate number on the website and paying in advance for entrance. Once the registered car approaches the parking lot, it is detected by the ultrasonic sensor, which signals the camera to capture an image of the plate number. The central computing unit analyzes the image, extracts a string of text containing the vehicle's plate number, and compares it to the list of all plate numbers already registered in the system. LEDs show whether the plate number is recognized, and a signal can be sent to the barrier (depending on how the parking lot controls traffic). The exit node runs the same flow once the vehicle approaches the exit gate: it analyzes the plate and calculates the time used, and this output is sent back to the website to apply charges per the parking policy and log the activity in the user profile.
Targeted performance:
The main idea of this product is to provide an easy and quick way to access a parking lot. The measured time for a user to access a parking area with an ID card averaged 9.66 seconds (tests were carried out at the Bangor School of Computer Science and Electronic Engineering, with staff members accessing the parking area through a card reader). The experiment showed several users taking a long time to locate their access card, while others had multiple false readings from the NFC reader.
The product is expected to be faster than typical NFC card readers, to eliminate human interaction with the hardware, and to avoid exposing the driver to the elements. Optical Character Recognition (OCR) is expected to be quick, averaging 146.8 milliseconds. Overall system performance should be under 7 seconds, including network latency.
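For comparison, a rough latency budget shows how the sub-7-second target might break down. Only the 146.8 ms OCR figure comes from the measurements above; the other component times are placeholder estimates for illustration:

```python
# Rough per-vehicle latency budget for the entry flow.
# All values except the OCR time are illustrative assumptions.
budget_s = {
    "sensor detection + polling": 0.5,
    "image capture + settle":     2.0,
    "upload + network latency":   2.5,
    "cloud OCR":                  0.1468,  # 146.8 ms, from testing
    "plate lookup + LED signal":  0.6,
}
total = sum(budget_s.values())
print("estimated total: %.2f s" % total)
```

Even with generous estimates for capture and upload, the estimated total stays under the 7-second target.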
Challenges:
Challenges in the design related to image quality, network latency, and successful plate recognition. For image quality, the camera needs adequate lighting to capture a clear image, and it should have auto-exposure and auto shutter speed to adjust to different times of day. The angle of capture is important to avoid image distortion and inaccurate plate recognition. Higher pixel density of the sensor helps recognize the plate faster and more accurately, and image settings such as resolution and white balance are also factors in successful recognition.
For an OCR system, having enough CPU resources helps decrease the total processing time per image. Hosting the OCR system locally on a Raspberry Pi with an ARM processor could lead to unwanted delays; when using a cloud API instead, a reliable, fast network connection is essential to meet the response-time target.
Implementation:
Website:
A website is an essential way to reach customers easily. In order to design, configure, and implement the website, a platform had to be selected for hosting and for connecting user data to the system. For this project the website is hosted locally on the Raspberry Pi using several packages. XAMPP is an Apache distribution that contains all the packages required to host a website locally; MariaDB, Perl, and PHP were installed and configured as core packages. Initially, the Bitnami package containing WordPress binaries was installed on top of XAMPP, then removed due to compatibility issues with the kernel version of the OS. The WordPress binaries were instead installed manually, the required plugins were added, and the code was then modified to provide the required functionality.
Once a user has registered with the mandatory fields, a profile view with multiple options is provided to the user (figure). Administrators of the website have a profile viewer with more control over user profiles and all the data regarding a user (figure).
Figure 5: user profile view
Figure 6: administration view of user profile
Python code:
The main functionality of the project resides in the Python code. For the two parts of the system (entrance and exit gates), separate scripts were created to simplify the process and allow in-depth debugging. The code was written for Python 3.4, which was preinstalled in the Raspberry Pi OS. Each script starts by importing the libraries needed to control the input/output pins, run subprocesses and invoke other scripts, call the database, and parse JSON. Variables were then defined to control the hardware:
PIN       FUNCTION
POWER     5 V power
GND       Multiple ground terminals
GPIO 4    Trigger port for ultrasonic1
GPIO 10   Echo port for ultrasonic1
GPIO 18   Echo port for ultrasonic2
GPIO 22   Red LED
GPIO 23   Green LED
GPIO 24   Trigger port for ultrasonic1
GPIO 25   Red LED
GPIO 27   Green LED
Table 2: pins and functions of GPIO on the Pi
#!/usr/bin/python3.4 (entry_node.py)

# Libraries
import RPi.GPIO as GPIO
import time
import subprocess
import requests
import base64
import json
import datetime

# LED control
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.setup(22, GPIO.OUT)  # red LED
GPIO.setup(23, GPIO.OUT)  # green LED

# set GPIO pins for the ultrasonic sensor
GPIO_TRIGGER = 18
GPIO_ECHO = 24

# set GPIO direction (IN / OUT)
GPIO.setup(GPIO_TRIGGER, GPIO.OUT)
GPIO.setup(GPIO_ECHO, GPIO.IN)

def distance():
    # set Trigger to HIGH
    GPIO.output(GPIO_TRIGGER, True)

    # set Trigger after 0.01 ms back to LOW
    time.sleep(0.00001)
    GPIO.output(GPIO_TRIGGER, False)

    StartTime = time.time()
    StopTime = time.time()

    # save StartTime
    while GPIO.input(GPIO_ECHO) == 0:
        StartTime = time.time()

    # save time of arrival
    while GPIO.input(GPIO_ECHO) == 1:
        StopTime = time.time()

    # time difference between start and arrival,
    # multiplied by the sonic speed (34300 cm/s)
    # and divided by 2, because there and back
    TimeElapsed = StopTime - StartTime
    return (TimeElapsed * 34300) / 2

def check(plate_number):
    # check for the number plate in each line of the file
    with open('/home/pi/plates.txt', 'r') as datafile:
        for line in datafile:
            if plate_number in line:
                return True
    return False

if __name__ == '__main__':
    try:
        while True:
            dist = distance()
            print("Measured Distance = %.1f cm" % dist)
            time.sleep(1)
            converted = int(dist)
            measured = 10
            if converted < measured:
                subprocess.call(['/home/pi/webcam.sh'])
                print("LED on .. captured image")

                # time control: log the entry time (hour and minute on
                # separate lines, the format time.py expects)
                now = datetime.datetime.now()
                enter_time = open("time.txt", "w")
                enter_time.write(now.strftime("%H\n%M"))
                enter_time.close()
                time.sleep(2)

                # OPENALPR
                IMAGE_PATH = '/home/pi/image2.jpg'
                SECRET_KEY = 'sk_6da8e710ca13a1ebba9b7d38'

                with open(IMAGE_PATH, 'rb') as image_file:
                    img_base64 = base64.b64encode(image_file.read())

                url = ('https://api.openalpr.com/v2/recognize_bytes'
                       '?recognize_vehicle=1&country=eu&secret_key=%s'
                       % SECRET_KEY)
                r = requests.post(url, data=img_base64)

                # print(json.dumps(r.json(), indent=2))
                answer = r.json()
                plate_number = answer['results'][0]['plate']
                print(plate_number)

                if check(plate_number):
                    print('found')
                    GPIO.output(23, GPIO.HIGH)  # green LED
                    time.sleep(0.5)
                    GPIO.output(23, GPIO.LOW)
                else:
                    print('Not found')
                    GPIO.output(22, GPIO.HIGH)  # red LED
                    time.sleep(0.5)
                    GPIO.output(22, GPIO.LOW)

    # Reset by pressing CTRL + C
    except KeyboardInterrupt:
        print("Measurement stopped by User")
        GPIO.cleanup()
#!/usr/bin/python3.4 (exit_node.py)

# Libraries
import RPi.GPIO as GPIO
import time
import subprocess
import requests
import base64
import json
import datetime

# LED control
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.setup(25, GPIO.OUT)
GPIO.setup(27, GPIO.OUT)

# set GPIO pins for the ultrasonic sensor
GPIO_TRIGGER = 4
GPIO_ECHO = 22

# set GPIO direction (IN / OUT)
GPIO.setup(GPIO_TRIGGER, GPIO.OUT)
GPIO.setup(GPIO_ECHO, GPIO.IN)

def distance():
    # set Trigger to HIGH
    GPIO.output(GPIO_TRIGGER, True)

    # set Trigger after 0.01 ms back to LOW
    time.sleep(0.00001)
    GPIO.output(GPIO_TRIGGER, False)

    StartTime = time.time()
    StopTime = time.time()

    # save StartTime
    while GPIO.input(GPIO_ECHO) == 0:
        StartTime = time.time()

    # save time of arrival
    while GPIO.input(GPIO_ECHO) == 1:
        StopTime = time.time()

    # time difference between start and arrival,
    # multiplied by the sonic speed (34300 cm/s)
    # and divided by 2, because there and back
    TimeElapsed = StopTime - StartTime
    return (TimeElapsed * 34300) / 2

def check(plate_number):
    # check for the number plate in each line of the file
    with open('/home/pi/plates.txt', 'r') as datafile:
        for line in datafile:
            if plate_number in line:
                return True
    return False

if __name__ == '__main__':
    try:
        while True:
            dist = distance()
            print("Measured Distance = %.1f cm" % dist)
            time.sleep(1)
            converted = int(dist)
            measured = 10
            if converted < measured:
                subprocess.call(['/home/pi/webcam2.sh'])
                print("LED on .. captured image")
                time.sleep(2)

                # time control: log the exit time, then calculate the
                # parking duration with time.py
                x = datetime.datetime.now()
                exit_time = open("time2.txt", "w")
                exit_time.write(x.strftime("%H\n%M"))
                exit_time.close()
                subprocess.check_call(["python3.7", "/home/pi/time.py"])

                # OPENALPR
                IMAGE_PATH = '/home/pi/image222.jpg'
                SECRET_KEY = 'sk_6da8e710ca13a1ebba9b7d38'

                with open(IMAGE_PATH, 'rb') as image_file:
                    img_base64 = base64.b64encode(image_file.read())

                url = ('https://api.openalpr.com/v2/recognize_bytes'
                       '?recognize_vehicle=1&country=eu&secret_key=%s'
                       % SECRET_KEY)
                r = requests.post(url, data=img_base64)

                # print(json.dumps(r.json(), indent=2))
                answer = r.json()
                plate_number = answer['results'][0]['plate']
                print(plate_number)

                if check(plate_number):
                    print('found')
                    GPIO.output(25, GPIO.HIGH)
                    time.sleep(0.5)
                    GPIO.output(25, GPIO.LOW)
                else:
                    print('Not found')
                    GPIO.output(27, GPIO.HIGH)
                    time.sleep(0.5)
                    GPIO.output(27, GPIO.LOW)

    # Reset by pressing CTRL + C
    except KeyboardInterrupt:
        print("Measurement stopped by User")
        GPIO.cleanup()
The code below is used to calculate how long the customer stayed in the parking lot so they can be charged based on the lot's hourly rate. A call to this code (time.py) is embedded in the exit-gate code (exit_node.py) to calculate the time once the customer is registered and the car is identified correctly at the gate. It imports the two timestamp logs written by the entry and exit scripts and computes the difference after parsing the strings to integers. The resulting variable (difference) could be sent to the website or a payment gateway to charge the customer accordingly.
#!/usr/bin/python3.4 (time.py)

# read the entry time (hour and minute on separate lines)
with open("time.txt", "r") as myfile:
    hrs = int(myfile.readline())
    mns = int(myfile.readline())

# read the exit time in the same format
with open("time2.txt", "r") as myfile2:
    hrs2 = int(myfile2.readline())
    mns2 = int(myfile2.readline())

# convert both timestamps to minutes since midnight
t1 = hrs * 60 + mns
t2 = hrs2 * 60 + mns2

difference = t2 - t1

print("customer spent " + str(difference) + " minutes.")
Bash Scripts:
This script fetches users' plate numbers from the MySQL database and inserts them into a new file. It applies filters using awk to select only the required part of the output and stores the result in a file named clean.txt. For the final version of the plate-numbers file, only newly added clean data is appended to the final plate-numbers record, plates.txt.
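The script itself is not reproduced in the text, but the pipeline it describes can be sketched as below. The database query is simulated with printf here, since the real script would read from MySQL, and the actual table and column names are not given in the report:

```shell
# Simulated output of the MySQL query (header row plus plate numbers);
# in the real script this would come from something like:
#   mysql -u root parking_db -e "SELECT plate FROM users;" > raw.txt
# (database, table, and column names here are assumptions)
printf 'plate\nAB12CDE\nM90DGR\n' > raw.txt

printf 'M90DGR\n' > plates.txt        # existing final plate record

# keep only the required column, dropping the query's header row
awk 'NR > 1 {print $1}' raw.txt > clean.txt

# append only plates not already present in the final record
grep -F -x -v -f plates.txt clean.txt >> plates.txt

cat plates.txt
```

The grep step is what keeps plates.txt free of duplicates when the script runs every minute.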
In order to use the camera drivers installed in Linux from the Python code, a separate script file is used for each camera, called by the Python code once the sensor detects movement within range. The fswebcam command controls the capture of an image, with options specifying the image size.
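A minimal version of such a capture script might look like the following; it is written to a file here so it can be shown without a camera attached. The device path and resolution are assumptions, since the original script is not reproduced in the text:

```shell
# write webcam.sh, a sketch of the entry-node capture script
# (device path and resolution are assumed values)
cat > webcam.sh <<'EOF'
#!/bin/bash
# capture a single frame from the USB camera into the image file
# the entry node's Python code expects
fswebcam -d /dev/video0 -r 1280x720 --no-banner /home/pi/image2.jpg
EOF
chmod +x webcam.sh
```

The exit node's webcam2.sh would be identical apart from the device and output file name.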
OS configuration:
Having Linux as the operating system of the Raspberry Pi gives a lot of flexibility in running the system efficiently. Using run-level 3 as the default mode allows the network interface services to be configured and started automatically at boot. By disabling all unnecessary services in this run-level and setting the required services to auto-start, a more optimized OS can be achieved. The bash scripts were scheduled with crontab to run automatically every minute and to start at boot. The Python scripts run in the background in an infinite loop and can be stopped as business requirements dictate or outside working hours. Further improvements could be made by disabling GUI elements and running the system in CLI mode, but hosting the website locally prevented this optimization, as the GUI is needed for theme control and faster database configuration.
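The crontab scheduling described above might look like the following sketch; the script names are illustrative, as the report does not list the exact file names used:

```
# run the plate-export bash script every minute
* * * * * /home/pi/plates_export.sh
# start the gate scripts at boot
@reboot /usr/bin/python3.4 /home/pi/entry_node.py
@reboot /usr/bin/python3.4 /home/pi/exit_node.py
```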
Figure 7: top command showing CPU utilization
Figure 8: all required services enabled
Test Results:
Performance:
The test methodology focused mainly on optimizing total processing time for the system. The plan was to test different aspects of the design:
- angle of camera
- distance for the sensor
- processing time for cloud API vs local API
- image quality
- accuracy of detection
For camera angle: using the gyroscope and accelerometer in a phone, multiple images were taken and processed to find the best camera position. The results showed the best confidence (94.75%) with a camera angle between 0 and 45 degrees, while more aggressive angles produced either incorrect detection or longer total processing time, depending on the API used.
Testing the cloud API against the local API showed a clear advantage for the cloud API, as it has been trained to detect specific plate types and optimized to correct angle distortion, which was shown to directly affect the correctness of detection. The local API, in an untrained state, could not recognize one of the images and took more than 3 seconds of total processing time.
Figure 10: processing time in milliseconds
Figure 11: total processing time in milliseconds
#      SIZE      RESOLUTION   PT LOCAL    PT CLOUD    TT LOCAL    TT CLOUD
IMG1   52 KB     900x506      222.381 ms  207.500 ms  968.807 ms  678.646 ms
IMG3   64 KB     590x350      149.613 ms  114.722 ms  406.293 ms  409.302 ms
IMG4   840 KB    2560x1600    102.483 ms  72.252 ms   2940.89 ms  1280.35 ms
IMG5   436 KB    1800x1200    169.414 ms  150.971 ms  3003.09 ms  1467.99 ms
IMG6   385.6 KB  1024x768     -           254.460 ms  -           265.621 ms
Table 3: API testing with different images (PT = processing time, TT = total processing time)
Using the cameras purchased for this project gave a confidence of 94.73% with a total processing time of 175.53 ms. The total operation time was 5.25 seconds at the entry node, which includes uploading an image of size 420.86 KB to the cloud over a 40 Mb/s internet connection. The exit node showed similar results, with a total operation time of 5.47 seconds.
{
  "matches_template": 0,
  "plate": "M90DGR",
  "confidence": 94.72909545898438,
  "processing_time": {
    "plates": 123.39320373535156,
    "total": 175.53499998757616
  }
}
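The fields the gate scripts rely on can be read straight from this response once parsed; a small sketch using the sample values above (this object corresponds to a single entry of the `results` list in the full API response):

```python
# Sample OpenALPR result from the test run, as a parsed dictionary.
answer = {
    "matches_template": 0,
    "plate": "M90DGR",
    "confidence": 94.72909545898438,
    "processing_time": {"plates": 123.39320373535156,
                        "total": 175.53499998757616},
}

# same access pattern the entry/exit scripts use on each result entry
plate = answer["plate"]
confidence = round(answer["confidence"], 2)
total_ms = answer["processing_time"]["total"]
print(plate, confidence, total_ms)
```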
Product datasheet:
Figure 14: stress test, 100% CPU utilization (71.04 °C max temperature)
TRL Level:
The product currently at entry stages of level 5. The prototype is under development,
functionalities have been tested in a relevant environment but isn’t advanced enough to be
demonstrated on the ground. In order to move to next level, more development and
optimization has to be done.
Improvements:
As a proof of concept, the product in its current state is working as intended and considered a
success but with TRL 5 more improvements have to be done. The hardware is the substantial
bottleneck for the system. Adapting more capable cameras with better sensors will grant for
smaller image size in better quality that yields to faster processing time.
Codes could be improved to make them more efficient in utilizing CPU time and have an
advanced error checking capability. Developing a mobile application to provide more value to
customers and better connecting with them.
Discussion:
Conclusion:
References: