
System testing document

4. PART THREE: IMPLEMENTATION


4.1. Overview
The system's implementation focuses on over-speeding drivers, who are presumed to cause most of the traffic problems in our country. In the upcoming sections we have prepared a number of code implementations that illustrate which tools and techniques were used in the project and how we configured them. Some of the functionalities that describe the system, part by part, are written here, including:
 The image capturing and preprocessing unit,
 The feature extraction unit,
 The report generator section,
 The image uploader code,
 The speed detection and calculation portion, and
 The plate number matching code, along with other complementary functionalities.
We have also used several algorithms, for instance for skew detection and noise removal. In the course of doing so, we encountered some problems. For example, the background capture section of the code initially yielded a greenish cast over the whole background, which made it difficult for us to detect real-world objects; once we applied the function ycbcr2rgb to convert the frames to RGB, the green blurring effect was gone. Such and other effective algorithms are also included.
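For illustration, a minimal sketch of this color-space fix is shown below; it assumes a frame has already been acquired from a videoinput object configured with the YUY2 (YCbCr) format, as in the background detection code later in this document.

% Minimal sketch (assumed setup): converting a YCbCr frame to RGB before display.
frame = getsnapshot(vid);      % vid is a videoinput object using a YUY2 format
rgbFrame = ycbcr2rgb(frame);   % removes the greenish cast by converting to RGB
imshow(rgbFrame);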
4.2. Tools and technologies utilized during system development
Our system utilizes a number of technologies and tools, as listed below.
MATLAB:
The major programming language used, primarily for image processing and video object tracking. We used it because it encourages reuse of tested code and provides efficient mathematical routines, with a high-level representation of programs and mathematical expressions that hides the details of the algorithms.

C#:
We used C# to develop the desktop application used by the clerk, and ASP.NET for the dashboard that shows monthly or annual reports of the traffic status.
Office 2016:
Used to write all software documentation.
Visio:
Used to create diagrams like use case, DFD, database diagrams.
Python:
Used to configure the API that connects to SkyDrive.
Sensors:
There are two motion sensors. Each sensor detects a moving object and then sends a signal to the microcontroller.
Arduino:
When the microcontroller (Arduino) receives the first signal it starts a time counter. The second signal, from the other sensor, stops the counter; the Arduino then calculates the speed, sends a command to the camera to capture a photo, receives the photo from the camera and uploads it to SkyDrive. A small sketch of this timing-based speed computation is given after the list of tools below.
Camera:
The Camera is used to capture image frames of the area ahead of the vehicle.
Cables:
Cables are used to connect the Arduino to the camera and the sensors.
SkyDrive storage and a reliable internet connection are also among the requirements.
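The following is only a sketch of the timing-based speed computation described in the Arduino entry above; the sensor spacing and the time values are illustrative assumptions, not the project's calibrated figures.

% Sketch of the two-sensor speed computation (all values are illustrative).
sensorSpacing = 1.0;                      % assumed distance between the two sensors, in meters
t1 = 0.00;                                % time (s) when the first sensor fires
t2 = 0.12;                                % time (s) when the second sensor fires
speed_mps  = sensorSpacing / (t2 - t1);   % speed in meters per second
speed_kmph = speed_mps * 3.6;             % about 30 km/h for these example values
fprintf('Speed: %.1f m/s (%.1f km/h)\n', speed_mps, speed_kmph);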

4.3. Prototype setup


In our system we used different kinds of hardware and software components, such as:
 External webcam with a maximum resolution of 640 x 480 pixels
 Two motion sensors
 One Arduino board
 ASUS Core i5 laptop/personal computer
 MATLAB IDE
o Algorithms used to fulfill the system functionalities
After installing the hardware components in their correct configuration, we connected the software to the hardware by attaching the Arduino board and the camera to the ASUS laptop, which acts as the center for most of the software configuration. We then use the client-side user interface to manage the system.
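As a rough illustration of this setup step, the MATLAB Image Acquisition Toolbox can list the cameras visible to the laptop before the system is started. The snippet below is only a sketch and assumes the same 'winvideo' adaptor used in the code that follows.

% Sketch: checking which cameras the Image Acquisition Toolbox can see.
info = imaqhwinfo('winvideo');                 % adaptor used elsewhere in this document
disp({info.DeviceInfo.DeviceName});            % names of the detected cameras
disp(info.DeviceInfo(1).SupportedFormats);     % formats offered by the first camera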

4.4. Implementation detail


In this section we have tried to describe the detailed implementation of some of the modules incorporated in the system. Accordingly, some of the algorithms which we consider most essential to the system are written below.
Background detection
function get_background_btn_Callback(hObject, eventdata, handles)
% hObject handle to get_background_btn (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

axes(handles.axes1); cla;
imaqreset;
set(hObject,'UserData',0) %User data 0 (1 stop capture)
% Enable "Start" and "Stop" buttons
% set(handles.uipanel3,'visible','off');
% Disable current button
% set(hObject,'Enable','off');
% Get default source
% Open GUI to select the camera to use
sel_camera;
%
uiwait;
% Bring the camera features
% id = Camera ID
% es_web_ext = indicator if laptop or external camera
global id es_web_ext;
% Determine format depending on the type of camera to use
if es_web_ext == 0
    formt = 'YUY2_640x480';
else
    formt = 'RGB24_640x480';
    %formt='RGB24_320x240';
end
try
    % Create video object
    vid = videoinput('winvideo', id, formt);
    % Update handles
    guidata(hObject, handles);
catch
    % Message on error
    msgbox('Check the connection of the camera','Camera')
    % Remove axis labels
    set(handles.axes1,'XTick',[ ],'YTick',[ ])
end
% Specify how often to acquire frame from video stream
vid.FrameGrabInterval = 1;
set(vid,'TriggerRepeat',Inf);
% Start capture
% _______Get Background_________
vid.FramesPerTrigger=50;
start(vid);
data = getdata(vid,50);
if es_web_ext == 0
    bgImage = double(ycbcr2rgb(data(:,:,:,50)));
else
    bgImage = double(data(:,:,:,50));
end
% Set last image as background
% Show background
imshow(uint8(bgImage));
% Reset video object
stop(vid);
clear vid;
imaqreset;
% Save background
handles.backg = bgImage;
guidata(hObject,handles);

Vehicle detecting and tracking algorithm


function [indicador] = compare(input_image, background, threshold, handles)
indicador = 0;
set(handles.compare_output, 'String', indicador);
%%
% Perform image difference from the new image and the background image
difference = (abs(input_image(:,:,1) - background(:,:,1)) > threshold) | ...
    (abs(input_image(:,:,2) - background(:,:,2)) > threshold) | ...
    (abs(input_image(:,:,3) - background(:,:,3)) > threshold);

%%
% Performs morphological closing (dilation followed by erosion).
b = bwmorph(difference,'close');

%%
% Performs morphological opening (erosion followed by dilation).
difference = bwmorph(b,'open');
difference = bwmorph(difference,'erode',2);

%%
% Select the biggest object
big_object = bwlabel(difference,8);

%%
% Measure properties of image regions such as 'Area', 'Centroid', and 'BoundingBox'
object = regionprops(big_object);

%%
% Number of objects in the image.
N = size(object,1);
%%
% Return if there is no object in the image
if N < 1||isempty(object)
return
end

%%
% Remove objects with an area smaller than 200 pixels
holeFilled = find([object.Area]<200);
if ~isempty(holeFilled)
object(holeFilled)=[ ];
end
%%
% Count objects
N = size(object,1);
if N < 1 || isempty(object)
return
end

%%
% Draw a rectangle and center point for every object in the image
for n = 1 : N
    hold on
    centroid = object(n).Centroid;
    C_X = centroid(1);
    C_Y = centroid(2);
    rectangle('Position', object(n).BoundingBox, 'EdgeColor', 'g', 'LineWidth', 1);
    plot(C_X, C_Y, 'Color', 'g', 'Marker', '+', 'LineWidth', 1);
    hold off
end

indicador = 1;
set(handles.compare_output, 'String', indicador);
end
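A minimal usage sketch of the compare function above is shown here; the frame file name, the threshold value of 25, and the use of handles.backg are assumptions for illustration rather than the project's actual settings.

% Sketch: running the detection routine on one saved frame (illustrative values).
frame      = double(imread('snap10.png'));  % a frame saved earlier by the speed detection loop
background = handles.backg;                 % background captured by get_background_btn_Callback
axes(handles.axes1); imshow(uint8(frame));  % compare() draws its rectangles on the current axes
found = compare(frame, background, 25, handles);  % returns 1 if a vehicle-sized object was found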

Speed detection
while islogging(vid)
if get(handles.stop,'UserData') % Data from "Stop" button
break
end
% Get image
if es_web_ext == 0
get_image = ycbcr2rgb(getdata(vid, 1));
else
get_image = getdata(vid, 1);
end
% Show image
image(get_image);
% Convert image to double
input_image = double(get_image);
axis image off;

% Call "compare" function


if (compare(input_image, bg, pop, handles))
    frame_counts = frame_counts + 1;
    set(handles.frame_count, 'String', frame_counts);
    img = getsnapshot(vid);
    img = ycbcr2rgb(img); % convert from YCbCr to RGB (removes the greenish cast)
    if (frame_counts == 10)
        imwrite(img, strcat('snap',num2str(frame_counts), '.png'));
        img_tmp = img;
    end
end

% Calculating the speed


frameCnt_Value = str2double(get(handles.frame_count,'String'));   % get the number of frames
track_Stat = str2double(get(handles.compare_output,'String'));    % get the tracking status
if (frameCnt_Value ~= 0 && track_Stat == 0)
    % Calculate time
    time_count = frameCnt_Value * 0.033333; % in seconds, assuming roughly 30 frames per second
    % Calculate speed
    speed = distance / time_count;
    set(handles.speed_detected, 'String', speed);
    spd_lmt = str2double(get(handles.speed_limit,'String')); % get the speed limit from the UI

    if (speed > spd_lmt) % e.g. 27.7 m/s, which is about 100 km/h
        %figure, imshow(img_tmp);
        plate_extract(img_tmp, bg, pop);
    end

    % Reset the frame counter
    frame_counts = 0;
    % set(handles.frame_count, 'String', '');
    % set(handles.speed_detected, 'String', '');
end
drawnow;
end
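As a worked example of the calculation above (the 5-meter distance and the frame count are assumed, illustrative values rather than the project's calibration):

% Worked example of the frame-count based speed estimate (illustrative numbers).
distance     = 5;                        % assumed distance covered in the camera's view, in meters
frame_counts = 6;                        % frames during which the vehicle was tracked
time_count   = frame_counts * 0.033333;  % about 0.2 s at roughly 30 frames per second
speed        = distance / time_count;    % about 25 m/s, i.e. roughly 90 km/h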

5. Introduction
Tremendous work has been done to analyze, design, implement and finally test the whole system. This test plan document stipulates the steps followed and the strategies used to test the whole system. Accordingly, the work done in each phase of testing (as a unit in unit testing, as a whole in system testing, and in checking how well the hardware and software fit together in integration testing) is listed in the document.

5.1. Objective
The objective of the test suite is to provide adequate coverage metrics, requirements validation,
and system quality data such that sufficient data is provided for those making the decision to
release.

The main objective of this test plan, as it applies to our system, is to make sure that the hardware parts used in handling the vehicles (the sensors that detect the motion of vehicles, the Arduino board that links the whole system together, the cameras that capture the targeted cars, and other parts) are properly put in place so that they are able to function with the software code written.
5.2. Scope
VSMS, as discussed in the design phase, has three main components, namely the interface, logical and database layers. The logical layer in particular is the most crucial section to test, although testing the whole system matters even more. The whole process, starting from taking camera shots, through determining the speed from the elapsed time, up to uploading and securing the file, needs to be within the scope of the test plan. This involves a number of steps, such as noise removal, skew correction, segmentation, feature extraction, background subtraction and other essential steps, so that the whole system comes to completion.
The interface and the database layers are also covered by the test plan. Since we need to provide secured access to the uploaded images, authorized users need to be verified against the accounts previously inserted in the database.

5.3. Resources
The resources we used to test our system include the physical characteristics of the facilities, the hardware, the software, special test tools, and the other resources needed.
Facility required:
A lab area with a reliable power outlet, an internet connection, and a table for easy access and demonstration of the simulation work. A toy car for simulating a real car is also among the needed facilities.
Hardware required:
The hardware listed in the prototype setup (Section 4.3): the external webcam, the two motion sensors, the Arduino board and the ASUS laptop.
5.4. Schedule
Types of Testing        Date of testing    Tested by
Unit testing            April 7-10         Surafel Nigussie
Integration testing     April 26-30        Yitbarek Adugna & Seifu Geremew
System testing          May 16-23          Surafel Nigussie

5.5. Features to be tested or not to be tested


5.5.1. Features to be tested
 The graphical user interface, with respect to the response time of operations that involve the database.
 The speed of a vehicle passing at a time.
 The image processing output.
 The syncing of image files up to Dropbox.
5.5.2. Features not to be tested
 3rd party and off-the-shelf components. It is assumed that 3rd party components were evaluated, and their pros and cons properly weighed, before being chosen for our software. The interfaces to those components will be tested, but not the functionality or performance of the components themselves. In our case the Dropbox service, which is used to sync the image files, does not need testing.
 Features that cannot be tested include those conditions described in the out-of-scope section of the design document. This includes checking the speed of cars arriving consecutively one after the other, due to the difficulty of identifying the image of the license plate in that case.
 Compatibility of the system with platforms other than Windows.
 The actual database software utilized is assumed to work as designed and will not be directly tested for functionality.
5.6. Pass/fail criteria
This section specifies generic pass/fail criteria for the tests covered in this plan. They are supplemented by pass/fail criteria in the test design specifications. In our usage, "pass" means the system satisfied the criterion, while "fail" means it did not; in line with the IEEE view, a failing test is still a successful test in the sense that it reveals a defect. Below we have listed a few pass/fail criteria, assessed how our system reacted to them, and recorded the results as follows.

Criteria                                                                        Result
Able to recognize image formats other than .png                                Failed
Recognize the .png image format                                                Passed
VSMS tracks the movement and calculates the speed of a single car
passing at a time                                                              Passed
VSMS tracks the movement and calculates the speed of multiple cars
passing consecutively                                                          Failed
Threshold value…

5.7. Approach
Black box testing:

Also called behavioral testing or partition testing. This kind of testing focuses on the functional requirements of the software. It enables one to derive sets of input conditions that will fully exercise all the functional requirements of a program.

GUI Testing:

GUI testing includes testing the user interface of the OCR component. It covers ease of use, look and feel, error messages, and GUI guideline violations.

Integration Testing:

Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with different functions interacting together. In image processing, integration testing includes testing the preprocessing algorithms and the integration of the preprocessing algorithms with feature extraction.

Functional Testing:

Functional testing is carried out in order to find unexpected behavior in the recognized text. The characteristics of functional testing are to verify the correctness, reliability, testability and accuracy of the recognized text.

System Testing:
System testing of software is testing conducted on a complete, integrated system to evaluate the
system's compliance with its specified requirements.

Performance Testing:

Performance testing will be done by a classmate.

Security and Access control testing

The clerk is given full authority to control the synced images, so verifying this access control is a crucial test.

User acceptance testing:

The purpose of user acceptance testing is to confirm that the system is developed according to the specified user requirements and is ready for operational use. Acceptance testing is carried out at two levels, alpha and beta testing.

5.8. Test case specifications


This section, the core of the test plan, lists the test cases that are used during testing in our project. Each test case is described in detail in a separate test case specification document. As many test cases could be derived as the system allows, but it is not worthwhile to mention all of them. Hence, we have opted to show those derived from the functional requirements of VSMS; their success is also illustrated with the necessary snapshots.

Test cases derived from functional requirements:


As with most software systems, VSMS provides login access and blocks unauthorized users. The first page shown when the software runs is a simple login page, and the manager of the system enters the correct username and password to proceed to the next page.
Figure: login page
Once the login succeeds, the home page window automatically pops up. The home page is where the general directory/one-click functionality of the system resides. It consists of the following toolboxes:
 Background settings
 Available video input
 Process requirements
 Troubleshoot
 Statistical information
 Vehicle information
The next step is choosing the input video channel or camera. The ASUS PC webcam and the externally added webcam are the two cameras available to the system. The user chooses either of them, and by clicking the Get Background button in the Background settings box we can start detecting a passing object. The action is depicted in the following two screenshots.
After the background has been captured, the system automatically starts detecting speed by counting the number of frames whenever an object is detected. This is shown as follows.
5.9. Estimated risk and contingency plan
Risk: Being unable to acquire the necessary number of project group members as components become ready to test
Probability: 30%   Risk type: Personnel schedule
Mitigation/contingency: Testing work will be split among the existing members as components become ready, so the schedule must be adjusted accordingly.

Risk: Unable to acquire some of the necessary HW & SW required for integration and system testing
Probability: 25%   Risk type: Equipment
Mitigation/contingency: Utilize the existing acquired hardware.

Risk: Third party services utilized in the system become unavailable during testing
Probability: 10%   Risk type: Third party
Mitigation/contingency: Set up a communication channel to the 3rd party to report and handle issues when they occur.

Risk: Components are not delivered on time
Probability: 30%   Risk type: Schedule
Mitigation/contingency: Integration testing must be delayed until the component is delivered.

Risk: Turnover
Probability: 10%   Risk type: Personnel
Mitigation/contingency: Testers will work in parallel on components; if a single member decides to leave, a secondary tester with knowledge of the component will lead the testing.
5.10. Conclusion
All in all, the document explains the various activities performed as part of testing the VSMS application, a MATLAB-based vehicle speed detecting and monitoring system. The system comprises several modules, such as image processing, plate number matching, report generation and image capturing. The best practices gained from the project are among its main virtues. These include:
 Doing repetitive tasks manually consumes much time; these tasks were automated by creating scripts that are run each time, which saved time and resources.
 Automation scripts were prepared to create new customers, where a lot of records needed to be created for testing.
 Business-critical scenarios were tested separately on the entire application, which is vital to certify that they work fine.
