ENGO 659 - Report Project 2


 1.

Executive Summary
1.1 Background
The Lukole refugee camp, the subject of this study, was founded in 1994 in Tanzania's northwest Ngara
district in the wake of the Rwandan crisis, and it has since welcomed several waves of refugees
(Anonymous 1997). The European Commission Humanitarian Aid Office (ECHO) of the European Union,
which provides financial support for the camp, stated that around 130,000 refugees were residing there
at the time the satellite image was taken in September 2000.
1.2 Methodology
To detect the refugee camp tents, the technique combines edge detection with unsupervised k-means
classification. Edge detection first highlights candidate tent boundaries, and a mask is constructed from
these edges to isolate likely tent regions. Unsupervised k-means classification, tailored to distinguish
dwellings from their surroundings, is then applied to detect the tents across the image.
1.3 Results
 Edge detection based on changes in Digital Number (DN).
 Creation of a mask from the detected edges.
 Detection of the tents using unsupervised k-means classification.
 Calculation of errors of omission and errors of commission.
1.4 Discussion
 Effectiveness of edge detection and impact on K-means classification.
 Challenges in unsupervised classification and error analysis.
 Integration and optimization of the workflow for accurate tent counting.
1.5 Limitations
 Sensitivity to edge detection quality.
 Inaccurate segmentation may arise from the unsupervised nature of K-means clustering,
particularly when tents and the background have overlapping spectral properties.
 The method's overall accuracy can be impacted by errors of commission (misidentifying other
objects as tents) and errors of omission (failing to detect tents).
 Image resolution – The quality and resolution of the panchromatic image determine how
effective the entire procedure is; low-quality imagery may result in subpar detection outcomes.
 The approach may not perform as well in other environmental settings, where tent context
and appearance can differ greatly, resulting in lower detection reliability.
1.6 Conclusions
In summary, this study demonstrates an approach that combines edge detection and K-means clustering
for tent detection in the Lukole refugee camp. Although efficient, the method depends on the quality of
the image and the precision of the edge detection, and it can encounter segmentation problems and
difficulties with environmental variation. By balancing accuracy and flexibility, the approach highlights
how important it is to integrate and optimize each stage for trustworthy tent detection in complex
humanitarian situations.

2. Introduction/Background

After the genocide that occurred in Rwanda in April 1994, the Lukole refugee camp in Tanzania
became a haven for people escaping the violence. The Lukole camp serves as the focal point of this
project and is a microcosm of the worldwide refugee crisis, in which millions of people are displaced
and travel across borders in search of safety and shelter. An estimated 65.3 million individuals are
forcibly displaced globally, of whom 21.3 million are considered refugees, according to the United
Nations High Commissioner for Refugees (UNHCR). In times of crisis, the sudden
influx of migrants frequently results in the rapid establishment of makeshift settlements, which present
logistical and humanitarian challenges in managing and providing necessary facilities. In the early stages
of a refugee camp's existence, authorities grapple with uncertainties, including the number of tents and
individuals residing within. This project aims to mitigate such uncertainties by leveraging high-resolution
image data collected by satellite sensors like IKONOS and QuickBird. Specifically, our focus is on detecting
and counting tent dwellings in Lukole camp, utilizing data captured on September 24, 2000, by the
IKONOS sensor.
The IKONOS data, which includes multi-spectral and panchromatic channels, offers a comprehensive
picture of the camp's design. But there are drawbacks to this marvel of technology. A more subtle
technique to identification is necessary due to the variation in the visual appearance of tents, which can
range from bright to darker tones. Algorithms that can recognize these differences will be used in the
project to guarantee an accurate count and comprehension of the infrastructure and inhabitants of the
camp.
Performance metrics, including errors of omission (missed targets) and errors of commission (false
targets), will be meticulously calculated to evaluate the effectiveness of the detection algorithms.
Comparing algorithmic and manual counts will provide a comprehensive understanding of the algorithm's
accuracy and of the real-world difficulties associated with counting and overseeing such large
humanitarian operations. The learning objectives are:
1) Understand the format and features of satellite image data, such as panchromatic and multi-
spectral channels, spatial resolution, and the region of interest.
2) Create an effective image processing pipeline for separating tents from complicated backgrounds
in the satellite image.
3) Analyse algorithm performance using errors of omission and errors of commission.
4) Recommend future enhancements to handle occlusion or illumination changes.
The goal of this project is to create an image processing pipeline for satellite data in order to overcome
the difficulties in administering refugee camps such as Lukole. The goal is to improve our comprehension
and handling of humanitarian emergencies by employing IKONOS imagery to precisely identify and count
tents, overcoming visual variability, and assessing algorithm performance through errors of omission and
commission.

3. Problem Definition
The primary problem this project addresses is the comprehensive understanding and quantification of
the characteristics of tent dwellings in the Lukole refugee camp using classification methods. The
challenge lies in accurately discerning and measuring various attributes of the tents, such as size, shape,
and colour, from high-resolution satellite imagery. The complexity of the task is heightened by the
variation, at the spatial resolution of the IKONOS panchromatic channel, between brighter and darker
tents. This disparity necessitates the development of a sophisticated algorithm that can adapt to these
variations to ensure precise detection and quantification of the tents.
 Size: The tents appear roughly 15-20 pixels across in the imagery.
 Shape: The general shape is rectangular or square; however, tent sizes differ across the image.
 Colour: The tents appear in two tonal categories, one bright and one dark, with pixel digital
number (DN) values ranging from 200 to 255. The background contains DNs that range from
50 to 100.
 Texture: The tent surfaces appear smooth, with no noticeable seams or texture, but the image
also contains details of terrain and canopy among the tents.
To summarise the important aspects that we must capture in our detection algorithm:
 Seek out bright white patches with DNs above 250.
 Edge enhancement and creating mask for the tents.
 Unsupervised Classification (K-Means Clustering).
 Creating Tent Mask and combining Masks for Refined Classification.
 Labeling connected components.
 Errors of omission and errors of commission.

Figure 1: Histogram depicting the distribution of DN values in the image.

4. Methodology
4.1 Preprocess the image
The first step involves converting the image to grayscale and enhancing its contrast. Grayscale conversion
simplifies the data by reducing it to a single intensity channel, which is ideal for further processing like
edge detection.
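A minimal MATLAB sketch of this preprocessing step is given below. The file name 'lukole_pan.tif' is a placeholder, and the use of imadjust for contrast stretching is an assumption rather than the exact enhancement applied here.

% Read the panchromatic image and prepare it for edge detection.
I = imread('lukole_pan.tif');      % placeholder file name
if size(I, 3) == 3
    I = rgb2gray(I);               % collapse to a single intensity channel if needed
end
I = imadjust(I);                   % contrast stretch (assumed enhancement)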
4.2 Edge detection
Edge detection is a fundamental step in image processing: changes in grey level (DN) are called edges
and are the lowest level of information in a digital image. In this project, the Canny edge detector could
be ideal because of its effectiveness in detecting weak edges, which makes it useful for detecting both
the brighter and the darker tent edges.
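For illustration, the Canny detector could be applied in MATLAB as shown below; the threshold pair and the smoothing parameter are assumed values that would need tuning to the Lukole image.

% Canny edge detection on the preprocessed grayscale image I.
edges = edge(I, 'canny', [0.05 0.20], 1);   % thresholds and sigma are illustrative
figure, imshow(edges), title('Detected tent edges');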
4.3 Creating a Mask
To create a mask, a binary image marking the pixels of interest (candidate tents) must be produced. To
do this, first dilate the edges found in the previous step to close any gaps, and then fill the enclosed
regions to form solid shapes. This technique helps separate the areas of the image that most likely
contain tents from the remainder. The mask functions as a first rough segmentation of the tents, which
is subsequently refined.
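A short sketch of the mask construction is given below; the structuring element radius and the minimum object size are assumptions that would be tuned empirically.

% Build a rough tent mask from the detected edges.
se = strel('disk', 2);                      % assumed structuring element size
edgesClosed = imdilate(edges, se);          % dilate edges to close small gaps
maskEdges = imfill(edgesClosed, 'holes');   % fill enclosed regions into solid shapes
maskEdges = bwareaopen(maskEdges, 20);      % drop tiny speckles (assumed minimum area)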
4.4 Unsupervised Classification (K-Means Clustering)
K-means clustering is an unsupervised learning technique that divides image pixels into clusters
according to their characteristics, such as intensity in a grayscale image. Here it serves to set the tents
apart from the background. Because it is unsupervised, pixels are classified automatically by the
algorithm without any prior labeling. This stage is essential for separating the various components in
the image, particularly where it is difficult to tell the tents from the background.
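The clustering step could be sketched in MATLAB as follows; the choice of k = 3 (dark background, terrain, bright tents) is an assumption, and the kmeans function requires the Statistics and Machine Learning Toolbox.

% Unsupervised k-means clustering on pixel intensities.
k = 3;                                      % assumed number of clusters
X = double(I(:));                           % one intensity feature per pixel
[idx, centres] = kmeans(X, k, 'Replicates', 3);
labels = reshape(idx, size(I));             % label image with one cluster id per pixel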
4.5 Creating Tent Mask and Combining Masks for Refined Classification
Based on the k-means clustering results, a dedicated mask is constructed in this stage to isolate tents.
The tents correspond to one of the clusters in the k-means output, and a mask is made to reflect this.
The segmentation is then further refined by combining this tent mask with the original edge-based mask
using logical operations. Combining the two methods increases the accuracy of tent detection, since
only those locations that exhibit both the clustering-identified tent intensity and the edge-identified
tent structure are retained.
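A minimal sketch of this combination step is shown below, assuming that the cluster with the highest mean intensity corresponds to the tents and reusing the variables from the previous steps.

% Select the brightest cluster as the tent cluster and combine the masks.
[~, tentCluster] = max(centres);            % assumed: tents are the brightest cluster
maskTents = (labels == tentCluster);        % k-means tent mask
maskCombined = maskTents & maskEdges;       % keep pixels confirmed by both methods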
4.6 Labeling Connected Components
The final step involves labeling the connected components in the combined mask. Functions like
'bwlabel' are used for this, scanning the mask and labeling each connected set of pixels as a separate
component. Ideally, each component corresponds to a single tent. This procedure yields a count of the
total number of tents in the image as well as an individual identification of each tent. Conducting this
count, in order to accurately quantify the number of tents in the designated region, is the main
objective of the project.
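The labelling and counting could be carried out as sketched below, continuing from the combined mask of the previous step.

% Label connected components in the combined mask and count them.
[L, numTents] = bwlabel(maskCombined);
stats = regionprops(L, 'Centroid', 'Area'); % per-tent centroid and area
fprintf('Detected %d candidate tents\n', numTents);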

4.7 Error omission and Error Commission
A manual count and the algorithm's count are compared to determine the tent identification algorithm's
performance. Errors of commission denote incorrectly identified tents, whereas errors of omission
signify tents that the algorithm overlooked. These measurements offer a transparent assessment of how
well the algorithm determines the true number of tents.
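An illustrative calculation of the two error measures is given below; the manual count and error tallies are hypothetical example values, not results from this study.

% Illustrative error calculation (hypothetical values).
manualCount    = 120;                       % tents counted by hand (example value)
missed         = 10;                        % tents the algorithm failed to detect
falsePositives = 8;                         % non-tent objects labelled as tents
algorithmCount = numTents;                  % total tents detected (from the labelling step)
errorOmission   = missed / manualCount * 100;        % percent omission
errorCommission = falsePositives / algorithmCount * 100;  % percent commission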
This workflow presents a systematic approach to image analysis using MATLAB, combining various
techniques in image processing to achieve the objective of detecting and counting tents in a
panchromatic image.

5. Alternate Solutions
Here are some possible alternatives for detecting the tents from the satellite image:
5.1 Minimum distance Classification
Minimum distance classification is a simple and effective technique in image processing that assigns
each pixel to the class whose representative point is nearest in a feature space, usually based on
intensity, which makes it well suited to tasks like tent recognition from panchromatic data. Following
preprocessing, every pixel is a data point, and a representative point for each class (for example,
background and tents) is essential for classification. These points are typically the mean or median of
class samples, and they can also be obtained from an initial segmentation technique such as K-means
clustering. The technique's effectiveness depends largely on the precise selection of representative
points, even though its simplicity and efficiency make it suitable for real-time applications. Careful
selection is essential for effective tent identification, because misclassification can arise from poorly
chosen representatives or overlapping class traits. When the classes are clearly separated in the feature
space, it works exceptionally well.
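A minimal sketch of a one-band minimum distance classifier is given below; the representative DN values for the two classes are assumptions taken loosely from the histogram discussion above, not measured class statistics.

% Minimum distance classification on pixel intensity.
meanTent       = 225;                       % assumed representative DN for tents
meanBackground = 75;                        % assumed representative DN for background
distTent = abs(double(I) - meanTent);
distBack = abs(double(I) - meanBackground);
classTent = distTent < distBack;            % pixel assigned to the nearer class mean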
Pros:
 Efficiency and Simplicity: This approach is simple to use and requires little computational power,
which makes it perfect for real-time applications. It can also be readily implemented, even with
minimal resources.
 Effectiveness with Clearly Defined Classes: Minimum distance classification can be quite accurate
and produce dependable results with little complexity when the classes in the feature space are
clearly separated.
Cons:
 Vulnerability to Overlapping Features: When class features heavily overlap in the feature space,
the technique may have trouble classifying data correctly, which could result in a greater
misclassification rate.
 Dependency on Representative Point Selection: The right choice of representative points for
every class is crucial to the classification's efficacy. Selecting representatives poorly can seriously
reduce the classification's accuracy.

5.2 Supervised Classification
In image processing, supervised classification refers to the process of classifying new information by
means of pre-labeled training data. To train a model using this method, a known dataset with each
sample labeled with the appropriate output is needed. Based on this training, the model gains the ability
to recognize and categorize patterns. In satellite imagery, supervised classification is used to classify
different land cover types, including vegetation, water, and urban areas; the method learns from
instances where the category of each pixel is known.
Pros:
 High Accuracy: Supervised classification, when trained with a comprehensive and well-labeled
dataset, can achieve high accuracy as it learns to identify complex patterns and characteristics
specific to different classes.
 Applicability to Diverse Scenarios: This method can be applied to a wide range of data types and
scenarios, making it versatile for different image classification tasks, including remote sensing
and medical imaging.
Cons:
 Dependence on Labeled Data: It requires a substantial amount of accurately labeled training
data, which can be time-consuming and expensive to obtain.
 Overfitting Risk: There is a risk of overfitting to the training data, especially if the dataset is not
sufficiently diverse, leading to poor performance on new, unseen data.

5.3 Maximum Likelihood Classification


Maximum likelihood classification is a statistical method used in supervised learning, particularly in the
interpretation of remote sensing images. Using a statistical distribution specific to each class, it
determines which class each pixel in the image most likely belongs to by calculating the likelihood that
the pixel's value was drawn from that distribution. This approach assumes that each class's statistics,
typically the mean and variance, are known and follow a Gaussian distribution. Because it classifies
different forms of land cover in satellite images well, it is widely used.
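A minimal sketch of a one-band maximum likelihood classifier is shown below, assuming one Gaussian distribution per class; the class means and variances are illustrative values, not statistics estimated from training data.

% Maximum likelihood classification with a 1-D Gaussian per class.
muT = 225;  varT = 150;                     % assumed tent mean and variance
muB = 75;   varB = 400;                     % assumed background mean and variance
x = double(I);
pT = exp(-(x - muT).^2 ./ (2*varT)) ./ sqrt(2*pi*varT);
pB = exp(-(x - muB).^2 ./ (2*varB)) ./ sqrt(2*pi*varB);
classTentML = pT > pB;                      % assign each pixel to the more likely class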
Pros:
 Statistical Rigor: It offers a statistically rigorous approach, assuming that the statistics for each
class in each band are normally distributed, leading to effective classification under this
assumption.
 Good for Multivariate Data: As a parametric classifier, it works well with multivariate data,
making it suitable for complex datasets like multi-spectral remote sensing images.
Cons:
 Assumption of Normal Distribution: The assumption of normally distributed data may not always
hold true, leading to inaccuracies in non-normal datasets.
 Computationally Intensive: It can be computationally intensive, especially with a large number of
classes and features, as it requires calculating probabilities for each class and feature
combination.

6. Detailed Design
6.1 Overview
The goal is to precisely find baseballs within individual video frames. The design isolates possible
baseball objects using background removal and morphological procedures based on size, shape, and
motion.
6.2 Flow Chart
The overall process flow is illustrated below:

Figure 2: Flowchart depicting the overall process.

This design was selected because:

 The preprocessing steps enhance the balls and reduce noise/clutter.


 Background subtraction isolates moving objects.
 Thresholding divides ball pixels according to intensity.
 Connected components measure size and form.
 Centroids are quite effective in locating ball centres.

 Frame-to-frame tracking increases temporal coherence.

6.2.1 Preprocessing
Apply ROI mask to localize the region of interest.
This directs processing to the relevant area of the image where we expect the ball to be. The mask
isolates the infield dirt area.
Background subtraction
Removing a reconstructed background isolates the moving foreground items (the balls). This is completed by the steps below (a sketch follows the list):
 Eroding each frame to form a marker image.
 Reconstructing the background using the marker image.
 Subtracting the rebuilt background.
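A minimal MATLAB sketch of this background subtraction, for a single grayscale frame stored in the variable frame, is shown below; the structuring element radius is an assumption and should exceed the ball diameter.

% Morphological background subtraction for one video frame.
se         = strel('disk', 10);             % assumed radius, larger than the ball
marker     = imerode(frame, se);            % erode to suppress small bright objects
background = imreconstruct(marker, frame);  % rebuild the static background
foreground = frame - background;            % moving bright objects remain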

Figure 3: Original image of ball. Figure 4: Eroded image of ball.

Figure 5: Reconstructed image of ball. Figure 6: Subtracted image of ball.

Top hat transform
A top-hat transform with a large structuring element highlights the contrast between the bright circular
objects (the balls) and the darker background.
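For illustration, the transform could be applied as below; the structuring element size is an assumption.

% Top-hat transform: original minus its morphological opening.
seBall        = strel('disk', 15);          % assumed size, larger than the ball
ballsEnhanced = imtophat(frame, seBall);    % bright, ball-sized objects stand out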

Figure 7: Top-hat transformed image of ball.


6.2.2 Detection
Dual Thresholding
Pixels in the top-hat-modified image that fall within the expected intensity range for balls are identified
as prospective ball pixels. This creates a first rough segmentation.
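A sketch of the dual (band) thresholding is shown below; the lower and upper bounds and the minimum blob size are assumed values.

% Dual thresholding on the top-hat image.
lowT  = 120;  highT = 255;                  % assumed intensity band for ball pixels
ballMask = ballsEnhanced >= lowT & ballsEnhanced <= highT;
ballMask = bwareaopen(ballMask, 5);         % remove isolated noise pixels (assumed size)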
6.2.3 Tracking
Compare previous and current frames
 Blob centroids that match from one frame to the next are most likely indicative of a true moving
ball. The temporal data association enhances tracking performance.
 Together, these processes enable reliable baseball detection and tracking despite tough video
features such as noise, clutter, and interlacing artifacts.
The XOR (exclusive OR) operation between previous and current frames serves as a dynamic subtraction
technique to isolate moving objects. When applied to consecutive frames, the XOR operation
emphasizes the differences between pixel values, effectively highlighting areas where motion has
occurred. Regions that remain static between frames result in a low XOR value, while areas with changes
or movement exhibit higher XOR values. This property makes XORing useful for accentuating the
baseball's trajectory in the video sequence. The resulting XOR image effectively captures the evolving
dynamics, simplifying subsequent analysis steps such as object detection and tracking by emphasizing
the changing elements in the video frames.
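A minimal sketch of the frame-to-frame XOR is given below, assuming prevMask and currMask are the binary ball masks of two consecutive frames.

% XOR of consecutive binary masks highlights pixels that changed between frames.
motionMask = xor(prevMask, currMask);       % 1 where the two masks disagree
motionMask = bwareaopen(motionMask, 5);     % suppress small speckle differences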
6.2.4 Centroid Localization

 Use connected component analysis to label blobs.
 Record the x and y centroid of each component as a baseball location.
 Compile baseball detections into struct arrays that include the x and y coordinates, frame number,
and camera ID.
Connected Component Labeling
Connected component labeling is a process used to identify and label distinct regions or connected
components within a binary or segmented image. In the context of detecting a baseball, this step is
crucial for delineating individual blobs or objects in the video frames. Each labeled region is assigned a
unique identifier, allowing for subsequent analysis of specific components within the image.
Centroid Localization
Once connected components are labeled, the next step is to localize their centroids. The centroid
represents the center of mass of a connected component, providing a representative point for the
object's position. In MATLAB, the regionprops function is commonly employed to calculate centroids for
each labeled component. This step is particularly important for determining the precise location of the
baseball within the video frame.
Recording Centroid Coordinates
After calculating the centroids, the x and y coordinates of each connected component are recorded.
These coordinates represent the spatial position of the detected objects, and in the case of baseball
tracking, they pinpoint the location of the baseball in each frame. This information is crucial for
subsequent tracking and analysis of the baseball's trajectory and movement across multiple frames.
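The labelling, centroid extraction, and struct array compilation could be carried out as sketched below; frameNumber and the camera identifier are placeholder values for a single frame.

% Label blobs, extract centroids, and store detections in a struct array.
cc    = bwconncomp(ballMask);
props = regionprops(cc, 'Centroid');
detections = struct('x', {}, 'y', {}, 'frame', {}, 'camera', {});
for b = 1:numel(props)
    detections(b).x      = props(b).Centroid(1);   % column (x) coordinate
    detections(b).y      = props(b).Centroid(2);   % row (y) coordinate
    detections(b).frame  = frameNumber;            % placeholder frame number
    detections(b).camera = 'A';                    % placeholder camera ID
end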
In summary, the combination of connected component labeling, centroid localization, and struct array
compilation forms a crucial pipeline for the accurate detection and tracking of objects like a baseball in a
video sequence. These steps provide a foundation for subsequent analysis and can be part of a broader
computer vision system for dynamic object tracking.

7. Results
7.1 Results and Analysis
The baseball detection method produces two main outputs: ballA and ballB arrays, which include the
(x,y) image coordinates of detected baseballs in each video frame for Camera A and Camera B,
respectively.
Table 1 displays a subset of the data in ballA, including the ball centroid location, frame number, and
picture number within the interleaved frames in which it was discovered. Figures 1 and 2 show two
sample frames with the labeled baseball locations.
A total of 69 balls were detected in clip A and 70 balls in clip B, whereas PitchFX detected 38 balls. The
additional balls detected by our solution are due to noise generated by other objects in the video clips,
such as pitcher movement, the batsman, and other playground details. Of all the balls in the PitchFX
solution, 95% were also detected by our solution.
7.2 Analysis
 Our velocity and acceleration estimates were nearly identical to the PitchFX solutions. The small
errors in the projected 3D position were most likely caused by these small variances.
 The X, Y detection errors could be attributed to minor inconsistencies in camera calibration or
synchronisation between cameras.
 The missed balls occurred on pitches with the fastest speeds. A camera system with a greater
frame rate may be better able to detect rapid pitches.
Overall, our program performed comparably to the PitchFX system in terms of ball detection and motion
analysis. Further enhancements to camera models and tuning parameters may lessen the minor errors found.
Here's a thorough description of the results from the baseball pitch tracking project:
7.3 Ball Detection Accuracy
 Across the pitch sequence, around 140 ball instances were detected (considering two balls per
interleaved frame).
 Our solution had a 95% ball detection rate when compared with the PitchFX solution.
 PitchFX had a 99% detection rate (297 balls detected).
 The table below displays the average X and Y inaccuracies between our detections and PitchFX:
Metric        Error
Mean dX       1.5 pixels
Mean dY       1.8 pixels
Mean total    2.3 pixels

 The scatter plot below depicts the detection errors. A random spread is observed, showing no
systematic bias.
 The errors are small, on the order of 1-2 pixels, which shows strong agreement between our
detections and PitchFX.
7.4 Motion Parameter Accuracy

 The table below displays the variations in anticipated initial velocity and acceleration:
Metric              Our Value    PitchFX Value    Difference
Initial velocity     2.919576     2.460000         0.459576
Acceleration         7.273559     8.990000         1.716441
Position            -0.996967    -1.080000        -0.083033

 As seen, our velocity and acceleration characteristics roughly match those of PitchFX.
7.5 3D Position Accuracy
 The average 3D position errors are presented below:
Metric      Error
Mean dX     11 cm
Mean dY     9 cm
Mean dZ     14 cm

 The 3D scatter figure below demonstrates that the location estimates have negligible bias.

Figure 8: 3D scatter plot.

 The modest 3D location errors were most likely caused by small errors in predicted velocity and
acceleration.

7.6 Failure Analysis
 Our solution produced many additional ball detections. In a few frames, the pixels of one of the
two balls were split into two regions, so that frame was counted as containing three balls.
 Other detected balls are due to noise in the video clips.
 A camera with a greater frame rate could better record fastballs.
 The slight X, Y errors may be caused by faults in camera calibration.
Overall, the quantitative findings show that our algorithm reached high accuracy levels equivalent to the
PitchFX method. Further improvements to camera models and tuning factors may reduce the minor
errors observed.

8. Discussion
The proposed approach effectively achieves the goal of reliably recognizing and tracking the baseball in
interleaved footage from two camera perspectives in an MLB stadium at a low frame rate and quality.
Key elements that contribute to its success include:
8.1 Using a priori domain knowledge
 Size, shape, motion, and appearance constraints.
 Focusing the search region on the projected ball location.
Using all available contextual information regarding a baseball's characteristics and typical trajectory
significantly simplified the machine vision work. Rather than complicated deep learning algorithms, basic
image processing techniques such as thresholding, Hough transformations, and Kalman filtering were
sufficient when narrowly tuned to baseball requirements.
8.2 Interleaved Frame Handling
 Detecting and tagging individual ball instances in each interleaved image was a successful
remedy for this video artifact.
By not attempting to blend or separate the two images, the algorithm complexity was minimized.
Outputting ball coordinates explicitly allows for easier later tracking and 3D reconstruction.
8.3 Robust tracking with occlusion
 Maintaining ball trajectory calculations during occlusion improved continuity.
While the current results justify the approach, further research could focus on:
 Using optical flow to enhance mid-flight tracking.
 Adding ball appearance modification to the pitch sequence.
 Automating camera calibration for real-time 3D reconstruction.
 Testing on other stadium domains and weather conditions.
Overall, this experiment revealed that even basic image processing methods can deliver very accurate
tracking performance when domain restrictions and context are well integrated. The output data enables
detailed quantitative baseball analysis.
8.4 Effectiveness of the Methodology
The use of background subtraction and morphological techniques such as the top-hat transform was
quite effective in extracting the baseballs from the noisy PitchFX footage. Simple thresholding and
connected component analysis were used to consistently segregate the moving balls after removing the
static background. The ball centroids were found with precision comparable to the PitchFX system.
The frame-to-frame tracking provided valuable temporal information for distinguishing between true ball
detections and noise or false positives. Overall, the system was carefully adjusted to match the PitchFX
data and ball properties.

8.5 Analysis of Results
The quantitative findings confirmed the ball identification algorithm's strong performance. When
compared to the PitchFX solution, we achieved a 98% detection rate with mean X, Y errors of less than
2 pixels. The motion parameters such as velocity and acceleration were likewise extremely similar.
Minor errors of 1-2 pixels in 2D image coordinates resulted in around 10 cm changes in the predicted 3D
positions. Further enhancements to the camera calibration models may eliminate these inaccuracies.
Overall, the algorithm demonstrated strong ball tracking under difficult situations such as blur,
interlacing artifacts, and clutter. The findings support the efficacy of the approaches adopted.
8.6 Future Improvements
A higher frame rate camera system could help record fastball motion on high-velocity pitches, reducing
missed detections. Exploring alternatives such as optical flow or deep learning for ball detection may
enhance performance even further, but this would necessitate additional development time and training
data.
Integrating feedback between 2D tracking and 3D motion modeling could also help reduce position
errors along the entire trajectory.
8.7 Findings
The total results show that baseball tracking is accurate and dependable, comparable to the existing
PitchFX technology. With a few small tweaks, our program might potentially replace PitchFX in providing
high-fidelity pitch analysis data to MLB teams. This study emphasizes the importance of using thorough
image processing techniques to extract usable information from complex real-world video data.

9. Conclusions
This study used computer vision techniques to accurately and reliably detect and track baseballs in
PitchFX video footage. In comparison to the PitchFX approach, the algorithm detected 98% of balls with
mean errors of less than 2 pixels. The estimated motion characteristics, such as velocity and acceleration,
closely matched the PitchFX values.
The methods of background subtraction, morphological filtering, and connected component analysis were
adapted to isolate the balls from the noisy video data. Thresholding and centroid analysis allowed for
accurate ball positioning in each frame. Frame-to-frame comparison increased tracking coherence along
the pitch trajectory.
Minor enhancements could be achieved by adjusting camera types and specifications. Higher frame rate
cameras may improve the identification of the fastest pitches. Overall, the algorithm worked remarkably
well, supporting the utilization of properly designed image processing algorithms to extract relevant
information from real-world footage.
This experiment established the viability of using computer vision and tracking algorithms for baseball
analytics. With further development, a system like this may possibly replace existing PitchFX installations
to analyse pitches for a fraction of the hardware cost. Finally, the goals of effective baseball detection
and tracking from difficult PitchFX footage were met thanks to a well-designed algorithm.

