
Frequency-Domain Characterization

for
Semi-Autonomous Systems

by

John Bryson Tidman

Submitted in Partial Fulfillment


of the Requirements for the Degree of
Master of Science in Mechanical Engineering
with Specialty in Mechatronics

New Mexico Institute of Mining and Technology


Socorro, New Mexico
July, 2014
ABSTRACT

This thesis applies various methods of frequency-domain characterization to two applications where semi-autonomous behavior is necessary. With many technical systems available today, a remote observer is not provided with sufficient information regarding the surroundings or situation of their remote platform. To reduce the operator's cognitive load, system identification techniques can be used to provide the user with critical feedback regarding the status of these remote systems. In the first application, automatic identification of available traction for mobile robots allows the operator to focus on manipulating the environment with a robotic arm rather than monitoring the robot's interaction with the surface. In the second application, continuous monitoring of a secure door is provided by a physical and acoustic data collection system, alerting security personnel during an attack and allowing them to mount a metered response. To classify surfaces and attack tool types, a vibration-based frequency-response identification method was developed using high-frequency data acquisition and Neural Networks. The data were then compared to the known tool types/surfaces, and confusion matrices were created demonstrating the accuracy of the Neural Networks in classifying these systems. The surface type can be related to a coefficient of static friction, which in turn can be used to calculate the maximum force the robotic manipulator can impart on the environment without the wheels losing traction. Tool type classification can inform the operator of the immediate need for a security response at a remote location. Using these Neural Networks, surfaces were classified with high accuracy (97.8%), a very successful implementation of the data acquisition and analysis methods used. Tool type was classified with more moderate accuracy (83.3% and 75%), leaving considerable room for development in future work.

Keywords: Surface Identification, Mobile Robots, EOD, Surface Type, Traction Control, Frequency-Domain Characterization, Neural Networks, Attack Tool Type
CONTENTS

LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS

1. INTRODUCTION
   1.1 Background
       1.1.1 Traction Control Monitoring
       1.1.2 Security Door
       1.1.3 Semi-Autonomous Systems
       1.1.4 Neural Network Basics
   1.2 Prior Work
   1.3 Specific Aims

2. FREQUENCY DOMAIN CHARACTERIZATION OF EXTERNAL STIMULI
   2.1 Considered Hardware
       2.1.1 ADXL345 Development Board
       2.1.2 ArduIMU
   2.2 Final Hardware
       2.2.1 Data Acquisition Board
       2.2.2 Microphone
       2.2.3 Accelerometers
       2.2.4 Robotic Platform
       2.2.5 Security Door Analog
   2.3 Data Acquisition
   2.4 Data Preprocessing
   2.5 Neural Network

3. APPLICATION TO TRACTION CONTROL FOR MOBILE ROBOTS
   3.1 Methods
       3.1.1 Data Collection Practices
   3.2 Results
       3.2.1 Classification Accuracy
       3.2.2 Testing Time
   3.3 Traction Control
   3.4 Conclusions

4. APPLICATION TO INTRUSION DETECTION FOR SECURITY DOORS
   4.1 Methods
       4.1.1 Data Collection Practices
   4.2 Results
       4.2.1 Classification Accuracy
       4.2.2 Testing Time
   4.3 Conclusions

5. DISCUSSION AND FUTURE WORK
   5.1 Discussion
       5.1.1 Traction Control Monitoring
       5.1.2 Attack Tool Type Identification
   5.2 Future Work
       5.2.1 Traction Control Monitoring
       5.2.2 Attack Tool Identification

Appendix A. DATA SHEETS
   A.1 ADXL345
   A.2 cRIO-9022
   A.3 cRIO-9114
   A.4 NI-9234
   A.5 Probe Accelerometer
   A.6 Body Accelerometer
   A.7 Microphone

Appendix B. MATLAB CODE
   B.1 FFT Script
   B.2 FFT Re-sampling Script
   B.3 Truncated Data Script

Appendix C. NN TRAINING PROCESS

REFERENCES
LIST OF TABLES

3.1 Correct classification patterns for surface NN
3.2 Friction forces f_f causing slip (TFS)
3.3 μ_s for all surfaces and all trials
3.4 Average μ_s for all surfaces

4.1 Correct classification patterns for tool type NN
LIST OF FIGURES

1.1 Infrared View from Foster-Miller Talon EOD Robot
1.2 Simple Neural Network Connections
1.3 Classification System from [1]
1.4 Confusion Matrix for Testing Phase of NN from [2]
1.5 Classifier performance on individual terrains from [3]

2.1 Analog Devices ADXL345 Development Board. Source: A
2.2 Data from ADXL345 Development Board
2.3 Truncated data from ADXL345 Development Board
2.4 Data from ADXL345, reconstructed time vector
2.5 Truncated data from ADXL345, reconstructed time vector
2.6 NI cRIO-9022 Real-Time Controller
2.7 NI cRIO-9114 Reconfigurable Embedded Chassis
2.8 NI 9234 Analog IO Module
2.9 GRAS Type 40PH
2.10 PCB 352C03 and 353B15 Accelerometers
2.11 Robotic Testing Platform
2.12 Security Door Analog
2.13 Initial Separation of Collected Data
2.14 NN Data via FFT and "Re-sampling" of data from Figure 2.13
2.15 NN Example Confusion Matrix (from training)
2.16 Confusion Matrix for selected NN
2.17 Simulink Diagram from the deployed NN

3.1 Surfaces used for Data Collection
3.2 Re-sampled Surface FFTs
3.3 Simulink Classification Scope (from Classification Scope in 2.17)
3.4 Simulink Classification Scope Asphalt and Concrete
3.5 Simulink Classification Scope Dirt, Grass, and Gravel
3.6 Neural Network Confusion Matrix
3.7 Figure for static friction coefficient calculation

4.1 Placement of the accelerometers for testing
4.2 Example accelerometer data (Green is Edge, Red is Middle)
4.3 Confusion Matrix for trained Security Door NN
4.4 Simulink Output for security door classification
4.5 Confusion matrix for security door NN
4.6 Simulink Output for segmented security door classification
4.7 Confusion Matrix of Segmented security door NN

C.1 NN Setup GUI
C.2 NN Training Percentages Selection
C.3 NN Neurons Selection
C.4 NN Training
LIST OF ABBREVIATIONS

TFS Threshold Force to Slip


PGIP Probe-Ground Interaction Point
PNN Probabilistic Neural Network
NN Neural Network
EOD Explosive Ordnance Disposal
IED Improvised Explosive Device
NMT New Mexico Tech
SNL Sandia National Labs
FFT Fast Fourier Transform
UGV Unmanned Ground Vehicle
AGV Autonomous Ground Vehicle
PCA Principal Component Analysis
FPGA Field-Programmable Gate Array
SVM Support Vector Machine
NOTA None Of The Above
CCP Constant Current Power

This thesis is accepted on behalf of the faculty of the Institute by the following
committee:

David Grow, Advisor

I release this document to the New Mexico Institute of Mining and Technology.

John Bryson Tidman Date


CHAPTER 1

INTRODUCTION

1.1 Background

1.1.1 Traction Control Monitoring

Mobile robotics is one of the fastest growing fields in modern science. Since the invention of the first mobile (ground-traversing) robots in the mid-20th century, improvements have been made to make these robots smarter, more user-friendly, and more sustainable (longer operational lives). The addition of sensors to these robots (light, sonar, cameras, etc.) has improved the quality of information they can relay regarding the surrounding environment. As time progressed, intelligence was added in the form of sophisticated computer programming, which made it possible for the robot to make decisions regarding path planning, workspace manipulation, and group behavior. In the late 20th century, NASA designed and deployed the Mars Pathfinder with its rover Sojourner to Mars. This rover was able to make certain steering decisions and avoid hazardous terrain on Mars.
The advent of semi-autonomous robots has created the need for identification of the surface type being traversed. Surface identification has many applications in the field of mobile robotics. From the simplest mobile robots (remote-controlled cars) to the most advanced rovers (NASA's Curiosity), mobile robots have many applications and uses. Recreational mobile robots are most enjoyable when they can traverse any surface at any time, so an easily implementable method for surface identification would give these recreational robots a basis for traction control. More complex robots, such as those used for Explosive Ordnance Disposal (EOD), include not only the treads and wheels for traversing terrain, but also manipulator arms and claws used to disarm or relocate Improvised Explosive Devices (IEDs). These complex and expensive robots have even more need for a robust surface classification method.
Classification through vibration analysis is perhaps the most easily implementable method for identifying surfaces, as all that is required is an accelerometer or microphone, a data acquisition device, and a computer to process the data. When operating near a bomb, EOD robots often must apply forces to move obstacles or to withdraw the bomb from an enclosure. Unfortunately, the user often has little to no feedback regarding exactly what type of surface the robot is crossing or has just finished crossing. Camera feedback (included with almost all types of EOD robots) typically provides the user with rather limited information regarding the surface type. This "method" of surface classification relies on the operator's ability to distinguish one surface from another based on the camera's visual representation of the surface. In certain situations, however, this camera feedback is insufficient, e.g., at night or inside dark buildings. These systems also often have signal transmission problems, due either to rough terrain or to the robot moving out of the operator station's transmitting/receiving range. An example of EOD camera feedback from an infrared camera is shown in Figure 1.1.

Figure 1.1: Infrared View from Foster-Miller Talon EOD Robot

Due to this insufficient feedback (or a "misclassification" by the operator), it is possible for the user to inadvertently apply too much force to the bomb, or to the area around it, causing the robot's tires or treads to lose traction. This loss of traction could cause many problems, including collision with the explosive device, slipping down an incline, or robot tipover (which requires the operator to travel near the bomb and risk injury). Monitoring the force applied to the surrounding environment and warning the operator when this force approaches the Threshold Force to Slip (TFS) would help keep mobile robots from being destroyed and human lives from being endangered. With such a dangerous task as defusing or disposing of bombs, uncontrollable situations are undesirable; new and improved methods must be devised to protect those who protect us. Through the use of both contact (vibration of a probe system and the resulting accelerations) and non-contact (acoustic vibration produced by probe dragging) methods, this thesis will seek to demonstrate the validity of identifying various surfaces (with at least 90% accuracy) using relatively inexpensive sensors (less than $1000 each), and will relate those surfaces to a TFS.
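The TFS idea can be sketched numerically: on a flat surface, the horizontal force the manipulator can apply before the wheels slip is bounded by μ_s·m·g. The surface-to-μ_s table, the 50 kg robot mass, and the function names below are illustrative assumptions, not values measured in this thesis.

```python
# Sketch: converting an identified surface type into a Threshold Force to Slip.
# The mu_s values and robot mass are illustrative assumptions only.

G = 9.81  # gravitational acceleration, m/s^2

# Hypothetical static friction coefficients per classified surface
MU_S = {"asphalt": 0.9, "concrete": 0.8, "dirt": 0.6, "grass": 0.5, "gravel": 0.4}

def threshold_force_to_slip(surface: str, robot_mass_kg: float) -> float:
    """Maximum horizontal force before traction is lost on a flat
    surface: TFS = mu_s * m * g."""
    return MU_S[surface] * robot_mass_kg * G

if __name__ == "__main__":
    for s in MU_S:
        print(f"{s:9s} TFS = {threshold_force_to_slip(s, 50.0):6.1f} N")
```

An on-board monitor would compare the measured manipulator force against this bound and warn the operator as the force approaches it.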

1.1.2 Security Door

As an additional application of this method of system identification, and in conjunction with the New Mexico Tech (NMT) Security Door Senior Design Team, a second use of this vibration-based classification will be developed. The goal of this project was to identify characteristic frequencies in the vibrations produced by demolition and heat-based penetration techniques on a steel plate. These vibration characteristics will be used (by the team) to create a security alarm database. The data will inherently contain a number of experimental variables; however, if any characteristic or unique profiles are found, presumably they will still exist under different boundary conditions (such as tool bit type, pressure, and tool speed). These alarm codes will be used to classify an intrusion technique (angle grinder, drill, Dremel, Sawzall) and to notify the proper authorities according to the level of security risk as classified by the NN. [4]

This project, sponsored by Sandia National Laboratories (SNL), was designed to increase the effectiveness of "access delay" for safes, vaults, and other types of secure doors. Access delay can be defined as the amount of time it takes for security forces to arrive at the site of an intrusion. SNL tasked NMT with developing an intrusion detection system that would minimize the number of false alarms that can be triggered by environmental conditions. [5] The team designed and built a 1/4-scale security door analog, attached the same accelerometers, and recorded data with the same microphone and data acquisition system used for the surface identification. Since both applications depend only on frequency-domain data for processing, the same analysis and the same type of NN can be used for both. For the testing, the team used a drill with two different drill bits, a Dremel with an attached grinding disc, an angle grinder, and a Sawzall.

1.1.3 Semi-Autonomous Systems

Semi-autonomous systems are defined as systems that are "partially self-governing, especially with reference to internal affairs." For the mobile robot, the goal is to allow an on-board computer to be aware of available traction through identification of the surface type. In the case of the security door, the computer would alert the operator to a potential attack and even provide a classification of the tool type used in that attack.

1.1.4 Neural Network Basics

Neural Networks are used in situations where classification of unidentified inputs is required. These networks are processing structures loosely modeled on the mammalian cerebral cortex and are typically organized in layers. The layers consist of a number of connected "nodes," each containing an activation function. Patterns are presented to the network through the "input layer," which maps to one or more "hidden layers" where the processing is accomplished through a system of weighted connections. The hidden layers then link to the output layer, which could also be called the "classification" layer. It should be noted that the NN used in this thesis is a simple 3-layer network; an example of its connections is shown in Figure 1.2. Many of these networks employ a learning algorithm that modifies the weights of the connections according to the input patterns presented. An analogy would be a child learning to recognize the class of cars from examples pointed out by their parents.

[Figure 1.2 depicts a three-layer network: 100x1 surface FFT inputs feeding a hidden layer (networks with n = 90 and n = 500 neurons are shown), whose outputs map to the surface classes Dirt, Grass, Gravel, Asphalt, Concrete, and NOTA.]

Figure 1.2: Simple Neural Network Connections

Similar to humans, NNs use "neurons" to determine what is most important about certain patterns versus others. In this thesis research, the patterns passed to the hidden layer consist of the binned FFT data for the 5 different surfaces; these FFTs form the patterns containing the frequencies specific to each surface. The neurons "learn" their weights and biases through a process called "backpropagation." Starting from random connection weights, the network attempts to determine the outputs for a given set of inputs. The calculated outputs are then compared to the desired outputs by computing the mathematical difference between them; this difference is called the "error" of the network. Once the errors (for each output class) have been determined, the connection weights are modified to produce smaller errors, and this is where backpropagation is used. The new weights are calculated from the old weights, the node's input value, the error, and the "learning rate," which can be adjusted (in typical non-Matlab NNs). With the output weights adjusted, the hidden nodes calculate their own errors using a similar formula, and these errors are pushed back through the hidden nodes to adjust the weights behind them. This proceeds until all the nodes have been adjusted, so that nodes with larger errors are modified more in order to limit the magnitude of those errors. With these adjusted weights, the mapping process (Inputs → Outputs) begins again with new inputs. Due to the adjusted weights, the outputs should be closer to the desired outputs, but some error will remain; the whole process is repeated until the desired outputs are achieved.
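The training loop described above can be sketched as follows. The thesis itself used MATLAB's Neural Network toolbox; this is a minimal pure-Python illustration, and the XOR toy data, layer sizes, learning rate, and function names are assumptions for demonstration only.

```python
# Sketch of the backpropagation loop described above: a 3-layer network
# (input -> hidden -> output) with sigmoid activations, trained by gradient
# descent. XOR data and all hyperparameters are illustrative assumptions.
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, n_hidden=4, lr=0.8, epochs=4000, seed=1):
    random.seed(seed)
    n_in = len(samples[0][0])
    # random initial connection weights (last entry of each row is a bias)
    w_h = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hidden)]
    w_o = [random.uniform(-1, 1) for _ in range(n_hidden + 1)]
    for _ in range(epochs):
        for x, target in samples:
            # forward pass: inputs -> hidden -> output
            h = [sigmoid(w[-1] + sum(wi * xi for wi, xi in zip(w, x))) for w in w_h]
            y = sigmoid(w_o[-1] + sum(wi * hi for wi, hi in zip(w_o, h)))
            # output error gradient (sigmoid derivative is y * (1 - y))
            d_o = (target - y) * y * (1 - y)
            # push the error back through the hidden nodes (old output weights)
            d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(n_hidden)]
            # weight updates proportional to the learning rate
            for j in range(n_hidden):
                w_o[j] += lr * d_o * h[j]
            w_o[-1] += lr * d_o
            for j in range(n_hidden):
                for i in range(n_in):
                    w_h[j][i] += lr * d_h[j] * x[i]
                w_h[j][-1] += lr * d_h[j]
    def predict(x):
        h = [sigmoid(w[-1] + sum(wi * xi for wi, xi in zip(w, x))) for w in w_h]
        return sigmoid(w_o[-1] + sum(wi * hi for wi, hi in zip(w_o, h)))
    return predict

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
predict = train(XOR)
```

Repeated passes drive the network error down from its random starting point, which is exactly the repeat-until-converged loop described in the text.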

1.2 Prior Work

Several authors have proposed surface characterization methods. Graham, Liu, and Sutcliffe [6] proposed a method for characterizing asphalt surfaces using 1-dimensional, line-scan vibration identification and the Fast Fourier Transform (FFT), with an FFT length of 512 data points; the Hanning-windowed sample set pads the data with zeros by a factor of two. This method also produced a simulated estimate of an asphalt surface (assuming isotropy). The method, however, was designed to work only for asphalt, assuming an isotropic distribution of the surface, and is therefore ill-equipped to deal with non-uniformity or large variations in surface texture. The research also concluded that the simulated surfaces differed from the experimental sample with regard to both visual and tribological characteristics, most likely because the data-generation method approximated a 2-D surface-height field from 1-D, line-scan data by assuming an isotropic surface (uniform in all directions). This 1-D line-scan data is similar to the testing methods used in this thesis research, as the gathered probe data is technically representative of a 1-dimensional cross-section of the traversed area.
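The windowing and padding scheme of [6] can be sketched as follows. The synthetic 50 Hz test signal, the 1 kHz sampling rate, and the function names are illustrative assumptions; the referenced work applied this processing to measured asphalt profiles.

```python
# Sketch of the step described for [6]: a Hanning window applied to a
# 512-sample record, zero-padding by a factor of two, then a radix-2 FFT.
import cmath, math

def hanning(n):
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * math.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

def windowed_spectrum(samples, fs):
    """Hanning-window the record, zero-pad x2, return (freqs, magnitudes)."""
    n = len(samples)
    w = hanning(n)
    padded = [s * wi for s, wi in zip(samples, w)] + [0.0] * n  # pad factor 2
    spec = fft(padded)
    m = len(padded)
    freqs = [k * fs / m for k in range(m // 2)]
    mags = [abs(c) for c in spec[: m // 2]]
    return freqs, mags

# Synthetic 512-point record: a 50 Hz tone sampled at 1 kHz (assumed values)
fs = 1000.0
sig = [math.sin(2 * math.pi * 50.0 * i / fs) for i in range(512)]
freqs, mags = windowed_spectrum(sig, fs)
peak = freqs[mags.index(max(mags))]
```

The windowing suppresses spectral leakage at the record edges, and the zero-padding doubles the frequency-bin density of the resulting spectrum.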
DuPont et al. [1] classified surfaces for Unmanned Ground Vehicles (UGVs) by measuring the frequency response of the vehicle's vibrations across various surfaces. The data were gathered at 200 Hz and limited to 10-second intervals, so signals above 100 Hz could not be captured. The characterization method applied previously recorded vibration profiles of the surfaces to a Neural Network (NN). The classification phase of this referenced research used several data sets to train the NN, which, by definition, selects weights and biases that capture characteristics deemed "unique" to the various surfaces. The training and testing data were collected at only one speed (0.5 m/s) over packed gravel, loose gravel, sparse grass, tall grass, asphalt, and beach sand. As this classification system is similar to the one used in this thesis, Figure 1.3 is reproduced from that paper to show the classification process. While this previous research provides a basic foundation for terrain classification through analysis of the frequency response of various surfaces, it is lacking in several key aspects. The data were collected at 200 Hz, making it impossible to detect high-frequency components of the various surfaces; indeed, the paper states that "[e]ach terrain produced a substantially unique FFT magnitude in the frequency range of [3, 30] Hz" [1, p. 341], a rather limited frequency range. The methods employed an IMU (accelerometer), so any audible frequency components of the surfaces were not collected. Finally, the data collection (for validation of the NN) consisted of 150 seconds of data divided into 10-second segments, making it necessary to always collect 10 seconds of data for classification to succeed, further reducing the robustness of this referenced research.

Figure 1.3: Classification System from [1]
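The sampling limitation noted above follows directly from the Nyquist criterion: a sensor sampled at f_s can only represent content up to f_s/2, and anything above that folds back into the usable band. A minimal sketch (function names are mine):

```python
# Sketch of the Nyquist limit and aliasing behind the critique of the
# 200 Hz data collection in [1] and [7].

def nyquist(fs):
    """Highest frequency representable when sampling at fs Hz."""
    return fs / 2.0

def alias(f, fs):
    """Apparent frequency of a tone at f Hz when sampled at fs Hz."""
    f = f % fs
    return f if f <= fs / 2.0 else fs - f

fs = 200.0                 # sampling rate used in the referenced work
print(nyquist(fs))         # 100.0 -> nothing above 100 Hz is usable
print(alias(150.0, fs))    # a 150 Hz vibration would appear at 50 Hz
```

This is why the 50 kHz-class acquisition hardware described later in this thesis can capture surface signatures that a 200 Hz IMU stream cannot.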

Terrain classification for mobile robots was also studied by DuPont et al. in [7], involving UGVs for military applications. The goal of this referenced research was to modify a UGV in order to classify surface type through Principal Component Analysis (PCA), a method considered here as an alternative to the NN; the NN requires more knowledge and computational power to complete, but is less math-intensive for the user, making it easier to utilize. The methods included the collection of data at 200 Hz, again limiting the frequency range associated with driving across the terrain. Only one sensor was used to collect the data, centered in the body of the robot: a 3-axis Crossbow accelerometer which measured the "3-axis accelerations and rotation rates about these axes according to the robot's body fixed coordinate." [7, p. 3288] This data was therefore influenced by the robot's own natural stiffness and the damping present in the wheel-ground interaction, another reason the probe approach represents a new addition to current research. Once again the frequency range was limited to 3 to 30 Hz, making it difficult to ascertain whether any high-frequency components yield better classification results. The results showed terrain classification accuracy in the range [77.3%, 100%] for all trained speeds [7, p. 3288], and in the range [58.7%, 72.9%] for untrained speeds. The researchers attribute the degradation in accuracy to the fact that the speeds used for training may suit some terrains better than others [7, p. 3288].

Macleod et al. [8] utilized active whisker sensors to detect the roughness of surfaces. This method provided the user with a relatively accurate "map" of the surface through very precise measurements of displacement values across the surface. The system used an external Field-Programmable Gate Array (FPGA), similar to the one used in this thesis, connected to composite-material whisker sensors sampled at 2 kHz by a host PC. Given enough data collection, and assuming the addition of a neural network, surface characterization could be accomplished through these mapped surfaces. This provides an alternative to vibration-based surface classification and could be studied in further detail. Because this referenced research produced surface roughness or texture information, however, it is less directly applicable to this thesis research.
Another pair of researchers, Philippe & Dudek [2], utilized a tactile probe (a single-axis accelerometer attached to an aluminum rod) dragged behind a cardboard box (and later a mobile robot platform) to collect vibration data and attempt to identify various surfaces. The data were gathered at 4 kHz with 11-bit resolution using an Isaac stand-alone data-acquisition system. This referenced research utilized a neural network to classify the data based on seven features in the time domain (mean, variance, skewness, kurtosis, fifth moment, the number of times 20 uniformly separated thresholds were crossed, and the sum of the variation over time) and one in the frequency domain (the sum of the upper half of the amplitude spectrum). According to the reported results, the signal's variance indicated the amount of vertical motion of the rod, which was sufficiently large for uneven surfaces (gravel, grass) [2, p. 538].
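The eight features used in [2] can be sketched as follows. The exact normalizations and windowing of the original paper are not reproduced here; these are common textbook definitions, used for illustration only.

```python
# Sketch of the feature set described for [2]: seven time-domain statistics
# plus one frequency-domain sum. Normalizations are assumptions.
import cmath, math

def moments(x):
    """Mean, variance, and standardized skewness/kurtosis/fifth moment."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    sd = math.sqrt(var)
    skew = sum(((v - mean) / sd) ** 3 for v in x) / n
    kurt = sum(((v - mean) / sd) ** 4 for v in x) / n
    fifth = sum(((v - mean) / sd) ** 5 for v in x) / n
    return mean, var, skew, kurt, fifth

def threshold_crossings(x, k=20):
    """Count sign changes of (sample - t) across k uniform thresholds."""
    lo, hi = min(x), max(x)
    ts = [lo + (hi - lo) * (i + 1) / (k + 1) for i in range(k)]
    return sum(1 for t in ts
               for a, b in zip(x, x[1:]) if (a - t) * (b - t) < 0)

def total_variation(x):
    """Sum of the variation over time."""
    return sum(abs(b - a) for a, b in zip(x, x[1:]))

def upper_spectrum_sum(x):
    """Sum of the upper half of the amplitude spectrum (naive DFT)."""
    n = len(x)
    mags = [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]
    return sum(mags[n // 4:])
```

A classifier would concatenate these eight numbers per record into the feature vector fed to the network.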
Histogram skewness was useful in identifying surfaces with asymmetrically distributed accelerations, often seen in surfaces with regularly spaced cracks (e.g., cracks between tiles, expansion joints between concrete slabs). The research noted that the presence of high-frequency components helped to discern hard surfaces from others, because hard surfaces are required to sustain high-frequency vibrations in the probe. One other important aspect is that the referenced research was performed at only one speed (70 cm/s) to simulate a low-velocity land rover. The accelerometer's cutoff frequency was set to 4 kHz (equal to the sampling rate), which, the researchers admit, possibly induced aliasing of the signal; the "good classification results" [2, p. 538] presented indicate that the measured signals "were not significantly aliased." Typical terrain classification extracts features from frequency data only, while this referenced research used data from the time and frequency domains combined. The referenced research also used a confusion matrix to demonstrate classification success, shown in Figure 1.4; this figure is similar to the one produced by the NN used in this thesis.
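A confusion matrix of the kind shown in Figure 1.4 (and produced later in this thesis) can be assembled as follows; the class labels and toy predictions below are illustrative.

```python
# Sketch: building a confusion matrix, with rows as true classes and
# columns as predicted classes; accuracy is the trace over the total count.

def confusion_matrix(true_labels, pred_labels, classes):
    idx = {c: i for i, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for t, p in zip(true_labels, pred_labels):
        m[idx[t]][idx[p]] += 1
    return m

def accuracy(m):
    correct = sum(m[i][i] for i in range(len(m)))
    total = sum(sum(row) for row in m)
    return correct / total

classes = ["dirt", "grass", "gravel", "asphalt", "concrete"]
true = ["dirt", "dirt", "grass", "gravel", "asphalt", "concrete"]
pred = ["dirt", "grass", "grass", "gravel", "asphalt", "concrete"]
m = confusion_matrix(true, pred, classes)
```

Off-diagonal entries reveal which class pairs the network confuses, which is exactly what the diagrams in Figure 1.4 and in the later chapters convey.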
Figure 1.4: Confusion Matrix for Testing Phase of NN from [2]

Several other researchers have proposed methods of visually identifying surfaces/terrains based on characteristics of the images they gather. One such paper, by Howard and Seraji [9], utilized a neural network to distinguish one surface type from another. While this reference used vastly different data collection methods (visual vs. physical), the classification approach of training a neural network to identify important characteristics is the same. There were two problems/limitations with this system. First, the system misclassified terrain due to shadows: the difficulty of achieving uniform illumination caused it to identify shadowed surfaces as obstacles rather than safe terrain. The second limitation was the calibration of the system, which affected the calculation of the slope value (for safe traversal of sloped terrain). The methods used to determine the passability of the terrain assumed that correlated points could be found from which the slope of the surface, and thus the possibility of safely traversing it, could be calculated. Although this referenced research is not directly related to the research described in this thesis, its application of neural networks to surface characterization provides some context for the methods used thus far for surface identification.
Still other researchers have proposed methods of material identification
by acoustic emission signals. Although the methods proposed in Kramer’s pa-
per [10] were used to distinguish between steel and ceramic materials, the basic
properties of this reference are the same as will be used in the research presented
in this thesis. Scraping one material against another and recording the acoustic
emissions, and extracting the frequency components through the use of the FFT
function are both methods used for surface identification. As with surface iden-
tification research, the time-domain data does not clearly indicate the surface or
material type, which is the reason for the FFT. Kramer and his fellow researchers
were successful in their attempts to identify materials in-process. Even though
this paper did not distinguish between surfaces, the basic principles and meth-
ods can be applied to surface identification.
One of the papers studied presented the idea of mixing types of collected
data to classify the various surface types. In Weiss’ study [11], the researchers
describe a combination of vision and vibration based terrain classification for
mobile robots. This system first classifies the area in front of the robot based
on the textures of the images. If the robot then drives over the area it has classi-
fied based on the images, it classifies the area using vibration data [11, p. 2204].
The researchers state that "Experiments involving 14 different classes resulted in
classification rates of about 87%." [11, p. 2204] While vision-based identification
is different from acoustic emission-based identification, this reference proves that
the combination of data types can lead to increased classification of the surface
type. These researchers also experimented with fusing data and fusing predic-
tions. Fusing the data led to lower classification rates; however, when fusing the
predictions they discovered some advantages: "Firstly, no assignment between
vision and vibration data must be known beforehand... Secondly, when fusing the
predictions, different lengths of vision- and vibration-based feature vectors have
no influence." [11, p. 2207] The reason their approach is invariant to different
lengths of vectors is they utilized a Support Vector Machine (SVM) on the feature
vectors. This analysis was performed offline as the analysis was computationally
intensive. Due to the nature of SVMs, the input vectors need not be the same
lengths. This may seem to be a desirable quality as not every FFT will have the
same number of data points. NNs, however, are easier to implement (especially
in Matlab), and the training of both SVMs and NNs must be performed offline. The
classification step of either method could be run online, as it is a very
simple process to feed either an SVM or a NN a vector of data and receive a
classification.
As a baseline for using acoustic data to identify surfaces, Libby
and Stentz [12] utilized acoustic data collection to classify vehicle-terrain inter-
action. Data was collected across various environments. Some were classified
as being "benign" interactions (grass, asphalt, gravel) and others were deemed
"hazardous" (splashing into water, hitting hard objects, wheels losing traction in
slippery terrain). [12, p. 3563] The vehicle used was a John Deere Gator, which is
significantly larger than most mobile robotic platforms, however the data should
still be usable in identifying whether acoustic data collection for surface identifi-
cation is a valid practice. Due to the high sampling rate of the microphone (44.1
kHz) and the relatively low frequency of the majority of the spectral power (<
2 kHz), these researchers truncated the FFT data in order to gain higher resolu-
tion of the frequencies seen while traveling. This reference also used a similar
"re-sampling" algorithm to group the frequency data into separate "bins," which
allowed the entire spectrum to be captured while reducing the dimensionality.
[12, p. 3562] The results of this referenced research showed an average accuracy
in identifying "benign" interactions of 92%, making the applications of acoustic
data collection for surface identification clear. This paper also states: "The
fact that our algorithms are successful only using sound data, while a human
needs vision as well, speaks to the inherent power of sound as a computational
tool." [12, p. 3561] This also demonstrates another reason why incorporating differ-
ent data types (acoustic vs. physical) may help to improve accuracy of surface
classification.
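The truncate-and-bin preprocessing used in [12] can be sketched in a few lines. The following Python/NumPy reconstruction is illustrative only: the thesis work itself used Matlab, the 2 kHz cutoff echoes the paper's observation about where the spectral power lies, and the bin count of 40 is an assumed value, not one from the paper.

```python
import numpy as np

def binned_spectrum(signal, fs, f_cut=2000.0, n_bins=40):
    """Truncate the magnitude spectrum at f_cut, then average it into
    n_bins equal-width frequency bins to reduce dimensionality."""
    spec = np.abs(np.fft.rfft(signal))               # single-sided magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    keep = freqs <= f_cut                            # discard the sparsely-used high end
    spec, freqs = spec[keep], freqs[keep]
    edges = np.linspace(0.0, f_cut, n_bins + 1)
    idx = np.digitize(freqs, edges[1:-1])            # bin index for each frequency
    return np.array([spec[idx == b].mean() for b in range(n_bins)])

# e.g. a 1 s clip at 44.1 kHz collapses to a fixed-length 40-element feature vector
features = binned_spectrum(np.random.default_rng(0).normal(size=44100), fs=44100.0)
```

Binning this way yields a fixed-length feature vector regardless of clip length, which is convenient for classifiers that expect a constant input dimensionality.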
To identify the best methods for the classification, several options were
first studied. In one paper [3], Coyle and Collins compared different classifiers
based on their performance in identifying terrain. Once again, this reference fo-
cused on improving the performance and traction of UGVs (though this reference
called them Autonomous Ground Vehicles [AGVs]). Previous methods of surface
identification are described, including vision- and vibration-based methods. For

these methods, this paper describes several ways to classify the data. Probabilistic
methods, based on the Bayesian decision rule, state that

    if p(ωi | x) > p(ωj | x) for all j ≠ i,    (1.1)

then x most likely belongs to class ωi. The paper then states that though these
ties do not often occur, they can be broken arbitrarily, since "any class with
the same probability is considered equally likely to be the correct class." [3, p.
2] Discriminant functions are another method used to classify the surface type.
These functions determine relationships between k feature variables and the deci-
sion boundaries for those variables. The limitation of this method is the fact that
these discriminant functions are designed to solve classification problems with
two classes. Terrain classification is unlikely to be a two class problem, therefore
it will most likely be a 1 vs. 1 or 1 vs. the rest decision scheme. [3, p. 3] Nearest
Neighbor classifiers determine a test pattern x "based on the class of the 'closest'
training pattern or patterns." [3, p. 3] This method then uses the Euclidean distance
d to determine which training patterns in T are nearest to the testing pattern.
This distance is calculated between the training sample t_m and the test pattern
using the formula

    d = (x − t_m)(x − t_m)^T.    (1.2)

This distance ties the test sample to a classification, and the test pattern is
assigned to the class of the training pattern with the smallest distance. Neural Networks are also de-
scribed in this paper as difficult to implement, since activation functions
(responsible for activating each node for a desired target class) must be
determined and the weights associated with these functions must be learned. These
weights are determined through the use of back propagation, which becomes
computationally intensive and takes an exponentially increasing amount of time
(depending on the number of input neurons to the NN). This paper did not use NNs
for the vibration-based terrain classification comparison for these reasons. [3, p. 4] This
paper found mean accuracies for these methods, shown in Figure 1.5.
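The Nearest Neighbor rule built on Eq. (1.2) is compact enough to sketch directly. This Python example is purely illustrative, with made-up 2-D feature vectors and surface labels standing in for real training data:

```python
import numpy as np

def nearest_neighbor(x, T, labels):
    """Assign test pattern x the class of the closest training pattern in T,
    using the squared Euclidean distance d = (x - t_m)(x - t_m)^T of Eq. (1.2)."""
    d = np.sum((T - x) ** 2, axis=1)     # one distance per training sample t_m
    return labels[np.argmin(d)]

# hypothetical 2-D feature vectors for two terrain classes
T = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [4.8, 5.3]])
labels = np.array(["asphalt", "asphalt", "gravel", "gravel"])
print(nearest_neighbor(np.array([4.9, 5.0]), T, labels))   # -> gravel
```

The argmin over distances implements the "closest training pattern" rule; ties, as the paper notes, can be broken arbitrarily (argmin simply takes the first).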
The most successful classifier was the SVM Radial Kernel with a mean
accuracy of 83.9%. All of the classification methods, however, achieved greater
than 70% average maximum accuracy, and greater than 40% average minimum
accuracy, much greater than straight guessing (14%). All these methods came in
at an average testing time of 32.5 milliseconds (from Table 1 in [3]). The longest
testing time was with the Parzen Window Estimation (a probabilistic method) at
116 milliseconds. Given this range of testing times, all well under one second,
NNs seem much slower at classification than these other methods. Using a trained
network, however, greater than 83.9% classification accuracy can be achieved. This
is the reason a NN has been selected for this thesis research.
The above papers contain several unique applications towards surface
identification and classification for mobile robots and other unmanned vehicles.
The need for an easily implementable surface identification system is clearly
evident throughout the majority of the literature studied. A system that has the

Figure 1.5: Classifier performance on individual terrains from [3]

ability to pass the classification to an off-line computer (i.e. a remote operating
station) is needed, as the majority of EOD robots are controlled by an operator at a
specialized control station. According to the literature, frequency response anal-
ysis seems to be the most widespread method through which surface vibrations
are characterized and is the most useful way to characterize surface type.

1.3 Specific Aims

This thesis research seeks to develop a characterization approach robust
enough to be subjected to multiple testing conditions, across multiple surfaces,
and with multiple types of
sensors. The first task was to select a data acquisition system that would be ro-
bust to multiple runs and require very little interaction between trials. A system
that could log multiple runs for post-processing of the data was determined to
be the most applicable system for this thesis research. A processing system that
could log data with sufficient bandwidth was also a requirement for this collec-
tion system. Least important for the data acquisition system was the form factor,
although a system with modular (interchangeable) parts that could be easily
switched out, should the need for replacement arise, was preferred. The system also needed
the ability to be run independently of a computer for extended periods of time,
which also includes any power requirements for the device and sensors. Finally

a data collection system that could collect multiple channels of data and store all
the data in one place was very important to have.
Selecting the sensors to be used to collect the data across multiple surfaces
presented a challenge. Since the two types of vibration sensing used were contact
and non-contact sensing, there was a very large number of sensors to choose
from. Clearly, for contact vibration sensing, accelerometers make the most
sense, with widespread uses ranging from orientation sensing (for tablet and phone
screens) and damage detection (e.g. Structural Health Monitoring) to seismic moni-
toring, machine vibration, and hundreds of other applications. These sensors come in
all shapes, sizes, and types (piezoelectric, capacitive, piezoresistive, potentio-
metric, etc.) and have many types of connections necessary to read data from
them. These devices also are built very differently with many different meth-
ods of mounting available (screw-thread, mounting wax, etc.) which was also
a concern. In addition to the physical characteristics of the sensor, the specifi-
cations (cost, max G-load rating, sampling characteristics, etc.) were the most
important consideration for contact vibration sensor selection. A review of the
literature provided very little information regarding the cost of the sensors se-
lected, however a threshold of less than $1000 per sensor was decided to be an
affordable amount for research purposes. G-load rating was less of a concern as
the methods for data collection were designed to minimize G-loading and protect
the sensor from shock.
For the non-contact vibration sensing, microphones are the best solution
to collect acoustic data. Since most microphones are able to detect frequencies
up to 20 kHz (the range of human hearing), the considerations in selecting a mi-
crophone were less stringent. The type of connector was a concern: as many
different microphones use different types of connections, it was best to select a
connection type that was more or less universal to many data acquisition sys-
tems. The form factor of the microphone was also a criterion. As this microphone
would be deployed on a mobile robotic platform, the mass and size of the micro-
phone were a definite concern, especially since the microphone would most likely
be deployed on a "boom" to allow it to be pointed at the source of the acoustic
excitation, the Probe-Ground Interaction Point (PGIP). As with the accelerometer,
there are many different types of microphone that utilize various methods to col-
lect acoustic waveforms (dynamic, capacitor, electret, etc.) and various opinions
exist regarding the best type to use for data collection. Ensuring the fidelity of
the original acoustic signal was maintained was a major concern, but with a suf-
ficient sampling rate and dynamic range, this concern can be minimized. Finally,
the microphone must be placed in a position where most (if not all) of the incom-
ing acoustic signal originates from the PGIP, thus ensuring that the acoustic data
is only from the probe’s interaction with the surfaces.
To achieve the goal of surface characterization, the data collection con-
sisted of several runs (1-5) for various lengths (7s, 1s), at different speeds (slow,
fast) across 5 surfaces (Asphalt, Concrete, Dirt, Grass, and Gravel) for all 3 sen-
sors (Probe accelerometer, Body accelerometer, Microphone). It should be noted
that these variables were included to prove whether an accurate classification

algorithm could be achieved even with all the variability in the data collection.
While it appears that there are many variables, there is a definitive threshold
with regard to the number and quality of variables which could be present for
this classification to still be accurate. For example, while the research may even-
tually show that the placement of the accelerometer on the mobile robot does not
affect the characterization accuracy, the mounting of the accelerometer (i.e. the
tightness of the connected mounting bolt/bracket) is surely one constraint that
must be considered. This is discussed in the DISCUSSION chapter.
The data acquisition system chosen was set to log the data from the two
accelerometers and microphone for 15 seconds, providing post-processing with
two 7-second data sets and one 1-second data set from each log. This allowed the
classification to be tested for robustness in three ways: 1) to sensor type, 2) to
speed, and 3) to the length of the data collected. Thus this thesis research will prove
that surface identification can be achieved even with all these various parameters
combined (e.g. the NN would not care what speed, sensor, or for how long the
data was collected). This meant that the only criterion that would make a differ-
ence to the NN (with regard to surface classification) was the surface type, the
overall goal of this thesis. The acceleration and microphone data were analyzed
using Matlab computing software through the use of its FFT function, and the
data was stored in structures that ensured each data set was separated first by
surface, then speed, run #, sensor, and finally by length of signal (7s from begin-
ning, 1s from the middle, and the final 7s).
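As a rough Python/NumPy stand-in for the Matlab fft workflow just described, the single-sided magnitude spectrum of one sensor record can be computed as below; the 51.2 kHz rate and the 120 Hz test tone are illustrative values, not measured data.

```python
import numpy as np

def magnitude_spectrum(x, fs):
    """Single-sided magnitude spectrum of a real-valued sensor record."""
    X = np.fft.rfft(x)
    mag = np.abs(X) / len(x)             # normalize by record length
    mag[1:-1] *= 2.0                     # fold negative-frequency energy back in
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, mag

fs = 51200.0                             # assumed DAQ sampling rate, Hz
t = np.arange(int(7 * fs)) / fs          # one 7 s segment
x = np.sin(2 * np.pi * 120.0 * t)        # toy 120 Hz vibration signal
freqs, mag = magnitude_spectrum(x, fs)
print(freqs[np.argmax(mag)])             # peak at the 120 Hz excitation
```

Because the record length varies (7 s vs. 1 s segments), the frequency resolution fs/N varies too, which is one reason fixed-length feature vectors are extracted before classification.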
Once this data was stored in the various structures, certain sections of the
data were selected for training the NN. Matlab’s Neural Network Toolbox was
very useful in setting up the Pattern Recognition NN. This toolbox provides a
GUI that made it very easy to select the training data (the FFTs) and target clas-
sification (surface selections). Once these were selected, the data can be divided
according to the amount needed for testing, training, and validation of the NN.
The NN is also defined by the number of neurons used to select weights and
biases, which is where the NN "decides" what is important about the various
input-output relationships. The training, testing, and validation data were split
randomly by the computer and the NN was trained with different data sets each
time the training tool was run. This provided multiple results for a NN with
varying degrees of accuracy for characterization of the surface type. Once a NN
had been trained that presented near-perfect results (> 95% accuracy), the NN
was deployed using Simulink. This deployment meant a vector of inputs could
be passed to the "hidden layer" of the NN, which contains the weights and bi-
ases determined by training the NN. The output of the NN is a simple Simulink
scope with 6 different graph lines. These 6 lines represent the number of possi-
ble outputs of the NN: the 5 surfaces and a 6th output designated "None of the
Above" (NOTA), trained on separately collected data to give the NN a choice
when it cannot determine the type of surface.
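Conceptually, the trained network reduces to a weighted feed-forward computation. The sketch below, a one-hidden-layer network trained by backpropagation in NumPy, is illustrative only: the thesis work used Matlab's Neural Network Toolbox, and the toy Gaussian clusters here merely stand in for the real FFT feature vectors and surface classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for FFT feature vectors: 3 well-separated classes of
# 8-dimensional points (the real network took FFT magnitudes as input
# and had 6 output classes).
X = rng.normal(scale=0.3, size=(150, 8)) + np.repeat(np.arange(3), 50)[:, None]
y = np.repeat(np.arange(3), 50)
Y = np.eye(3)[y]                                   # one-hot target classes

# One hidden layer of tanh neurons plus a softmax output layer,
# trained with plain batch backpropagation.
W1 = rng.normal(scale=0.1, size=(8, 12)); b1 = np.zeros(12)
W2 = rng.normal(scale=0.1, size=(12, 3)); b2 = np.zeros(3)
lr = 0.5
for _ in range(1000):
    H = np.tanh(X @ W1 + b1)                       # hidden-layer activations
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)              # softmax class probabilities
    dZ = (P - Y) / len(X)                          # mean cross-entropy gradient
    dH = (dZ @ W2.T) * (1.0 - H ** 2)              # backpropagate through tanh
    W2 -= lr * (H.T @ dZ); b2 -= lr * dZ.sum(axis=0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)

pred = np.argmax(np.tanh(X @ W1 + b1) @ W2 + b2, axis=1)
print("training accuracy:", (pred == y).mean())
```

As the classifier-comparison paper notes, the training loop is the expensive part; once the weights and biases are fixed, classification is a single matrix pass, which is why deployment in Simulink is feasible.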
Along with the type of surface being traversed, each surface will be de-
scribed by its own TFS, which must be applied in an orthogonal direction (with

respect to the surface). This TFS will vary between surfaces and will directly
relate to the coefficients of static friction across the various surfaces (as the nor-
mal force of the robot will be considered constant). This TFS is very important
to know as this force relates to the amount of force the robot’s manipulator can
impart on the surrounding environment. With the use of the Phantom Omni, the
orthogonal force can be measured and monitored. Relaying haptic feedback in-
formation to the operator of an EOD robot will allow these operators to "know"
more about the environment they are operating in, and will serve to protect hu-
man life and these very expensive robots.
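The relation underlying the TFS can be written as F_max = μ_s N with the normal force N held constant. The sketch below is illustrative only: the friction coefficients and robot mass are hypothetical placeholders, not values measured in this work.

```python
# Illustrative only: hypothetical static-friction coefficients and robot mass,
# not values measured in this thesis.
M_ROBOT = 5.0                     # assumed robot mass, kg
G = 9.81                          # gravitational acceleration, m/s^2

MU_STATIC = {"asphalt": 0.9, "concrete": 0.8, "dirt": 0.6,
             "grass": 0.5, "gravel": 0.6}

def max_tangential_force(surface):
    """F_max = mu_s * N, with the normal force N = m*g treated as constant."""
    return MU_STATIC[surface] * M_ROBOT * G

for surface in MU_STATIC:
    print(f"{surface}: {max_tangential_force(surface):.1f} N")
```

Once the NN reports a surface class, a lookup of this kind is all that is needed to set the haptic-feedback threshold for the operator's Omni device.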

CHAPTER 2

FREQUENCY DOMAIN CHARACTERIZATION OF
EXTERNAL STIMULI

The idea of Frequency-Domain analysis for system identification has been
studied before in some detail. However, classification of surface type for mobile
robots through the use of these methods has not been extensively studied or the
methods used have been applicable only to certain speeds of travel and only to
certain sensor types, placements, and orientations. The methods proposed in
this thesis work include many variations in testing scenarios, including speed of
travel, placement of sensor, and even sensor type. The use of both a microphone
and accelerometers provides a certain depth to this thesis research, making it
applicable to many testing scenarios and sensor configurations.

2.1 Considered Hardware

2.1.1 ADXL345 Development Board

Several configurations of sensing equipment were considered before a fi-
nal sensor solution was decided upon. An ADXL345 Inertial Sensor Datalogger
and Development board was first selected to gather the accelerometer data. This
sensing solution had many good qualities, including the fact that it can log three
axes of acceleration (X,Y,Z) and outputs the time stamped data to a MicroSD card.
A picture of this considered platform is shown in Figure 2.1.
The measurement range is user selectable, and can record data up to ±16
g (maintaining a 3.9 mg/LSB scale factor in all g ranges). This meant an extra step
in arriving at the true acceleration values, as all the raw counts must be multiplied
by 3.9 to obtain acceleration in mg. The data is logged to a MicroSD card in .csv format, a very useful format
for Matlab to utilize. One of the drawbacks of this sensor was the sampling rate.
The development board was only able to sample at a max of 100 Hz, meaning
any frequencies over 50 Hz would be unusable (aliased). In addition to this low
sampling rate, data collected from this board was inspected and showed multiple
acceleration values at the same time stamp. In order to identify the error in the
data collection, an experiment was devised in which the development board was
attached to a pendulum arm and pivot point. Data was collected as the arm

Figure 2.1: Analog Devices ADXL345 Development Board. Source: A

Figure 2.2: Data from ADXL345 Development Board

swung back and forth until the arm came to a standstill. The full data set (of the
vertical acceleration only) is shown in Figure 2.2.
To clarify the error in data logging, a truncated set of the recorded data is
shown in Figure 2.3. It was obvious from the above graphs that there are multiple
points of data logged at identical time values. This made it nearly impossible to
perform an FFT of any data the board collected, as the time stamps of the data
were not accurate. It was recommended that the time vector be re-constructed
to make the data usable. This was easy as the sampling rate was known to be
100 Hz. From this knowledge of sampling rate, it was obvious that a data point
was collected every 0.01 of a second. A new time vector was generated from
this knowledge, and the data was again plotted (Figure 2.4). Similar to above, a
truncated data set was also plotted to more clearly show the differences between
data sets (Figure 2.5). The differences between Figures 2.3 and 2.5 can be clearly
seen: the reconstructed time vector seems to have made the data collected by this
board usable in the sense that clearly the vertical axis of the accelerometer was
experiencing sinusoidally decaying motion (consistent with pendular motion).
The reason for this disparity between the expected time stamps of the data and
the actual time stamped data. One possible explanation for this difference was the

Figure 2.3: Truncated data from ADXL345 Development Board

Figure 2.4: Data from ADXL345, reconstructed time vector

fact that this development board was originally intended to only sense changes in
orientation and not necessarily actual vibrations (hence the slow sampling rate).
With further study, it appears that there was a sizable time difference between
when the data was collected (by the accelerometer) and when it was written to
the MicroSD card. It was possible that this buffering time between collection and
logging was causing errors in the time stamping of the data.
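The repair just described, discarding the logged time stamps and rebuilding the time vector from the known 100 Hz rate, can be sketched as follows. This is a Python stand-in; the raw sample values are hypothetical, and the 3.9 mg/LSB scale factor is applied here to convert counts to g.

```python
import numpy as np

FS = 100.0            # ADXL345 development board sampling rate, Hz
SCALE = 3.9e-3        # 3.9 mg/LSB scale factor, expressed in g per count

def repair_log(raw_counts):
    """Discard the unreliable logged time stamps: rebuild the time vector
    from the known sampling rate and convert raw counts to g."""
    counts = np.asarray(raw_counts, dtype=float)
    t = np.arange(len(counts)) / FS        # one sample every 0.01 s
    return t, counts * SCALE

# hypothetical raw samples, purely for illustration
t, accel_g = repair_log([0, 256, -256, 0])
print(t)              # 0.00, 0.01, 0.02, 0.03 s
print(accel_g)        # swings of roughly +-1 g
```

The reconstruction is only valid if the board's sampling clock is in fact uniform; buffered writes that delay logging do not affect it, but dropped samples would.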
While the modification (reconstruction) of the time stamps made the data
technically "usable," there are several reasons this sensor solution was not used
in the final data collection stages of this thesis research. First, the development
board was only able to sample accelerations of up to ±16 g. While this may
seem sufficient, there was no reliable way to predict the maximum values seen
as the robot drives across various surfaces. Because of this,
it was better to err on the side of caution and select a sensor that can sample at
much higher g-loads. Second, the 100 Hz sampling rate severely limited the fre-
quency spectrum that could be observed by this platform. While it was true that

Figure 2.5: Truncated data from ADXL345, reconstructed time vector

most of the papers that used vibration to identify surfaces used low-frequency
data collection ([1, 2, 3, 7, 8, 11, 13, 14, 15]), there has not been extensive experi-
mentation with high-frequency identification. For this reason, a higher sampling
rate (by several orders of magnitude) was necessary to ensure a sufficiently wide
frequency range of data was gathered. Finally, the form factor of the develop-
ment board was determined to be much harder to mount to the robot than the
sensors that were decided upon for the final testing. This sensor provided many
good qualities, but eventually it was determined that this sensor had too many
shortcomings for use in this thesis research.

2.1.2 ArduIMU

The second sensor solution that was considered was the ArduIMU. This
sensor features an MPU-6000 3-axis accelerometer, a MPU-6050 Gyroscope, and
can be programmed using a USB interface. The accelerometer can measure ±16
g and can output data at 1000 Hz. This device can be integrated with an Arduino
UNO microcontroller and an SD Shield for outputting the data from the IMU to
a MicroSD card. There were several reasons this sensor was not selected for the
final data collection. First, the form factor of the IMU+UNO+SD Shield would
have been very difficult to implement as the connections required external wires
between the IMU and the UNO. The addition of the SD Shield increased the size
of the setup significantly. Mounting the IMU to a probe would also have been
difficult to accomplish, as the IMU itself is essentially a flat chip and would be
exposed to the "elements" without a casing design. Protecting the setup from
dust and dirt and other environmental problems would have also taken extra
effort. The primary reason this system was not used, however, was the program-
ming knowledge necessary to make the UNO communicate with both the IMU
and the SD Shield. After spending several weeks on the coding of the IMU sys-
tem without much success, it was decided that it would be better to acquire data

18
and perform the analysis with another system that had a more manageable form
factor and much more user-friendly coding environment.

2.2 Final Hardware

The testing setup consists of two accelerometers, one microphone, and
a National Instruments data acquisition system. This setup was attached to a
mock-EOD robot, which is essentially two remote-controlled car drive axles held
together by an aluminum frame. This system was designed to be as vibration-
isolated as possible, in order to preserve the vibrations experienced
as the robot traversed the various surfaces. This was to ensure the only vibrations
that would affect the characterization were those produced by the surfaces.

2.2.1 Data Acquisition Board

The DAQ used was a CompactRIO NI cRIO-9022 Intelligent Real-Time
Embedded Controller with an attached CompactRIO NI cRIO-9114 Reconfigurable
Embedded Chassis which contains a reconfigurable FPGA chip. This system
(cRIO-9022+9114) acts as a real-time controller and is reconfigurable through Eth-
ernet or serial interface and samples at rates up to 51.2 kHz. The data sheets for
this setup are shown in Appendix A. This allows the system to be deployed in
the field without the need to be tethered to a base computer station. In addition,
the 9022 contains a USB slot, allowing collected data to be logged to a flash drive.
This was coded in the main program and the controller has the ability to log to
four (4) data storage devices at once, provided these are connected to a USB hub.
This system was configured using LabVIEW development software. This soft-
ware includes download-able examples and a very useful tool called the cRIO
Waveform Library which was the basis for the program deployed to the Real-
Time controller. This system was powered by a 9.2 V battery pack, which was
able to supply the power requirements for the 9022 controller, and the 3 inputs
of the 9234. The hardware is shown in Figures 2.6 and 2.7 and the data sheets for
these components are available in Appendix A.
The 9114 chassis contains 8 slots for NI C series I/O modules. There are
many different types of C series modules which can be used with this chassis,
including many with Digital and Analog IO. The module chosen was the NI
9234 which is a 4-channel analog IO module with BNC connections (Figure 2.8).
This module is designed to make high-accuracy measurements from IEPE sen-
sors. This module also delivers 102 dB of dynamic range and uses IEPE (2 mA
constant current) signal conditioning for accelerometers and microphones. This
module is able to acquire at rates up to 51.2 kHz and can be configured for multi-
ple sample rates and couplings (AC, AC/DC). The specifications for this module
are included in Appendix A.

Figure 2.6: NI cRIO-9022 Real-Time Controller

Figure 2.7: NI cRIO-9114 Reconfigurable Embedded Chassis

2.2.2 Microphone

The microphone selected was a GRAS Type 40PH array microphone. This
"is a low-cost microphone for general purpose measurements in arrays and ma-
trices" [Appendix A]. This microphone has a frequency response from 5 Hz to
20 kHz at ±2 dB, and a nominal sensitivity at 250 Hz of 50 mV/Pa (±2 dB). The mi-
crophone uses a SMB Coaxial Socket for connection purposes, and the included
cable converts from SMB Coaxial to BNC, allowing the mic to be connected to
the first input of the 9234. This microphone requires a Constant Current Power
(CCP) Supply, which the 9234 provides in the way of IEPE coupling for each in-
put. This microphone is rather small, barely 7.0 mm in diameter and 59 mm in
length, making mounting difficult (see Figure 2.11). The microphone is shown in
Figure 2.9.

2.2.3 Accelerometers

The accelerometers selected were PCB Piezotronics 352C03 and 353B15
piezoelectric accelerometers. The specifications for these accelerometers are shown

Figure 2.8: NI 9234 Analog IO Module

Figure 2.9: GRAS Type 40PH

in Appendix A. As both accelerometers are nearly identical in appearance (with
the exception of size), only the 352C03 is pictured; it also accurately depicts the
appearance of the 353B15 accelerometer. Shown in Figure 2.10, both of these ac-
celerometers contained mounting materials used for this application. The 352C03
includes a 10-32 mounting thread used to attach this accelerometer to the probe
(shown in Figure 2.11), and the 353B15 has a 5-40 mounting thread (unused as
mounting wax was sufficient to constrain this accelerometer).

2.2.4 Robotic Platform

The cRIO, the accelerometers, and the microphone were mounted to
a mock-EOD robot. The purpose of this robot is to investigate the feasibility

Figure 2.10: PCB 352C03 and 353B15 Accelerometers

of determining surface type while traversing various surfaces, for the purpose of
monitoring traction. Parameters one might expect to influence the vibration of
the chassis include the speed, the location of the sensor, and even the type
of sensor used. For this reason we incorporated a number and variety of sen-
sors, allowed the speed to vary, and traversed various surfaces. This
platform was technically a remote controlled car that had been heavily modified
for testing and data collection purposes. The platform was designed to carry a
Phantom Omni on the front, which will allow teleoperation from another user-
driven Phantom Omni. This teleoperation will allow the user to impart forces
on the robot’s surrounding environment. The robotic setup is shown in Figure
2.11. The Omni code includes a section which monitors the force imparted on
the environment and provides force feedback (through a vibration in the user’s
Omni device) when the force applied nears the TFS for whichever surface the
NN classifies. This will demonstrate the substantial technical merit of this thesis
research.
As can be seen from Figure 2.11, the probe at the back of the robot includes
an attached accelerometer at the top. This probe was dragged across the ground
resting on a foot made of ABS plastic. This foot was added after initial testing
revealed significant clipping of the accelerometer signal on both concrete (due
to expansion joints spaced at regular intervals) and asphalt (due to the relative
sharpness of the rocks). The probe is able to move up and down in response to
the surface vibrations due to its connection to the main body via a parallel (or
nearly parallel) linkage between the probe and the body of the robot. The choice
of material for the probe was intended to prevent undue acceleration (thus undue
induced vibration) of the material. This was to ensure the accelerometer would
record only the excitation of the surfaces. To couple the accelerometer to the
probe, the top of the probe was drilled and tapped for an 8-32 screw thread. The
accelerometer included an 8-32 mounting screw which made it easy to attach to
the probe.
In addition to the probe accelerometer, a second accelerometer was mounted
to the main body of the robot, just beside the cRIO device. This accelerometer
was intended to record the vibration of the platform. Mounting this accelerom-

Figure 2.11: Robotic Testing Platform (labeled components: cRIO-9022+9114, NI 9234, Phantom Omni, probe accelerometer, microphone, body accelerometer)

eter was simple as mounting wax was provided in the packaging. The mount-
ing wax interfaces well with the adhesive mounting base included with the ac-
celerometer, which allows some of the wax to interface with a grooved surface.
As the aluminum body was too thin to drill and tap for this accelerometer, the
mounting wax provided sufficient coupling between the body of the robot and
the accelerometer. As this wax is intended for quick mounting of sensors at room
temperature, it was deemed sufficient for data collection.
The microphone shown in Figure 2.11 was attached to an aluminum "boom"
that was bolted tightly to the platform. It must be noted that all of the nuts
were secured to the bolts with locking washers to keep the hardware from coming
loose and causing undue vibration. Additionally, the microphone was held in
place by a 3-D printed clamp and four 4-40 screws. The inside of the clamp was
lined with a vibration-isolation material, which helped prevent undue vibration
from translating into the microphone. This kept the movement of the robot from
interfering with the acoustic data collection. The boom was necessary to keep the
microphone pointed at the source of the acoustic vibration, which ensured that
(almost) all of the sound the microphone detected was due to the probe skid's
interaction with the ground. This is very important
for this type of data collection. According to Libby, "The distance from a particular
sound source should not in itself have much of an effect since we normalize
the volume, but the ratio of source sound to background sound will be quite different
for the two microphones." [12, p. 3565] The referenced research used a
front-facing microphone, which collected tire-ground interaction data, and a
side-facing microphone, which recorded collision sounds to help identify
when the vehicle collided with a rock or other obstruction. This statement clearly
demonstrates the need for the majority of the microphone data to come from the
excitation source only, which is the purpose of the microphone boom.
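Libby's point can be illustrated with a simple RMS volume normalization: scaling a recording removes overall loudness (and hence much of the distance effect), but it cannot change the ratio of source sound to background sound, which is fixed by microphone placement. A hypothetical Python sketch, not code from the referenced work:

```python
import numpy as np

def normalize_rms(signal, target_rms=1.0):
    """Scale a signal so its RMS level matches target_rms.

    This removes overall volume (and thus much of the effect of source
    distance), but leaves the source-to-background ratio unchanged.
    """
    rms = np.sqrt(np.mean(np.square(signal)))
    if rms == 0:
        return signal.copy()
    return signal * (target_rms / rms)

# Toy example: the same "source" recorded near (loud) and far (quiet).
source = np.sin(2 * np.pi * 440 * np.arange(1024) / 51200)
near = 1.0 * source
far = 0.1 * source
# After normalization both recordings have the same RMS level.
near_n = normalize_rms(near)
far_n = normalize_rms(far)
```

Because both normalized recordings are identical here, any classifier trained on them cannot use loudness as a cue; only the spectral content (and the source-to-background ratio) remains.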

2.2.5 Security Door Analog

To demonstrate the effectiveness of the classification method in identifying
the attacking tool, an analog security door was constructed by the NMT Security
Door Senior Design Team. This analog
consists of an aluminum frame with attached casters (for mobility) and an
additional steel frame, fastened to the body, which allowed the steel attack
plates to be mounted. The steel frame included a thick back plate that protected
the sensors and prevented any possible breakthrough of the attacking tools. A
picture of this test setup is shown in Figure 2.12.
The accelerometers were attached at two different places with the same mounting
wax as was used for the robotic platform, and the microphone was placed on
a tripod pointed at the back plate of the frame. The testing consisted of attacking
the security door with various tools (steel drill bits, a Sawzall reciprocating saw,
an angle grinder, and a Dremel rotary tool) while the data collection system
logged the ensuing physical and acoustic vibration data. There are many variables
that one might expect to affect the vibration of the door during attack. To test
whether a classifier can be made that is invariant to many of these parameters,
we varied the tool attachment, the speed of the tool, the angle of attack, the attack
location on the door, and the force applied. The front part of the security door in
Figure 2.12 is denoted as the attack plate, which is where the door was attacked
by the different tools. Each attack took place in a different section of the door,
ensuring each test occurred on an unmarred patch of the test plate. Additionally,
the attack plate was switched whenever the damage to a plate was deemed severe
enough to merit replacement. Data sets were collected with this system using the
same sampling rate, sensors, and cRIO as the surface collection.
Similar to the placement of the microphone on the robotic platform, and
for the same reasons [12, p. 3565], the microphone for the security door collection
was placed at a distance from the sound excitation source. To keep the microphone
at a fixed distance from the security door, the door frame included heavy-duty
lockable casters, which prevented the frame from moving during collection
and also allowed the door to be wheeled out of the way when collection was
complete. A mount was designed for the microphone that allowed it to be
attached to a tripod placed exactly 18 inches from the back of the door, at almost
the exact center of the secure door's back plate.

Figure 2.12: Security Door Analog (callouts: edge accelerometer, center accelerometer)

2.3 Data Acquisition

The data collection practices are discussed in the Traction Control and
Security Door chapters individually, but the process for data collection for each
of these experiments is listed below.

1. Collection of data at 2 speeds from 7 different vehicle-terrain interactions
(Traction Control) and of 4 tools from different tool-attack plate interactions
(Security Door)
2. Labeling the data as belonging to certain classifications (surface, speed, run
#, sensor, segment [Traction Control]; tool type, run, segment [Security Door])
3. Using the FFT to extract frequency features (Both)
4. Training a Neural Network using these features (Both)
5. Using the NN to predict the classification of the surface (Traction Control)
or of the tool type (Security Door)
6. Comparing predicted classifications to known classifications (Both)
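The six steps above can be illustrated end to end in miniature. The Python sketch below substitutes synthetic signals for the sensor data and a nearest-centroid classifier for the NN (both stated substitutions), keeping only the FFT-feature idea; the surface names and peak frequencies are hypothetical:

```python
import numpy as np

# Steps 1-6 in miniature: synthetic "surfaces" with different dominant
# frequencies stand in for the collected sensor data, FFT magnitudes are
# the features, and a nearest-centroid classifier stands in for the NN.
rng = np.random.default_rng(1)
fs, n = 51200, 4096
classes = {"asphalt": 900.0, "gravel": 2500.0}  # hypothetical peak freqs

def make_run(freq):
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(n)

def features(x):
    return np.abs(np.fft.rfft(x))  # step 3: frequency features

# Steps 1-2: collect and label runs.
train = [(name, features(make_run(f)))
         for name, f in classes.items() for _ in range(5)]
# Step 4: "train" by averaging each class's feature vectors (centroids).
centroids = {name: np.mean([f for lbl, f in train if lbl == name], axis=0)
             for name in classes}

# Steps 5-6: predict held-out runs and compare to the known labels.
def predict(x):
    f = features(x)
    return min(centroids, key=lambda name: np.linalg.norm(f - centroids[name]))

correct = sum(predict(make_run(classes[name])) == name
              for name in classes for _ in range(5))
```

The structure (label, extract frequency features, train, predict, compare) mirrors the list above even though every component here is a deliberately simplified stand-in.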

2.4 Data Preprocessing

The cRIO used for this data collection outputs the data to USB in NI's
proprietary data collection format (TDMS), which has the ability to log multiple
columns of data for as many sensors as are used in the collection. For the collection
of the surface data¹, the sampling rate of the cRIO was set to 51.2 kHz (the
maximum setting), and the logger collected data from both accelerometers and
the microphone simultaneously. This data is time-stamped to ensure the fidelity
of the collection and to allow for an accurate FFT. The programming for the
cRIO made it very easy to select an exact length for the data collection. After
deliberation, 15 seconds was decided to be the best length, as it would
be easy to pull 7 seconds from the beginning of the collection, 1 second after that,
and 7 seconds after that. The data was separated in this fashion to prove that this
classification system would be robust to different lengths of data collection.
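The 7 s / 1 s / 7 s split described above amounts to simple index arithmetic at the 51.2 kHz sampling rate. A minimal Python sketch (the thesis's processing was done in Matlab; this version is illustrative only):

```python
import numpy as np

FS = 51200          # cRIO sampling rate (Hz)
RUN_SECONDS = 15    # length of each logged run

def split_7_1_7(run):
    """Split one 15 s run (768000 samples) into 7 s, 1 s, 7 s segments."""
    assert run.shape[0] == FS * RUN_SECONDS
    a = run[:7 * FS]           # first 7 s  -> 358400 samples
    b = run[7 * FS:8 * FS]     # next 1 s   ->  51200 samples
    c = run[8 * FS:]           # final 7 s  -> 358400 samples
    return a, b, c

run = np.zeros(FS * RUN_SECONDS)
seg7a, seg1, seg7b = split_7_1_7(run)
```

The three segment lengths (358400, 51200, and 358400 samples) are the row counts that appear in Figure 2.13.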
The cRIO programming allowed multiple data files to be logged to the
flash drive, which made it much easier to collect the 10 runs per surface (5 fast,
5 slow) without having to pull the data from the drive between runs. Upon
collection of these 10 data sets, they were imported to the computer using NI's
DIAdem software. There were several reasons for this, chief of which was the
simple fact that importing a TDMS file into Matlab is incredibly difficult, even
with existing open-source code designed to do precisely that. Importing the NI
proprietary data using NI software seemed to be the best method for data
manipulation. DIAdem has the ability to save data into a different format, and as
it is relatively easy to read the Comma Separated Values (CSV) format in Matlab,
this was the format chosen. Upon import, the data were separated into multiple
vectors and named according to the run type (surface, speed, run number) to
easily manipulate the data and ensure the proper data was placed into the Neural
Network. A representation of this separation of data is shown in Figure 2.13.
Once the data were imported into Matlab, an FFT of the data was taken
using the script in Appendix B.1. This script has several parts. First, the script
accepts an input vector of data. Then, after the sampling frequency Fs is specified
(knowing the sampling rate of the cRIO), the sample time is constructed (1/Fs).
The script pads with zeros up to the next power of 2 above the length of the data,
as in Lyons' work [16]; it then calculates the double-sided energy spectrum, the
single-sided energy spectrum, and the frequency range of the FFT (in this case
[0, 25.6] kHz). Finally, the script outputs the length of the signal (signal time), the
amplitude (time-domain) data, the frequency range ([0, 25.6] kHz), and the energy
spectrum (the energy present at each frequency). This script was called multiple
times from the script in Appendix B.3, which creates a large array containing
all the collected data (a 5-dimensional structure) and passes the vectorized
time-domain data to the FFT script one column at a time. The output of this script
is a structure that contains all 6 surfaces (5 surfaces and NOTA), both speeds, all 5
runs, all 3 sensors, and all 3 segments (7 s, 1 s, 7 s). This structure was then broken
down into 6 separate structures, each containing all the data for one surface. This
amounted to 630 columns of data, representing the FFTs of all the surfaces,
speeds, sensors, and lengths of data collection. From this overall set, 90 data
columns (18 from each surface: 3 segments x 2 speeds per sensor, 3 sensors) were
withheld as a testing set, leaving 540 columns for training the NN.

¹ Note that for the purposes of this section, the term SURFACE TYPE could be replaced with ATTACK TOOL TYPE.

Figure 2.13: Initial Separation of Collected Data (the 768000x210 collected data set is split by surface (Dirt, Gravel, Asphalt, Concrete, Grass at 768000x30 each; NOTA at 768000x60), by sensor (Probe, Body, Mic), and by 7 s / 1 s / 7 s time segments, giving 630 total segmented columns)
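The steps of the Appendix B.1 script can be sketched as follows. This is a Python rendering of the described procedure (zero-pad to the next power of 2, take the FFT, keep the single-sided spectrum over [0, 25.6] kHz), not the original Matlab; the amplitude scaling shown is one common convention and may differ from the script's:

```python
import numpy as np

def fft_features(x, fs=51200):
    """Zero-pad to the next power of 2, compute the FFT, and keep the
    single-sided spectrum with its frequency axis (a Python sketch of
    the Appendix B.1 Matlab script)."""
    n = len(x)
    nfft = 1 << (n - 1).bit_length()        # next power of 2 >= n
    spectrum = np.fft.fft(x, nfft)          # np.fft.fft zero-pads to nfft
    double_sided = np.abs(spectrum) / n     # double-sided amplitude
    single_sided = double_sided[:nfft // 2].copy()
    single_sided[1:] *= 2                   # fold in negative frequencies
    freqs = np.arange(nfft // 2) * fs / nfft   # 0 up to (almost) 25.6 kHz
    return freqs, single_sided

# A 7 s segment (358400 samples) pads to 2**19 points, giving the
# 262,144-point single-sided spectrum quoted in Figure 2.14.
fs = 51200
t = np.arange(7 * fs) / fs
freqs, amps = fft_features(np.sin(2 * np.pi * 1000 * t), fs)
```

A pure 1 kHz tone produces a spectrum whose largest bin sits at 1 kHz, which is the kind of characteristic peak the later binning step preserves.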
To use these data vectors in the NN, they all needed to be the same length.
Unfortunately, the outputs of the FFT (the energy portion at least) are of different
lengths, since the 7 s and 1 s segments contain different numbers of samples and
therefore pad to different powers of 2 (yielding 262,144- and 32,768-point spectra,
respectively). To modify the FFT vectors to be the same length, the script from
Appendix B.2 was used. This script was called multiple

times for each of the vectors of FFT data, producing FFT vectors of uniform
length. For this thesis research the original FFTs were "re-sampled" into 100
bins. Given the frequency range of the signals ([0, 25.6] kHz, set by the
Shannon-Nyquist sampling theorem), each bin contains the energy of a 256 Hz
band (the first bin from 0-256 Hz, the second from 256-512 Hz, etc.). A graphical
representation of this 100-binned data is shown in Figure 2.14. Given the large
range of frequencies which can be "seen" by the FFT, this binned data still
provides enough resolution to distinguish certain peaks from others, which is the
job the NN must perform.
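The re-sampling described above can be sketched as summing the spectrum's energy in 100 equal-width bands. This is an illustrative Python version of the Appendix B.2 idea, not the original Matlab, and the exact bin-edge handling is an assumption:

```python
import numpy as np

def rebin_fft(single_sided, n_bins=100):
    """Condense a single-sided spectrum into n_bins equal-width bins by
    summing the energy in each band, so spectra of different lengths
    become comparable fixed-length feature vectors."""
    edges = np.linspace(0, len(single_sided), n_bins + 1).astype(int)
    return np.array([single_sided[edges[i]:edges[i + 1]].sum()
                     for i in range(n_bins)])

# A 262,144-point 7 s spectrum and a 32,768-point 1 s spectrum both
# reduce to 100-element vectors; over a 25.6 kHz span each bin then
# covers 256 Hz, matching the text.
long_fft = np.ones(262144)
short_fft = np.ones(32768)
f_long, f_short = rebin_fft(long_fft), rebin_fft(short_fft)
```

Because binning only sums adjacent values, the total energy in the spectrum is preserved while the vector length becomes independent of the original segment length.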

2.5 Neural Network

Taking the binned data, and knowing what each FFT represents (surface
type), Matlab's Neural Network Toolbox was used to create, train, and test a
NN. Neural Networks are often used for pattern recognition in various applications.
These tools are especially useful in the classification of unknown types of data,
and Matlab includes a toolbox to easily set up, train, and test a NN. The Matlab
NNSTART (Neural Network Start) GUI was used, which displays an interface
allowing selection of the following:

1. Type of NN (Curve Fitting, Pattern Recognition, Clustering, etc.)
2. Selection of data to be used as Inputs and the Targets for those Inputs
3. Division of the data (% for Training, % for Training Validation, % for Training Testing)
4. Size of the network (# of Neurons)
5. Various classification metrics (Performance, Training State, Error Histogram, Confusion Matrix, and Receiver Operating Characteristic)

In order for the NN to "know" what the classification should be, the Targets
matrix contains the same number of columns as the Inputs matrix and as
many rows as there are classes (in this case, surfaces). To teach the NN which
class each vector of inputs represents, each Targets column consists of 0s in every
row except the row denoting the class of the corresponding Inputs column. As
an example, the first column of data in the Inputs matrix contained the FFT of
data taken while the robot drove across Asphalt. The first column in the Targets
matrix therefore contained the value 1 in the first row, followed by 5 zeros in the
next 5 rows, denoting that the first column of the Inputs matrix should be
classified as the first class (i.e., Asphalt). The Neural Network Toolbox uses
the numbers contained in this Targets matrix to train the NN. The figures
contained in Appendix C show the process for creating and training a NN, provided
the correct type of data is already present in the Matlab workspace.
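The Targets construction described above is one-hot encoding, which can be sketched as follows (Python for illustration; the thesis built these as Matlab matrices):

```python
import numpy as np

SURFACES = ["Asphalt", "Concrete", "Dirt", "Grass", "Gravel", "NOTA"]

def make_targets(labels, classes=SURFACES):
    """Build the Targets matrix: one column per input column, one row
    per class, with a 1 in the row of the true class and 0s elsewhere."""
    targets = np.zeros((len(classes), len(labels)))
    for col, label in enumerate(labels):
        targets[classes.index(label), col] = 1.0
    return targets

# The first training column was an Asphalt run, so its target column is
# [1, 0, 0, 0, 0, 0] transposed, as the text describes.
T = make_targets(["Asphalt", "Gravel", "NOTA"])
```

Each column sums to exactly 1, so the network's output can be read as a score per class, with the largest entry taken as the predicted classification.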

Figure 2.14: NN Data via FFT and "Re-sampling" of data from Figure 2.13 (each 7 s FFT of 262,144 (2^18) points and each 1 s FFT of 32,768 (2^15) points is condensed into 100 bins; the resulting 100x630 set is divided into 100x540 NN training data and 100x90 NN testing data)
Figure 2.15: NN Example Confusion Matrix (from training; showing the Training, Validation, Test, and All confusion matrices of one example training run)

Matlab's NN Toolbox segments the training data to perform 3 parts of the
training phase: 1) Training, 2) Testing, 3) Validation. The amount of data used
for each of these 3 parts is user-selectable in the construction of the NN; however,
the selection of which data columns are used for which part of the training phase
is chosen randomly by the toolbox. This division of training-phase data is
discussed in the paragraph below.
Although the steps shown in Appendix C represent the exact steps taken
for training the selected NN, these figures are included only as examples of the
training process and do not represent the actual training runs that led to the final
NN selection. Each re-training of the NN produced different results and confusion
matrices due to the different (random) initial conditions of each training
attempt, which made predicting how a given NN would perform practically
impossible. After training several times, and creating multiple networks of varying
sizes, training percentages, and output classification percentages, a network
with 500 neurons was eventually selected, with 70% of the data selected for
training, 15% for validation, and 15% for testing.

Figure 2.16: Confusion Matrix for selected NN (overall accuracy 97.8%; one Dirt run classified as Gravel, one Concrete run classified as NOTA)

This network classified the training data with 97.8%
accuracy, leading to a failure rate of 2.2%. The confusion matrix produced by the
training of this network is shown in Figure 2.16. This confusion matrix is pro-
vided to demonstrate the confusion of the NN in classifying the training data,
not the confusion of the testing classification.
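The random 70/15/15 division described above can be sketched as follows. This Python function only mimics the behavior of the toolbox's random division; Matlab's own implementation differs in detail:

```python
import numpy as np

def divide_rand(n_columns, train=0.70, val=0.15, seed=None):
    """Randomly assign column indices to training / validation / testing
    sets in the given proportions (a sketch of the toolbox's random
    division, not Matlab's actual algorithm)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_columns)
    n_train = int(round(train * n_columns))
    n_val = int(round(val * n_columns))
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

# The 540 training-phase columns split into 378 / 81 / 81 columns.
tr, va, te = divide_rand(540, seed=0)
```

Because the permutation changes on every call (unless seeded), each re-training sees a different split, which is one reason repeated training runs produced different confusion matrices.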
As the figure above shows, only 2 surface FFTs were misclassified: a Dirt
run was misclassified as Gravel, and a Concrete run was misclassified as NOTA.
Several explanations exist for these misclassifications. Firstly, the dirt surface
driven over (shown in Figure 3.1c) contains a fair amount of small pebbles, which
possibly caused the confusion with gravel. Secondly, the robotic platform creates
a certain amount of sound from the speed controller. This sound was part of the
NOTA data, as the robot was run in-air (with no contact between tires and
ground) while recording data that consisted (mostly) of only the sound produced
by the speed controller. This is another possible cause of misclassification,
although if it were truly the case, the NN would be expected to have misclassified
more surface data.
The next step in the classification phase involved the deployment of the
NN to Simulink. This deployment produced the diagram shown in Figure 2.17.

Figure 2.17: Simulink Diagram from the deployed NN

In this figure, the constant input is the vector of testing data. This vector is 100
rows by 90 columns (Figure 2.14). The 100 rows consist of the FFT spectrum
data, which represent the characteristic peaks of the time-domain data. The 90
columns consist of 18 columns from each surface (6 per sensor, 3 sensors per
surface, 5 surfaces). As the Pattern Recognition Neural Network (PRNN) block
only takes 1 column (100x1) of data per classification, and running this model 90
times would be very tedious, a variable selector block was added. This block
takes a multidimensional vector as an input, uses the clock function to select
a single row or column (a single column in this case), and passes that single
column to Out1. The index of this variable selector is based on the clock, which is
based on the "simulation time." This simulation time is user-selectable and was
set to 90 for this model, making the variable selector index from 1 to 90, passing
the 1st through 90th columns to the PRNN, and recording each classification in
both the Classification Scope and the Classification Outputs.
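The column-by-column evaluation performed by the variable selector can be sketched in Python. The prnn stand-in below is a random linear map used purely to exercise the loop; it is not the trained network:

```python
import numpy as np

# Stand-in for the deployed PRNN: any function mapping a 100x1 feature
# column to a 6-element class-score vector. A random linear map is used
# here only so the column-selection loop of Figure 2.17 can run.
rng = np.random.default_rng(2)
W = rng.standard_normal((6, 100))
prnn = lambda col: W @ col                    # hypothetical network

testing_data = rng.standard_normal((100, 90))  # 100 bins x 90 test runs

# The variable-selector loop: feed one column per "simulation step" and
# log the winning class, as the Classification Scope/Outputs blocks do.
classifications = [int(np.argmax(prnn(testing_data[:, k])))
                   for k in range(testing_data.shape[1])]
```

In Simulink the same loop is driven by the clock over simulation time 1 to 90; here an ordinary Python loop plays that role.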

CHAPTER 3

APPLICATION TO TRACTION CONTROL FOR MOBILE ROBOTS

3.1 Methods

As shown in Figure 2.11, the testing platform was designed to traverse
multiple types of ground terrain. All the attached parts (the cRIO, the probe, and
the accelerometers) were secured with locking nuts so they would stay attached
regardless of the vibrations induced while driving across the surfaces. The tires
were designed for an RC car; such tires are built to travel across many types of
terrain and so include rubber studs to improve traction. There were only a few
times when the robot lost traction during testing. Due to the rigid body of the
platform, and because the suspension did not function (owing to the way the
axles were attached to the aluminum body), the robot had difficulty traversing
very uneven terrain. In several cases, the robot attempted to traverse a small
"ravine" carved by rainwater (on the dirt and gravel surfaces), which caused it to
get stuck and spin its wheels. Other instances of traction loss occurred when the
robot got stuck against sufficiently large rocks. The data collected during these
losses of traction were not kept for the analysis phase, as they were not collected
for the full 15 seconds.
To prove the robustness of this classification method, several testing
parameters were varied. Speed was varied between "slow" and "fast" (the slowest
and fastest the robot could travel), as the classification was intended to be robust
to any speed. Upon finishing the collection, the 15-second data sets were split
into 3 segments, as explained above in Figure 2.13. Thus the classification is
shown to be accurate independent of speed or length of data collection. This was
done purposefully so the only variable the classification would depend on was
the surface type.

3.1.1 Data Collection Practices

There were 5 surfaces upon which data was collected: Asphalt, Concrete,
Dirt, Gravel, and Grass. A 6th selection was also collected which consisted of the
robot driving over tile, as well as the vibration of the robot as it drove without its
wheels touching the ground (in-air). These 2 NOTA classifications were included
to give the NN the opportunity to select a ”None of the Above” classification
if it was unable to determine whether a surface belonged to any of the other
classifications. The testing data did not include any of this NOTA data, as it was
not particularly important for the robot to recognize surfaces outside the scope
of this research. The data collection and surface identification techniques could
easily be implemented on new surfaces, provided similar techniques are applied.
As the program written for the cRIO controller contained its own timing
system, timing the data collection manually proved unnecessary. Each data set
recorded was exactly 15 seconds long and each data file was named according
to the number of the previous run (when run 1 had been recorded, the next file
created and logged to was run 2). This was all dependent on a FOR loop in the
cRIO programming, ensuring the length of each data set was exactly the same.
This allowed for ease of data collection and allowed for the data to be transferred
”offline” to the computer for analysis. During the course of data collection, it was
discovered that the file naming system put in place in the cRIO programming
began to interfere with the data collection when there were more than 15 files
on the flash drive at any one time. This was due to the fact that the program
would first scan the flash drive for a suitable file name to log the data to (one that
would be different from all others). This meant that when the program initiated,
it would first check for files matching certain names. The more files were present,
the more the program would have to check to ensure none of the previous data
files were overwritten. Upon inspection of several of the data files, it appeared
that this ”file checking” part of the program was keeping the cRIO from logging
any data, leading to decreased data collection times between 5 and 10 seconds
(total collected data). To keep this from happening, 10 runs was determined to be
the max number which could be recorded to the flash drive at any one time. This
proved to be a very effective solution, and no difference in data collection time
was noticed from that point on.
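The file-checking behavior described above can be sketched as a linear scan for the first unused run name (Python for illustration; the actual LabVIEW code is not reproduced here, and the `run<N>.tdms` naming scheme is an assumption):

```python
import os
import tempfile

def next_run_name(folder, prefix="run", ext=".tdms"):
    """Scan the drive for the first unused run number, as the cRIO
    program's file-checking step did. The scan cost grows with the
    number of files already present, which is consistent with logging
    slowing down once more than ~15 files accumulated."""
    n = 1
    while os.path.exists(os.path.join(folder, f"{prefix}{n}{ext}")):
        n += 1
    return os.path.join(folder, f"{prefix}{n}{ext}")

tmp = tempfile.mkdtemp()
first = next_run_name(tmp)      # no files yet -> run1
open(first, "w").close()
second = next_run_name(tmp)     # run1 exists  -> run2
```

Capping the drive at 10 files keeps this scan short, which matches the workaround adopted in the thesis.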
The surfaces on which data was collected are shown in Figure 3.1. This
figure contains a cross-section of each of the surfaces; while the photos do not
depict the exact cross-sections driven over, they closely approximate the types
of surfaces used in the data collection. While these surfaces are not identical to
all other surfaces mobile robots drive over, they are certainly representative of
other Asphalt, Concrete, Grass, Gravel, or Dirt surfaces.
It was very important when performing the surface research to ensure
correct data collection practices were used for collecting the training/testing
surface data. To ensure the validity of the data collected (and the ensuing
analysis), the platform was never driven across the same cross-section of surface
more than once. This made each data set unique to a different area of surface
travel, so there was no chance of the NN being trained and tested on the same
cross-section of surface.

Figure 3.1: Surfaces used for Data Collection ((a) Asphalt, (b) Concrete, (c) Dirt, (d) Gravel, (e) Grass)

3.2 Results

The data collection produced vectors of dimensions 768000x1 (51200
samples/second x 15 seconds) for each sensor, leading to a 768000x3 matrix of
values per run. These vectors were passed to the FFT script, and subsequently to
the FFT re-sampling script (Appendix B.2). This process produced a 100x3 matrix
which was placed into a larger matrix containing the FFTs of all the runs (on all
the surfaces). Examples of the re-sampled FFTs are shown in Figure 3.2. These
re-sampled FFTs represent the range of actual frequency data, albeit with reduced
dimensionality. This was done to conserve the computer's resources and allowed
for easier inclusion of the FFT data in the NN. After construction of the larger
matrix (100x630, see Figure 2.13 for explanation), 14% of the data (90 of the 630
total runs) was moved to another matrix, denoted the testing data. The remaining
540 runs were used in training the NN and were called the training data. Once
training of the NN was completed, and the NN was deployed using Simulink
(Figure 2.17), the testing data was passed to the NN one column at a time. It must
be noted that NONE of the testing data, used in construction of the Figure 2.16
confusion matrix, was used in creating and training the NN. Thus only data that
had never been seen by the NN in the training phase was used to test the accuracy
of this network.

Figure 3.2: Re-sampled Surface FFTs ((a) Asphalt, (b) Concrete, (c) Dirt, (d) Gravel, (e) Grass; each panel plots amplitude against frequency bin for the microphone FFT)

Figure 3.3: Simulink Classification Scope (from the Classification Scope in Figure 2.17; annotations mark the misclassifications of Concrete as NOTA and of Dirt as Gravel)

The output of the testing of this NN is shown in Figure 3.3. This plot has
been modified from the original output: markers were added for the different
classifications, and text boxes were added to show where the NN misclassified
the surface type. There were only 2 misclassifications by this NN, denoted by
the boxes in Figure 3.3 and again in Figures 3.4 and 3.5. The truncated figures (3.4
and 3.5) are included because details are difficult to see in the first figure due to
the large amount (and complexity) of data included in Figure 3.3. This figure
shows every trial (90) and the classifications for each surface. The correct
classification patterns for the trials are as follows:

Testing Numbers   Surface    Mic.     Probe    Body
1-18              Asphalt    1-6      7-12     13-18
19-36             Concrete   19-24    25-30    31-36
37-54             Dirt       37-42    43-48    49-54
55-72             Grass      55-60    61-66    67-72
73-90             Gravel     73-78    79-84    85-90

Table 3.1: Correct classification patterns for surface NN
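Table 3.1's mapping from trial number to expected surface and sensor can be expressed directly (a hypothetical helper for checking predictions, not code from the thesis):

```python
def expected_label(trial):
    """Map a testing-trial number (1-90) to its correct surface and
    sensor per Table 3.1: 18 trials per surface, 6 trials per sensor,
    in the order microphone, probe, body."""
    surfaces = ["Asphalt", "Concrete", "Dirt", "Grass", "Gravel"]
    sensors = ["Mic.", "Probe", "Body"]
    surface = surfaces[(trial - 1) // 18]
    sensor = sensors[((trial - 1) % 18) // 6]
    return surface, sensor
```

For example, Trial 31 (one of the two misclassified trials discussed below) should be Concrete recorded by the body accelerometer.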

Thus it can be seen from the graph that the first 18 trials were classified as
Asphalt, the next 18 (with 1 misclassification) were classified as Concrete, and so
on. It should be noted that the classification algorithm takes a ”best guess” at the
classification type, and therefore whichever classification of a trial is nearest to a
value of 1 is what the NN ”picks” as being the correct classification. This graph

Figure 3.4: Simulink Classification Scope Asphalt and Concrete (Trials 1-36; the annotation marks the misclassification of Concrete as NOTA at Trial 31)

is used to construct a confusion matrix, which is included in Figure 3.6. Figure 3.3
is included to show the overall classification of surface type by the NN. From
this graph alone it would be rather difficult to get a true sense of how well the
NN has classified the surface type, but the general trend of the classification can
be seen through the colors used to represent the various surfaces, and through
the fact that the classifications occur in sections of 18, as expected.
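The construction of a confusion matrix from these per-trial classifications can be sketched as follows (Python for illustration; the class order and the two misclassified trials follow the text):

```python
import numpy as np

def confusion_matrix(targets, outputs, n_classes):
    """Tally predictions into a confusion matrix: rows are predicted
    classes, columns are target classes, matching the layout of the
    thesis's confusion-matrix figures."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, o in zip(targets, outputs):
        cm[o, t] += 1
    return cm

# Toy check using the two misclassifications described in the text:
# Trial 31 (Concrete -> NOTA) and Trial 51 (Dirt -> Gravel).
classes = ["Asphalt", "Concrete", "Dirt", "Grass", "Gravel", "NOTA"]
targets = [i // 18 for i in range(90)]        # 18 trials per surface
outputs = list(targets)
outputs[30] = classes.index("NOTA")           # trial 31 misclassified
outputs[50] = classes.index("Gravel")         # trial 51 misclassified
cm = confusion_matrix(targets, outputs, len(classes))
accuracy = np.trace(cm) / cm.sum()
```

With 88 of 90 trials on the diagonal, the resulting accuracy is 97.8%, matching the figure quoted for the selected NN.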
Figure 3.4 shows the first 2 surface classifications, Asphalt and Concrete
(Trials 1-36). The Asphalt surface was classified with 100% accuracy, which is not
surprising, as this surface has a uniquely bumpy "texture," making it easy to
distinguish from the other surfaces. Trial 31, however, was misclassified as
"NOTA" instead of the correct classification of "Concrete." While Trial 31 was the
only trial misclassified, several others (Trials 24, 25, 26, 27, and 33) were nearly
misclassified as "NOTA" as well. There are several possible reasons for these
near misclassifications. Firstly, the NOTA data used for training was taken across
tile and while the tires were suspended (in-air). The tile surface is very smooth,
so the probe would have experienced less extreme vibrations while driving
across it. Secondly, concrete surfaces (including the one tested upon) contain
expansion joints, which provide a regularly occurring "jump" for the probe
running across the surface. Tile, while consisting of much smaller sections
(12"x12"), also contains joints between the tiles. While these joints are much
narrower than those of concrete, the NN may have seen this "jump" as being
common to both surfaces, causing the near misclassifications (and the one
misclassification). While the reasons for this misclassification are not fully
known, a "best guess" was made regarding the possible causes. Collecting NOTA
data in a different manner (across carpet, a treadmill, etc.) may eliminate this
misclassification in the future. Even with this
misclassification, however, the accuracy of the NN in classifying the concrete surface was 94.44%, which still achieves the minimum 90% accuracy sought for this research.

[Plot: NN classification output (0-1) vs. column number for trials 37-90; legend: Asphalt, Concrete, Dirt, Grass, Gravel, NOTA; the misclassification of Dirt as Gravel is marked.]

Figure 3.5: Simulink Classification Scope Dirt, Grass, and Gravel

Figure 3.5 shows the other 3 surface classifications (Trials 37-90) of the NN.
The NN classifies the surface type very well, with a few exceptions. Trial
51 (Dirt) was misclassified as Gravel and was nearly misclassified as Concrete
as well (Trial 49). With a few exceptions, dirt surfaces are most definitely not
homogenous in their composition. Not only do they contain soil, but, similar
to gravel, dirt surfaces may contain many small pebbles and rocks, which may
have caused the NN to classify the dirt surface as gravel. Additionally, the NN
nearly misclassified the dirt surface as Concrete in Trial 49. This may have been
caused when the robot crossed a nearly uniform (homogeneous) section of the
surface when it was traversing a particular patch of dirt. These reasons are only
an educated guess, and only further research could prove whether these misclas-
sifications were truly due to the non-homogeneous structure of the surface. With
the accuracy of this classification, however, the real reason for these misclassifi-
cations does not have a particularly important bearing on the applications of this
classification method to multiple testing scenarios, surfaces, and robots.

3.2.1 Classification Accuracy

Figures 3.3, 3.4, and 3.5 show the output of the Simulink model. This out-
put is rather difficult to discern as the multiple crossing lines make it hard to
distinguish one classification from another. The figures also do not depict which
classifications are correct as there is no reference point (regarding the correct clas-
sification) for the column output data. To present this information in a way that is easy to understand, a confusion matrix is the best graphical representation of this classification data. Figure 3.6 is the confusion matrix for the NN's classification of the
surface trial data. Each surface contained 18 trials (20% of the data). The NN con-
fused 2 trials, Gravel for Dirt and NOTA for Concrete. These misclassifications
can be seen in the cells at (column 3, row 5) and (column 2, row 6), counting from the upper left. These values of 1 represent the data sets which were wrongly classified.

Surface Classification Confusion Matrix (rows: predicted surface; columns: target surface):

              Asphalt  Concrete  Dirt   Grass  Gravel  NOTA  | Row acc.
  Asphalt       18        0        0      0      0      0    |  100%
  Concrete       0       17        0      0      0      0    |  100%
  Dirt           0        0       17      0      0      0    |  100%
  Grass          0        0        0     18      0      0    |  100%
  Gravel         0        0        1      0     18      0    |  94.7%
  NOTA           0        1        0      0      0      0    |  0.0%
  Col. acc.    100%     94.4%    94.4%  100%   100%   NaN%   |  97.8%

Figure 3.6: Neural Network Confusion Matrix

The lower right corner of the confusion matrix shows the overall accu-
racy of the NN in predicting the surface type of the testing data. In the case of
this trained NN, the correct classification was chosen with 97.8% accuracy. This
means the NN can correctly classify data taken on these surfaces roughly 98 times out of 100. Given that the NN was constructed using Matlab's easy-to-use NN Toolbox, and was designed to be invariant with regard to speed and length of data
collection, this level of accuracy is rather impressive. There are obvious merits to
the accuracy of these classification methods (see DISCUSSION AND FUTURE
WORK).
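The tally behind a confusion matrix like Figure 3.6 is straightforward: counts of predicted vs. target classes, with overall accuracy as the trace divided by the number of trials. A sketch (Python illustration; the thesis used Matlab's plotting tools, and this function name is hypothetical):

```python
import numpy as np

def confusion_matrix(targets, predictions, n_classes):
    """Tally a confusion matrix with predicted classes as rows and
    target classes as columns; overall accuracy is the trace divided
    by the number of trials."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(targets, predictions):
        cm[p, t] += 1  # row: predicted, column: target
    accuracy = np.trace(cm) / len(targets)
    return cm, accuracy
```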

3.2.2 Testing Time

When researching new methods of identification, it is also important to


know the time it would take to classify new data. Provided the data is taken from 1 of the 5 surfaces, the total time for completing this analysis (from data importing to classification) is under 2 minutes. The longest step in this process is the data importing, in which the data must be converted to a different format (TDMS to CSV) before being brought into Matlab. Once in Matlab, the scripts
used to place the data into the proper arrays, take the FFTs of the sensor data,
and insert the new data into the Simulink NN, takes about 1 minute total the
way the process is currently set up. It would be possible to re-write the code to
perform all the data manipulation in one script, and this would both cut down
on the user interaction, and would shorten the time it takes for the classification
drastically.
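The FFT and re-sampling step referred to above (100 frequency "bins," which is what makes the feature vector invariant to record length and speed) can be sketched as follows (a Python illustration of the idea; the actual work used Matlab scripts, and this function name is hypothetical):

```python
import numpy as np

def fft_features(signal, n_bins=100):
    """Magnitude FFT re-sampled into a fixed number of frequency bins,
    so the feature length is independent of how long the record is."""
    spectrum = np.abs(np.fft.rfft(signal))
    # Average the spectrum over n_bins equal-width frequency ranges.
    edges = np.linspace(0, len(spectrum), n_bins + 1, dtype=int)
    return np.array([spectrum[a:b].mean()
                     for a, b in zip(edges[:-1], edges[1:])])
```

A 1 s record at the 51.2 kHz sample rate and a 10 s record both reduce to the same 100-element feature vector, which is what the fixed-size NN input requires.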
Similarly, the Simulink NN model could be modified to accept a matrix of
input from the Matlab workspace, rather than requiring the user to modify the
”Constant” block to accept the recently created input matrices. The classification
by the NN is very fast. As the weights and biases are all constants, a mathematical
process takes place that takes less than 1 second to complete. The classification
NN was run several times and the max time from clicking ”run” to observing
the output scope was about 4 seconds, with a minimum time of under 1 second. This
fast classification is a very desirable quality of this method, as this process can
be developed for implementation for in-process surface identification for mobile
robots (see Future Work).

3.3 Traction Control

Selection of surface type is only one of the steps towards traction control
for these mobile robots. The next step involves the relation of these surfaces to
assigned coefficients of static friction. Several resources were used to gain an
understanding of the nature of static friction [17, 18, 19, 20, 21]. These coefficients
are directly proportional to the maximum TFS, which varies for different surfaces
(given a constant normal force). Even on one type of surface, the static friction
coefficient may vary due to the non-isotropic nature of most surfaces. Here we
will assume a Coulomb Friction model, shown below.

f_f ≤ µs N, or f_max = µs N (3.1)

where:
f_f = Force of Friction,
µs = Coefficient of static friction,
N = The normal force

[Free-body diagram: forces Frw, Fwr, N, W, and f_f, the angle θ, and the x-y coordinate frame acting on the robot.]

Figure 3.7: Figure for static friction coefficient calculation

Thus, the value of µs , which governs the robot’s loss of traction, must be identi-
fied. In order to accomplish this, the mass of the robot is also important, and will
vary based on the configuration and equipment attached to the robot. In truth,
however, the formula for determining the force of the robot on the environment
is slightly more difficult to calculate.
For simplicity, we assume the following:

1. The environment (wall) the robot is interacting with is orthogonal to the ground.
2. The robot is resting perfectly parallel on the ground.
3. We are only concerned with a static analysis (i.e. just before incipient motion).

A diagram of the situation (Figure 3.7) will assist in understanding the forces involved.
Where Fx , Fy are the forces in the x and y directions, f f is the force of static
friction resisting movement, N is the normal force of the robot, θ is the angle
(from horizontal) at which the robotic arm interacts with the wall/environment,
w is the wall and r is the robot. Ergo Fwr is the force the wall imparts on the robot,
equal to − Frw (force the robot imparts on the wall).

∑ Fx = ma_x = − f_f + Fwr cosθ (3.2)
∑ Fy = ma_y = N − mg − Fwr sinθ

Knowing from statics that all forces must sum to 0 for no acceleration, a_x and a_y go to 0, leading to

∑ Fx = 0 = − f_f − Frw cosθ (3.3)
∑ Fy = 0 = N − mg + Frw sinθ

Therefore

∑ Fx = f_f = − Frw cosθ (3.4)
∑ Fy = N = mg − Frw sinθ

Now, according to Equation 3.1, and assuming static analysis,

∑ Fx = − Frw cosθ = f_f ≤ µs N (3.5)
∑ Fy = N = mg − Frw sinθ

and substituting the formula for N from Equation 3.5,

− Frw cosθ ≤ µs (mg − Frw sinθ ) (3.6)



− Frw cosθ ≤ µs mg − µs Frw sinθ

Next, dividing through by cosθ,

− Frw ≤ (µs mg)/cosθ − µs Frw tanθ (3.7)

Frw (µs tanθ − 1) ≤ (µs mg)/cosθ

And finally

Frw ≤ (µs mg)/((µs tanθ − 1) cosθ)

Frw ≤ (µs mg)/(µs sinθ − cosθ) (3.8)

Therefore the TFS is defined (from Equation 3.8) as the maximum force Frw that can be applied without losing traction. From the above equation, the TFS depends on µs and θ, the angle at which the robot's arm interacts with the surface. Knowing this, these
are the 2 things that must be accounted for when applying a force to the envi-
ronment on the various surfaces. With most (if not all) robotic arms it is possible
to arrive at the angle of interaction through a knowledge of the joint positions
(through rotational encoders) and the kinematics of the robot manipulator. To
monitor the force the robot imparts on the environment, force sensors are rela-
tively inexpensive and are designed to experience force and output a voltage that
can be measured, calibrated, and monitored by a central operating system.
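Putting Equations 3.4-3.5 together, a supervisory check is simple: given the arm force and the angle of interaction, compute the friction force required and compare it against µs·N. A sketch (Python; the sign conventions follow Figure 3.7, and the function name is hypothetical):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def traction_ok(f_rw, theta_deg, mu_s, mass_kg):
    """Check the static condition of Equation 3.5: the friction needed
    to resist f_rw*cos(theta) must not exceed mu_s * N, where
    N = m*g - f_rw*sin(theta) (Figure 3.7 sign convention)."""
    theta = math.radians(theta_deg)
    normal = mass_kg * G - f_rw * math.sin(theta)
    required_friction = f_rw * math.cos(theta)
    return required_friction <= mu_s * max(normal, 0.0)
```

For example, for the 4.093 kg robot base on gravel (µs ≈ 0.57) at θ = 0, a 10 N push stays inside the friction limit while a 30 N push would exceed it.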
Based on the research conducted, the value of µs does not depend only
on the surface, but also the material interacting with the surface (tires). Unfortu-
nately, not many sources reported the value of the static friction coefficient between
studded robot tires and the 5 testing surfaces. As it was difficult to find informa-
tion regarding these coefficients of friction, 5 short experiments were conducted
to discover the static friction coefficients for these surfaces. To calculate the co-
efficients of friction for the 5 surfaces, the robotic base (the platform without the
cRIO, Omni, or sensors) was taken to the testing locations where the surface data
was collected and the wheels were locked to avoid rolling, a situation not unlike
an actual EOD situation as the wheels/treads of the robot should be locked to
prevent rolling during manipulation. The full robot setup with the cRIO, Omni,
and sensors was not needed for this experiment as (according to Equation 3.1) N
and f f are directly proportional to each other and do not affect the static friction
coefficient of any surface. The robot was then attached to a force sensor and the
value of the force imparted on the robot was recorded when the robot began to
slide on the surfaces. This force is nearly equivalent to the Frw described in 3.8,
requiring only simple translation from the force sensor interaction point to the
point at which the robot’s manipulator contacts the environment (Figure 3.7). It
should be noted that the robot was moved between runs, meaning the same area
of surface was never tested more than once. This was done with the intention of
making the analysis as broad as possible. The data from these tests is included
in Table 3.2. This table displays the friction force when the robot began to slip on
the surfaces and is labeled according to trial number and surface. The units are
in lbf.
Note the variability in the magnitude of force causing slip. Not only does
this force vary between surfaces, but within each surface there is also some vari-
ance in the TFS. While this variability is low for some surfaces (e.g. 9.6% differ-
ence between low and high for Dirt) it varies up to 20% between surfaces (Gravel
vs. Grass). This high variability presents an option for future work and is de-
scribed in Section 5.2.1.
Knowing the weight of the robot base (the normal force N) to be 9.024 lbf (a mass of 4.093 kg), the coefficients of friction could be calculated using Equation 3.1.
These are shown in Table 3.3. Finally the average values of µs were calculated by
taking the mean of each of these columns of data.
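The arithmetic behind Tables 3.3 and 3.4 is Equation 3.1 rearranged, µs = f_f / N, evaluated per trial and then averaged down each column (a Python sketch of the calculation; the function name is illustrative):

```python
def average_mu_s(slip_forces_lbf, weight_lbf=9.024):
    """Per-trial mu_s = f_f / N (Equation 3.1), then the column mean,
    as in Tables 3.3 and 3.4. Forces and weight are both in lbf, so
    the units cancel."""
    coefficients = [f / weight_lbf for f in slip_forces_lbf]
    return sum(coefficients) / len(coefficients)
```

Feeding in the Dirt column of Table 3.2 reproduces the average of roughly 0.84 shown in Table 3.4.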
The most interesting aspect of Table 3.4 is the fact that Grass apparently
has a coefficient of static friction greater than 1 (average of 1.24). This is most
likely due to the static friction involved between the grass and the studded tires.

Trial Dirt Gravel Grass Concrete Asphalt
1 7.37 4.35 11.96 7.66 8.38
2 7.30 4.98 11.88 7.18 7.24
3 7.64 5.51 11.54 7.01 6.80
4 7.76 5.50 11.80 6.78 7.72
5 7.89 4.63 11.40 7.12 9.07
6 7.47 5.31 10.46 7.30 9.75
7 8.08 5.52 10.46 7.17 8.70
8 7.35 4.48 10.67 7.15 7.86
9 7.79 5.97 10.94 7.08 8.17
10 7.87 5.37 11.55 7.28 8.08

Table 3.2: Friction Forces f f causing slip (TFS)

Trial Dirt Gravel Grass Concrete Asphalt


1 0.82 0.48 1.33 0.85 0.93
2 0.81 0.55 1.32 0.80 0.80
3 0.85 0.61 1.28 0.78 0.75
4 0.86 0.61 1.31 0.75 0.86
5 0.87 0.51 1.26 0.79 1.01
6 0.83 0.59 1.16 0.81 1.08
7 0.90 0.61 1.16 0.79 0.96
8 0.81 0.50 1.18 0.79 0.87
9 0.86 0.66 1.21 0.78 0.91
10 0.87 0.60 1.28 0.81 0.90

Table 3.3: µs for all surfaces and all trials

The tires of the robotic platform are designed to drive across many different sur-
faces with many different characteristics. It is possible that some ”adhesive” bond
was generated between the tires and this surface due to the tiny edges and bris-
tles present in almost all forms of grass. This means that a force greater than that
of the weight of the robot (the normal force) must be imparted in order to make
the robot slip on grass, which is very good news for most EOD robot operators.
Asphalt’s coefficient was the next highest, expected due to the relatively
jagged nature of most types of asphalt. This is most likely the reason (along
with the relatively cheap cost to produce) asphalt is used for most US roadways.
The ability of this surface to prevent tires from spinning makes for much safer
roads and a longer lifetime for car tires. The value of 0.9 was compared to a few
sources to determine the accuracy of the predicted µs . From [22], the coefficient
of static friction for rubber on dry asphalt is 0.9. The fact that this experiment was
conducted to collect more data regarding surface friction, and ended up arriving
at the accepted value, is a sign that this methodology is most likely correct in its

Dirt Gravel Grass Concrete Asphalt
µs 0.84 0.57 1.24 0.79 0.90

Table 3.4: Average µs for all surfaces

implementation.
Next highest was Dirt which had a µs of 0.84. This was also expected as
the dirt surfaces tested upon were all very rough to the touch and made for a
perfect surface to test this theory of traction monitoring. Second to lowest was
Concrete at 0.79. This came as a surprise as it was expected to be much higher due to the roughness of the surface. Again [22] was consulted and the accepted value of 0.6 was found. This produces an error of about 32% between the calculated and accepted values; however, as not all concrete surfaces are created equal, it is still
an acceptable value. The smoothness of this surface (compared to asphalt and
dirt) is the most likely cause for this generally accepted value of 0.6. As the robot
has rubber studded tires, it is not surprising that the value determined through
the experiment was larger than the accepted value.
Finally, lowest on the list was Gravel, with a µs of only 0.57. This means that the robot would only be able to apply a little over half of its own normal force to the environment (providing a θ of 0 and completely level ground), making manipulation of the environment (in the direction orthogonal to the robot's body) difficult
if not impossible. With positive angles of θ above 45◦ , the robot can apply (nearly)
as much force as is physically possible. This is because the increasing magnitude
of force in the +y direction (from Figure 3.7’s coordinate system) would augment
the normal force the robot exerts on the ground, increasing the static friction force
and keeping the robot in place. As long as the robot does not tip over it can lift
or manipulate the bomb environment without losing traction, which is the subject of other research currently taking place in the RIL.
If there is one thing to be gained from this analysis, it is that EOD robot
technicians should prefer to operate on grass or asphalt, as they can apply more
force to the environment without losing traction. It should also be noted, how-
ever, that these coefficients of static friction are only useful if the component of
the force is in the proper direction (i.e. a direction that does not affect the normal
force either positively or negatively). With θ less than 0◦ , Frw (and thus Fwr ) con-
tains components in the +y direction, which has the effect of canceling some of
the normal force. In reality the normal force simply gets distributed between the
back wheels and the robotic arm, but this reduces the load on the front wheels, de-
creasing the contact the robot’s wheels have with the ground, which decreases the
area for the friction force to interact with. As stated above, at values of θ greater
than 45°, the normal force is augmented by the y-component of Fwr, pushing the robot into the ground. Thus for angles between +45° and −90°, the x-component of Fwr is maximized, making these the angles where loss of traction is most likely to
occur.

3.4 Conclusions

The specific aim of this research was to be able to identify surfaces based
on the acoustic and physical vibrations produced when driving across them and
to relate those surfaces to some TFS the operator could impart on the environ-
ment. The classification of surface type was desired to be invariant to both the
speed of the robot and the length of the data collected. Upon collection of the
data, and after some slight modifications of the data organization, a NN was
trained to the desired accuracy and was deployed using Simulink. This classi-
fication method proved to be very successful in identifying the surfaces based
on their vibrations, reaching an accuracy of 97.8%. This accuracy means that if
new data is collected across any of the 5 trained surfaces, and an FFT is performed
(with 100 frequency ”bins”), the NN could classify the new data with near-perfect
accuracy. There were several misclassifications, but these were mostly between gravel and dirt and between concrete and NOTA. Without the NOTA classification the accuracy may have been even higher, but NOTA was retained because it was important for the NN to be able to classify a surface that had not been "seen" before.
The coefficients of friction for the 5 surfaces were calculated, with some
interesting results. Grass and asphalt appear to be the safest terrains to operate
on, but it must be noted that these results only apply to the constraints labeled
in Figure 3.7 (flat surface orthogonal to the environment). In addition, the angle
of interaction (θ) must be known to calculate the TFS. Using the µs values from
Table 3.4, and knowing the force being imparted on the environment through a
force sensor and the angle of interaction through kinematics, a warning could be
relayed to the operator regarding his proximity to the TFS, and could serve to
prevent the loss of a very expensive robot or the endangering of human life.

CHAPTER 4

APPLICATION TO INTRUSION DETECTION FOR SECURITY DOORS

4.1 Methods

As an additional application of the previously described research methods, data collection, and analysis, and to demonstrate the technical merit of this
thesis work, this vibration-based classification was found to have applications to
one of the NMT Senior Design teams. The Intrusion Detecting Security Door team
utilized the same data acquisition system as was used for the surface data collec-
tion to detect vibrations (both acoustic and physical) on a steel security door ana-
log. This door, shown in Figure 2.12, was designed to closely approximate doors
used to secure valuable or sensitive items, and included the ability to switch out
the test plate when necessary. These secure doors are often used to store parts
or equipment in remote locations, where testing is conducted on-site and trans-
portation of data logging or other equipment is not practical. The goal of this
research was to determine the tool type used to attack the door based on the vi-
brations produced, and was desired to be invariant to the pressure applied, the
bit type/size (for the drill and dremel), and the speed of the tool’s operation. It
must be noted that the same analysis methods (in the NN) were used for both
applications, and the Matlab code for the surface data collection was modified to
manage the security door data.
Four (4) tools were used to attack the door. These tools were:

• Dremel
• Drill
• Angle Grinder
• Sawzall

A very useful aspect of this application is the fact that the microphone
will be used to detect both the sound of the tool’s interaction with the steel at-
tack plate, and the sound of the tool itself. This is the reason the accelerometers
and microphone were used in conjunction. The accelerometers would record the

physical vibration of the tool interaction. This, in conjunction with the micro-
phone data, should provide the NN with a sort of reference point for classify-
ing the tool type. This is similar to the collected surface data, where the acous-
tic data emanates from the PGIP, and the physical interaction (accelerometers)
record data at the PGIP and at the interface between the robot and the ground
(through the tires). The similarity between these applications is one of the rea-
sons the same classification method can be used to identify both the surfaces and
the tool types.

4.1.1 Data Collection Practices

Data was collected using the cRIO 9022+9114 setup with the same ac-
celerometers and microphone used in the surface data collection. Because of the
51.2 kHz sample rate, frequencies up to 25.6 kHz could be detected, above the
upper range of human hearing. For the security door data collection, this was
very advantageous as the frequencies were much higher than in the surface data
collection. High frequency dataloggers are expensive, however, so the imple-
mentation of this system in a situation that would require secure doors may not
be feasible. However, having a centralized datalogger that collects from many
different sensors could very easily be the best method of deployment in almost
any situation.
The Type 40PH microphone was placed 18 inches from the back plate of
the door and 31 inches from the base (ground) attached to a tripod. This put the
microphone roughly in the center of the door, opposite the attack plate. As the
40PH is an array microphone, it is intended to be pointed directly at the source of
the sound being measured, and can be placed in an array of microphones to pro-
vide the option of determining the location of a particular sound source. How-
ever, as long as one of these microphones is not placed too far away from the
source, and is pointed directly at the excitation, it can still be used for general
acoustic data collection.
The accelerometers were placed at the top edge and the middle of the
back plate (Figure 4.1), though an analysis (overlay) of the vibration data from
these sensors demonstrated that sensor placement did not noticeably affect the
frequencies of the vibration data, as shown in Figure 4.2. While the amplitude
is quite obviously different (i.e. the amplitude of the vibrations is lower for the
green Edge Accelerometer data), the frequency peaks occur in the same place.
This is an important discovery as these security doors are often placed in tight
spaces, leaving little room for attaching sensors (and the cables inherent with
them). The fact that the sensor placement did not have a large effect on the in-
coming data means that the accelerometer can in fact be placed anywhere on the
door, leaving the center of the door clear (for opening).
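The overlay comparison described above amounts to checking that the dominant spectral peaks land at the same frequencies for both sensor placements, even when the amplitudes differ. A sketch (Python illustration, not the actual analysis code; the function name is hypothetical):

```python
import numpy as np

def dominant_peaks(signal, fs, n=3):
    """Return the n frequencies (Hz) with the largest FFT magnitude,
    so peak locations from two sensor placements can be compared."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argsort(spectrum)[-n:][::-1]]
```

If the edge and middle accelerometer records return (nearly) the same peak frequencies, placement is effectively interchangeable for classification purposes.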
Data was collected for 10 seconds, which was determined to be long enough
to "see" any characteristic frequencies for each tool type. Three attack plates were used, allowing for a total of 132 (44 per sensor) data sets to be taken. The list of collected data sets is in Table 4.1.

[Photograph showing the two accelerometer placements on the door's back plate.]

Figure 4.1: Placement of the accelerometers for testing
These data sets comprised the sum total of all the tool types, attachments,
speeds, and pressures tested on the attack door. It should be noted that, with the
exception of the Sawzall and Grinder, there were different numbers of data sets
taken for each tool type. According to intuition, however, the number of data
sets should not severely limit the training/testing ability of the NN. While more
data is better, the NN should (conceptually) still be able to determine weights
and biases for the different surfaces, regardless of these different amounts of data
provided for training and testing.
One advantage in using the cRIO system for data collection was the format of the data output. Because the format was exactly the same for the surface data and the security door data, the security door data could be converted to CSV format from TDMS,
allowing the same Matlab code (with slight modifications due to the unequal
numbers of data sets per tool) to be used to save the data, pass it to the FFT and
FFT re-sampling algorithms, and finally to create a NN for classifying the tool
type. These similarities in data types are encouraging and have applications to
many different forms of classification by NNs. While TDMS is not the simplest
format to manipulate (without the use of NI software like DIAdem), it can be
modified to interact with Matlab, Excel, or other data processing software.
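Once a TDMS file has been converted to CSV, pulling the channels into arrays is straightforward in any environment; the sketch below is Python, and the column layout (time, microphone, edge accelerometer, middle accelerometer) is an assumption for illustration:

```python
import numpy as np

def load_trial(csv_path):
    """Load one converted (TDMS -> CSV) trial into per-channel arrays.
    Assumed columns: time, microphone, edge accel., middle accel."""
    data = np.loadtxt(csv_path, delimiter=",", skiprows=1)
    return data[:, 1], data[:, 2], data[:, 3]
```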

4.2 Results

The data were passed as a whole to the NN training tool and a maximum accuracy of 96.3% was reached after multiple trainings were performed.

[Plots: raw accelerometer voltage vs. time, 0-10 s (top), and FFT amplitude vs. frequency, 0-25000 Hz (bottom); the frequency peaks of the two placements coincide despite the differing amplitudes.]

Figure 4.2: Example accelerometer data (Green is Edge, Red is Middle)

The overall confusion matrix for the trained NN is shown in Figure 4.3. This was the maximum accuracy reached after training multiple (>50) times and was selected as the NN to be used for the classification of attack tool type. Deploying
this NN to Simulink produced the exact same (overall) diagram as Figure 2.17,
with the differences appearing under the ”mask” of the NN (i.e. the weights and
biases). For reference, the overall confusion matrix (including that of the training,
validation, and testing) is included in Figure 4.3. This figure is included only for
reference purposes, and does not demonstrate the actual effectiveness of the NN
in classifying the tool type of the testing data.
This overall accuracy of 96.3% demonstrates the effectiveness of this NN
in predicting the tool type of the training data. To determine the true accuracy of
this NN, it was deployed to Simulink. Upon deployment, the testing data, which
consisted of 24 trials (6 trials from each tool type), was passed to the Simulink
constant block (shown in Figure 2.17) and fed into the variable selector of the NN.
Upon running this simulation, the Simulink output (Figure 4.4) demonstrated the classification results.

Training Confusion Matrix (rows: output class; columns: target class):
    1:  19   0   0   0   (100%)
    2:   0  29   0   0   (100%)
    3:   0   1  12   0   (92.3%)
    4:   0   0   1  14   (93.3%)
  Column accuracy: 100%, 96.7%, 92.3%, 100%; overall: 97.4%

Validation Confusion Matrix:
    1:   7   0   0   0   (100%)
    2:   0   5   0   0   (100%)
    3:   0   0   2   0   (100%)
    4:   0   0   0   2   (100%)
  Column accuracy: 100%, 100%, 100%, 100%; overall: 100%

Test Confusion Matrix:
    1:   4   0   0   0   (100%)
    2:   0   7   0   0   (100%)
    3:   0   0   2   0   (100%)
    4:   0   1   1   1   (33.3%)
  Column accuracy: 100%, 87.5%, 66.7%, 100%; overall: 87.5%

All Confusion Matrix:
    1:  30   0   0   0   (100%)
    2:   0  41   0   0   (100%)
    3:   0   1  16   0   (94.1%)
    4:   0   1   2  17   (85.0%)
  Column accuracy: 100%, 95.3%, 88.9%, 100%; overall: 96.3%

Figure 4.3: Confusion Matrix for trained Security Door NN

4.2.1 Classification Accuracy

As before, while Figure 4.4 is included for completeness, it does very little
in the way of explanation of the results of this classification test. To explain the
results, a confusion matrix was created to show the overall accuracy of this NN in
classifying the testing data. Figure 4.5 shows the accuracy of the NN to be 83.3%. As it was expected that the tested NN would perform as
well as the trained NN, this result is rather counter-intuitive. According to the
figure, the NN had the most trouble identifying the Drill class, as it misclassified
2 of the drill runs as Grinder runs (grid 3,1). The drill was also misclassified as
a Sawzall, and the Grinder was once misclassified as the Drill (grids 4,2 and 2,3
respectively).
While the trained NN had an accuracy of 96.3%, the tested classification

[Plot: NN classification output (0-1) vs. column number for the 24 test trials; legend: Dremel, Drill, Grinder, Sawzall.]

Figure 4.4: Simulink Output for security door classification

Attack Tool Confusion Matrix (rows: predicted tool; columns: target tool):

            Dremel  Drill  Grinder  Sawzall  | Row acc.
  Dremel       6      0       0        0     |  100%
  Drill        0      5       2        1     |  62.5%
  Grinder      0      1       4        0     |  80.0%
  Sawzall      0      0       0        5     |  100%
  Col. acc.  100%   83.3%   66.7%    83.3%   |  83.3%

Figure 4.5: Confusion matrix for security door NN

[Plot: NN classification output (0-1) vs. column number for the 48 segmented test trials; legend: Dremel, Drill, Grinder, Sawzall.]

Figure 4.6: Simulink Output for segmented security door classification

fell well short of the desired 90% accuracy, reaching only 83.3%. Because of this, an alternate method of NN training/testing was attempted. The original training data was split into two 4-second intervals (from 1-5 s and 5-9 s) and an FFT was performed on these 4-second data sets. This resulted in 216 trials for training and 48 for testing (12 trials × 4 tools). After creating a new NN and training multiple times, the
highest accuracy seen was 91.2%. This accuracy is much lower than that of the
surface classification, and even lower than the first security door classification,
but it was deemed to be good enough for testing and was deployed in Simulink.
The output of the testing data (through the Simulink NN) is shown in Figure 4.6.
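The segmentation step can be sketched as follows (a Python illustration; the 1-5 s and 5-9 s windows match the split described above, and the function name is hypothetical):

```python
def segment(signal, fs, windows=((1.0, 5.0), (5.0, 9.0))):
    """Split one 10 s record into fixed windows (given in seconds);
    each window is then sent through the FFT step as its own trial."""
    return [signal[int(a * fs):int(b * fs)] for a, b in windows]
```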
Once again it is difficult to gain a full understanding of the accuracy of
this NN in classifying the new segmented data. Therefore the confusion matrix
is shown in Figure 4.7. This confusion matrix shows that the segmented-data-
trained NN classified the data with only 75% accuracy, misclassifying a full
quarter of the testing data. This network did not perform very well for the
purposes of identifying the tool type. It is interesting to note that splitting the
security door data similarly to how the surface data was split reduced the accu-
racy by almost 10% (83.3% to 75%). The most misclassified tool for the segmented
testing was the Grinder, with 5 of the 12 (41.7%) sets being misclassified: 4 as
Drill and 1 as Sawzall. There are several reasons for these misclassifications,
discussed in the Conclusions below. The correct classifications for the segmented
data analysis are included in Table 4.1.

                                    Sensor
Testing Numbers   Tool Type   Mic.       Edge Accel.   Mid Accel.
1-36              Dremel      1-12       13-24         25-36
37-84             Drill       37-52      53-68         69-84
85-109            Grinder     85-93      94-101        102-109
110-133           Sawzall     110-117    118-125       126-133

Table 4.1: Correct classification patterns for tool type NN

Attack Tool Confusion Matrix (counts, with percentage of all 48 test sets;
columns are the target tool)

Predicted Tool   Dremel       Drill        Grinder      Sawzall      Precision
Dremel           11 (22.9%)   1 (2.1%)     0 (0.0%)     0 (0.0%)     91.7%
Drill            1 (2.1%)     10 (20.8%)   4 (8.3%)     2 (4.2%)     58.8%
Grinder          0 (0.0%)     0 (0.0%)     7 (14.6%)    2 (4.2%)     77.8%
Sawzall          0 (0.0%)     1 (2.1%)     1 (2.1%)     8 (16.7%)    80.0%
Per-target       91.7%        83.3%        58.3%        66.7%        75.0% overall

Figure 4.7: Confusion Matrix of Segmented security door NN
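As a check, the figures reported in the matrix can be reproduced directly from its counts. The snippet below is an illustrative recalculation (rows are predicted tools, columns are target tools), not part of the thesis toolchain.

```python
tools = ["Dremel", "Drill", "Grinder", "Sawzall"]
# Counts from Figure 4.7 (rows: predicted tool, columns: target tool).
cm = [
    [11, 1, 0, 0],
    [1, 10, 4, 2],
    [0, 0, 7, 2],
    [0, 1, 1, 8],
]
total = sum(sum(row) for row in cm)
overall = 100.0 * sum(cm[i][i] for i in range(len(tools))) / total
# Per-target-tool accuracy (column-wise recall).
recall = {tools[j]: 100.0 * cm[j][j] / sum(cm[i][j] for i in range(len(tools)))
          for j in range(len(tools))}
print(round(overall, 1))            # 75.0
print(round(recall["Grinder"], 1))  # 58.3
```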

4.2.2 Testing Time

Similar to the surface data analysis, the security door process did not take
long to classify the data. From collection to classification took less than 2
minutes, with the data manipulation taking the longest amount of time. Testing
setup took much longer, however, as it was necessary to label the attack plates
in segments denoting the tool type.

4.3 Conclusions

It is apparent that too many things were varied for this classification. Be-
cause the classification was hoped to be invariant to pressure, tool bit type, and
power (speed) of the tool, it was designed to be robust with respect to many
variables. The different tools also produced different amounts of damage to the
attack plate, which is noteworthy as removing material from the plate may have
affected the vibrations of the later tests. The angle grinder caused the most
damage, creating a 1/8” x 1.5” gash in the attack plate’s surface. While a hole
was never cut through the plate (which would have changed the vibrations sig-
nificantly), the damage to the plates may have been the cause of the misclassifi-
cations. The dremel’s grinding disc also caused some damage to the plates, but
as the grinder damage was much greater, it is doubtful the dremel damage af-
fected the results. Another possibility for the poor classification is the fact that
the location of attack was varied across the trials. The plate was never attacked in
the same location twice, further increasing variability for this classification.
While neither of these networks performed as well as the traction control
network (83.3% and 75% vs. 97.8%), both were able to classify the tool type
much better than simply ”guessing,” which would have resulted in 25% accuracy
(1 random choice in 4). Based on the results of the NN testing, segmenting
the data into 4-second sets did not improve the classification as expected (as
with the surface NN), but instead reduced the accuracy by almost 10%. With a
maximum classification accuracy of 83.3%, the security door NN is still able to
determine the correct tool type more than 8 times out of 10, a rather impressive
figure given the number of variables inherent in these experimental methods
(pressure, speed, power).

CHAPTER 5

DISCUSSION AND FUTURE WORK

5.1 Discussion

In this thesis, frequency-domain characterization has been applied to two
situations where semi-autonomous behavior is necessary. Remote observers are
often provided with limited information regarding the surrounding environment
of their systems. To reduce cognitive load of the operator, system identification
techniques can be used to provide the user with critical feedback regarding the
status of these remote systems.

5.1.1 Traction Control Monitoring

With the arrival of semi-autonomous robots in the modern professions of
EOD and Search and Rescue, mobile robot traction control has become the sub-
ject of increased research. These robots are being used more and more in
environments where the operator has little to no knowledge of the surface type.
Especially in dark rooms or at night, the operator can drive onto a surface
that is unable to withstand certain amounts of force from the manipulator with-
out the robot losing traction. This force, designated the Threshold Force to Slip
(TFS), can be communicated to the user through selection of a surface type.
Each surface type can be related to an approximate coefficient of static friction,
which, along with the normal force, can be used to calculate the maximum fric-
tion force resisting sliding motion. Through knowledge of the robot’s kinematics,
especially of the manipulator used to interact with the bombs, the user can be
provided with a warning when the force he/she applies to the environment ap-
proaches the maximum friction force (or the negative of the TFS).
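The warning logic described in this paragraph can be sketched as follows. The µs values, function names, and the 80% warning margin are illustrative assumptions for this sketch, not the measured values of Table 3.3 or the thesis's implementation.

```python
# Placeholder static-friction coefficients -- illustrative only.
MU_S = {"Dirt": 0.70, "Gravel": 0.45}

def threshold_force_to_slip(surface, normal_force_n, mu_table=MU_S):
    """Maximum friction force resisting sliding: F_max = mu_s * N."""
    return mu_table[surface] * normal_force_n

def slip_warning(applied_force_n, surface, normal_force_n, margin=0.8):
    """Warn once the applied manipulator force reaches 80% of the TFS."""
    tfs = threshold_force_to_slip(surface, normal_force_n)
    return abs(applied_force_n) >= margin * tfs

# A 100 N normal load on the placeholder "Gravel" gives TFS = 45 N, so a
# 40 N manipulator force already triggers the warning (40 >= 0.8 * 45).
print(slip_warning(40.0, "Gravel", 100.0))  # True
```

In a deployed system the NN's surface classification would supply `surface`, and the robot's kinematics would supply the applied and normal forces.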
Vibration-based surface identification has been researched to an extent
[1, 2, 3, 7, 11, 12, 14, 15], but the amount of research with respect to a combi-
nation of acoustic and physical vibration for this type of classification is still in
relative infancy. The method of collecting surface data using accelerometers and
a microphone was selected to prove the utility of this vibration data gathering
in identifying a surface type through the application of FFT analysis. Acoustic
and physical vibration data was gathered on 5 surfaces (Dirt, Grass, Gravel, As-
phalt, and Concrete) to construct a large array of data to be used as surface ”base-
lines.” These baselines were put through a process that performed FFT analysis

and used these FFTs to create a Neural Network. This NN was trained and re-
trained to achieve peak accuracy using Matlab’s Neural Network Toolbox. After
conducting multiple runs to determine the best NN, a NN with an accuracy of
97.8% was selected for the classification trial. This NN had 500 neurons, and each
input consisted of 100 data points containing the binned frequency data (51200
points/100 bins). Upon deployment of this system to Simulink, the testing data
was applied to the NN and the classification results were determined (Figure 3.6).
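The binning step described above (51200 spectral points averaged into 100 NN inputs of 512 points each) might look roughly like the following. Binning was actually performed in Matlab, so this Python helper is an illustrative reconstruction; the function name is an assumption.

```python
def bin_spectrum(magnitudes, n_bins=100):
    """Average an FFT magnitude spectrum into n_bins equal-width bins."""
    width = len(magnitudes) // n_bins
    return [sum(magnitudes[k * width:(k + 1) * width]) / width
            for k in range(n_bins)]

# 51200 spectral points -> 100 features, each the mean of 512 points.
features = bin_spectrum([1.0] * 51200)
print(len(features))  # 100
```

Averaging into wide bins both shrinks the NN input size and smooths out run-to-run variation in the exact peak locations.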
This NN classified the surface type with 97.8% accuracy, confusing Gravel
with Dirt once and classifying a single Concrete run as the NOTA class. The ac-
curacy of this classification is impressive, especially given the variability in data
collection with respect to speed, sensor, and length of data collection (7 s and
1 s). With the relative simplicity of the setup, training, and testing of this NN,
the broad applications of this thesis research are truly revealed (see Future Work:
Traction Control Monitoring). It should be noted that these constraints were
weighted equally in the testing parameters, meaning there was no priority re-
garding which constraints were more important. Due to the high classification
accuracy with accelerometers placed on the probe and on the body of the robot,
it appears that the placement of the accelerometer does not affect the accuracy
of this system. Therefore, for a field-deployable system it would be better for the
accelerometer to be placed on the body of the robot, making it much easier to
deploy (vs. the probe, which hangs off of the robot and would be difficult to
transport undamaged).

5.1.2 Attack Tool Type Identification

In addition to this method for surface identification, a method for identify-
ing the type of tool used to attack a security door was needed for the New Mexico
Tech Intrusion Detecting Security Door Team. Security doors are used to protect
storage containers, especially those used in remote locations for storing sensitive
equipment (or other valuable items). The ability to identify an attacking tool is
important for security personnel to determine the type of security response
necessary. One of the advantages of vibration-based identification is that the
vibrations produced when these situations arise (i.e. when driving across a
surface and when attacking a steel door with a tool) can be analyzed by
performing the FFT.
This classification method was designed to be invariant to tool bit type,
speed, and magnitude of pressure applied. This would make the classification
robust to many different scenarios, and it was hoped that the implementation of
this system would meet the desired specifications. One of the advantages of this
application is that the same data acquisition equipment and analysis methods
were used to probe the validity of both classification problems (surface vs. tool
type). Data was collected from four (4) tools attacking the security door analog.
These tools (Dremel, Drill, Angle Grinder, Sawzall) produced noticeably different
audible sounds when attacking. It was believed, therefore, that the classification
method might show much promise for tool type identification. The NN training

tool produced a network that predicted the tool type with 96.3% accuracy (Fig-
ure 4.3). This network was deployed to Simulink, and the testing data was passed
to the network for classification. This produced an overall accuracy of 83.3%,
much less than the desired accuracy of 90%. Therefore the data were split in
much the same manner as the surface data (each set split into two 4 s segments)
and the NN was re-trained. This new NN had a trained accuracy of 91.2%; how-
ever, when the deployed Simulink network was tested with the new testing data,
it predicted the tool type with a mere 75% accuracy.
There are many possible reasons for the poor performance of these NNs in
classifying tool type. One such reason is the possibility that too many variables
were present in the data gathering for this application. For the surface classifi-
cation, there were only 3 variables the NN was hoped to be robust to: speed,
length of data collection, and sensor type. The security door data collection in-
cluded these and 3 more: tool bit type, tool speed, and pressure. This required
the classification NN to be robust to 6 different variables. It therefore appears
that these 6 variables were too much for the NN to handle, resulting in the large
decrease in accuracy (83.3% to 75%).
From the large difference between the accuracies of these two networks,
and upon inspection of Figures 4.5 and 4.7, it is obvious that changing the length
of the data collection made it vastly more difficult to classify the Grinder and the
Sawzall data (Figure 4.7); the 9 misclassifications among the 24 Grinder and
Sawzall test sets account for nearly 20% (18.75%) of the 48 total. This variance
with respect to the length of data collection means that the NN may have to
classify based on the full length of the data sets in order to achieve the necessary
accuracy. In short, it is evident that the length of data collection severely
impacted the accuracy of these classification algorithms.
It is also possible that the low accuracy (< 90%) of both the full-length
and segmented data collection NNs was due to the changing location of attack
on the door. No two attacks occurred in the same place on the attack plates, lead-
ing to different excitation parameters for each test. This additional variability in
testing practices may have added to the inaccuracy of the tool-type classification.
Attacking the plates in a different place each time may have produced unknown
vibrational changes between runs. In essence, attacking the door in the middle of
the plate vs. the edge changed the standing wave conditions of the plate, result-
ing in different vibrational characteristics. Additionally, the dimension of vari-
ability possibly causing the most error is the sensor type. The microphone is
not only collecting the sound of the tool interacting with the door (as the probe
interacted with the ground for the surface identification); it is also recording the
sound produced by the tool itself. To identify which sensor type is causing the
most error, the NN could be re-trained with only one type of sensor data. This
could be researched in future work.

5.2 Future Work

5.2.1 Traction Control Monitoring

Based on previous work done in various types of classification, NNs may
be the best ”black box” method for identifying the important characteristics of
various systems (e.g. surface or tool type). While the time-domain data is vastly
different between these two applications, and even between different runs of the
same application, the FFT allows this time-domain data to be converted to the
frequency domain. While previous authors have produced similar results using
different methods (PCA, Cross-Correlation, etc. [23]), and some have even used
NNs for different classifications [1, 2, 24, 25, 26], further research into the
application of NNs to system identification will serve to prove the utility of
these tools for identification and characterization of different classes (of surfaces
or tools). Specifically for traction control, research has been done with respect to
the traction of wheeled robots [27, 28, 29, 30]. However, increasing the amount of
work done in the field of traction monitoring for these mobile robots will assist
in furthering the usefulness of this classification system.
Additionally, new surfaces can be added to the ”baseline” database by
collecting 10 data sets of 15 seconds each and using the provided FFT algorithm
(Appendix B.1). Surface frequency data can then be passed to the NN and in-
cluded as a classification. Provided adequate training is performed using the
Matlab Neural Network Toolbox, and as long as the Simulink model is deployed
in the same fashion as this section of thesis research, this system could be de-
ployed quickly and easily. The possibility also exists that this Simulink model
could be converted to a LabView program, which, combined with the already
extant programming on the cRIO, could be used to relay a visual representation
of the selected surface (e.g. an LED array or other display). This would make
the system completely deployable to the cRIO and would make the classification
phase much simpler and more efficient.
Finally, to quantitatively test how much better the classification system
would be with solely an accelerometer or microphone, the NN could be trained
to recognize patterns from only one of these data types and re-tested. Addition-
ally it was noticed in the surface data collection that 2 frequencies were seen in
each and every collected data set (for both microphone and accelerometer data).
These frequencies were most likely the vibrational characteristics of the platform
(accelerometer) and the sound produced by the speed controller while driving
(microphone). Another possible avenue for future work would include the addi-
tion of Adaptive Noise Cancellation to this analysis. This would have the effect
of filtering out these frequencies unique to the robotic testing platform, which
would make the collection and classification algorithm applicable to many dif-
ferent mobile robots.
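One common form of Adaptive Noise Cancellation is the LMS adaptive filter, which could be used to remove the platform's fixed frequencies as suggested above. The sketch below is illustrative only: it assumes a separate reference measurement of the platform noise is available, and the tap count and step size are arbitrary demonstration values.

```python
import math

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """LMS adaptive noise canceller: the filter learns the part of
    `primary` that is correlated with `reference` and subtracts it,
    leaving the error signal as the cleaned output."""
    w = [0.0] * n_taps
    cleaned = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))   # noise estimate
        e = primary[n] - y                         # cleaned sample
        cleaned.append(e)
        w = [wk + 2.0 * mu * e * xk for wk, xk in zip(w, x)]
    return cleaned

# Pure tonal "platform noise": the canceller should drive it toward zero.
noise = [math.sin(0.3 * n) for n in range(4000)]
out = lms_cancel(noise, noise)
early = sum(v * v for v in out[:200])
late = sum(v * v for v in out[-200:])
print(late < 0.01 * early)  # True
```

On the robot, `primary` would be the accelerometer or microphone signal and `reference` a sensor dominated by platform vibration or speed-controller noise, so the surface-induced content survives in the output.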
Several questions arise from these results. First of all, does it matter what
surface the robot is traversing? The answer is provided upon inspection of Table
3.2. With variations in TFS from 10-20%, knowledge of the actual surface type is
essential to preventing loss of traction. If, for example, the robot was on a Gravel
surface but the camera feedback to the operator appeared to show a Dirt surface,
the actual TFS would be much lower than the presumed TFS. The operator would
only be able to apply around half the magnitude of force while resting on Gravel
as on Dirt. This makes the need for surface identification clear. It should be noted,
however, that the variability of the values of µs from Table 3.3 was produced
only by measuring at different places on the same patches of land. The variability
between one field of grass or dirt or gravel and another may be as great as that
between one surface type and another. This presents obvious room for future
work in the expansion of the data collection to include multiple locations for each
surface type (i.e. different dirt surface locations).
Another question which arises with respect to this analysis is whether 98%
accuracy is good enough. After all, with an expensive robot and lives on the line,
the margin for error should be as small as possible. The answer to this is provided
in Figure 3.3. As can be seen, and as was explained in the chapter, the NN makes a
”best guess” with respect to the classification of surface type (and tool type, seen
in Figure 4.4), meaning there is some uncertainty inherent in the classification.
Every classification decision the NN made included other possibilities for the
classification. These are denoted by the points of data near the bottom of the
graph’s y-axis with values less (most times MUCH less) than 1. This means that
the NN has the ability to choose more than one surface if it is uncertain enough.
Given the large variability in the coefficients of static friction between the surfaces
(Table 3.3), if the NN classifies the surface as Grass but it is actually on Gravel,
the TFS is much lower than what the classification would suggest. Therefore an
algorithm could be designed which makes a conservative classification of surface
type based on the difference between the certainty of the correct classification
and the next most likely surface type. For example the operator drives across an
unknown surface that is classified by the NN as being ”most likely” Grass with
the next most likely surface being classified as Gravel. In this situation the system
could select Gravel as the possible surface, making a conservative classification
to prevent loss of traction.
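The conservative selection rule proposed above can be sketched as follows. The µs placeholders and the 0.2 certainty margin are illustrative assumptions, not values from the thesis.

```python
# Placeholder friction coefficients -- illustrative, not Table 3.3 values.
MU_S = {"Dirt": 0.70, "Grass": 0.60, "Gravel": 0.45}

def conservative_surface(scores, mu_table=MU_S, margin=0.2):
    """If the top two NN outputs are close, report whichever of the two
    surfaces has the lower coefficient of static friction (lower TFS)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    best, runner_up = ranked[0], ranked[1]
    if scores[best] - scores[runner_up] < margin:
        return min((best, runner_up), key=lambda s: mu_table[s])
    return best

# "Most likely Grass, next most likely Gravel" -> report Gravel.
print(conservative_surface({"Grass": 0.55, "Gravel": 0.45, "Dirt": 0.05}))
# Gravel
```

Erring toward the lower-friction surface means the TFS warning fires early rather than late when the NN is uncertain.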

5.2.2 Attack Tool Identification

One of the main differences between these two applications, other than the
obvious, is the fact that there was no NOTA class used in the security door testing.
NOTA data for the security door would have most likely consisted of collecting
data while carrying the door in a vehicle. The road noise and vibrations caused
by the road would have provided some context for the rest of the data gathering,
as these doors are most often installed on armored trucks. However, as the at-
tack tool testing took place in a laboratory environment, where extraneous noise
and vibration are minimized, this NOTA data would have been very different
in composition. Even without the NOTA classification, it is extremely likely that
any attacking tool would have a significantly different signature from any of the

NOTA classification signatures. However, the addition of a NOTA class to this
analysis provides some room for future work. It should be noted that the original
request from SNL was for the system to classify when an attack was occurring.
This research has taken it one step further in its classification of the nature of the
attack (tool type). As NOTA data (which would most likely have been collected
in a benign or non-attack state) was not collected for this application, it cannot be
said for certain, but intuition indicates that detection of an attack would be fairly
trivial. This, as stated before, lends itself to future work for this application.
Classification of tool type could be further researched with additional data
collection and additional NN training. It is possible that enough data sets could
provide this classification application with sufficient accuracy to be deployed to
an actual secure door system. The tool bit type could also be modified to include
many different types of tools and, as long as rigorous scientific practices are main-
tained (separation of the data with respect to bit type), this classification may be
able to distinguish between different tool bit types as well. The addition of more
tools to the database could eventually lead to an easily-deployable system for in
situ identification of tool type.
Performing the NN construction through other means may lead to more
accurate classifications of surface type and attack tool type. Other methods of
constructing NNs are listed in Chapter 10 of the book written by Shiffman [31].
These methods involve mathematical manipulation of the data through a selec-
tion of weights and biases, similar to Matlab’s NN toolbox. Using software (or
pre-made applets) similar to Matlab’s NN toolbox appears to be the best way to
construct a NN for classification purposes; however, given enough coding expe-
rience, this could most likely be accomplished with custom code. Matlab’s NN
toolbox was used for ease of training, testing, and implementation of NNs. This
approach also allowed multiple NNs to be trained very quickly and with varying
degrees of accuracy.

CHAPTER A

DATA SHEETS

A.1 ADXL345

Evaluation Board User Guide


UG-065
One Technology Way • P.O. Box 9106 • Norwood, MA 02062-9106, U.S.A. • Tel: 781.329.4700 • Fax: 781.461.3113 • www.analog.com

iMEMS ADXL345/ADXL346 Inertial Sensor Datalogger and Development Board

FEATURES
Ultralow power ADXL345/ADXL346 accelerometer
Inertial sensor development board
Datalogs onto MicroSD card
Fully programmable via serial interface; firmware examples
provided
Battery-powered for portable applications

REQUIREMENTS
2 AAA batteries
MicroSD card and card reader (for datalogging)
Computer with serial port (for programming)

Figure 1. ADXL345 Inertial Sensor Development Board

GENERAL DESCRIPTION
It is often a timesaver in hardware development to make progress on the firmware and the hardware simultaneously. The challenge is that it proves difficult to develop firmware before the hardware exists. The iMEMS® ADXL345/ADXL346 development board is an easy-to-use tool that facilitates prototyping by providing a platform that can be duplicated in the final application. Additionally, the development board can be configured as a datalogger and can be used to gather data for refining algorithms, tuning thresholds, and generally familiarizing oneself with accelerometer data.

Two AAA batteries power the development board, and thus it integrates seamlessly into portable applications. Communications and processing are done by an ARM7-based ADuC7024 microcontroller, and the interface provided is fully reprogrammable. Moreover, all ADuC7024 pins are broken out into headers to facilitate design of compatible expansion boards. Data is logged onto a MicroSD memory card, providing essentially unlimited memory capacity and operating system versatility. Data is stored in a text file; therefore, there is no need to install any software to operate the board or read data. Software is provided to assist with programming the board.

PLEASE SEE THE LAST PAGE FOR AN IMPORTANT WARNING AND LEGAL TERMS AND CONDITIONS. Rev. A | Page 1 of 12

OVERVIEW
The ADXL345/ADXL346 inertial sensor development board has the following features:
• A 2-layer printed circuit board (PCB), 1.20 inches × 2.30 inches form factor
• A two AAA battery power supply
• A 4-pin UART header to connect to an RS-232 interface cable
• Reset/download push buttons
• Power indicator/general-purpose LEDs
• Access to microcontroller I/Os from the external header
• Demonstration firmware logs 100 Hz acceleration data

FEATURES
Power Supply
A pair of AAA batteries powers the board, and the battery holder is located on the back of the board. An on/off switch on the lower left of the front of the board controls power to it. The battery voltage is not regulated but is decoupled with a 10 μF capacitor globally and an additional 1 μF capacitor at the device supply pins to ground.

RS-232 Interface
The ADuC7024 (UC1) P1.1 and P1.0 lines are connected to the RS-232 interface cable via the connector (UART). The interface cable generates the required level shifting to allow direct connection to a PC serial port. Ensure that the supplied cable is connected to the board correctly; that is, VDD is connected to VDD and GND is connected to GND.

RESET/PROG Push Buttons
A RESET push button is provided to allow the user to manually reset the part. When the RESET button is inserted, the RST pin of the ADuC7024 is pulled to GND. Because the RST pin is Schmitt-triggered internally, there is no need to use an external Schmitt trigger on this pin.
To enter serial download mode, the user must hold the P0.0/BM pin low while reset is toggled. On the development board, serial download mode can be easily initiated by holding down the serial download push button (PROG) while inserting and releasing the reset button (RESET), as illustrated in Figure 2.

Power Indicator/General-Purpose LEDs
Two general-purpose LEDs are available on the board. A red LED (LED1) is connected to P4.5 of the ADuC7024, and a green LED (LED2) is connected to P4.4. Both LEDs can be repurposed via firmware.

Breakout Header
Many of the ADuC7024 pins are connected to headers on either side of the board. The headers come unpopulated but can be populated using standard 0.1 inch header pins. The thin form factor of the top of the board allows the design of an expansion board to connect above the development board, with the header pins providing both electrical and physical connections.

Firmware
Sample firmware is provided on the ADXL345 product page under the Development Board heading. The Firmware link downloads a Keil project that implements the 100 Hz datalogging firmware. This project can be modified as needed.

[Diagram: button sequence (A) RESET and PROG released; (B) push PROG; (C) push RESET; (D) release RESET; (E) release PROG]

Figure 2. Entering Serial Download Mode to Reprogram the Board


USING THE BOARD

GETTING STARTED
The development board comes preprogrammed as a datalogger at a 100 Hz datarate. To log data, do the following:
1. Insert two AAA batteries into the battery holder.
2. Insert the MicroSD card into the slot. The card should be formatted with a FAT32 file system; most MicroSD cards come this way.
3. Push the on/off switch to the on position to power up the board. The red LED turns on, and the green LED blinks to indicate that the board is logging data.
4. When logging is completed, slide the on/off switch to the off position.
5. Remove the card from the slot and insert it into the card reader.
6. Insert the card reader into the USB port on your computer.

The acceleration log file is written to the path \XL345\DATA0000.TXT on the MicroSD card. The data in the text file consists of a set of comma-separated t, x, y, and z values, where t corresponds to time and x, y, and z correspond to the x-, y-, and z-axis acceleration data for each time point. Refer to the Appendix: Sample Output File for an example of a data file. Acceleration values are logged in LSB, where the nominal scale factor is 3.9 mg/LSB. To convert an acceleration value from LSB to mg, simply multiply by 3.9 (nominally, or measure the sensitivity of the part for a more accurate conversion).
To plot the logged data using Microsoft® Excel, download the Plotting Tool (XL345DB_DataPlotter.xls) from the ADXL345 product page (under the Development Board heading) and follow the instructions described in the file. Users are prompted to browse to their logged data file (DATA0000.TXT), the data is imported and plotted in a new workbook, and users are then prompted to save that workbook.

PROGRAMMING THE BOARD
The board can be repurposed with no programming required using the .hex files provided on the ADXL345 product page. The .hex files are uploaded onto the board using the ARMWSD program, which can be downloaded at www.analog.com/static/imported-files/eval_boards/ARMWSDv1.8.zip. Simply unzip the folder to a known location and open the ARMWSD.exe file to use the program. No installation is required.
To reprogram the board, use the cable provided with the board and follow these instructions:
1. Download the desired .hex file from the ADXL345 product page to a known location, or locate it on your machine.
2. Open ARMWSD.
3. Click Configure… (see Figure 3) and select the Parts tab, shown in Figure 4. Make sure the ADuC7024 is selected in the Select Part pull-down list (see Figure 4). Additionally, in the Comms tab, make sure the Baudrate is set to 115200, and the Serial Port is set to COM1, and then click OK.
4. In the ARMWSD window, click Browse… (see encircled in Figure 3) and navigate to the location of the .hex file to be loaded onto the board. Select the file and click Open.
5. Connect the programming cable to the serial port on the PC and to the 4-pin header near the on/off switch on the board, matching up the corresponding pins.
6. In the ARMWSD window, click Start. The Status frame then prompts users to Press Download and pulse Reset on Hardware. Follow the illustrations in Figure 2.
7. When download is complete, click the Reset button on the evaluation board. Users can now close the ARMWSD program.

Figure 3. ARMWSD Window
Figure 4. ARMWSD Configure Window: Parts Tab


APPENDIX: SAMPLE OUTPUT FILE


t,x,y,z
0,60,20,247
50,60,19,259
100,57,17,258
151,58,18,260
201,61,14,252
252,58,10,252
302,63,21,248
353,66,23,255
403,67,21,243
454,53,35,254
504,63,32,251
555,63,31,241
605,65,33,256
656,59,34,254
706,60,34,247
757,55,41,250
807,56,41,252
858,58,43,245
908,60,40,246
959,60,38,246
1009,66,37,249
1060,64,29,252
1110,69,36,251
1161,68,31,253
1211,66,47,233
1262,63,40,246
1312,59,36,246
1363,48,41,244
1413,49,41,248
1464,46,51,252
1514,52,39,264
1565,47,42,260
1576,47,43,254
1585,25,-5,263
1595,26,-2,263
1605,26,-5,257
1615,26,-4,257
1625,28,-3,258
1634,28,-1,261
1644,24,-2,263
1654,24,1,263
1664,25,0,261
1674,27,0,263
1683,24,-2,263
1693,22,-1,265
1703,23,-2,264
1713,22,0,265
1723,22,0,260
1732,22,-1,261
1742,23,0,258
1752,23,-1,259
1762,25,-1,256
1772,26,-1,256
1781,21,-2,257
1791,21,-4,256
1801,20,-4,259
1811,21,-2,260
1821,19,-5,260
1830,17,-4,258
1840,18,-3,260
1850,20,-5,260
1860,20,-2,260
1870,20,-2,261
1870,20,-2,264
1880,25,2,264
1890,23,2,262
1901,25,-1,260
1911,24,0,264
1922,27,1,263
1932,30,0,265
1942,30,2,265
1953,27,2,263
1963,28,1,263
1974,27,1,264
1984,29,0,261
1994,29,0,263
2005,27,0,261
2015,26,0,259
2026,28,-1,257
2036,26,-3,257
2046,27,-2,259
2057,23,-1,262
2067,24,-3,261


A.2 cRIO-9022


For user manuals and dimensional drawings, visit the product page resources tab on ni.com.

Last Revised: 2011-04-14 16:59:41.0

Real-Time Controller with 256 MB DRAM, 2 GB Storage


NI cRIO-9022

• Small and rugged real-time embedded controller
• Execution target for LabVIEW Real-Time applications
• Reliable and deterministic operation for stand-alone control, monitoring, and logging
• 533 MHz Freescale MPC8347 real-time processor
• Dual Ethernet ports for deterministic expansion I/O
• -20 to 55 °C operating temperature range
• RS232 serial port for connection to peripherals; dual 9 to 35 VDC supply inputs
• Hi-Speed USB host port for connection to USB flash and memory devices

Overview
The NI cRIO-9022 embedded real-time controller is part of the high-performance CompactRIO programmable automation controller (PAC) platform. It features an industrial 533
MHz Freescale MPC8347 real-time processor for deterministic, reliable real-time applications and contains 256 MB of DDR2 RAM and 2 GB of nonvolatile storage for holding
programs and logging data.


Requirements and Compatibility


OS Information: VxWorks
Driver Information: NI-RIO
Software Compatibility: LabVIEW, LabVIEW FPGA Module, LabVIEW Professional Development System, LabVIEW Real-Time Module


Application and Technology


System Configuration
The NI cRIO-9022 controller features an industrial 533 MHz Freescale MPC8347 real-time processor for deterministic and reliable real-time applications. This embedded controller
is designed for extreme ruggedness, reliability, and low power consumption with dual 9 to 35 VDC supply inputs that deliver isolated power to the CompactRIO chassis and a -20
to 55 °C operating temperature range. The cRIO-9022 accepts 9 to 35 VDC power supply inputs on power-up and 6 to 35 VDC power supply inputs during operation, so it can
function for long periods of time in remote applications using a battery or solar power. With the 10/100 Mbits/s and 10/100/1000 Mbits/s Ethernet and serial ports, you can
communicate via TCP/IP, UDP, Modbus/TCP, and serial protocols. The cRIO-9022 also features built-in Web (HTTP) and file (FTP) servers and a Hi-Speed USB host port to
which you can connect external USB-based storage media (flash drives and hard drives) for embedded logging applications requiring more data storage. In addition, the
cRIO-9022 incorporates a fault-tolerant file system that provides increased reliability for data logging. CompactRIO real-time controllers connect to any four- or eight-slot NI
cRIO-911x reconfigurable chassis. The embedded field-programmable gate array (FPGA) in the chassis controls each I/O module and passes data to the controller through a
local PCI bus using built-in communications functions.


Embedded Software

1/7 www.ni.com


Detailed Specifications
The following specifications are typical for the entire operating temperature range, –20 to 55 °C, unless otherwise noted.

Network

Network interface
Ethernet port 1: 10BaseT, 100BaseTX, and 1000BaseTX Ethernet
Ethernet port 2: 10BaseT and 100BaseTX Ethernet

Compatibility: IEEE 802.3

Communication rates
Ethernet port 1: 10 Mbps, 100 Mbps, and 1000 Mbps, auto-negotiated
Ethernet port 2: 10 Mbps, 100 Mbps, auto-negotiated

Maximum cabling distance: 100 m/segment

RS-232 DTE Serial Port

Baud rate: 300 to 230,400 bps
Data bits: 5, 6, 7, 8
Stop bits: 1, 1.5, 2
Parity: odd, even, mark, space, none
Flow control: RTS/CTS, XON/XOFF, DTR/DSR, none

USB Port

Maximum data rate: 480 Mb/s
Maximum current: 500 mA

Memory

Nonvolatile: 2 GB
DRAM: 256 MB

For information about the life span of the nonvolatile memory and about best practices for using nonvolatile memory, go to ni.com/info and enter the info code SSDBP.

Internal Real-Time Clock

Accuracy: 200 ppm; 35 ppm at 25 °C

Integrated Voltage Input Monitor

The integrated voltage input monitor underreports the voltage at the power connector by up to 400 mV because of voltage drops across internal circuits.

Power Requirements

Caution: You must use a National Electric Code (NEC) UL Listed Class 2 power supply with the cRIO-9022.

Recommended power supply: 55 W secondary, 35 VDC max
Power consumption with controller supplying power to eight CompactRIO modules: 35 W

Voltage requirement
On powerup: 9 to 35 V
After powerup: 6 to 35 V

Note The cRIO-9022 is guaranteed to power up when 9 V is applied to V and C. After powerup, it can operate on as little as 6 V.

Physical Characteristics

If you need to clean the controller, wipe it with a dry towel.


Screw-terminal wiring: 12–18 AWG copper conductor wire with 10 mm (0.39 in.) of insulation stripped from the end
Torque for screw terminals: 0.5 to 0.6 N · m (4.4 to 5.3 lb · in.)
Weight: approx. 609 g (21.5 oz)

Environmental

The cRIO-9022 is intended for indoor use only. For outdoor use, mount the CompactRIO system in a suitably rated enclosure.
Operating temperature (IEC 60068-2-1, IEC 60068-2-2): –20 to 55 °C
Storage temperature (IEC 60068-2-1, IEC 60068-2-2): –40 to 85 °C
Ingress protection: IP 40
Operating humidity (IEC 60068-2-56): 10 to 90% RH, noncondensing
Storage humidity (IEC 60068-2-56): 5 to 95% RH, noncondensing
Maximum altitude: 2,000 m
Pollution degree (IEC 60664): 2

Caution: For information about how mounting configuration can affect the accuracy of C Series modules, go to ni.com/info and enter the info code rdcriotemp.

Shock and Vibration

To meet these specifications for shock and vibration, you must panel mount or wall mount the CompactRIO system, affix ferrules to the ends of all terminal wires, install a strain
relief on the power cable, and install tie wraps on the Ethernet and power cables. You can order the NI 9979, a strain-relief kit for the power cable, from National Instruments. The
kit is NI part number 196939-01. For information about using the USB port in high shock and vibration environments, contact National Instruments.
Operating vibration
Random (IEC 60068-2-64): 5 grms, 10 to 500 Hz
Sinusoidal (IEC 60068-2-6): 5 g, 10 to 500 Hz

Operating shock (IEC 60068-2-27): 30 g, 11 ms half sine; 50 g, 3 ms half sine; 18 shocks at 6 orientations

Safety

Safety Voltages
Connect only voltages that are within these limits.
V-to-C: 35 V max, Measurement Category I

Measurement Category I is for measurements performed on circuits not directly connected to the electrical distribution system referred to as MAINS voltage. MAINS is a
hazardous live electrical supply system that powers equipment. This category is for measurements of voltages from specially protected secondary circuits. Such voltage
measurements include signal levels, special equipment, limited-energy parts of equipment, circuits powered by regulated low-voltage sources, and electronics.

Caution Do not connect to signals or use for measurements within Measurement Categories II, III, or IV.

Safety Standards
This product is designed to meet the requirements of the following standards of safety for electrical equipment for measurement, control, and laboratory use:

IEC 61010-1, EN 61010-1


UL 61010-1, CSA 61010-1

Note For UL and other safety certifications, refer to the product label or the Online Product Certification section.

Hazardous Locations
U.S. (UL): Class I, Division 2, Groups A, B, C, D, T4; Class I, Zone 2, AEx nA IIC T4
Canada (C-UL): Class I, Division 2, Groups A, B, C, D, T4; Class I, Zone 2, Ex nA IIC T4
Europe (DEMKO): Ex nA IIC T4

Electromagnetic Compatibility

This product meets the requirements of the following EMC standards for electrical equipment for measurement, control, and laboratory use:

EN 61326 (IEC 61326): Class A emissions; Industrial Immunity


EN 55011 (CISPR 11): Group 1, Class A emissions
AS/NZS CISPR 11: Group 1, Class A emissions
FCC 47 CFR Part 15B: Class A emissions
ICES-001: Class A emissions

Note For the standards applied to assess the EMC of this product, refer to the Online Product Certification section.

Note For EMC compliance, operate this product according to the documentation.

CE Compliance

This product meets the essential requirements of applicable European Directives, as amended for CE marking, as follows:

2006/95/EC; Low-Voltage Directive (safety)


2004/108/EC; Electromagnetic Compatibility Directive (EMC)

A.3 cRIO-9114



Last Revised: 2013-07-23 14:41:43.0

Reconfigurable Chassis for NI CompactRIO


NI cRIO-911x

Easy-to-use LabVIEW FPGA automatically synthesizes electrical circuit implementation
4- or 8-slot chassis for any CompactRIO I/O module
DIN-rail mount, 19 in. rack mount, and panel mount options
Extreme industrial certifications and ratings
NI CompactRIO RIO FPGA core executes at default rates of 40 MHz, and can be compiled to run even faster
Design hardware in LabVIEW

Comparison Tables

Chassis Module Slots FPGA LUTs and Flip-Flops Multipliers

cRIO-9111 4 Virtex-5 LX30 19,200 32

cRIO-9112 8 Virtex-5 LX30 19,200 48

cRIO-9113 4 Virtex-5 LX50 28,800 48

cRIO-9114 8 Virtex-5 LX50 28,800 48

cRIO-9116 8 Virtex-5 LX85 51,840 48

cRIO-9118 8 Virtex-5 LX110 69,120 64


Application and Technology


NI CompactRIO reconfigurable chassis are the heart of the CompactRIO system because they contain the reconfigurable I/O (RIO) core. You program the RIO
field-programmable gate array (FPGA) core, which has an individual connection to each I/O module, with easy-to-use elemental I/O functions to read or write signal information
from each module. Because there is no shared communication bus between the RIO FPGA core and the I/O modules, you can precisely synchronize I/O operations on each
module with 25 ns resolution. The RIO core can perform local integer-based or fixed-point signal processing and decision making and directly pass signals from one module to
another. It is connected to the CompactRIO real-time controller through a local PCI bus interface. The real-time controller can retrieve data from any control or indicator on the
RIO FPGA application front panel through an easy-to-use scan interface or simple FPGA Read/Write function. The RIO FPGA can also generate interrupt requests (IRQs) to
synchronize the real-time software execution with the RIO FPGA. Typically, the real-time controller is used to convert the integer-based I/O data to scaled floating-point numbers.
In addition, it performs single-point control, waveform analysis, data logging, and Ethernet/serial communication. The reconfigurable chassis, real-time controller, and I/O modules
combine to create a complete stand-alone embedded system.

Key Features

Create custom local timing, triggering, and synchronization schemes with 25 ns resolution
Use multiple While Loops to create a parallel processing application for high-performance signal processing or multirate control systems
Take advantage of built-in proportional integral derivative (PID) control functions for control system loop rates greater than 100 kHz
Generate waveforms or implement nonlinear lookup tables (LUTs) using LabVIEW FPGA Express VIs
Integrate widely available third-party HDL cores using the LabVIEW FPGA Module HDL Node
Enforce critical logic and interlocks in silicon hardware circuitry or use the parallel RIO architecture to create dual, triple, or quadruple redundant systems

NI measurement hardware is calibrated to ensure measurement accuracy and verify that the device meets its published specifications. To ensure the ongoing accuracy of your
measurement hardware, NI offers basic or detailed recalibration service that provides ongoing ISO 9001 audit compliance and confidence in your measurements. To learn more
about NI calibration services or to locate a qualified service center near you, contact your local sales office or visit ni.com/calibration.

Technical Support
Get answers to your technical questions using the following National Instruments resources.
Support - Visit ni.com/support to access the NI KnowledgeBase, example programs, and tutorials or to contact our applications engineers who are located in NI sales
offices around the world and speak the local language.
Discussion Forums - Visit forums.ni.com for a diverse set of discussion boards on topics you care about.
Online Community - Visit community.ni.com to find, contribute, or collaborate on customer-contributed technical content with users like you.

Repair
While you may never need your hardware repaired, NI understands that unexpected events may lead to necessary repairs. NI offers repair services performed by highly trained
technicians who quickly return your device with the guarantee that it will perform to factory specifications. For more information, visit ni.com/repair.

Training and Certifications


The NI training and certification program delivers the fastest, most certain route to increased proficiency and productivity using NI software and hardware. Training builds the skills
to more efficiently develop robust, maintainable applications, while certification validates your knowledge and ability.
Classroom training in cities worldwide - the most comprehensive hands-on training taught by engineers.
On-site training at your facility - an excellent option to train multiple employees at the same time.
Online instructor-led training - lower-cost, remote training if classroom or on-site courses are not possible.
Course kits - lowest-cost, self-paced training that you can use as reference guides.
Training memberships and training credits - to buy now and schedule training later.
Visit ni.com/training for more information.

Extended Warranty
NI offers options for extending the standard product warranty to meet the life-cycle requirements of your project. In addition, because NI understands that your requirements may
change, the extended warranty is flexible in length and easily renewed. For more information, visit ni.com/warranty.

OEM
NI offers design-in consulting and product integration assistance if you need NI products for OEM applications. For information about special pricing and services for OEM
customers, visit ni.com/oem.

Alliance
Our Professional Services Team is comprised of NI applications engineers, NI Consulting Services, and a worldwide National Instruments Alliance Partner program of more than
700 independent consultants and integrators. Services range from start-up assistance to turnkey system integration. Visit ni.com/alliance.


Detailed Specifications
The following specifications are typical for the range –40 to 70 °C unless otherwise noted. These specifications are for the cRIO-911x reconfigurable embedded chassis only. For the controller and I/O module specifications, refer to the operating instructions for the controller and I/O modules you are using.

Reconfigurable FPGA

cRIO-9111 and cRIO-9112

FPGA type: Virtex-5 LX30
Number of flip-flops: 19,200
Number of 6-input LUTs: 19,200
Number of DSP48 slices (25 × 18 multipliers): 32
Embedded block RAM: 1,152 kbits

cRIO-9113 and cRIO-9114


FPGA type: Virtex-5 LX50
Number of flip-flops: 28,800
Number of 6-input LUTs: 28,800
Number of DSP48 slices (25 × 18 multipliers): 48
Embedded block RAM: 1,728 kbits

cRIO-9116

FPGA type: Virtex-5 LX85
Number of flip-flops: 51,840
Number of 6-input LUTs: 51,840
Number of DSP48 slices (25 × 18 multipliers): 48
Embedded block RAM: 3,456 kbits

cRIO-9118

FPGA type: Virtex-5 LX110
Number of flip-flops: 69,120
Number of 6-input LUTs: 69,120
Number of DSP48 slices (25 × 18 multipliers): 64
Embedded block RAM: 4,608 kbits

Timebases: 40, 80, 120, 160, or 200 MHz
Accuracy: ±100 ppm (max)

Frequency-dependent jitter (peak-to-peak, max)
40 MHz: 250 ps
80 MHz: 422 ps
120 MHz: 422 ps
160 MHz: 402 ps
200 MHz: 402 ps

Power Requirements

These power requirements are for a fully loaded chassis and exclude the power requirements of the controller and the I/O modules in the chassis. For more information about the
controller and the I/O module power requirements, refer to the operating instructions for the controller and for each I/O module.
Chassis power consumption/dissipation

cRIO-9111 and cRIO-9112
+5 VDC: 500 mW (max)
+3.3 VDC: 2,100 mW (max)
Total chassis power consumption: 2,600 mW (max)

cRIO-9113 and cRIO-9114
+5 VDC: 500 mW (max)
+3.3 VDC: 2,800 mW (max)
Total chassis power consumption: 3,300 mW (max)

cRIO-9116
+5 VDC: 500 mW (max)
+3.3 VDC: 4,600 mW (max)
Total chassis power consumption: 5,100 mW (max)

cRIO-9118
+5 VDC: 500 mW (max)
+3.3 VDC: 5,400 mW (max)
Total chassis power consumption: 5,900 mW (max)

Note The power consumption specifications in this document are maximum values for a LabVIEW FPGA application compiled at 80 MHz. Your application power
requirements may be different. To calculate the power requirements of the CompactRIO system, add the power consumption/dissipation for the chassis, the controller,
and the I/O modules you are using. Keep in mind that the resulting total power consumption is a maximum value and that the CompactRIO system may require less
power in your application.
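The note above reduces the system power budget to simple addition: chassis plus controller plus modules. A minimal Python sketch of that arithmetic (the chassis figure comes from the table above; the controller and per-module figures are illustrative placeholders, not values from any datasheet):

```python
# Worst-case CompactRIO power budget: chassis + controller + modules.
chassis_mw = 5900            # cRIO-9118 total chassis consumption, max (table above)
controller_mw = 35000        # hypothetical controller figure
module_mw = [1000] * 8       # hypothetical per-module consumption, eight slots

total_w = (chassis_mw + controller_mw + sum(module_mw)) / 1000.0
print(f"Worst-case system power: {total_w:.1f} W")   # prints 48.9 W for these inputs
```

As the note says, the result is an upper bound; the running system may draw considerably less.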

Physical Characteristics

If you need to clean the chassis, wipe it with a dry towel.


Chassis weight
cRIO-9111 and cRIO-9113: approx. 581 g (20 oz)
cRIO-9112, cRIO-9114, cRIO-9116, and cRIO-9118: approx. 880 g (31 oz)
A.4 NI-9234




Last Revised: 2010-11-03 14:35:12.0

4-Channel, ±5 V, 51.2 kS/s per Channel, 24-Bit IEPE


NI 9234

24-bit resolution
102 dB dynamic range
4 simultaneous analog inputs
±5 V input range
Antialiasing filters
TEDS read/write
Supported in NI CompactDAQ, CompactRIO, and Hi-Speed USB carrier

Overview
The National Instruments 9233 and 9234 are four-channel dynamic signal acquisition modules for making high-accuracy measurements from IEPE sensors. The NI 9233 and
9234 C Series analog input modules deliver 102 dB of dynamic range and incorporate IEPE (2 mA constant current) signal conditioning for accelerometers and microphones. The
four input channels simultaneously acquire at rates from 2 to 50 kHz or, with the NI 9234, up to 51.2 kS/s. In addition, the modules include built-in antialiasing filters that
automatically adjust to your sampling rate. Compatible with a single-module USB carrier and NI CompactDAQ and CompactRIO hardware, the NI 9233 and 9234 are ideal for a
wide variety of mobile/portable applications such as industrial machine condition monitoring and in-vehicle noise, vibration, and harshness testing.


Comparison Tables

Model Max Sampling Rate IEPE Coupling

NI 9233 50 kS/s Always enabled (2 mA) AC coupling

NI 9234 51.2 kS/s Software selectable (0 or 2 mA) Software selectable AC/DC coupling


Application and Technology


Hardware
Each simultaneous signal is buffered, analog prefiltered, and sampled by a 24-bit delta-sigma analog-to-digital converter (ADC) that performs digital filtering with a cutoff frequency
that automatically adjusts to your data rate. The NI 9233 and 9234 feature a voltage range of ±5 V and a dynamic range of more than 100 dB. In addition, the modules include the
capability to read and write to transducer electronic data sheet (TEDS) Class 1 smart sensors. The NI 9233 and 9234 provide ±30 V of overvoltage protection (with respect to
chassis ground) for IEPE sensor connections. The NI 9234 has three software-selectable modes of measurement operation: IEPE-on with AC coupling, IEPE-off with AC coupling,
and IEPE-off with DC coupling. IEPE excitation and AC coupling are not software-selectable and are always enabled for the NI 9233.

The NI 9233 and 9234 use a method of A/D conversion known as delta-sigma modulation. If, for example, the data rate is 25 kS/s, then each ADC actually samples its input signal
at 3.2 MS/s (128 times the data rate) and produces samples that are applied to a digital filter. This filter then expands the data to 24 bits, rejects signal components greater than
12.5 kHz (the Nyquist frequency), and digitally resamples the data at the chosen data rate of 25 kS/s. This combination of analog and digital filtering provides an accurate
representation of desirable signals while rejecting out-of-band signals. The built-in antialiasing filters automatically adjust themselves to discriminate between signals based on the
frequency range, or bandwidth, of the signal.
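The oversampling arithmetic in that paragraph is easy to check: the modulator runs at 128 times the chosen data rate, and the digital filter rejects everything above the Nyquist frequency (half the data rate). A small sketch, in Python purely for illustration, reproducing the 25 kS/s example:

```python
# Delta-sigma rate relationships described above.
def adc_rates(fs_hz):
    # Modulator sample rate is 128x the data rate; Nyquist is half the data rate.
    return {"modulator_hz": 128 * fs_hz, "nyquist_hz": fs_hz / 2}

print(adc_rates(25_000))   # {'modulator_hz': 3200000, 'nyquist_hz': 12500.0}
```

This matches the figures quoted in the text: 3.2 MS/s at the modulator and a 12.5 kHz Nyquist cutoff.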

USB Platform

The NI Hi-Speed USB carrier makes portable data acquisition easy. Simply plug the NI 9233 or 9234 into the USB carrier and begin acquiring data. Communication to the USB
carrier is over Hi-Speed USB, guaranteeing data throughput.

NI CompactDAQ Platform
NI CompactDAQ delivers the simplicity of USB to sensor and electrical measurements on the benchtop, in the field, and on the production line. By combining the ease of use and
low cost of a data logger with the performance and flexibility of modular instrumentation, NI CompactDAQ offers fast, accurate measurements in a small, simple, and affordable
system. Flexible software options make it easy to use NI CompactDAQ to log data for simple experiments or to develop a fully automated test or control system. The modular
design can measure up to 256 channels of electrical, physical, mechanical, or acoustical signals in a single system. In addition, per-channel ADCs and individually isolated
modules ensure fast, accurate, and safe measurements.

NI CompactRIO Platform
When used with the small, rugged CompactRIO embedded control and data acquisition system, NI C Series analog input modules connect directly to reconfigurable I/O (RIO)
field-programmable gate array (FPGA) hardware to create high-performance embedded systems. The reconfigurable FPGA hardware within CompactRIO provides a variety of
options for custom timing, triggering, synchronization, filtering, signal processing, and high-speed decision making for all C Series analog input modules. For instance, with
CompactRIO, you can implement custom triggering for any analog sensor type on a per-channel basis using the flexibility and performance of the FPGA and the numerous
arithmetic and comparison function blocks built into NI LabVIEW FPGA.

Analysis Software
The NI 9233 and 9234 are well-suited for noise and vibration analysis applications. The NI Sound and Vibration Measurement Suite, which specifically addresses these
applications, has two components: the NI Sound and Vibration Assistant and LabVIEW analysis VIs (functions) for power spectra, frequency response (FRF), fractional octave
analysis, sound-level measurements, order spectra, order maps, order extraction, sensor calibration, human vibration filters, and torsional vibration.

NI Sound and Vibration Assistant


The Sound and Vibration Assistant is interactive software designed to simplify the process of acquiring and analyzing noise and vibration signals by offering:

A drag-and-drop, interactive analysis and acquisition environment


Rapid measurement configuration
Extended functionality through LabVIEW

Interactive Analysis Environment


The Sound and Vibration Assistant introduces an innovative approach to configuring your measurements using intuitive drag-and-drop steps. Combining the functionality of
traditional noise and vibration analysis software with the flexibility to customize and automate routines, the Sound and Vibration Assistant can help you streamline your
application.

Rapid Measurement Configuration


There are many built-in steps available for immediate use in the Sound and Vibration Assistant. You can instantly configure a measurement and analysis application with:

Hardware I/O – generation and acquisition of signals from a variety of devices, including data acquisition devices and modular instruments
Signal processing – filtering, windowing, and averaging



Detailed Specifications
The following specifications are typical for the range –40 to 70 °C unless otherwise noted.

Input Characteristics

Number of channels 4 analog input channels

ADC resolution 24 bits

Type of ADC Delta-Sigma (with analog prefiltering)

Sampling mode Simultaneous

Type of TEDS supported IEEE 1451.4 TEDS Class I

Internal master timebase (ƒM )

Frequency 13.1072 MHz

Accuracy ±50 ppm max

Data rate range (ƒs ) using internal master timebase

Minimum 1.652 kS/s

Maximum 51.2 kS/s

Data rate range (ƒs ) using external master timebase

Minimum 0.391 kS/s

Maximum 52.734 kS/s

Data rates 1 (ƒs )

Input coupling AC/DC (software-selectable)

AC cutoff frequency
–3 dB: 0.5 Hz
–0.1 dB: 4.6 Hz max

AC cutoff frequency response

Input range ±5 V

AC voltage full-scale range

Minimum ±5 Vpk

Typical ±5.1 Vpk

Maximum ±5.2 Vpk

Common-mode voltage range (AI– to earth ground) ±2 V max

IEPE excitation current (software-selectable on/off)

Minimum 2.0 mA

Typical 2.1 mA

Power-on glitch 90 μA for 10 μs

IEPE compliance voltage 19 V max

If you are using an IEPE sensor, use the following equation to make sure your configuration meets the IEPE compliance voltage range.

(V_common-mode + V_bias ± V_full-scale) must be within 0 to 19 V, where V_common-mode is the common-mode voltage applied to the NI 9234, V_bias is the bias voltage of the IEPE sensor, and V_full-scale is the full-scale voltage of the IEPE sensor.
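That compliance condition can be turned into a quick configuration check. A Python sketch (the sensor values in the example are hypothetical, not taken from any datasheet):

```python
# Check the IEPE compliance-voltage condition quoted above:
# (V_common_mode + V_bias +/- V_full_scale) must stay within 0 to 19 V.
def iepe_in_compliance(v_cm, v_bias, v_fs, v_min=0.0, v_max=19.0):
    lowest = v_cm + v_bias - v_fs    # worst-case negative signal swing
    highest = v_cm + v_bias + v_fs   # worst-case positive signal swing
    return v_min <= lowest and highest <= v_max

print(iepe_in_compliance(0.0, 12.0, 5.0))   # True: swings between 7 V and 17 V
print(iepe_in_compliance(2.0, 13.0, 5.0))   # False: positive swing reaches 20 V
```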

Overvoltage protection (with respect to chassis ground)

For a signal source connected to AI+ and AI– ±30 V

For a low-impedance source connected to AI+ and AI– –6 to 30 V

Input delay 38.4/ƒs + 3.2 μs

Accuracy 2

Measurement Conditions Percent of Reading (Gain Error) Percent of Range 3 (Offset Error)

Calibrated max (–40 to 70 °C) 0.34%, ±0.03 dB ±0.14%, 7.1 mV

Calibrated typ (25 °C ±5 °C) 0.05%, ±0.005 dB ±0.006%, 0.3 mV

Uncalibrated max (–40 to 70 °C) 1.9%, ±0.16 dB ±0.27%, 13.9 mV

Uncalibrated typ (25 °C ±5 °C) 0.48%, ±0.04 dB ±0.04%, 2.3 mV

Gain drift

Typical 0.14 mdB/°C (16 ppm/°C)

Maximum 0.45 mdB/°C (52 ppm/°C)

Offset drift

Typical 19.2 μV/°C

Maximum 118 μV/°C

Channel-to-channel matching

Gain

Typical 0.01 dB

Maximum 0.04 dB

Phase (ƒin in kHz) ƒin · 0.045° + 0.04 max

Passband

Frequency 0.45 · ƒs

A.5 Probe Accelerometer

A.6 Body Accelerometer

A.7 Microphone

High-Sensitivity Array Microphone Type 40PH

Product Data
Features and Applications
■ Multi-channel measurements
■ Sound-field analyses
■ Sound-power measurements
■ Concurrent spatial and transient measurements

Fig. 1: Array Microphone Type 40PH with integrated CCP preamplifier

The G.R.A.S. Array Microphone Type 40PH (Fig. 1) is a low-cost microphone for general-purpose measurements in arrays and matrices. It has a wide useful frequency range reaching up to 20 kHz (Fig. 2) and a large dynamic range topping out at around 135 dB.

It has an integrated CCP (Constant Current Power) preamplifier and is delivered with a built-in TEDS (Transducer Electronic Data Sheet, as proposed by IEEE P1451.4) chip, which enables it to be programmed as a complete unit. The Type 40PH requires a constant-current power supply, e.g. the G.R.A.S. CCP Supply Type 12AL, or any other CCP-compatible power supply.

Close manufacturing tolerances, together with the advantages of the TEDS chip, provide the Type 40PH with a high degree of interchangeability; a major advantage when used in multiples forming arrays and matrices. The low cost of the Type 40PH is a key consideration when setting up measurements requiring a multiplicity of concurrent transient and spatial data.

Calibrating the Type 40PH with a G.R.A.S. pistonphone, e.g. the Type 42AA, is as straightforward as calibrating any other G.R.A.S. ¼-inch microphone. All G.R.A.S. microphones are individually checked and calibrated before leaving the factory. An individual calibration chart is supplied with each microphone.

Specifications
Nominal sensitivity at 250 Hz: 50 mV/Pa (±2 dB)
Frequency response (re. 250 Hz):
±3 dB: 10 Hz - 50 Hz
±1 dB: 50 Hz - 5 kHz
±2 dB: 5 kHz - 20 kHz
Upper limit of dynamic range (max. output): 135 dB re. 20 μPa
Lower limit of dynamic range (thermal noise): < 32 dBA re. 20 μPa
Phase match:
50 Hz - 100 Hz: ±5°
100 Hz - 3 kHz: ±3°
3 kHz - 5 kHz: ±5°
5 kHz - 10 kHz: ±10°
Influence of axial vibration (for 1 m/s²): 50 dB re. 20 μPa
Temperature range: -10 °C to +50 °C
Output impedance: < 50 Ω
Output connector: SMB coaxial socket
Length: 59.1 mm (2.33 inches)
Diameter: 7.0 mm (0.28 inches)
Weight: 5.5 g (0.2 oz.)
Power supply: 2 mA to 20 mA (typically 4 mA)

Vers. 7, November 2011

G.R.A.S. Sound & Vibration
Skovlytoften 33, 2840 Holte, Denmark
www.gras.dk [email protected]


[Figure: frequency-response plot; vertical axis in decibels re. level at 250 Hz, horizontal axis frequency from 10 Hz to 100 kHz]

Fig. 2: Typical frequency response for Type 40PH. The upper curve shows the free-field response at 0°; the lower curve (dotted line) shows the pressure response.

Accessories
CCP supply: Type 12AL
Windscreens (set of 6): AM0364
Rain-protection cap: RA0092
Array Module: PR0001
Array Module: PR0002

Cables (SMB to BNC):
3 m: AA0027
10 m: AA0028
30 m: AA0029
Also available in customer-specified lengths.

Pistonphones/Calibrators:
Pistonphone: Type 42AA
Pistonphone: Type 42AD
Sound Calibrator: Type 42AB

G.R.A.S. Sound & Vibration reserves the right to change specifications and accessories without notice.


CHAPTER B

MATLAB CODE

B.1 FFT Script

% Note: You must import data from your CSV file in order to analyze it.
% Save data timeseries values as x.
%%
% Sampling info contained in the TDMS file
function [t, amp, freq, energy] = findFFT(data)

Fs = 51200;        % Sampling frequency
dt = 1/Fs;         % Sample time
L = length(data);  % Length of signal
t = [dt:dt:dt*L]';
%%
amp = data;

% figure
% plot(t, amp)
% title('Original Signal')
% xlabel('time (seconds)')
%%
NFFT = 2^nextpow2(L);                     % Next power of 2 from length of signal
energy_double_sided = fft(amp, NFFT)/L;
energy = 2*abs(energy_double_sided(1:NFFT/2+1));
freq = (Fs/2*linspace(0, 1, NFFT/2+1))';
%%
% Plot single-sided amplitude spectrum.
% figure
% plot(freq, energy)
% xlabel('Frequency (Hz)')
% ylabel('|X(f)|')
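For readers working outside MATLAB, the single-sided amplitude spectrum computed by findFFT can be sketched in Python/NumPy. This is an illustrative translation, not part of the thesis code; the function name find_fft and the default sampling rate argument are assumptions.

```python
import numpy as np

def find_fft(data, fs=51200.0):
    """Single-sided amplitude spectrum, mirroring the MATLAB findFFT above."""
    L = len(data)
    dt = 1.0 / fs
    t = dt * np.arange(1, L + 1)           # time vector [dt .. L*dt]
    nfft = 2 ** int(np.ceil(np.log2(L)))   # next power of two, like nextpow2
    two_sided = np.fft.fft(data, nfft) / L
    energy = 2.0 * np.abs(two_sided[: nfft // 2 + 1])
    freq = (fs / 2.0) * np.linspace(0, 1, nfft // 2 + 1)
    return t, freq, energy

# Sanity check: a 1 kHz tone should produce a spectral peak near 1 kHz
fs = 51200.0
t = np.arange(0, 1.0, 1.0 / fs)
tone = np.sin(2 * np.pi * 1000.0 * t)
_, freq, energy = find_fft(tone, fs)
peak_hz = freq[np.argmax(energy)]
```

Because the signal is zero-padded to the next power of two, frequency resolution is fs/NFFT rather than fs/L; the peak-location check above is a quick way to confirm the scaling and frequency axis are consistent.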

B.2 FFT Re-sampling Script

function [fft_resampled] = NMT_fft_resample(fft, n)
% Input is the energy of the FFT and the number of bins

freq_band_width = length(fft) / n;  % bin size dependent on n

fft_resampled = zeros(n,1);         % new, coarser frequency range

% This loop sums the original FFT data inside each bin and averages it
for i = 1:n
    start = round( (i-1) * freq_band_width + 1 );
    stop  = round( i * freq_band_width );
    fft_resampled(i) = mean( fft(start:stop) );
end

fft_resampled = fft_resampled';     % transpose to maintain consistency
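The same bin-averaging idea can be sketched in Python/NumPy. This is an illustrative translation only: the name fft_resample and the 0-based index handling are assumptions, and Python's round uses banker's rounding, which can differ from MATLAB's round for half-way values when the bin width is fractional.

```python
import numpy as np

def fft_resample(energy, n):
    """Average an FFT energy vector into n equal-width frequency bins."""
    band = len(energy) / n           # bin width in samples (may be fractional)
    out = np.empty(n)
    for i in range(1, n + 1):
        start = int(round((i - 1) * band))  # 0-based analogue of round((i-1)*band + 1)
        stop = int(round(i * band))         # Python slice end is exclusive
        out[i - 1] = np.mean(energy[start:stop])
    return out

# 12 samples into 3 bins: each bin averages 4 consecutive values
binned = fft_resample(np.arange(12.0), 3)  # -> [1.5, 5.5, 9.5]
```

Averaging within each band preserves the overall spectral shape while reducing a long FFT to a feature vector small enough to feed a neural network.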

B.3 Truncated Data Script

%% This script loads data from the workspace (MicrophoneData) and splits
% the data into 3 segments: the first 7 seconds of data, a 1 second middle
% segment, and the final 7 seconds of data. This data is then placed into
% a structure containing surface type, speed, run number, sensor type, and
% segment number (1,2,3).

% Author: John Tidman 5-18-14

clc;
clear all;
close all;

load('trainingData')
n_trials = 5; % Number of trials for all data

%% FFT of Segmented Data
sampling_rate = 51200; % samples/sec
dt = 1/sampling_rate;

surf = {'grass', 'conc', 'asphalt', 'gravel', 'dirt', 'tile', 'air'};
speed = {'slow', 'fast'};
run = {'run1', 'run2', 'run3', 'run4', 'run5'};
sensor = {'mic', 'probe_acc', 'body_acc'};
segment = {'first', 'middle', 'last'};

all_data = zeros(768000,5,7,2,3); % time point, run, surf, speed, sensor
all_data(:,:,1,1,1) = asphalt_fast_mic;
all_data(:,:,1,2,1) = asphalt_slow_mic;
all_data(:,:,1,1,2) = asphalt_fast_probe;
all_data(:,:,1,2,2) = asphalt_slow_probe;
all_data(:,:,1,1,3) = asphalt_fast_body;
all_data(:,:,1,2,3) = asphalt_slow_body;

all_data(:,:,2,1,1) = concrete_fast_mic;
all_data(:,:,2,2,1) = concrete_slow_mic;
all_data(:,:,2,1,2) = concrete_fast_probe;
all_data(:,:,2,2,2) = concrete_slow_probe;
all_data(:,:,2,1,3) = concrete_fast_body;
all_data(:,:,2,2,3) = concrete_slow_body;

all_data(:,:,3,1,1) = dirt_fast_mic;
all_data(:,:,3,2,1) = dirt_slow_mic;
all_data(:,:,3,1,2) = dirt_fast_probe;
all_data(:,:,3,2,2) = dirt_slow_probe;
all_data(:,:,3,1,3) = dirt_fast_body;
all_data(:,:,3,2,3) = dirt_slow_body;

all_data(:,:,4,1,1) = grass_fast_mic;
all_data(:,:,4,2,1) = grass_slow_mic;
all_data(:,:,4,1,2) = grass_fast_probe;
all_data(:,:,4,2,2) = grass_slow_probe;
all_data(:,:,4,1,3) = grass_fast_body;
all_data(:,:,4,2,3) = grass_slow_body;

all_data(:,:,5,1,1) = gravel_fast_mic;
all_data(:,:,5,2,1) = gravel_slow_mic;
all_data(:,:,5,1,2) = gravel_fast_probe;
all_data(:,:,5,2,2) = gravel_slow_probe;
all_data(:,:,5,1,3) = gravel_fast_body;
all_data(:,:,5,2,3) = gravel_slow_body;

all_data(:,:,6,1,1) = nota_mic_1;
all_data(:,:,6,2,1) = nota_mic_2;
all_data(:,:,7,1,1) = nota_mic_3;
all_data(:,:,7,2,1) = nota_mic_4;
all_data(:,:,6,1,2) = nota_probe_1;
all_data(:,:,6,2,2) = nota_probe_2;
all_data(:,:,7,1,2) = nota_probe_3;
all_data(:,:,7,2,2) = nota_probe_4;
all_data(:,:,6,1,3) = nota_body_1;
all_data(:,:,6,2,3) = nota_body_2;
all_data(:,:,7,1,3) = nota_body_3;
all_data(:,:,7,2,3) = nota_body_4;
%%
% time point, run, surf, speed, sensor

for i = 1:length(surf)
    for j = 1:length(speed)
        for k = 1:length(run)
            for m = 1:length(sensor)
                for n = 1:length(segment)
                    % index = (i-1)*6 + (m-1)*2 + (j-1) + 1; % see cases above
                    switch n
                        case 1; first = 1;
                                last = first + 7*sampling_rate - 1;
                        case 2; first = 1 + 7*sampling_rate;
                                last = first + 1*sampling_rate - 1;
                        case 3; first = 1 + 8*sampling_rate;
                                last = first + 7*sampling_rate - 1;
                    end
                    [t, amp, freq, energy] = ...
                        findFFT(all_data(first:last,k,i,j,m));

                    trials.surf{i}.speed{j}.run{k}.sensor{m}. ...
                        segment{n}.time = t;
                    trials.surf{i}.speed{j}.run{k}.sensor{m}. ...
                        segment{n}.amp = amp;
                    trials.surf{i}.speed{j}.run{k}.sensor{m}. ...
                        segment{n}.freq = freq;
                    trials.surf{i}.speed{j}.run{k}.sensor{m}. ...
                        segment{n}.energy = energy;
                end
            end
        end
    end
end
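The 7 s / 1 s / 7 s split above reduces to simple index arithmetic. A short Python sketch (illustrative only; 0-based, half-open intervals, with names assumed here) makes it easy to check that the three segments tile the full 15-second record without gaps or overlap:

```python
fs = 51200          # samples per second
n_samples = 768000  # 15 s per run, matching the MATLAB arrays above

# (start, stop) pairs, stop exclusive: first 7 s, middle 1 s, last 7 s
segments = {
    "first":  (0,      7 * fs),
    "middle": (7 * fs, 8 * fs),
    "last":   (8 * fs, 15 * fs),
}

# The segments should be contiguous and cover the whole record
bounds = list(segments.values())
covered = sum(stop - start for start, stop in bounds)
```

A check like this is worth running whenever the segment durations or the sampling rate change, since an off-by-one in the index arithmetic would silently drop or duplicate samples at the segment boundaries.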

CHAPTER C

NN TRAINING PROCESS

Figure C.1: NN Setup GUI

Figure C.2: NN Training Percentages Selection

Figure C.3: NN Neurons Selection

Figure C.4: NN Training

REFERENCES

[1] E. M. DuPont, C. A. Moore, E. G. Collins Jr., and E. Coyle, "Frequency response method for terrain classification in autonomous ground vehicles," Autonomous Robots, vol. 24, 2008.

[2] P. Giguere and G. Dudek, "A simple tactile probe for surface identification by mobile robots," IEEE Transactions on Robotics, vol. 27, pp. 534–544, June 2011.

[3] E. Coyle and E. Collins, "A comparison of classifier performance for vibration-based terrain classification," 26th Army Science Conference, 2008.

[4] O. Sugarman, V. Saeger, B. Newall, L. Hernandez, and J. Tidman, "Finding characteristic signals in vibration and sound, a feasibility analysis for an access delay security door," in NMT Student Research Symposium, NMT SRS, 2014.

[5] V. Saeger, B. Newell, L. Hernandez, and O. Sugarman, "Design of an intrusion detecting security door," tech. rep., New Mexico Tech, 2014.

[6] W. R. Graham, F. Liu, M. Sutcliffe, and M. Dale, "Characterization and simulation of asphalt road surfaces," Wear, vol. 271, 2011.

[7] E. M. DuPont, C. A. Moore, and R. G. Roberts, "Terrain classification for mobile robots traveling at various speeds: An eigenspace manifold approach," IEEE International Conference on Robotics and Automation, 2008.

[8] C. N. Macleod, S. G. Pierce, J. Sullivan, and A. Pipe, "Remotely deployable autonomous surface inspection and characterization using active whisker sensors," European Workshop on Structural Health Monitoring, vol. 2, 2012.

[9] A. Howard and H. Seraji, "Vision-based terrain characterization and traversability assessment," Journal of Robotic Systems, vol. 18, no. 10, pp. 577–587, 2001.

[10] N. Kramer, "In-process identification of material properties by acoustic emission signals," Annals of the CIRP, vol. 56, 2007.

[11] C. Weiss, H. Tamimi, and A. Zell, "A combination of vision- and vibration-based terrain classification," IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008.

[12] J. Libby and A. J. Stentz, "Using sound to classify vehicle-terrain interactions in outdoor environments," IEEE International Conference on Robotics and Automation, 2012.

[13] P. Giguere and G. Dudek, "Surface identification using simple contact dynamics for mobile robots," IEEE International Conference on Robotics and Automation, 2009.

[14] L. Ojeda, J. Borenstein, G. Witus, and R. Karlsen, "Terrain characterization and classification with a mobile robot," Journal of Field Robotics, vol. 23, no. 2, pp. 103–122, 2006.

[15] A. Larson, G. Demir, and R. Voyles, "Terrain classification using weakly-structured vehicle/terrain interaction," Autonomous Robots, pp. 41–52, 2005.

[16] R. G. Lyons, "How to interpolate in the time domain by zero-padding in the frequency domain." Online, 1999.

[17] S. Dahmen, H. Hinrichsen, A. Lysov, and D. E. Wolf, "Coupling between static friction force and torque for a tripod," J. Stat. Mech.: Theor. Exp., 2005.

[18] K. Hashiguchi and S. Ozaki, "Constitutive equation for friction with transition from static to kinetic friction and recovery of static friction," International Journal of Plasticity, vol. 24, pp. 2102–2124, 2008.

[19] D. Kaplan, "Observing the forces involved in static friction under static situations," The Physics Teacher, vol. 224, 2013.

[20] M. Muser, L. Wenning, and M. Robbins, "Simple microscopic theory of Amontons' laws for static friction," The American Physical Society, 2001.

[21] E. Rabinowicz, "The nature of the static and kinetic coefficients of friction," Journal of Applied Physics, vol. 22, no. 11, 1951.

[22] "Friction coefficients for some common material and material combinations." Online resource.

[23] L. I. Smith, A Tutorial on Principal Component Analysis, February 26, 2002.

[24] N. N. Charniya and S. V. Dudul, "Classification of material type and its surface properties using digital processing techniques and neural networks," Applied Soft Computing, vol. 11, 2011.

[25] N. N. Charniya and S. V. Dudul, "Intelligent sensor system for discrimination of material type using neural networks," Applied Soft Computing, vol. 12, 2012.

[26] C. Zang and M. Imregun, "Structural damage detection using artificial neural networks and measured FRF data reduced via principal component projection," Journal of Sound and Vibration, vol. 242, no. 5, pp. 813–827, 2001.

[27] L. Ray, D. Brande, and J. Lever, "Estimation of net traction for differential-steered wheeled robots," Journal of Terramechanics, vol. 46, pp. 75–87, 2009.

[28] L. Gracia and J. Tornero, "Kinematic modeling of wheeled mobile robots with slip," Advanced Robotics, vol. 21, no. 11, pp. 1253–1279, 2007.

[29] K. Iagnemma and S. Dubowsky, "Mobile robot rough-terrain control (RTC) for planetary exploration," 2001.

[30] O. A. Ani, H. Xu, Y.-P. Shen, S.-G. Liu, and K. Xue, "Modeling and multiobjective optimization of traction performance for autonomous wheeled mobile robot in rough terrain," Journal of Zhejiang University Science C, vol. 14, no. 1, pp. 11–29, 2013.

[31] D. Shiffman, The Nature of Code. Daniel Shiffman, 2012.
