
Pavement raveling detection and measurement from synchronized intensity and range images
S Mathavan
Visiting Research Fellow
School of Architecture, Design and the Built Environment,
Nottingham Trent University
Burton Street, Nottingham NG1 4BU, UK
E-mail: [email protected]

M M Rahman
Senior Lecturer
Department of Civil Engineering
Brunel University
Kingston Lane, Uxbridge, Middlesex UB8 3PH
Tel: +44 (0)1895 267590
[email protected]

M Stonecliffe-Jones
Head of European Consultancy
Dynatest UK Limited
Unit 12 Acorn Enterprise Centre,
Frederick Road, Kidderminster, DY11 7RA, UK
[email protected]

K Kamal
Assistant Professor
Department of Mechatronics Engineering,
College of Electrical and Mechanical Engineering,
National University of Science and Technology,
Rawalpindi, Pakistan
[email protected]
ABSTRACT
Raveling on asphalt surfaces is the loss of fine and coarse aggregates from the asphalt matrix. The severity of raveling can be an important indicator of the state of a pavement, as excessive raveling not only reduces ride quality but eventually leads to pothole formation or cracking. Hence, it is important to detect and quantify raveling. In the present study, an effort has been made, for the first time, to quantify raveling from a combination of 2D and 3D images. First, a texture descriptor method called Laws’ texture energy measure is used in conjunction with Gabor filters and other morphological operations to distinguish road areas from others. Then, digital signal processing techniques are used to detect and quantify raveling. Hundreds of images captured by an automated pavement surveying system are used to test, and to show the promise of, the proposed algorithm.

Keywords: Raveling, range images, 3D imaging, Gabor filter, Laws’ texture energy, region segmentation

Corresponding author
Submission date: 31/07/2013
Word count: 4932, 11 Figures
INTRODUCTION
It is fundamental for road authorities across the world to define, first, a data collection method that yields knowledge of pavement condition within a limited time and management cost, whilst minimizing traffic disruption and ensuring the safety of the workforce. In the last twenty years, rapid advances in computing power and in communication, laser and imaging technologies have made it possible to collect and analyze large amounts of road surface distress information. It is widely accepted that automated surveys can reduce the variability within datasets, provide meaningful quantitative information, and avoid inconsistencies. Many of the current methods for distress identification use vehicles equipped with high-resolution cameras and sensors to record pavement surface images and profiles, and are capable of carrying out surveys at traffic speeds. These surveys provide accurate information for optimal maintenance and rehabilitation planning, despite some limitations that still exist regarding the accuracy and repeatability of crack detection [1]. Another major issue with 2D video-based systems is their inability to discriminate dark areas that are not caused by pavement distresses, such as tire marks, oil spills, shadows, and recent fillings [2]. Moreover, shadows and poor illumination are major problems for daytime operation, even though they can be overcome using additional lighting systems or by acquiring data during the night [3].
Other than the more traditional 2D image analysis used to detect pavement distress (cracks, patches, potholes, etc.), new systems and procedures are emerging. 3D pavement evaluation is one such technology: it can capture surface features more accurately, and can extract and quantify information that was extremely difficult to obtain from a 2D survey. For example, until now, the measurement of the extent and severity of raveling has been a subjective assessment, estimating a given road area with missing aggregates.
APPLICATION OF 3D TECHNOLOGY IN PAVEMENT CONDITION SURVEY
Research in 3D imaging technology for pavement evaluation is a recent development; consequently, the literature in this area is limited. 3D laser scanning, photogrammetry and stereo vision techniques are the most popular among the various types of 3D technology available [4, 5]. All these technologies have great potential, but also have limitations when equipment and management costs are considered. There are two main challenges to overcome before they are widely used in pavement evaluation. The first is to capture images in a consistent manner, overcoming the effects of lighting, shadows, etc. The second is to develop a fast algorithm that separates different defects accurately. The following sections highlight the key research on 3D image capture and processing techniques for pavement monitoring.
An early system for the 3D imaging of pavement surfaces was based on the photogrammetric principle [6]. Although the system yielded good results, ensuring satisfactory lighting was very difficult for the paired cameras used in the system to obtain high-fidelity 2D images of the pavement surface. Another system, known as LIDAR (Light Detection and Ranging), was widely used; it comprised a rotating laser scanner, a GPS receiver and an IMU [7]. Although the system initially attracted widespread attention, the difficulty of making any significant improvement in its resolution over the past decades, together with the growing popularity of laser-based 2D imaging systems, has limited the use of the technique to niche applications [7].
In 2008, Laurent et al. proposed a 3D transverse laser profiling system for the automatic measurement of road cracks, which was subsequently implemented as a commercial system with custom-made preprocessing software [8]. In this system, known as the Laser Crack Measurement System (LCMS), high-speed cameras are used together with custom optics and laser line projectors to acquire both 2D images and high-resolution 3D profiles of the road. The system can be operated by night or by day under all types of lighting conditions, in both sunlit and shaded areas. Various pavement types, such as regular or open-graded asphalt, chipseal and concrete, can be measured at survey speeds of up to 100 km/h and on roads up to 4 m in width.
Wang et al. used the same technique as the LCMS to develop an automated vehicular platform prototype, including laser-based sensors and the associated algorithms and software, that captures a 3D representation of the pavement surface at 1 mm resolution and processes the data to produce results on pavement distresses [9]. The software used in this system, Pavevision 3D, has substantially better performance than the LCMS in terms of 3D line rate and 2D visual data. Another recent laser-based 3D system, proposed by [10], is a real-time 3D scanning system for the inspection of pavement distortions, such as rutting and shoving, using a high-speed 3D transverse scanning technique based on structured-light triangulation. To improve the accuracy of the system, a multi-view coplanar scheme was employed in the calibration procedure so that more feature points could be used and distributed across the field of view of the camera. A sub-pixel line extraction method, comprising filtering, edge detection and spline interpolation, is applied to locate the laser stripe. The pavement transverse profile is then generated from the laser stripe curve and approximated by line segments. Sun et al. proposed a new method of analysis based on sparse representation to decompose the measured profile variation into a summation of the pavement profile and the cracks [11].
RAVELING OF ASPHALT SURFACES
In simple terms, raveling on asphalt surfaces is the loss of fine and coarse aggregates from the asphalt matrix due to adhesion failure at the aggregate-binder interface. In many cases, raveling is a combination of more than one contributing physical mechanism by which the aggregate is separated from the binder. The predominant factors are: surface type; improper mixture design (binder content lower than the specification, a high proportion of dust); inadequate compaction; weathering; traffic; ageing of the bitumen; the intense hydrostatic pressure created by a combination of traffic and water entering the pavement through interconnected voids [12, 13]; moisture or freeze-thaw cycles due to seasonal variations; and the effect of snow plowing in winter months. Excessive raveling not only reduces ride quality, but eventually leads to pothole formation or cracking. In addition, surface dressing and other thin surfacing systems have increasingly been used in recent years as a means of preventative maintenance for pavement preservation. These surfaces are prone to raveling because of a combination of the factors mentioned earlier. Therefore, the severity of raveling can be an important indicator of the state of a pavement.
Currently, the measurement of raveling is based on visual observation rather than any derived quantification. The severity level is rated by the degree of aggregate loss within a segment of a road; the segment is typically one tenth of a mile or a kilometer, and the loss is expressed relative to the surface area of the surveyed lane. It is important to note that raveling is measured or observed differently depending on the surface type. For Bituminous Surface Treatment (BST), raveling is identified by the loss of aggregates leaving the binder exposed. Chip-sealed pavements, on the other hand, tend to look raveled because of the inherent nature of the chip seal surface; what appears to be raveling may actually be excess asphalt causing the loss of aggregates, which should instead be rated as flushing [12]. The various stages of raveling are usually described as light (loss of surface fines), moderate (loss of fines with some larger aggregates exposed), and severe (loss of fine and coarse aggregates). The extent of raveling can be localized (patchy areas, usually in the wheel paths), confined to the wheel paths (the majority of the wheel tracks are affected, but little or none elsewhere in the lane), or can extend through the entire lane width (most of the lane is affected) [12].
RESEARCH OBJECTIVES
In this paper, Laws’ texture energy measures are used to detect texture boundaries in intensity (i.e. 2D) images (Figure 1) in order to distinguish road surfaces from lane markings and other painted surfaces. In addition, the Gabor filter, a frequency-domain technique, is used to enhance the edges that result from the texture boundary detection described above. Furthermore, morphological operations are performed to further improve the segmentation accuracy.

(a) (b)
FIGURE 1 An intensity image (a) and its corresponding range image (b) [Courtesy Dynatest UK Ltd.]
LAWS’ TEXTURE ENERGY MEASURES
Surfaces can be distinguished by their texture. Several texture analysis methods exist: co-occurrence matrices, autocorrelation features and wavelet-based methods, to name a few [14]. Laws’ texture energy method measures the amount of texture variation within a finite-sized window, usually of 5x5 pixels. According to Laws’ method, a number of two-dimensional masks are formed from the vectors shown below.

L5 (Level)  = [  1  4  6  4  1 ]
E5 (Edge)   = [ -1 -2  0  2  1 ]
S5 (Spot)   = [ -1  0  2  0 -1 ]
R5 (Ripple) = [  1 -4  6 -4  1 ]

The R5 vector, for example, is designed to detect ripples in the image. The rationale behind these detectors is that the texture of a given image can be broken down into very fundamental geometric shape descriptors such as edges, spots and levels. These four vectors are then used to form 5x5 masks.
The mask L5S5 is formed as the outer product of the transpose of L5 (a column vector) and the row vector S5:

         [ 1 ]                        [ -1  0   2  0  -1 ]
         [ 4 ]                        [ -4  0   8  0  -4 ]
L5S5  =  [ 6 ] x [ -1 0 2 0 -1 ]  =   [ -6  0  12  0  -6 ]
         [ 4 ]                        [ -4  0   8  0  -4 ]
         [ 1 ]                        [ -1  0   2  0  -1 ]

In total, 16 such masks are formed: L5L5, L5E5, L5S5, L5R5, E5L5, E5E5, E5S5, E5R5, S5L5, S5E5, S5S5, S5R5, R5L5, R5E5, R5S5 and R5R5.
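As an illustration of the mask construction (a NumPy sketch, not the authors' MATLAB code), all 16 masks can be generated as outer products of the four vectors:

```python
import numpy as np

# The four 1-D Laws vectors given above.
L5 = np.array([ 1,  4, 6,  4,  1])   # Level
E5 = np.array([-1, -2, 0,  2,  1])   # Edge
S5 = np.array([-1,  0, 2,  0, -1])   # Spot
R5 = np.array([ 1, -4, 6, -4,  1])   # Ripple

vectors = {"L5": L5, "E5": E5, "S5": S5, "R5": R5}

# Each 5x5 mask is the outer product of a column vector and a row
# vector, e.g. L5S5 = L5' * S5, giving 16 masks in total.
masks = {a + b: np.outer(va, vb)
         for a, va in vectors.items()
         for b, vb in vectors.items()}

print(masks["L5S5"])
```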
Pavement images are convolved with the above 5x5 pixel masks. Figure 2 explains the convolution operation on an image using a 3x3 pixel mask. The 3x3 mask is first moved left to right along the rows of the input image, one pixel at a time. At the end of each row, the mask is moved down by one row and the left-to-right shift is repeated. The operation continues until the bottom row of the mask lies directly on the last row of input image pixels. During convolution, at no instance does the mask overhang the input image pixels; hence, the extreme rows and columns (i.e. the peripheral pixels) of the input image have no corresponding values in the output, convolved image (Figure 2).

FIGURE 2 Mask convolution: the output pixel value is the sum Σ wᵢxᵢ over the mask, where the xᵢ are the image pixels under the mask and the wᵢ are the mask weights [15]

The images resulting from the 5x5 convolutions with the 16 masks described earlier are used to detect texture boundaries, as explained in the EXPERIMENTATION section.
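The sliding-mask operation described above can be sketched in a few lines (a NumPy illustration with a hypothetical 6x6 image and a 3x3 averaging mask, not the paper's code):

```python
import numpy as np

# Hypothetical 6x6 test image; the values are arbitrary.
image = np.arange(36, dtype=float).reshape(6, 6)

# A 3x3 averaging mask, standing in for the mask of Figure 2.
mask = np.ones((3, 3)) / 9.0

def correlate_valid(image, mask):
    """Slide the mask over the image without any overhang, so the
    peripheral pixels of the input have no values in the output."""
    m, n = mask.shape
    H, W = image.shape
    out = np.empty((H - m + 1, W - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + m, j:j + n] * mask)
    return out

out = correlate_valid(image, mask)
print(out.shape)   # a 6x6 input with a 3x3 mask yields a 4x4 output
```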
GABOR FILTER
The Gabor filter is a frequency-domain technique that has been used for object recognition, edge detection and optical character recognition. The filter is special in the sense that the responses of visual cortex cells in mammals can be expressed by Gabor functions. The filter responds to different orientations and hence helps distinguish objects oriented in different directions. The Gabor filter is implemented as a filter bank consisting of filters at a number of orientations (see θ below).

The Gabor filter is formed by the modulation of a Gaussian envelope by a complex sinusoid. The filter's real part can be expressed as follows:

g(x, y) = exp( -(x'^2 + γ^2 y'^2) / (2σ^2) ) cos( 2π x'/λ + ψ )      (1)

where x' = x cos θ + y sin θ and y' = -x sin θ + y cos θ. Here, σ is the standard deviation of the Gaussian envelope in the x' direction, γ is the ellipticity of the filter, θ is the desired orientation of the filter, λ is the wavelength of the sinusoid, and ψ is the phase offset of the modulation factor, which decides the symmetry or anti-symmetry of the filter. Together, these parameters determine the width and the length of the elliptical Gaussian (2D) envelope and the angle between the orientation of the sinusoidal wave vector and the two-dimensional Gaussian axes.
Figure 3 shows three different Gabor filters in which the orientation, θ, or the wavelength, λ, is varied. The pictures depict continuous-domain representations of the Gabor filter. Compared with the filter shown in Figure 3(a), the filter depicted in Figure 3(b) has a shorter wavelength, clearly seen in the higher-frequency variations of the latter figure. The filter shown in Figure 3(c) has a different orientation (i.e. a change in θ), indicated by the change in the filter direction compared with Figures 3(a) and 3(b).
FIGURE 3 Gabor filter with θ=0, λ=2 (a), Gabor filter with θ=0, λ=0.5 (b) and Gabor filter with θ=π/4, λ=5 (c) [16]

For practical image processing applications, the continuous function has to be digitized and represented by a mask, as discussed in the previous section. The image is then convolved with the mask, as explained in Figure 2. For a detailed explanation of the theory, refer to [17], where the Gabor filter is utilized for pavement crack detection.
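As a sketch of this digitization step, the real part of Equation 1 can be sampled into a mask as follows (NumPy; the mask half-size of 15 pixels is an assumption, while the other parameter values are those used later in this paper):

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma, gamma=1.0, psi=0.0):
    """Discretized real part of the Gabor filter (Equation 1).

    size  : mask is (2*size+1) x (2*size+1) pixels
    theta : orientation, lam : wavelength, sigma : Gaussian std,
    gamma : ellipticity, psi : phase offset.
    """
    y, x = np.mgrid[-size:size + 1, -size:size + 1]
    x_r = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r**2 + (gamma * y_r)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_r / lam + psi)
    return envelope * carrier

# Parameter values used in the EXPERIMENTATION section:
# gamma = 1, sigma = 12, lambda = 40, psi = 0.
k = gabor_kernel(size=15, theta=0.0, lam=40.0, sigma=12.0)
print(k.shape)
```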
MORPHOLOGICAL OPERATIONS ON BINARY IMAGES
Binary images consist of only black (gray level 0) and white (gray level 1) pixels, i.e. the image has only two intensity levels. Blobs in binary images are collections of white pixels that are connected through a neighborhood; a lone white pixel, with no neighbors, is also considered a blob. Under 8-connectivity, the middle pixel with value x5 in Figure 2 has all eight surrounding pixels {x1, x2, x3, x4, x6, x7, x8, x9} as its neighbors. Under 4-connectivity, only the pixels {x2, x4, x6, x8} are considered neighbors of the middle pixel x5; corner connectivity is not considered. In this paper, only 8-connectivity is used (the default option for the morphological functions of MATLAB's Image Processing Toolbox).
A number of morphological operators are available; of these, dilation and erosion are used in this project. Image morphology is frequently used for image enhancement. In a general sense, dilation adds white pixels to an image based on a given criterion, whereas erosion removes white pixels from an image.
Dilation
Any black pixel that has a white pixel in its 8-neighborhood is turned into a white pixel (i.e. its value is set to 1). This operation is depicted in Figure 4(a), where the image on the left is the original and the one on the right is the dilated version. For example, for the lone white pixel in the top right corner of the original image in Figure 4(a), the dilation operation makes all the pixels in its 8-neighborhood white. The bwmorph function of MATLAB is used with the dilation option [19].

(a)

(b)

FIGURE 4 Two morphological operations: dilation (a) and erosion (b), performed with 8-pixel connectivity on an image with two blobs [18]
Erosion
Erosion removes any white pixel that has at least one black pixel in its 8-neighborhood (see Figure 4(b)). Only the four white pixels shown in the image after erosion have 8-neighborhoods consisting entirely of white pixels. Once again, MATLAB's bwmorph is employed, this time with the erosion option [19].
Two other operations are also performed on the images. Boundary extraction extracts the boundary of every blob in the image based on 8-connectivity; the MATLAB function bwboundaries is used for this purpose. Hole filling replaces the black pixels fully enclosed inside a blob with white ones; for this, the imfill function of MATLAB is employed.
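For illustration, the two basic operations can be sketched in plain NumPy (a stand-in for MATLAB's bwmorph, not the authors' code):

```python
import numpy as np

def dilate8(img):
    """Binary dilation, 8-connectivity: a black pixel turns white if
    any of its 8 neighbours (or itself) is white."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out |= padded[1 + di:1 + di + img.shape[0],
                          1 + dj:1 + dj + img.shape[1]]
    return out

def erode8(img):
    """Binary erosion, 8-connectivity: a white pixel survives only if
    its entire 8-neighbourhood is white (the border counts as black)."""
    padded = np.pad(img, 1)
    out = np.ones_like(img)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out &= padded[1 + di:1 + di + img.shape[0],
                          1 + dj:1 + dj + img.shape[1]]
    return out

img = np.zeros((5, 5), dtype=int)
img[2, 2] = 1                      # a lone white pixel (a one-pixel blob)
print(dilate8(img).sum())          # the pixel plus its 8 neighbours
```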
EXPERIMENTATION
Image Acquisition
The 2D and 3D images used in this study were obtained by Dynatest UK Ltd. using the LCMS system on a road section in the UK. The images are post-processed by the Pavemetrics software, and the resulting outputs are 2D (i.e. intensity) and 3D (i.e. range) images, both of which are 8-bit grayscale. The road area imaged by the system is 10 x 4.16 m2. Both images have a resolution of 2500 x 1040 pixels; hence, one pixel images an area of 4 x 4 mm2 on the road. Nine hundred 2D-3D image pairs were supplied by Dynatest.
Texture Edge Detection: Laws’ Texture Energy Masks
The target here is to segment road areas from the paint, lines, etc. found in the images, and then to look for raveled areas, preferably using the range images. The paint and lines on the road surface do not present a considerable height change (the applied thickness is usually under a millimeter); hence, they do not produce an adequate depth change for the 3D range imager to detect them. Unless the surface roughness of the painted areas can be exploited, the use of 3D images for region segmentation is very limited. However, the range resolution of current pavement imagers is only of the order of sub-millimeters, so a reliable detection of surface roughness at an accuracy level sufficient to characterize, and thereby distinguish, surfaces is not feasible. Nevertheless, 2D intensity images usually contain adequate visual markers, such as surface texture, to differentiate surfaces. Therefore, in this work, 2D images are preferred over 3D images for region segmentation.
As seen in Figure 5(a), the intensity images are complex, with varying image intensities within any given type of surface. The texture within a given surface also changes greatly. Due to these variations, image segmentation techniques such as thresholding and edge detection were not found to give effective results. To segment the road regions from the non-road areas, the 16 masks based on Laws’ texture energy measures are used to convolve the 2D intensity image.
It was found experimentally that the average of the images resulting from convolution with the masks S5L5 and L5S5 gives the best texture segmentation for these images. The corresponding texture edge image is shown in Figure 5(b). It can be seen from Figure 5(b) that many of the pseudo edges, which are due to intensity variations, are eliminated, especially when compared with Figure 5(a). However, many micro edges are still detected, preventing a clear segmentation between road and other regions. To eliminate these edges, a thresholding operation is performed. The thresholded, and hence binary, image is shown in Figure 5(c). The segmentation is not perfect, as many edges remain within the letters ‘S’, ‘L’ and ‘W’ written on the road. In addition, the lane markings at the right- and leftmost regions of the image have line segments missing.

(a) (b) (c)

FIGURE 5 A raw pavement intensity image (a), its Laws’ texture edge image (b) and the thresholded edge image (c)
Texture Edge Enhancement: The Gabor Filter
Twelve Gabor filters at orientations {0°, 15°, 30°, 45°, 60°, 75°, 90°, 105°, 120°, 135°, 150°, 165°} are used to convolve the image given in Figure 5(c). The other parameters used for the Gabor filters are γ = 1, σ = 12, λ = 40 and ψ = 0. All 12 images resulting from the filters are then thresholded. Figures 6(a) and 6(b) show the thresholded images for the filter orientations, θ, of 0° and 90°, respectively.
As seen in Figure 6(a), the orientation θ = 0° picks up all the horizontal edges in Figure 5(c). From Figure 6(b), it can be seen that the 90° Gabor filter detects all the vertical edges. In addition to orientation-based edge detection, Gabor filters also have a smearing effect and hence fill the gaps (non-detections) in the line segments of Figure 5(c). The main disadvantage is that the line segments tend to get thicker after processing with Gabor filters. However, this drawback works to the advantage of this project, as it leads to a more conservative detection, i.e. under-detection, of the road surface.

(a) (b) (c)

FIGURE 6 Gabor response for θ=0° (a), Gabor response for θ=90° (b) and the overall response of the 12-filter Gabor bank (c)
The 12 thresholded images resulting from the 12 Gabor filters are then combined into a single image using the logical OR operation (i.e. if any of the images has a white pixel, the corresponding pixel in the aggregated image is set to white). The aggregated image is shown in Figure 6(c).
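The aggregation step can be sketched as follows (NumPy; the three small binary responses here are hypothetical stand-ins for the 12 thresholded Gabor images):

```python
import numpy as np

# Hypothetical stand-ins for the thresholded Gabor responses: three
# small binary images, each detecting edges at one orientation.
responses = [np.zeros((4, 4), dtype=bool) for _ in range(3)]
responses[0][0, :] = True      # a horizontal edge, as picked up at theta = 0
responses[1][:, 0] = True      # a vertical edge, as picked up at theta = 90
responses[2][2, 2] = True      # an oblique fragment

# Logical OR across the stack: a pixel is white in the aggregate if it
# is white in any individual response.
aggregate = np.logical_or.reduce(responses)
print(aggregate.sum())
```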
Figure 6(a) shows most of the desired feature detections, but the outer contour of the letter ‘S’ is not fully detected. To make sure that all the contours are properly closed, the dilation operation is performed 8 times and then erosion is also carried out 8 times. This closes off the contours while, in general, maintaining the line thicknesses. Then a hole filling operation is applied. The resulting image, with a better contour describing ‘S’, is shown in Figure 7(a). In Figure 7(a), the linear lane marking near the top right corner appears broken due to its small thickness; in reality, however, there is a ‘U’-shaped contiguous contour starting from and finishing at the top edge of the image. Figure 7(b) shows the image obtained by the logical OR of Figures 6(c) and 7(a). For the unclosed contours that touch the image edges at two or more places, a ‘closing’ scheme is implemented so that parts of the image edge complete the contour. Then, the holes in all closed contours (i.e. blobs) are filled with the imfill function. The resulting image is shown in Figure 7(c). The foregoing process completes the region segmentation operation.

(a) (b) (c)

FIGURE 7 Image after morphological operations (a), after the logical OR (b) and the final detection (c)
RESULTS
Figures 8(a) and 8(b) show the detected boundaries embedded in the original intensity and range images, respectively. The sand patch test generally covers an area of 250 x 250 mm2 [20]; this amounts to a square area of 62 x 62 pixels in the image, as one pixel covers 4 x 4 mm2. Figure 8(c) highlights the square tiles that lie on the road surface and will be analyzed for raveling.

Raveling is proposed to be quantified by the amount of range variation found within a window of 62 x 62 pixels, with the standard deviation of the range values inside a window used as the measure. However, if the standard deviation within a window is considered alone as a measure of raveling, it may lead to erroneous results. For example, within the window highlighted in red in Figure 8(c), the surface profile of the road changes drastically.

(a) (b) (c)

FIGURE 8 Detection embedded in the original intensity (a) and range (b) images, and the tiles to be analyzed for raveling (c)

The range data for this window are plotted in 3D in Figure 9(a), and it can be seen that the road profile is a low-frequency variation. Hence, only the high-frequency variations of the surface must be considered for raveling detection. Here, the 3D window data are filtered with a Gaussian filter mask of 5 x 5 pixels with a standard deviation (σ) of 3.9.
A Gaussian is given by the following function, with A being a constant:

g(x, y) = A exp( -(x^2 + y^2) / (2σ^2) )      (2)

The designed 5 x 5 Gaussian mask is obtained by sampling this function at integer pixel offsets from the center and normalizing so that the mask weights sum to one.

When the 3D data of the window are convolved with this mask, the result is the low-pass data shown in Figure 9(b), in which all high-frequency components of the original data have been removed. The low-pass data of Figure 9(b) are then subtracted from the original data of Figure 9(a) to reveal the high-frequency changes to which raveling contributes; this is, in effect, high-pass filtered data. The high-frequency variations are shown in Figure 9(c).

(a)
(b)
(c)

FIGURE 9 The range data of a 62 x 62 pixel window (a), its low-pass filtered form (b) and its high-pass filtered form (c)

The standard deviation of the high-pass filtered data is 11.66, expressed in the 8-bit range [0, 255] (as the range image is 8-bit); these are called units hereafter. If the conversion factor between the range image data and the physical range value, in meters, were given, the standard deviation could also be expressed in meters. In this paper, the standard deviation value above is used to quantify the amount of raveling in that particular window.
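The window measure described above can be sketched end to end (NumPy; the synthetic 62 x 62 window, built from a smooth profile plus random texture, is purely illustrative and not real LCMS data):

```python
import numpy as np

def gaussian_mask(size=5, sigma=3.9):
    """Normalized size x size Gaussian mask (see Equation 2)."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return g / g.sum()

def lowpass(window, mask):
    """Convolve with edge padding so the output keeps the window size."""
    r = mask.shape[0] // 2
    p = np.pad(window, r, mode="edge")
    out = np.empty_like(window, dtype=float)
    for i in range(window.shape[0]):
        for j in range(window.shape[1]):
            out[i, j] = np.sum(p[i:i + 2*r + 1, j:j + 2*r + 1] * mask)
    return out

# Synthetic 62x62 range window: a smooth, low-frequency road profile
# plus random high-frequency texture standing in for raveling.
rng = np.random.default_rng(0)
profile = 40.0 * np.sin(np.linspace(0, np.pi, 62))[None, :] * np.ones((62, 1))
window = profile + rng.normal(0.0, 8.0, (62, 62))

# High-pass = original minus low-pass; its standard deviation is the
# raveling measure used in this paper.
highpass = window - lowpass(window, gaussian_mask())
severity = highpass.std()
```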
The above algorithm was tested on the 900 2D-3D image pairs; the maximum and minimum values of the standard deviation of the high-pass filtered window data, among all range images, are 17.4 and 2.9 units, respectively. The raveling condition for a window is proposed to be classified as good, average or bad, denoted by highlighting the window in green, orange or red, respectively. Here, windows with a standard deviation of less than 5 units are characterized as good, those that fall in the range of 5-10 units are branded average, and those with standard deviations greater than 10 units are considered badly raveled. Figures 10 and 11 show two intensity and range image pairs; the rightmost images in Figures 10 and 11 are the range images showing the detections.
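The classification rule above can be written directly as:

```python
# Thresholds from the text: std < 5 units -> good (green),
# 5-10 units -> average (orange), > 10 units -> bad (red).
def classify(severity):
    if severity < 5.0:
        return "good"
    if severity <= 10.0:
        return "average"
    return "bad"

print(classify(2.9), classify(11.66), classify(17.4))
```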

(a) (b) (c)

FIGURE 10 Intensity (a), range (b) and detected range (c) images

(a) (b) (c)

FIGURE 11 Intensity (a), range (b) and detected range (c) images
DISCUSSION
In the supplied image set of 900 pairs, the road surface is correctly detected in all images. Very rarely, small islands of road surface are characterized as non-road, e.g. those surrounded by blue contours immediately above and to the left of the longer arrow in Figure 11(c). This usually happens when the local intensity of the road surface, in the 2D image, is at its highest. However, across all 900 images, these false negatives are found to be extremely rare (much less than 1%).
The range images supplied by Dynatest are preprocessed with proprietary software that comes with the hardware; hence, the actual depth values are not available for the range images. If a conversion factor were available to translate the range image intensities to depth values in meters, raveling could be expressed either in volumetric form (i.e. cubic meters per 250 x 250 mm2 window) or as a roughness-like value in meters. Furthermore, with such a conversion factor, a benchmarking process could be devised so that the raveling measure proposed here can be correlated to standard procedures for detecting raveling, e.g. visual surveys. Sand patch tests are proposed to be performed in conjunction with automated imaging on known road sections for comparison.

The region segmentation algorithm takes 10.45 seconds to process an intensity image of 1040 x 2500 pixels on a PC with an Intel Core i7 processor (2.10 GHz) and 8 GB of RAM. The 3D processing for raveling measurement takes approximately 0.65 seconds. It should be noted that the algorithm is coded in MATLAB, which is not a computationally efficient way of executing an algorithm. Rather, MATLAB is known for its rapid prototyping abilities: the availability of many built-in function libraries helps develop an algorithm quickly, allowing engineers and researchers to try different algorithms and find the best method. A final, implementable version of the algorithm would be coded in a language such as C++. In addition, the code would be executed not on PCs but on hardware such as FPGAs, which allow the massively parallel processing usually required for image processing. Both translations, to implementation-friendly software and to high-processing-power hardware, are bound to decrease the processing times by at least an order of magnitude.

CONCLUSIONS
This paper provides a methodology, for the first time, to detect and quantify raveling from 2D and 3D images that are captured synchronously. Using an array of image processing methods, it has been shown that road surfaces can be accurately segmented from painted areas on the road; both texture detection and texture-edge enhancement methods are employed for this purpose. Additionally, digital signal processing methods are used to process the obtained 3D range images and measure raveling. The algorithm shows a very good ability to distinguish pavement areas that may contain raveling and, further, to quantify raveling within those potential areas on the road.

REFERENCES

1. Mathavan, S., M. Rahman, and K. Kamal. Application of Texture Analysis and the Kohonen Map for the Region Segmentation of Pavement Images for Crack Detection. In Transportation Research Record: Journal of the Transportation Research Board, No. 2304, Transportation Research Board of the National Academies, Washington, D.C., 2012, pp. 150-157.

2. Bursanescu, L., M. Bursanescu, M. Hamdi, A. Lardigue, and D. Paiement. Three-Dimensional Infrared Laser Vision System for Road Surface Features Analysis. Proc. SPIE, Vol. 4430, 2001, pp. 801-808.

3. Si-Jie, Y., and S. Sukumar. 3D Reconstruction of Road Surfaces Using an Integrated Multi-Sensory Approach. Optics and Lasers in Engineering, Vol. 45, No. 7, 2007, pp. 808-818.

4. Ahmed, M. F. M., and C. T. Haas. The Potential of Low Cost Close Range Photogrammetry Towards Unified Automatic Pavement Distress Surveying. CD-ROM. Transportation Research Board of the National Academies, Washington, D.C., 2010.

5. Jong-Suk, Y., M. Sagong, and J. S. Lee. Feature Extraction of Concrete Tunnel Liner from 3D Laser Scanning Data. NDT&E International, 2009, pp. 97-105.

6. Wang, K. C. P., and W. Gong. Automated Real-Time Pavement Crack Detection and Classification. Final Report, NCHRP IDEA 20-30/IDEA 111, 2002; Burtch, R. LIDAR Principles and Applications. 2002 IMAGIN Conference, Traverse City, MI, 2002.

7. Wang, K. C. P. Automated Survey of Pavement Distress Based on 2D and 3D Laser Images. MBTC DOT 3023, 2011.

8. Laurent, J., D. Lefebvre, and E. Samson. Development of a New 3D Transverse Laser Profiling System for the Automatic Measurement of Road Cracks. Proc., 6th International Symposium on Pavement Surface Characteristics, 2008.

9. Wang, K. C. P., Z. Hou, and S. Williams. Precision Test of Cracking Surveys with the Automated Distress Analyzer. Journal of Transportation Engineering, ASCE, 2010.

10. Li, Q., M. Yao, X. Yao, and B. Xu. A Real-Time 3D Scanning System for Pavement Distortion Inspection. Measurement Science and Technology, Vol. 21, No. 1, 2010, p. 015702.

11. Sun, X., J. Huang, W. Liu, and M. Xu. Pavement Crack Characteristic Detection Based on Sparse Representation. EURASIP Journal on Advances in Signal Processing, 2012, Article 191.

12. FHWA, U.S. Department of Transportation. Long-Term Pavement Performance (LTPP). Standard Data Release 26.0, DVD, 2012.

13. Kringos, N., and A. Scarpas. Raveling of Asphaltic Mixes Due to Water Damage: Computational Identification of Controlling Parameters. In Transportation Research Record: Journal of the Transportation Research Board, No. 1929, Transportation Research Board of the National Academies, Washington, D.C., 2005.

14. Tuceryan, M., and A. K. Jain. Texture Analysis. In The Handbook of Pattern Recognition and Computer Vision (2nd ed.), C. H. Chen, L. F. Pau, and P. S. P. Wang (eds.), World Scientific Publishing Co., 1998, pp. 207-248.

15. Lemaitre, G., and M. Rodojevic. Texture Segmentation: Co-occurrence Matrix and Laws Texture Mask Methods. Technical report, 2010. Available at http://g.lemaitre58.free.fr/pdf/vibot/scene_segmentation_interpretation/cooccurencelaw.pdf

16. Helli, B., and M. E. Moghaddam. A Text-Independent Persian Writer Identification Based on Feature Relation Graph (FRG). Pattern Recognition, Vol. 43, 2010, pp. 2199-2209.

17. Salman, M., S. Mathavan, K. Kamal, and M. Rahman. Pavement Crack Detection Using the Gabor Filter. 16th International IEEE Annual Conference on Intelligent Transportation Systems, October 6-9, 2013, The Hague, The Netherlands.

18. what-when-how, 2013. Accessed on 26th July 2013 from http://what-when-how.com/introduction-to-video-and-image-processing/morphology-introduction-to-video-and-image-processing-part-1/

19. MATLAB. Accessed on 6th July 2013 from http://www.mathworks.co.uk/help/images/ref/bwmorph.html

20. BS EN 13036-1: Road and Airfield Surface Characteristics - Test Methods - Measurement of Pavement Surface Macrotexture Using a Volumetric Patch Technique. BSI.
