
Advancements in Railroad Track Inspection Using Machine-Vision Technology

J. Riley Edwards, Lecturer

John M. Hart*, Senior Research Engineer

Steven Sawadisavi, Graduate Research Assistant

Esther Resendiz*, Graduate Research Assistant

Christopher P. L. Barkan, Professor

Narendra Ahuja*, Professor

Railroad Engineering Program

Department of Civil and Environmental Engineering

Newmark Civil Engineering Laboratory

205 N. Mathews Ave.

University of Illinois at Urbana-Champaign

Urbana, IL 61801

*Computer Vision and Robotics Laboratory

Beckman Institute for Advanced Science and Technology

405 N. Mathews Ave.

University of Illinois at Urbana-Champaign

Urbana, IL 61801

ABSTRACT

Railroad engineering practices and Federal Railroad Administration (FRA) regulations require

track to be inspected for physical defects at specified intervals, which may be as often as twice

per week. Currently, most of these inspections are manual and are conducted visually by

railroad track inspectors. Inspections include detecting defects relating to the ties, fasteners, rail,

special trackwork and ballast section. Enhancements to the current manual inspection process

are possible using advanced technologies such as machine vision, which consists of recording

digital images of track elements of interest and analyzing them using custom algorithms to

identify defects or their symptoms. Based on analysis of FRA accident data, discussion with

railroad track engineering experts and consultation with Association of American Railroads

(AAR) researchers, this project focuses on using machine vision to detect irregularities and

defects in cut spikes, rail anchors, turnout components and the crib ballast.

Because inspection data will be stored digitally, comparative and trend analyses of track

component condition are possible through data mining and Information Technology (IT)

procedures. These capabilities will facilitate longer-term predictive assessment of the health of

the track system and its components, and lead to more informed preventive maintenance

strategies and a greater understanding of track structure degradation and failure modes. Prior to

the final development of a functional machine-vision track inspection system, digital image

capture, image enhancement and assisted automation can provide interim improvements to

current track inspection practices. This paper will address the development of machine-vision

algorithms as well as interim solutions to improve the effectiveness and efficiency of track

inspections.

INTRODUCTION

Railroads conduct regular inspections of their track in order to maintain safe and efficient

operation. In addition to internal railroad inspection procedures, periodic track inspections are

required under Federal Railroad Administration (FRA) regulations. Although essential, track

inspection requires both financial and human resources and consumes track capacity. The

objective of the research described in this paper is to investigate the feasibility of using machine-

vision technology to make track inspection more efficient, effective and objective. In addition,

we discuss interim approaches to automated track inspection that will potentially lead to greater

inspection effectiveness and efficiency prior to full machine-vision system development and

implementation. These interim solutions include video capture using vehicle-mounted cameras,

image enhancement using image processing software and assisted automation using machine-

vision algorithms.

The primary focus of this research is inspection of Class I railroad mainline and siding

tracks, as these generally experience the highest traffic densities. Heavy traffic necessitates

frequent inspection and more stringent maintenance requirements, but leaves railroads with

less time in which to accomplish them. Additionally, the cost associated with removing track from service

due to inspections or the repair of defects is most pronounced on these lines. This makes them

the most likely locations for cost-effective investment in new, more efficient, but potentially

more capital-intensive inspection technology. Although the primary focus of this research is the

inspection of high-density track, algorithms are also being tested on lower track classes to ensure

robustness to component variability and condition.

REVIEW OF RELATED INSPECTION TECHNOLOGIES

Prior to commencing work on this project, we conducted a survey of existing technologies for

non-destructive testing of railroad track and track components (1, 2). This survey provided

insight regarding which tasks were best suited for vision-based inspection and were not already

under development or in use within the railroad industry. This survey encompassed well-

established inspection technologies (e.g. ultrasonic rail flaw and geometry car testing) and more

experimental technologies currently under development (e.g. inertial accelerometers).

Out of the technologies we surveyed, machine vision is the most applicable inspection

technology to our present scope of work given the manual, visual nature of current track

inspections. Machine vision systems are currently in use or under development for a variety of

railroad inspection tasks, both wayside and mobile, including inspection of joint bars, surface

cracks in the rail, rail profile, gauge, intermodal loading efficiency and railcar structural

components and safety appliances (1, 2, 3, 4, 5, 6, 7). The University of Illinois at Urbana-

Champaign (UIUC) has been involved in multiple railroad machine-vision research projects

sponsored by the Association of American Railroads, BNSF Railway, NEXTRANS Center and

the Transportation Research Board (TRB) High-Speed Rail IDEA Program (3, 4, 5, 6, 7).

Machine-vision research projects at UIUC have been an interdisciplinary collaboration between

the Railroad Engineering Program in the Department of Civil and Environmental Engineering

and the Computer Vision and Robotics Laboratory at the Beckman Institute for Advanced

Science and Technology.

RAILWAY MACHINE-VISION INSPECTION SYSTEMS

Railway applications of machine-vision technology that were previously developed or are under

development at UIUC have three main elements. The first element is the data acquisition

system, in which digital cameras are used to obtain images or video in the visible or infrared

spectrum. The next component is the image analysis system, where the images or videos are

processed using machine-vision algorithms that identify specific items of interest and assess the

condition of the detected items. The final component is the data analysis system, which

compares and verifies whether the condition of track features or mechanical components

complies with parameters specified by the individual railroad or the FRA. The data analysis

component may also involve a combination of IT and data mining techniques to provide a

holistic approach to infrastructure management through improved planned maintenance

procedures.

The advantages of machine vision include greater objectivity and consistency compared

to manual, visual inspection, and the ability to record and organize large quantities of visual data

in a quantitative format. Gathering and organizing quantitative data facilitates analysis of the

health of track or vehicle components over both time and space. These features, combined with

data archiving and recall capabilities, provide powerful trending capabilities in addition to the

enhanced inspection capability itself. Some disadvantages of machine vision include difficulties

in coping with unusual or unforeseen circumstances (e.g. unique track components) and the need

to control and augment variable outdoor lighting conditions typical of the railroad environment.

DETERMINATION OF INSPECTION TASKS

Prioritization Based on FRA Accident Statistics

In order to prioritize the tasks that are most conducive to machine-vision inspection, the FRA

Accident Database was analyzed to identify the most frequent causes of track-related railroad

accidents from 2001-2005 (1, 2, 8). The three most frequent causes of track-related accidents are

broken rail, wide gauge, and cross-level. However, several existing technologies are already

being used by railroads to detect these defects, such as geometry and rail flaw detector (RFD)

cars. The defects that contribute to the next three most common causes, buckled track, switch

points, and other turnout defects, are currently detected primarily through manual, visual

inspection. Therefore, these defects may be amenable to machine-vision inspection and were selected

for further consideration (1, 2).

Initial Inspection Tasks

In the initial selection of inspection tasks and components to be investigated and developed in

this project, we took into account the lack of available technology, severity of defects, and their

potential contribution to accident prevention. We then sought and reviewed input from AAR

researchers, Class I railroad track-engineering and maintenance managers, track inspectors, and

other experts in track-related research. The result of this process was the selection of the

following track inspection tasks:

1. Raised, missing or inappropriate patterns of cut spikes

2. Displaced, missing, or inappropriate patterns of rail anchors

3. Condition of switch and frog points and other turnout components

4. Insufficient level of crib ballast

Beyond the current scope of work listed above, track components and inspection tasks that have

been identified for future machine-vision research include measuring tie spacing, identifying

insulated joint slippage, monitoring wayside rail lubricator performance, recording rail

manufacturing markings, determining thermite weld integrity and monitoring track circuit bond

wire condition.

DATA COLLECTION

Determination of Camera Views and Development of the Virtual Track Model

The most important consideration in the development of the image acquisition system was

camera placement. Cameras must be placed at optimal views to permit the machine-vision

algorithms to consistently and reliably detect the components of interest. Securing time to test

the image acquisition system on active track during the developmental phases proved difficult, so

a virtual track model (VTM) was created for use in the initial determination and selection of

camera views. The virtual cameras were then adjusted until they enabled viewing of the relevant

track components and allowed assessment of the conditions of interest that were conducive to

algorithm development.

The VTM used American Railway Engineering and Maintenance-of-Way Association

(AREMA) recommended practices for the design of track components to model FRA class 4 and

5 track and included sections of both tangent and curved track (9). AAR clearance plates were

incorporated into the VTM to ensure camera placements were in feasible locations (10). Defects

were simulated with the VTM to understand how different camera views influenced the ability of

the algorithms to locate and identify them. The VTM camera view experimentation and field

experimentation resulted in the selection of two initial camera views: the lateral view and the

over-the-rail view (1, 2).

Preliminary Data Collection

For our initial development of machine-vision algorithms, we generated synthetic images from

the VTM and used handheld cameras to capture images at selected camera locations. These

images provided insight into challenges such as lighting and the degree of variation in

component design and allowed us to test the initial algorithm’s ability to identify specific track

components.

Beyond handheld cameras, a method to capture video that would be representative of

future cameras attached to a track inspection vehicle was needed for further development of the

machine-vision inspection algorithms. For this reason, and to minimize the use of high-rail

vehicles and track capacity, an experimental test vehicle was designed.

Video Track Cart

To expedite project development and provide a test platform for finalizing the image acquisition

system and data collection methods, a video track cart (VTC) was developed for collecting

continuous video of track sections of interest (Figure 1). This cart is used on low-density track,

where track occupancy time is easier to obtain. This additional time allows for field adjustments

of a variety of parameters, such as shutter speed, camera views and lighting. The VTC is also

easily modifiable, making changes to the image acquisition system quick to implement.

Video Track Cart Hardware Selection

There are three major parameters that must be considered in determining the camera

specifications: frame rate, shutter speed, and image resolution. The values necessary for frame

rate and shutter speed are dependent on the speed at which videos are recorded. For initial

algorithm development at low speeds, a camera capable of 30 frames per second, with a shutter speed

of at least 1/500 second and an image resolution of 640x480 pixels, is sufficient.
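
As a rough illustration of how these parameters interact, the sketch below computes the motion blur during one exposure and the along-track distance covered between frames; the 3 mph collection speed is an assumed value for illustration, not one reported here.

```python
# Illustrative sketch only: relates survey speed to motion blur and the
# along-track advance between frames. The 3 mph speed is an assumption.

def motion_blur_mm(speed_mph: float, exposure_s: float) -> float:
    """Distance traveled while the shutter is open, in millimeters."""
    speed_mm_per_s = speed_mph * 1609.34 * 1000.0 / 3600.0
    return speed_mm_per_s * exposure_s

def advance_per_frame_mm(speed_mph: float, fps: float) -> float:
    """Along-track distance covered between consecutive frames, in millimeters."""
    speed_mm_per_s = speed_mph * 1609.34 * 1000.0 / 3600.0
    return speed_mm_per_s / fps

if __name__ == "__main__":
    speed = 3.0  # mph, assumed walking pace for the video track cart
    print(f"Blur during a 1/500 s exposure: {motion_blur_mm(speed, 1 / 500):.1f} mm")
    print(f"Advance between frames at 30 fps: {advance_per_frame_mm(speed, 30):.1f} mm")
```

Both quantities grow linearly with speed, which is why vehicle-mounted collection at track speed would require faster shutters and higher frame rates than the values quoted above.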

Video data collection uses a Dragonfly®2 DR2-COL camera. This camera has an image

resolution of 640x480 pixels and can record video at up to 60 frames per second (fps) with

shutter speeds as fast as 1/100,000 second (11). Each camera is mounted on a geared tripod

head that allows fine adjustment in three dimensions (Figure 1B).

Several factors were considered during camera lens selection, such as distance of the

camera from the subject, depth of field requirements and lens distortion. This led to the selection

of a 6 mm lens. This lens has a depth of field suitable for use in the over-the-rail view and does

not induce a significant level of distortion. It also allows the cameras to be placed at a distance

practical for track vehicle mounting.

Many factors were considered in the selection of the laptop computer used for recording

data in the field to ensure adequate performance while moving along track in the outdoor

environment. The most important factor was the ability to record video without dropping (or

missing) video frames. For hard drive access, a high-performance, single-level-cell, solid-state

hard drive is the optimal choice as it provides speed, reliability, low power consumption and low

access times in a moving environment. Standard hard drives are not designed for high-vibration

environments and also suffer from reduced performance when fragmented. A high-contrast

screen was also necessary for viewing the screen in bright outdoor environments, and a degree of

ruggedness is required to reliably use the equipment in a variety of field conditions. The laptop

selected has Microsoft Windows XP Professional SP3, 4 GB of RAM, an Intel® Core™ 2 Duo

P9600 2.66 GHz processor, and an Ultra Performance Solid State Drive. The initial field data

collection required a mobile power source to provide power to the cameras and the laptop. A

Mega-Tron SRM-27 marine deep-cycle battery was selected; it can steadily power electronics

drawing up to 10 amps for 4-5 hours.

Lighting Challenges

As with many vision inspection systems, optimizing lighting is a challenge. Our initial lighting

approach will use stage lighting, which has been used in a previous project involving inspection

of the undercarriage of freight and passenger railcars (5). The lighting system must provide

illumination that allows the cameras to consistently capture images with even illumination. This

is complicated by the variable outdoor environment where there are changing light intensities,

moving shadows and weather conditions such as rain and snow that alter how light is reflected.

Moreover, the use of area-scan cameras and views that record large areas of track adds further

challenges, as these large areas must be lighted uniformly and with sufficient intensity.

ALGORITHM DEVELOPMENT

Inspection Guidelines

A thorough understanding of the specific track components and defects associated with them

was gained prior to developing the machine-vision algorithms. We used the FRA Track Safety

Standards, Class I track engineering standards, and the Track Safety and Condition Index (TSCI)

to determine guidelines used for inspection procedures (12, 13, 14, 15). Example guidelines for

the algorithms include the height of a spike above the base of rail that would constitute a raised

spike and how many spikes need to be raised before they would be considered critical. Similar

considerations were developed for inspection of anchors and crib ballast levels, taking into

consideration the expertise of track inspectors, researchers, and track maintenance managers at

Class I railroads. We are currently developing detailed guidelines for machine-vision turnout

inspection as a part of the next phase of this research.

Track Inspection Algorithms

Algorithm development to date has focused on spike and anchor detection and crib ballast level

recognition. Our algorithms can be summarized as a coarse-to-fine approach for detecting

objects. We first locate the track components with little variability in appearance and predictable

locations (e.g. the rail), and then locate objects that are subject to high appearance variability

(e.g. spike heads and anchors) in subsequent stages. This increases the robustness of component

detection by restricting the search space for the smaller components, whose appearances can

vary.

To further increase robustness to changing environmental conditions and changes in

object appearance (e.g. differing material types or corrosion), we have selected features that do

not rely on a specific spatial description, but rather a configuration of simple, local features that

are known to be valuable in classification. The simple, local features that we use include edges

and Gabor features. Edges are frequently used to detect objects in machine vision since object

boundaries often generate sharp changes in brightness (16). Image gradients (edges) should be

consistent among differing ties and rails, but unanticipated track obstacles could create

unanticipated edges, causing difficulty for the algorithms. For this reason, texture information

from the ballast, tie, and steel was incorporated into the edge-based algorithm to improve its

robustness. This approach relied on texture classification using Gabor filters, which produced

low-level texture features. Gabor filtering is used to summarize two-dimensional spatial

frequencies, and this can be used in texture discrimination (16).
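
As a minimal sketch of this kind of low-level feature extraction (not the implementation used in this project; the kernel size, orientations, and wavelengths are illustrative assumptions), a bank of Gabor filters can be applied to a grayscale patch and the mean response magnitudes collected into a texture feature vector, for example with OpenCV:

```python
# Minimal sketch: Gabor-filter texture features for one grayscale patch.
# Filter parameters below are illustrative assumptions.
import cv2
import numpy as np

def gabor_features(patch,
                   thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                   wavelengths=(4.0, 8.0)):
    """Return the mean filter-response magnitude per orientation/wavelength."""
    patch = np.float32(patch)
    responses = []
    for theta in thetas:
        for lambd in wavelengths:
            kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                        lambd=lambd, gamma=0.5, psi=0.0)
            filtered = cv2.filter2D(patch, cv2.CV_32F, kernel)
            responses.append(float(np.mean(np.abs(filtered))))
    return np.array(responses)  # one feature vector per texture patch
```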

Image Decomposition

Since we operate using a coarse-to-fine approach, we decompose the image beginning with the

rail, which is the largest, most consistently detectable object. Then we reliably differentiate

ballast texture from non-ballast texture using Gabor filtering. Labeled examples of ballast, tie,

and steel textures were created using previously stored images (Figure 2). When presented with

a previously unseen image, texture patches are extracted and classified as either “ballast” or

“non-ballast”. Though the “non-ballast” area may contain edge noise due to occluding objects

(e.g. leaves or ballast on ties), this method robustly provides a region that is centered on the tie.

Though the boundaries are inexact, in all test images, the area is reliably isolated for subsequent

processing.
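
A minimal sketch of this coarse labeling step is shown below, assuming the gabor_features() helper from the previous sketch (or any other patch-level feature extractor) and a small set of hand-labeled exemplar patches; the nearest-mean classifier is a simple stand-in, not necessarily the classifier used in this project.

```python
# Minimal sketch: tile an image into patches and label each patch "ballast"
# or "non-ballast" by nearest mean feature vector. Names are illustrative.
import numpy as np

def train_texture_means(labeled_patches, feature_fn):
    """labeled_patches: dict mapping label -> list of grayscale patches."""
    return {label: np.mean([feature_fn(p) for p in patches], axis=0)
            for label, patches in labeled_patches.items()}

def classify_patches(image, patch_size, means, feature_fn):
    """Return a dict mapping (row, col) of each tile to its texture label."""
    h, w = image.shape[:2]
    labels = {}
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            feats = feature_fn(image[y:y + patch_size, x:x + patch_size])
            labels[(y, x)] = min(means,
                                 key=lambda k: np.linalg.norm(feats - means[k]))
    return labels  # e.g. {(0, 0): "ballast", (0, 32): "non-ballast", ...}
```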

After isolating the foreground portion of the tie, an accurate boundary for both the tie

plate and tie must be obtained to determine if an anchor has moved from its proper position.

Also, when the tie plate is delineated, prior knowledge of the dimensions of the tie plate can be

compared to the image to calibrate its scale for defect measurement estimations.

Texture information is used to ensure that the rail-to-tie plate edge separates two steel

textures, and that the tie plate-to-tie edge separates steel and tie textures. After delineation of the

two horizontal edges, the vertical edges are found since they are reliably detected only if their

search space is restricted. A restricted search space is needed because shadows, occlusions, and

other unforeseen anomalies will cause unanticipated edges and shapes. The vertical tie edge is

the dominant gradient that exists on both sides of the tie plate-to-tie edge, while the vertical tie

plate edge is the dominant gradient that exists only above the tie plate-to-tie edge. These edges

delineate the tie plate area.
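
The sketch below illustrates the restricted gradient search in its simplest form, under stated assumptions (a grayscale image and row bounds supplied by the preceding texture step): within a band of rows known to contain the boundary, the dominant horizontal edge is taken as the row with the strongest vertical intensity gradient.

```python
# Minimal sketch: find the strongest horizontal edge inside a restricted
# band of rows. Row bounds are assumed to come from the texture stage.
import cv2
import numpy as np

def dominant_horizontal_edge(gray, row_lo, row_hi):
    """Return the row index in [row_lo, row_hi) with the strongest
    horizontal edge (i.e. the largest vertical intensity gradient)."""
    gy = cv2.Sobel(np.float32(gray), cv2.CV_32F, 0, 1, ksize=3)
    row_strength = np.abs(gy[row_lo:row_hi]).sum(axis=1)
    return row_lo + int(np.argmax(row_strength))
```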

Spike and Anchor Inspection

The spikes are located with spatial correlation using a previously developed template (1, 2). The

search area for the spikes is limited after the tie plate and rail are both delineated given that

spikes will only be found in certain positions. Rail anchors, when installed correctly, have more

distinctive visual characteristics when viewed from the gauge side than from the field side;

therefore, our anchor inspection primarily uses this view (1, 2). The anchors are identified and

the distances to both the tie and the tie plate are measured. The search area for the anchors is

restricted to where the rail meets the ballast on either side of the tie plate. Anchors are detected

by identifying their parallel edges. Color intensity information is also included to ensure that

parallel edges have similar intensity distributions (1, 2).
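
A minimal sketch of spatial-correlation template matching within a restricted search region is given below, assuming OpenCV; the template image, region coordinates, and score threshold are illustrative assumptions rather than values from this project.

```python
# Minimal sketch: normalized cross-correlation of a spike-head template
# inside a restricted region of interest. Threshold is an assumption.
import cv2

def find_spike_head(gray, template, roi, threshold=0.6):
    """roi = (x, y, w, h). Return (x, y, score) of the best match or None."""
    x0, y0, w, h = roi
    search = gray[y0:y0 + h, x0:x0 + w]
    result = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None  # no match above the assumed confidence cutoff
    return (x0 + max_loc[0], y0 + max_loc[1], max_val)
```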

Panorama Generation and Use in Track Component Defect Detection

The accuracy of track component detection algorithms increases when detection is performed on

panoramic images rather than on single frames. Algorithms generate panoramas from video data

by selecting vertical strips from the center of the frames, thereby minimizing the effect of

distortions and perspective differences, which become more severe as the distance between the

component and the center of the image increases (Figure 3A).

After the video is acquired, the first step performed by the algorithm is velocity

estimation, which detects the distance the camera moved between consecutive frames. This

velocity information is used to determine the size of the strip required from each frame to

construct accurate panoramas at a variety of data collection speeds. These strips are then

appended to each other to create the final panoramic image. Once the panoramas are generated,

the algorithms detect appropriate search areas, and then recognize each of the components and

detect defects within the search areas (Figure 3B).
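
The sketch below illustrates the general idea under simplifying assumptions (grayscale frames, purely horizontal image motion as in the lateral view, and phase correlation standing in for whatever velocity-estimation method is actually used): the estimated inter-frame displacement sets the width of the center strip taken from each frame, and the strips are concatenated into a panorama.

```python
# Minimal sketch: velocity-scaled center strips concatenated into a panorama.
# Assumes horizontal motion; direction handling and sub-pixel blending omitted.
import cv2
import numpy as np

def build_panorama(frames):
    """frames: list of equally sized grayscale frames from consecutive video."""
    strips = []
    for prev, curr in zip(frames, frames[1:]):
        (dx, _dy), _resp = cv2.phaseCorrelate(np.float32(prev), np.float32(curr))
        shift = max(1, int(round(abs(dx))))          # pixels moved between frames
        center = curr.shape[1] // 2
        strips.append(curr[:, center - shift // 2: center - shift // 2 + shift])
    return np.hstack(strips) if strips else None
```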

Since many defects detected by this system cannot be classified as critical defects

without knowing the health of the surrounding track, this system must be able to compare

detected defects with others in the nearby area to determine defect severity. Panoramic images

will provide a method to easily view the inspected sections of track, allowing a human operator

to confirm the severity of defects detected by the system when the results of the algorithm are

questionable.

Video Processing - Over-the-rail View

The over-the-rail camera view can be used in conjunction with the lateral view to assist in the

identification of spikes and tie plate holes and aid in the estimation of the distance a spike may

be raised above the base of rail. For processing this view, we apply the same basic approach as

is used for the lateral view. The algorithm first estimates the tie locations, then delineates the

base of the rail, identifies the location of the ties and tie plates and finds the spike heads and tie

plate holes. However, instead of creating a panorama, we take the individual frames and process

them independently. This approach is fundamentally the same as panorama generation and has a

similar accuracy because inspection items in this view appear in the center of the images,

minimizing the effect of lens distortion. However, with this approach the results are compiled

and superimposed onto the original video.

Tie Location Estimation

The estimation of tie location is performed using the texture procedures described earlier for

discriminating ballast textures from non-ballast textures. This produces the images seen in

Figure 4, showing the detection of texture patches of the respective types: white patches

representing ballast and black patches representing all non-ballast areas. Then a "tie filter",

consisting of a rectangular strip of non-ballast patches, is used to isolate the tie in the black and

white patched image, thus delineating the tie location. The frames containing a tie in the

foreground produce the maximum response values to the “tie filter”, and this occurs periodically

with respect to time as the video frames are processed.
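
A minimal sketch of such a tie filter is given below; the kernel dimensions (in patches) are illustrative assumptions, and the binary patch map is the output of the ballast versus non-ballast labeling described earlier.

```python
# Minimal sketch: correlate a rectangular block of "non-ballast" labels
# against the binary patch map and take the peak response as the tie location.
import numpy as np
from scipy.signal import correlate2d

def tie_filter_response(patch_map, tie_rows=3, tie_cols=12):
    """patch_map: 2-D array with 1 for non-ballast patches, 0 for ballast.
    Returns (peak_response, (row, col)) of the best-matching tie position."""
    kernel = np.ones((tie_rows, tie_cols), dtype=float)
    response = correlate2d(patch_map.astype(float), kernel, mode="valid")
    idx = np.unravel_index(np.argmax(response), response.shape)
    return response[idx], idx
```

Tracking this peak response over successive frames gives the periodic signal described above, with maxima occurring when a tie is centered in the foreground.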

Location and Delineation of the Rail

The rail is identified in the video by finding an area of low intensity difference from frame to

frame due to the consistency of the appearance of the rail compared to the changing ballast and

ties. This step coarsely estimates the location of the rail in the center of the image. Using this

estimation, each frame is further processed by finding the image gradients near the boundary of

the identified area to refine the location of the edge at the base of the rail (Figure 5A).
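
The sketch below illustrates the coarse step under stated assumptions (equally sized grayscale frames and an assumed quantile threshold): image columns covered by the rail change little from frame to frame, so a low temporal standard deviation marks the candidate rail band, which is then refined with gradients as described above.

```python
# Minimal sketch: flag image columns with low frame-to-frame variation as
# candidate rail columns. The 0.2 quantile threshold is an assumption.
import numpy as np

def estimate_rail_columns(frames, quantile=0.2):
    """frames: list of equally sized grayscale frames.
    Returns a boolean mask over columns, True where the rail likely lies."""
    stack = np.stack([np.float32(f) for f in frames])   # shape (t, h, w)
    temporal_std = stack.std(axis=0).mean(axis=0)        # one score per column
    return temporal_std <= np.quantile(temporal_std, quantile)
```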

Delineating the Ties

After the location of the rail edge has been determined, lower level texture processing can be

used to find the ties and tie plates. Using the methods described in the Image Decomposition

Section, the ballast and tie texture patches can be classified. Next, the ballast-to-tie edges are

found using this texture information and the tie-to-tie plate edges are then found using their

strong gradients (Figure 5B). With the area of the tie plate restricted by the previous steps, the

spike head, tie plate holes and potential defects can be found by using gradient templates in the

search area (Figure 5C).

In the video processing method, knowledge about defects in the surrounding track can be

traced by numbering ties as the algorithm isolates them, and storing the respective defect

information. In addition, defect details can be superimposed on the video frames and the video

reassembled so it can easily be viewed, interpreted, and confirmed by a human operator. As

these two methods are refined, they will be integrated for verifying the defects and increasing the

accuracy of measurement estimates.
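
A minimal sketch of this bookkeeping and overlay step is shown below, assuming OpenCV; the record fields and drawing parameters are illustrative assumptions rather than this project's data format.

```python
# Minimal sketch: log defects against numbered ties and draw the results on
# the frame so a human operator can review the reassembled video.
import cv2

defect_log = []  # e.g. {"tie": 17, "type": "raised spike", "bbox": (x, y, w, h)}

def annotate_frame(frame, tie_number, defects):
    """defects: list of (label, (x, y, w, h)) tuples in frame coordinates."""
    for label, (x, y, w, h) in defects:
        defect_log.append({"tie": tie_number, "type": label, "bbox": (x, y, w, h)})
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.putText(frame, f"tie {tie_number}: {label}", (x, max(0, y - 5)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    return frame
```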

DISCUSSION OF INTERIM ENHANCEMENTS TO MANUAL INSPECTIONS

Improvements to current manual track inspection procedures can be gained through interim

hardware and software enhancements prior to full machine-vision algorithm development and

implementation. These improvements can provide a short-term return on investment (ROI) to

the railroad industry prior to full system implementation, and may also provide valuable

feedback to users and developers prior to full system development. Benefits can accrue as a

result of either hardware or software enhancements or some combination of the two. Hardware

enhancements include the addition of digital cameras, lighting systems and data storage capacity

whereas software enhancements include the development of a wide range of machine-vision

inspection algorithms. These may range from comparatively simple algorithms that highlight

certain components for an operator to inspect visually, as a form of assisted automation, to

algorithms robust enough to detect defects without the need for operator intervention.

Hardware Improvements

Hardware improvements consist primarily of video collection and data storage enhancements

and aim to improve inspection effectiveness and efficiency by providing inspectors with digital

image data that can be viewed in a remote location more conducive to image analysis by humans

(e.g. climate controlled and reduced probability of slips, trips and falls). Other than the image

capture software needed to acquire the data, hardware solutions are stand-alone and do not

require additional software in the form of machine-vision algorithms. Data storage and recall

provide operators with the capability to compare the current health of track and railcar

components to previous images and data.

Interim enhancements to current manual inspections can be accomplished through video

data capture and enhancement, which can occur at two levels. At the most basic level, video

capture and enhancement consist of duplication of current human-vision inspection tasks through

digital image capture and subsequent human inspection of images. Secondly, with additional

hardware, video data can be enhanced through controlled lighting (e.g. illumination of track

components) and improved image quality (e.g. higher resolution cameras, contrast adjustment

and other forms of image manipulation).

Like current manual inspections, vision enhancement requires humans to differentiate

between defective and compliant components, but it is possible for humans to inspect video data

in a controlled environment where they are presented with images that have been captured in a

uniform manner and displayed to the operator consistently, which facilitates rapid

inspection. Panoramic images would be one potential format for the images used in enhanced

manual inspection.

Software Improvements

At the most basic level, software can be used to enhance the quality of digital images to improve

the ease at which an operator can see the components of interest. Beyond image manipulation

software, machine-vision algorithms can be developed to assist human inspectors through a

variety of levels of sophistication. These options range from highlighting the most critical

components and elements within digital images for operator inspection to complete image

analysis without the need for human assistance. Highlighting critical components is an effective

means of enhancing the efficiency of operator-assisted inspection, and can be used in

conjunction with panorama generation.

Using machine-vision techniques, large quantities of data can be captured over time and

space, allowing comparison of defect trends at varying spatial levels (e.g. railway subdivision,

division or corporation). Large amounts of data, when used effectively, present the railroads

with the ability to compare the health of various parts of their network. This comparative

analysis leads to more effective planning of preventive maintenance and improved allocation of

their capitalized maintenance budget.
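
As an illustration of the kind of comparative analysis this enables (the field names and records below are purely hypothetical, not a railroad data model), archived defect detections could be aggregated by subdivision and defect type:

```python
# Illustrative sketch with made-up records: count detected defects by
# subdivision and type to compare the health of different parts of a network.
import pandas as pd

records = pd.DataFrame([
    {"subdivision": "A", "milepost": 12.4, "defect": "raised spike"},
    {"subdivision": "A", "milepost": 13.1, "defect": "missing anchor"},
    {"subdivision": "B", "milepost": 44.7, "defect": "raised spike"},
])

trend = (records.groupby(["subdivision", "defect"])
                .size()
                .unstack(fill_value=0))  # defect counts per subdivision, by type
print(trend)
```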

Railway Technology Implementation

The approach to technological implementation in the railway industry is important to understand

in terms of the fixed cost of implementation as well as the unit cost of operation. Machine-

vision systems require significant capital expenditures, but have the potential of providing lower

unit inspection costs than current manual inspections, depending on the number of units

inspected.

The feasibility of technology implementation in the railroad industry depends on two

primary qualities. The first is whether the system is stand-alone or requires

implementation across the entire network (17). Secondly, the ease of implementation depends

on whether the technology can be applied to individual units or must be implemented as an

integrated system. Machine-vision track inspection requires individual units to be implemented

at the network level. Less-than-network-level implementation is possible for machine-vision track

inspection systems, but the potential for holistic infrastructure management and other efficiencies

is reduced.

DISCUSSION AND CONCLUSIONS

The inspection of most railroad track components is currently conducted using manual, visual

inspections. These are labor intensive and lack the ability to easily record and compare data

needed for trend analysis. Moreover, they are subject to variability and subjectivity in different

inspectors’ abilities and interpretation of what they see. Also, it is impractical to manually

catalog the condition of such a large number of track components, so it is difficult to develop a

quantitative understanding of exactly how the non-critical or symptomatic defects may

contribute to the occurrence of critical defects or other track problems. In addition to machine-

vision inspection of track, there are interim approaches to automated track inspection that will

potentially lead to greater inspection effectiveness and efficiency prior to full system

development and implementation. These interim solutions include digital video capture using

vehicle-mounted cameras, digital image enhancement using image processing software and

assisted automation using machine-vision algorithms.

The goal of this machine-vision system for track inspection is to supplement current

visual inspection methods, allowing consistent, objective inspection of a large number of track

components. Based on analysis of railroad accident statistics and input from subject-matter

experts, we are focusing our initial research and development efforts on inspection of cut spikes,

rail anchors, turnout components and crib ballast.

Our algorithms use edge detection and texture information to provide a robust means of

detecting rail, ties and tie plates, which narrows the search area. Within this restricted area,

knowledge of probable component locations allows the algorithms to determine the presence of

spikes and rail anchors even when there are variations in the appearance of the components.

Future work involves refinement of the algorithms to improve the reliability of spike and

anchor detection. Anomalous objects from unforeseen circumstances, such as leaves, could

interfere with this initial texture classification phase. For this reason, we will experiment with

several machine-learning methods to perform our texture classification in the presence of

anomalies. We will experiment with Gaussian Mixture Models, which are a weighted

combination of Gaussian probability distributions, to enforce a confidence-level on our texture

classifications. We will also experiment with refining our classifiers based on the appearance of

the specific type of track under inspection to further improve robustness to anomalous

component appearances. The training on an initial set of videos will be done using labeled

texture data and labeled components provided by a user through a process known as supervised

learning. In the field, without the benefit of user-labeled data and user interaction, we will

experiment with updating our model based on the appearance of the components that we

detect (i.e. unsupervised learning). For example, as ties are detected, Gabor features can

dynamically update our tie texture model. This way, the feature values for the ties, tie plates and

other components are accurate for that particular piece of track, since deteriorating conditions

may affect several ties in the same location. To ensure that we are not accepting erroneous

updates, we will only update our model after subsequent components have been successfully

identified.
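
A minimal sketch of how a Gaussian Mixture Model could attach such a confidence level is shown below, assuming scikit-learn and patch-level feature vectors like the Gabor features described earlier; the component count and rejection threshold are illustrative assumptions, not values from this project.

```python
# Minimal sketch: fit one GMM per texture class and use the log-likelihood
# of the best-fitting model as a confidence score, rejecting anomalies.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_texture_models(features_by_label, n_components=3):
    """features_by_label: dict mapping label -> (n_samples, n_features) array."""
    return {label: GaussianMixture(n_components=n_components).fit(feats)
            for label, feats in features_by_label.items()}

def classify_with_confidence(models, feature_vec, reject_log_lik=-50.0):
    """Return (label, log_likelihood); flag the patch as anomalous when even
    the best model explains it poorly (threshold is an assumption)."""
    scores = {label: float(m.score_samples(feature_vec.reshape(1, -1))[0])
              for label, m in models.items()}
    best = max(scores, key=scores.get)
    if scores[best] < reject_log_lik:
        return "anomaly", scores[best]
    return best, scores[best]
```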

Work is continuing on processing the over-the-rail view and merging results from this

view with the lateral view to increase the accuracy of the identified defects and the estimated

measurements. Also, an evaluation will be made between the panoramic and video processing

approaches to provide documented information on the pros and cons of each approach to help

guide industrial vendors who will follow up on the commercial implementation of this work. In

addition, we will conduct lighting experiments to achieve better results from the algorithms in

adverse lighting conditions. Once the algorithms and lighting for inspection of spikes and

anchors have been refined using the video track cart, we intend to begin working on adapting the

system for testing on a high-rail vehicle or RFD car.

ACKNOWLEDGEMENTS

This project is sponsored by a grant from the Association of American Railroads (AAR)

Technology Scanning Program and funding from the NEXTRANS Center. The authors are

grateful to David Davis of the Transportation Technology Center, Inc. and the AAR Technology

Scanning committee for their assistance and technical guidance. Additional material, technical

input and support were provided by the Federal Railroad Administration, BNSF Railway, CN,

Norfolk Southern Corporation and Union Pacific Railroad. We also thank Larry Milhon, Mikel

D. Rodriguez Sullivan, Donald R. Uzarski, David P. White, Gary Carr, Ali Tajaddini, Matthew

D. Keller, Hank Lees, David Ferryman, and David Connell for their advice and assistance. J.

Riley Edwards has been supported in part by grants to the UIUC Railroad Engineering Program

from CN, CSX, Hanson Professional Services, Norfolk Southern, and the George Krambles

Transportation Scholarship Fund.

REFERENCES

(1) Sawadisavi, S., Edwards, J.R., Hart, J.M., Resendiz, E., Barkan, C.P.L., Ahuja, N.,

“Machine-Vision Inspection of Railroad Track,” 2008 AREMA Conference Proceedings,

American Railway Engineering and Maintenance-of-Way Association (AREMA), Landover, Maryland.

(2) Sawadisavi, S., J. Edwards, E. Resendiz, J.M. Hart, C.P.L Barkan, and N. Ahuja.

“Machine-Vision Inspection of Railroad Track.” Proceedings of the TRB 88th Annual

Meeting, Washington, DC, January 2009.

(3) Hart, J. M., N. Ahuja, C. P. L. Barkan and D. D. Davis. A Machine Vision System for

Monitoring Railcar Health: Preliminary Results. Technology Digest: TD-04-008,

Association of American Railroads, Pueblo, Colorado, 2004.

(4) Edwards, J. R., J. M. Hart, S. Todorovic, C. P. L. Barkan, N. Ahuja, Z. Chua, N. Kocher

and J. Zeman. Development of Machine Vision Technology for Railcar Safety Appliance

Inspection. In Proceedings of the International Heavy Haul Conference Specialist

Technical Session - High Tech in Heavy Haul, Kiruna, Sweden, 2007, pp. 745-752.

(5) Hart, J. M., E. Resendiz, B. Freid, S. Sawadisavi, C. P. L. Barkan, N. Ahuja. Machine

Vision Using Multi-Spectral Imaging for Undercarriage Inspection of Railroad Equipment.

In Proceedings of the 8th World Congress on Railway Research, Seoul, Korea, 2008

(6) Lai, Y. -C., C. P. L. Barkan, J. Drapa, N. Ahuja, J. M. Hart, P. J. Narayanan, C. V.

Jawahar, A. Kumar, L. R. Milhon and M. P. Stehly. Machine vision analysis of the energy

efficiency of intermodal freight trains. Journal of Rail and Rapid Transit 221, 2007, pp.

353-364.

(7) Schlake, B.W., J.R. Edwards, J.M. Hart, C.P.L Barkan, S. Todorovic, and N. Ahuja.

“Automated Inspection of Railcar Underbody Structural Components Using Machine

Vision Technology.” Proceedings of the TRB 88th Annual Meeting, Washington, DC,

January 2009.

(8) Federal Railroad Administration. Federal Railroad Administration Office of Safety

Analysis: 3.03 – Download Accident Data, 2006.

http://safetydata.fra.dot.gov/officeofsafety/publicsite/on_the_fly_download.aspx?itemno=3.03.

Accessed June 2006.

(9) AREMA Manual for Railway Engineering, Vol. 1. American Railway Engineering and

Maintenance-of-Way Association, Landover, Maryland, 2007.

(10) The Official Railway Equipment Register, Vol. 120, No. 3., R. E. R. Publishing

Corporation, East Windsor, New Jersey, 2005.

(11) Point Grey Research, 2009. Dragonfly2.

<http://www.ptgrey.com/products/dragonfly2/dragonfly2.pdf> Accessed Feb 2009.

(12) BNSF Railway. Engineering Instructions: Field Manual. Kansas City, Kansas, April 1,

2007.

(13) CN. Engineering Track Standards, March 2007.

(14) Federal Railroad Administration. Code of Federal Regulations, Title 49, Volume 4: Part

213 - Track Safety Standards, 2007.

http://frwebgate3.access.gpo.gov/cgi-bin/PDFgate.cgi?WAISdocID=263821734+5+1+0&WAISaction=retrieve.

Accessed July 2008.

(15) Uzarski, D. R. Development of a Track Structure Condition Index (TSCI). Ph.D. thesis,

University of Illinois at Urbana-Champaign, Urbana, Illinois, 1991.

(16) Forsyth, D. A. and J. Ponce. Computer Vision: A Modern Approach. Prentice Hall, Upper

Saddle River, New Jersey, 2003.

(17) Gallamore, R. E., Technology in Perspective. Trains Magazine, Kalmbach Publishing

Co., Waukesha, WI, November 2008.

A: Video Track Cart in Use on Low-Density Track

B: Current Camera Mounts for Over-the-rail View (left) and Lateral View (right)

Figure 1: Development of Video Track Cart for Preliminary Video Capture

Figure 2: Template Images of Specific Ballast, Rail, and Tie Textures Used for Image
Processing

A: Panorama Generation Using Velocity Estimation for Accurate Panoramas

B: Tie, Tie Plate, Anchor and Spike Delineation on Test Panorama

Figure 3: Panorama Generation for Track Component Detection


Figure 4: Texture Classified Image in Which White Squares Represent Ballast and Black
Squares Represent Non-ballast Areas (A) and Tie Location Found Using Tie Template (B)

A: Delineation of the Base of the Rail from the Over-the-rail View Using the Strong Gradient
Produced by the Edges of the Rail in the Foreground Against the Sections Containing Ballast and
Ties in the Background

B: Delineated Tie and Tie Plate Location Estimations

C: Component Identification Using Gradient Templates Inside the Restricted Search Area

Figure 5: Over-the-Rail Image Capture and Analysis
