Walking Robots
A Climbing Robot for Cleaning Glass Surface with Motion Planning and Visual Sensing
Dong Sun, Jian Zhu and Shiu Kit Tso
Department of Manufacturing Engineering and Engineering Management City University of Hong Kong Hong Kong
1. Introduction
There is an increasing demand for the development of service robots to relieve human beings from hazardous jobs, such as cleaning the glass surfaces of skyscrapers, fire rescue, and the inspection of high pipes and walls (Balaguer et al., 2000; La Rosa et al., 2002; Zhu et al., 2002). Fig. 1 shows our recently developed climbing robotic system, aimed at cleaning the glass of high-rise buildings, which uses suction cups for adhering to the glass and a translational mechanism for moving. The robot can reach a maximum speed of 3 m/min and is able to cross cracks and obstacles less than 35 mm in height and 80 mm in width. Equipped with a flexible waist, the robot can easily adjust its orientation.
Motion planning plays an important role in enabling the robot to reach the target position and avoid or cross obstacles along the trajectory. There are numerous approaches in the literature to the motion planning of car-like or walking robots, such as Lamiraux and Laumond (2001), Boissonnat et al. (2000), Hasegawa et al. (2000), Chen et al. (1999), Hert and Lumelsky (1999), Egerstedt and Hu (2002), and Mosemann and Wahl (2001), to name a few. None of these algorithms is suitable for our cleaning robot application. This is because 1) the movement mechanism of the climbing robot uses translational suction cups, which differs from that of the other robots, and 2) the climbing robot works in a special environment, namely a glass wall that is divided into many rectangular sections by horizontal and vertical window frames, and the robot must cross the window frames to clean all sections. Because of these characteristics, a dedicated motion planning scheme is required for the climbing robot application. Another key issue for the success of the climbing robot lies in an effective sensing capability.
To do the cleaning work on the glass surface, the robot must know when to begin or stop the cleaning job, how to control its orientation (or moving direction), and how to cross the window frame. It is therefore necessary to measure the robot orientation, the distance between the robot and the window frame, and the distance between the robot and the dirty spot to be cleaned. Some recent work on the sensing systems of cleaning robots has been reported in the literature (Malis et al., 1999; Ma et al., 1999). Simoncelli et al. (2000) utilized ultrasonic sonar for automatic localization. Kurazume and Hirose (2000) proposed the so-called cooperative positioning system to repeat a searching process until the target position is reached. However, for a climbing robot on a glass surface, many traditional methodologies based on laser and ultrasonic sensors are not applicable to measuring the distance between the robot and the window frame. This is because the height of the window frame is usually low, so the light beam sent by the sensor can hardly reach the frame unless the beam is exactly parallel to the glass surface; due to inevitable installation errors, such exact parallelism is hard to guarantee.
Cameras are often used for robot localization, visual servoing, and vision guidance. Malis et al. (2000) used two cameras for 2-D and 2-1/2-D visual servoing and proposed a multi-camera visual servoing method. However, the use of several cameras may not be suitable for the climbing robot because 1) it is difficult to establish a large zone of intersection of the cameras' fields of view, and 2) additional cameras increase the load weight and thus affect the safety of the climbing robot. Based on eigenspace analysis, a single camera was used to locate a robot by orienting the camera in multiple directions from one position (Maeda et al., 1997). The drawback of this eigenspace method is that the measuring performance may vary as the environment changes. In addition, the depth information is lost, so the distance between the camera and the target object cannot be measured by a conventional single camera.
Source: Climbing & Walking Robots, Towards New Applications, Book edited by Houxiang Zhang, ISBN 978-3-902613-16-5, pp. 546, October 2007, Itech Education and Publishing, Vienna, Austria
Fig. 1. Main structure of the climbing robot: 1. Horizontal (X-) Cylinder; 2. Brush; 3. Visual Sensor; 4. Vertical (Y-) Cylinder; 5. Suction Cup; 6. Z-Cylinder; 7. Slave CPU; 8. Rotation Cylinder
This chapter presents our approaches to solving the above two challenging problems, motion planning and visual sensing, for a climbing glass-cleaning robot. Related work has been reported in Zhu et al. (2003) and Sun et al. (2004). The remainder of the chapter is organized as follows. In section 2, the structure of the climbing robot is introduced. In section 3, motion planning of the robot on a multi-frame glass wall is presented. In section 4, a visual sensing system composed of an omnidirectional CCD camera and two laser diodes is shown to enable the robot to measure its orientation and its distance to the window frame. Experiments are performed in section 5 to demonstrate the effectiveness of the proposed approach. Finally, conclusions of this work are given in section 6.
2. Structure of the Climbing Robot
Fig. 2. Main structure of the climbing robot
The developed climbing robot has a length of 1220 mm, a width of 1340 mm, a height of 370 mm, and a weight of 30 kg. The body of the robot mainly consists of two rodless cylinders perpendicular to each other, as shown in Fig. 1. The stroke of the horizontal (X-) cylinder is 400 mm, and that of the vertical (Y-) cylinder is 500 mm. By actuating these two cylinders alternately, the robot moves in the X or Y direction. As shown in Fig. 2, four short Z-cylinders are installed at the two ends of each rodless cylinder. By stretching out or drawing back the piston beams of these four cylinders, the robot can move in the Z direction. At the intersection of the two rodless cylinders, a rotational cylinder, named the robot's waist, is installed, by which the robot can rotate about the Z-pivot. Two specially designed brushes, each composed of a squeegee and a sucking system, are fixed at the two
ends of the horizontal cylinder. The squeegee cleans dirty spots on the glass surface using cleaning liquid supplied by the supporting vehicle, and the sucking system collects and returns the sewage to the supporting vehicle for recycling.
The robot employs suction pads for adhesion. Four suction pads, each with a diameter of 100 mm, are installed on each foot of the robot; the sixteen pads together provide a suction force sufficient to withstand a 15 kg payload. The robot uses a translational mechanism for movement. With the sticking-moving-sticking operating mode, the robot can complete a series of motions including moving, rotating, and crossing obstacles. The rotation of the robot is controlled by adjusting the rotation angle of the rotational cylinder; the robot rotates 1.6 degrees per step about the Z-pivot until reaching the desired posture.
The control system of the robot is based on a master and a slave computer. The master computer is located on the ground and is operated by the human operator directly, while the slave computer is embedded in the body of the robot. Using the feedback signals of the sensors installed on the robot, the slave computer controls the movement and posture of the robot to achieve automatic navigation on the glass surface. The master computer obtains information and identifies the status of the robot through the visual sensing system, together with communication between the master and slave computers over an RS-422 link. In case of emergency, the human operator can directly control the robot according to the actual situation.
The movement of the robot is achieved by alternately sucking and releasing the suction cups installed on the horizontal and vertical rodless cylinders. The slave computer sends ON or OFF signals to connect or disconnect the vacuum generator with the air source, thereby sucking or releasing the corresponding suction cups. Vacuum meters measure the relative vacuum of the suction cups to check the safety of the robot. If the vacuum degree of the suction cups is less than -40 kPa, an alarm signal is sent to the master computer. Fig. 3 illustrates the developed cleaning robot climbing on a commercial building, City Plaza, in Hong Kong.
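The sticking-moving-sticking gait described above can be sketched as a simple simulation. This is an illustrative model only, not the robot's actual slave-computer firmware; the class, its attribute names, and the simulated vacuum readings are all hypothetical:

```python
# Minimal simulation of one sticking-moving-sticking gait cycle along X.
# All names are illustrative; the real slave computer drives pneumatic
# valves over digital I/O and reads vacuum meters for safety.

VACUUM_ALARM_KPA = -40.0  # alarm threshold for relative vacuum (see text)

class SimRobot:
    def __init__(self):
        self.vertical_x = 0.0     # position of the vertical-cylinder cups (mm)
        self.horizontal_x = 0.0   # position of the horizontal-cylinder cups (mm)
        self.stuck = {"vertical": True, "horizontal": True}
        # simulated relative-vacuum readings per cup group, in kPa
        self.vacuum = {"vertical": -55.0, "horizontal": -55.0}

    def step_x(self, stroke_mm=400.0):
        """One full gait cycle along X (extend, then retract)."""
        # Phase 1: vertical-cylinder cups anchor; the X-cylinder extends,
        # carrying the released horizontal-cylinder cups forward.
        if self.vacuum["vertical"] > VACUUM_ALARM_KPA:
            raise RuntimeError("vacuum alarm: anchor cups not holding")
        self.stuck["horizontal"] = False
        self.horizontal_x += stroke_mm
        self.stuck["horizontal"] = True
        # Phase 2: roles swap; the X-cylinder retracts, pulling the body
        # and the vertical-cylinder cups up to the new anchor.
        self.stuck["vertical"] = False
        self.vertical_x += stroke_mm
        self.stuck["vertical"] = True
        return self.vertical_x

robot = SimRobot()
robot.step_x()
robot.step_x()   # two 400 mm strokes
```

The same two-phase swap, applied to the Y-cylinder, gives vertical movement.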
Fig. 3. The climbing robot in an on-site trial outside a commercial building (provided by BBC)
3. Motion Planning
For simplicity, in the motion planning the robot moves only horizontally and vertically to clean the whole glass surface. As an example, Fig. 4 illustrates a trajectory path of the robot within a rectangular glass section. The starting point is located at the upper-right side of the section. The robot moves toward the left side horizontally while cleaning the glass surface. On arriving at the boundary of the glass section, the robot moves down a distance l and then moves back to the right side horizontally. Note that the distance l is equal to the length lb of the brush cleaning path, where lb is determined by the size of the brush and the dimensions of the glass section. By repeating the above procedure, the whole glass section is cleaned, with the ending position located at the lower part of the section. During cleaning, the sewage may drip down and soil the glass surface below; therefore, the cleaning work should proceed from top to bottom.
Fig. 4. Robot motion path on the glass
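The serpentine path of Fig. 4 can be generated programmatically. The sketch below is a minimal illustration, assuming a rectangular section of known width and height and a brush cleaning length lb; the function name and coordinate convention (origin at the upper-left corner, y increasing downward) are our own:

```python
def coverage_path(width_mm, height_mm, lb_mm):
    """Generate waypoints for the serpentine cleaning path of Fig. 4.

    Starts at the upper-right corner, sweeps left, drops by the brush
    cleaning length lb, sweeps right, and repeats until the section is
    covered (top to bottom, so sewage does not soil cleaned glass).
    """
    waypoints = [(width_mm, 0.0)]             # starting point: upper right
    y = 0.0
    going_left = True
    while True:
        x = 0.0 if going_left else width_mm
        waypoints.append((x, y))              # cleaning sweep at height y
        if y + lb_mm >= height_mm:            # last band reaches the bottom
            break
        y += lb_mm                            # move down one band: l = lb
        waypoints.append((x, y))
        going_left = not going_left
    return waypoints

# hypothetical 2000 x 1500 mm section with a 500 mm brush cleaning length
path = coverage_path(2000, 1500, 500)
```

Each pair of consecutive waypoints is then executed with the sticking-moving-sticking translation described in section 2.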
3.1 Orientation Adjustment of the Robot
When the climbing robot moves along the desired trajectory, its orientation is affected by various disturbances, especially the gravitational force on the robot itself. To ensure successful trajectory following, the robot must be able to adjust its orientation automatically. The orientation of the robot relative to the window frame is measured by the visual sensor installed on the robot: two laser diodes project two laser lights onto the window frame so that an image of the frame can be acquired, and the orientation is calculated by analyzing and comparing the image coordinates of the two laser points. The technical details of this measurement are given in the next section.
Based on the orientation measured by the visual sensor, the controller actuates the rotational cylinder to adjust the orientation of the robot. The orientation is adjusted in the sticking-releasing-sticking mode, as shown in Fig. 5. Firstly, the suction cups installed on the vertical cylinder are released (see Fig. 5 (1)). Secondly, the rotational cylinder is actuated to rotate the vertical cylinder (see Fig. 5 (2)). Then the suction cups on the vertical cylinder are stuck again while the suction cups on the horizontal cylinder are released (see Fig. 5 (3)). Finally, the rotational cylinder is actuated again to rotate the horizontal cylinder (see Fig. 5 (4)).
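Since the waist rotates in fixed 1.6-degree increments (section 2), the orientation correction is naturally a stepped closed loop. A minimal sketch, with hypothetical callables standing in for the visual-sensor measurement and for one sticking-releasing-sticking rotation step:

```python
STEP_DEG = 1.6  # rotation per actuation of the rotational cylinder (section 2)

def adjust_orientation(measure_err_deg, rotate_step, tol_deg=0.8):
    """Rotate in 1.6-degree steps until the orientation error measured by
    the visual sensor (relative to the window frame) is within tol_deg.

    measure_err_deg: callable returning the current error in degrees
    rotate_step: callable taking +1 or -1, performing one sticking-
                 releasing-sticking rotation of the waist cylinder
    """
    steps = 0
    while abs(measure_err_deg()) > tol_deg:
        rotate_step(1 if measure_err_deg() < 0 else -1)
        steps += 1
    return steps

# simulated closed loop: the robot starts 5 degrees off the frame direction
err = [5.0]
n = adjust_orientation(
    lambda: err[0],
    lambda d: err.__setitem__(0, err[0] + d * STEP_DEG),
)
```

The tolerance of half a step (0.8 degrees) prevents the loop from oscillating around zero, since the actuator cannot resolve errors smaller than one increment.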
Fig. 5. Rotation of the robot
3.2 Crossing the Window Frame
Fig. 6. The glass wall divided into sections by horizontal and vertical window frames
The window frames separate the whole glass wall into several sections, as shown in Fig. 6. After cleaning one section, the robot must cross the vertical or horizontal window frame to enter the next. The two major steps for crossing a window frame are:
1) Measuring the distance between the robot and the window frame. The distance between the robot and the window frame, denoted by d in Fig. 6, indicates the position of the robot within each section. When this distance approaches zero, the robot prepares to cross the window frame. The visual sensor is employed to measure this distance. According to the theory of triangulation, a single laser diode is sufficient: the distance is computed from the pixel coordinate of the laser point, given the position and posture of the CCD camera relative to the laser diode, as shown in the next section.
Fig. 7. Ultrasonic sensors installed on the robot
2) Crossing the window frame. After measuring the distance to the window frame, the robot plans its motion to cross the frame. Four ultrasonic sensors are installed to help the robot detect whether the suction cups have crossed the window frame, as shown in Fig. 7, where D (= 300 mm) denotes the distance between the ultrasonic sensor and the boundary of the suction cups. While an ultrasonic sensor is passing over the window frame, it measures its height relative to the surface of the frame; after crossing, the measured height is that relative to the glass surface. Since the height measurements in the two cases are different, the robot knows when the ultrasonic sensor has crossed the window frame. Moreover, when the ultrasonic sensor lies over the window frame, the robot knows that the suction cups will follow it across the frame after moving a further distance D.
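The crossing detection amounts to spotting a step change in the ultrasonic height readings. A minimal sketch, with a hypothetical jump threshold and simulated readings (the frame is at most 35 mm high, per the introduction):

```python
def crossed_frame(heights_mm, jump_mm=20.0):
    """Return the reading index at which the ultrasonic sensor passes
    from over the window frame to over the glass surface.

    Over the frame the sensor measures its height above the frame's top;
    after crossing, it measures the larger height above the glass, so a
    step increase of roughly the frame height marks the crossing.
    Returns None if no crossing is detected.
    """
    for i in range(1, len(heights_mm)):
        if heights_mm[i] - heights_mm[i - 1] > jump_mm:
            return i
    return None

D_MM = 300  # sensor-to-suction-cup offset, Fig. 7

# simulated readings in mm: sensor over the frame, then over the glass
readings = [42, 41, 43, 42, 77, 78]
idx = crossed_frame(readings)
# once idx is found, the suction cups reach the frame after a further
# D_MM of travel, so the robot can schedule the Z-cylinder lift
```

The threshold must be smaller than the frame height but larger than the sensor's measurement noise; 20 mm is an assumed value for illustration.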
4. Visual Sensing
The visual sensing hardware consists of an orientable CCD camera (Sony EVI-D30(J)), two laser diodes, and a capture card. The two laser diodes, fixed on the camera as the "eyes" of the sensing system, project two laser lights to locate the window frame and generate two laser marks. The distances between the reference points and the window frame, and the orientation of the robot relative to the window frame, can be determined by analyzing the image pixel coordinates u and v of the two laser points in the image plane.
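Once each laser mark yields a distance to the frame (via the triangulation of section 4.1), the orientation follows from simple geometry. A minimal sketch, assuming the two laser diodes are separated by a known baseline; the function name and the baseline value are illustrative:

```python
import math

def robot_orientation_deg(d1_mm, d2_mm, baseline_mm):
    """Orientation of the robot relative to the window frame, from the
    distances d1 and d2 measured at the two laser marks, whose emitters
    are separated by a known baseline on the robot.

    If d1 == d2, the robot is parallel to the frame (0 degrees); a
    difference between the two distances tilts the frame in the image.
    """
    return math.degrees(math.atan2(d2_mm - d1_mm, baseline_mm))

# hypothetical numbers: both marks 500 mm away -> parallel to the frame
angle = robot_orientation_deg(500.0, 500.0, 200.0)
```

This is the comparison of the two laser points' image coordinates described in section 3.1, expressed in metric terms after triangulation.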
Fig. 8. Measurement of the robot position
4.1 Position Measurement
The theory of triangulation is utilized in the measurement. Fig. 8 (left view) illustrates how the robot position is measured by the visual sensing system. The launching point of the laser diode is represented by L. Point L1 is the laser mark on the window frame, and point L2 is the corresponding point of L1 in the image plane, where I denotes the center of the image. Taking the focal point of the camera as the origin, denoted by F, a base coordinate frame F-xbybzb is established, in which the xb axis is parallel to the glass surface and perpendicular to the window frame, the yb axis is parallel to both the window frame and the glass surface, and the zb axis is perpendicular to the glass surface (the xb-yb plane), as shown in Fig. 8. With the same origin F, a camera coordinate frame F-xyz is also established, in which the x axis is parallel to the line I-F, i.e., the main optical axis of the camera, the y axis coincides with the yb axis of the base coordinate frame, and the z axis is perpendicular to the x-y plane. Denote [x0, y0, z0]T and [x2, y2, z2]T as the coordinates of points L and L2 in the camera coordinate frame, u and v as the pixel coordinates in the image plane, and u0 and v0 as the pixel coordinates of the central point I. Since the focal point F lies on the line L1-L2, the line L1-L2 can be represented by
x / x2 = y / y2 = z / z2        (1)
Define α as the tilt angle of the laser diode relative to the camera (about the y axis, anti-clockwise) and β as the pan angle (about the z axis, anti-clockwise). Then the line L-L1 can be represented by

(x - x0) / tan α = (y - y0) / tan β = (z - z0) / 1        (2)
From (1) and (2), we can derive the coordinates of point L1 as a function of the pixel coordinate u, i.e.,

[x1, y1, z1]T = Ku + [x0, y0, z0]T
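Numerically, intersecting the camera ray (1) with the laser line (2) gives L1 directly. The sketch below illustrates the triangulation with hypothetical values; it assumes ideal calibration (the two lines meet exactly) and solves only the x and z components, which suffice for two intersecting lines:

```python
import math

def laser_point(laser_origin, tilt_rad, pan_rad, ray_dir):
    """Triangulate the laser mark L1 in the camera coordinate frame.

    Intersects the camera ray of (1), through the focal point F with
    direction ray_dir = [x2, y2, z2], with the laser line of (2) from
    laser_origin = [x0, y0, z0] along [tan(tilt), tan(pan), 1].
    A numerical sketch only: assumes the two lines meet exactly.
    """
    x0, y0, z0 = laser_origin
    x2, y2, z2 = ray_dir
    ta = math.tan(tilt_rad)
    # camera ray: t*[x2, y2, z2]; laser line: [x0, y0, z0] + s*[ta, tb, 1]
    # equating x and z components: t*x2 - s*ta = x0  and  t*z2 - s = z0
    det = -x2 + ta * z2
    t = (-x0 + ta * z0) / det
    return (t * x2, t * y2, t * z2)   # = L1 in the camera frame

# hypothetical numbers (mm): laser launched from (100, 0, 0) with
# tan(tilt) = 2 and no pan; ray direction recovered from the pixel of L2
L1 = laser_point((100.0, 0.0, 0.0), math.atan(2.0), 0.0, (3.0, 0.0, 1.0))
```

Projecting L1 onto the xb axis of the base frame then yields the distance d between the robot and the window frame used in section 3.2.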