Procedia CIRP 61 (2017) 281 – 286. doi:10.1016/j.procir.2016.11.197
The 24th CIRP Conference on Life Cycle Engineering
A process demonstration platform for product disassembly skills transfer
Supachai Vongbunyong*, Pakorn Vongseela, Jirad Sreerattana-aporn
Innovation and Advanced Manufacturing Research Group, Institute of Field Robotics, King Mongkut’s University of Technology Thonburi,
126 Pracha Uthit Rd., Bang Mod, Thung Khru, Bangkok 10140, Thailand
* Corresponding author. Tel.: +662-470-9339; fax: +662-470-9703. E-mail address: [email protected]
Abstract
Automated disassembly is challenging due to uncertainties and variations in the process with respect to the returned products. While a cognitive
robotic disassembly system can dismantle products in most cases, human assistance is required to resolve some physical uncertainties.
This article presents a platform on which the disassembly process can be demonstrated by skilled human operators. The process is represented
by the sequence, position, and orientation of the tools, which are extracted by using a vision system and markers on the tools. The knowledge at
the planning and operational levels will be transferred to the robot to achieve automated disassembly. An operation for removing the back cover
of an LCD screen with three disassembly tools is presented as a case study.
© 2017 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license
(http://creativecommons.org/licenses/by-nc-nd/4.0/).
Peer-review under responsibility of the scientific committee of the 24th CIRP Conference on Life Cycle Engineering.
Keywords: Learning by demonstration; Skill transfer; Disassembly; Human-robot collaboration; Robotics.
1. Introduction
1.1. Product disassembly overview

With respect to the circular economy, the treatment of End-of-Life (EOL) products is one of the key steps that recover the remaining embodied value from disposal. Product disassembly is one of the important steps for efficient EOL treatment. The disassembly process aims to separate products into their parts, components, or subassemblies. As a result, they can be treated in proper ways so that the remaining value is maximized. However, disassembly is currently a non-profitable and labor-intensive process. As a result, it has become economically infeasible and is ignored by most companies.

A number of attempts have been made to fully automate the disassembly process. The research was conducted with a number of approaches, e.g. vision-based systems [1-3], multi-sensorial systems [4, 5], and smart and integrated disassembly tools [6, 7]. However, due to the variations of returned products and the uncertainties in the disassembly process, fully automated disassembly has remained challenging and mostly economically infeasible.

1.2. Human-machine collaboration in disassembly

Overcoming the aforementioned problems, human-machine collaboration can occur in various ways [8]. The types of human involvement in assembly and disassembly are explained as follows:

(a) Semi-automatic or hybrid disassembly – optimized disassembly plans consist of tasks performed by machine and by human. Automatic workstations are equipped with sensor systems and automatic disassembly tools that perform disassembly tasks. Human operators employed at the manual workstations make decisions and perform more complicated tasks that are infeasible to carry out automatically [9, 10].

(b) Augmented Reality (AR) – much research has worked on using AR to help human experts interact with the actual products, where the optimal plans are generated by the system, for assembly [11, 12] and disassembly [13]. Human operators perform the process in this case.

(c) Tele-presence of human – human operators can control the robot to perform, on-line, tasks that are hazardous or difficult due to human physical limitations.
A number of related works were conducted for assembly, e.g. the operator’s movement is captured and translated into the program that controls a dual-arm robot in the assembly of heavy products [14].

(d) Machine learning (ML) – experts are responsible for teaching plans and operations to the system. This is similar to (c), but a learning capability is added; eventually, the system is able to perform the tasks autonomously. Regarding ML in disassembly, much research focuses on optimization at the strategic level, e.g. [15-17]. To the best of our knowledge, only the cognitive robotic disassembly system implemented ML at the operational level to improve the performance of the process [18]. Human operators assist the system in resolving problematic conditions by demonstrating the required actions.

1.3. Learning by demonstration in cognitive robotics

According to the framework of using cognitive robotics in product disassembly, learning and revision of the disassembly plans are the cognitive functions that help the system to improve the process performance, i.e. time consumption and degree of autonomy [18, 19]. In the learning and revision process, the cognitive robotic agent (CRA) primarily interacts with the knowledge base (KB) by writing or rewriting new product-specific knowledge in regard to the disassembly sequence plans and operation plans (see the framework in Fig. 1). This knowledge will be recalled when the foreseen models of the products are disassembled in the future.

During the disassembly process, the CRA performs two types of learning, namely by reasoning and by demonstration. (a) For learning by reasoning, the CRA autonomously obtains the knowledge by executing the general plans and taking the outcome into account. The knowledge can be obtained if the operations successfully remove the desired components. In case of failure, expert human operators may be called for assistance. This is where (b) learning by demonstration takes place: the operators demonstrate to the CRA the actions required to resolve the problems that caused the failure, e.g. pointing out non-detected components, showing how to remove fasteners, etc.

Fig. 1 Cognitive robotics disassembly framework

The failures are caused by unresolved conditions occurring at both the planning and operational levels, which are explained in [18]. This article focuses on the unresolved conditions at the operational level, which are:

• Inaccurate location of the components,
• Non-detectable fasteners,
• Inaccessible fasteners, and
• Improper methods of disestablishing fasteners.

The skill transfer platform proposed in this article is designed to be capable of handling these problems. It should be noted that, to prove the concept of the training platform, the disassembly process was completely demonstrated without involving other cognitive abilities. As a result, a sequence of disassembly operations that is repeatable by robots is expected to be obtained.

1.4. Organization of this paper

This paper is organized as follows: the methodology regarding learning by demonstration and skill transfer is presented in Section 2, the system design of the teaching platform in Section 3, the experiment in Section 4, and the discussion and conclusions in Sections 5 and 6, respectively.

2. Methodology

2.1. Complete system overview – skills and knowledge

The ability to disassemble the products involves skills and knowledge that can be extracted from the human expert’s demonstration. In the context of product disassembly, (a) skills are sets of primitive actions that are used to perform tasks, in this case disassembly operations, and (b) knowledge is the physical information related to the components or products to be disassembled; this information can be represented as parameters used in the disassembly operations. The skills are meant to be transferred in a way that makes the demonstrated operations adaptable to different physical configurations, e.g. robots with a different configuration [20, 21].

This project can be divided into two parts according to the sides: the training side and the playback side. (a) On the training side, a human expert demonstrates a complete disassembly process of a product. The disassembly operations are captured by using a vision system and other sensors. The abstract skills with the product-specific knowledge are transferred to the robot. (b) The robot operating on the playback side will be able to disassemble the same product by using these skills and knowledge. The robot is equipped with sensors and cognitive abilities, especially reasoning and execution monitoring [19], in order to overcome the physical uncertainties in the automated disassembly process. As a result, the system is expected to be able to adapt the skills to carry out the process with respect to the actual physical conditions of the products, which may vary from the conditions in the training scenario.

An overview of the complete system is shown in Fig. 2. However, it should be noted that in this article, only the system on the training side is presented.
Fig. 2 Skill transfer from human expert to robot with cognitive ability

2.2. Skill transfer system – the training side platform

Considering the training side of the skill transfer platform, the goal of this prototype system is to obtain “the disassembly sequence and the skills for disassembling a model of a product”. The disassembly sequence and the skills are considered at the planning level and the operational level, respectively. The planning level involves the disassembly sequence plans, while the operational level considers the physical actions that detach the components. To achieve this goal, the following information (see Table 1) is required to be recorded during the demonstration of the disassembly process.

Table 1 Information required for skill transfer

Level       Information                                              Recording method
Planning    • Type of main components                                Manual input
            • Type of connective components                          Manual input
            • Disassembly state changes (main component removed)    Manual input
            • Operation time                                         Automatic
Operation   • Location of the main components                        Vision system
            • Location of connective components                      Vision system
            • Target position & orientation of tool                  Vision system
            • Type of the tool                                       Automatic
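Table 1 effectively defines the schema of one demonstration record. Purely as an illustration (the paper does not prescribe a data format), such a record could be held in a structure like the following, where the type and field names are hypothetical:

// Hypothetical record of one demonstrated disassembly operation,
// mirroring the fields listed in Table 1 (names are illustrative only).
#include <array>
#include <string>

struct OperationRecord {
    // Planning level (mostly manual input via the GUI)
    std::string mainComponent;        // e.g. "back cover"
    std::string connectiveComponent;  // e.g. "screw", "snap-fit"
    bool        stateChanged;         // main component removed?
    double      operationTimeSec;     // recorded automatically via the pedal switch

    // Operational level (vision system / interactive toolbox)
    std::array<double, 3> componentPosition;  // location of the component in {W}
    std::array<double, 3> toolTipPosition;    // target tooltip position in {W}
    std::array<double, 3> toolOrientation;    // angles to the xW, yW, zW axes
    int                   toolType;           // #1, #2, #3 reported by the toolbox
};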
3. System design

From Table 1, the training platform is designed to capture the information from the human expert’s demonstration. The platform is designed in such a way that the experts can focus on performing the process with minimal distraction. The actions are automatically captured by the vision system and several sensors. However, a few manual inputs are required in order to avoid inaccuracy in the disassembly sequence plan, where errors can lead to further logical problems. Fig. 3 shows the physical setup of the platform, which consists of:

(a) Vision system – used to track the disassembly tools, i.e. screwdrivers. The disassembly tools are equipped with two color markers, at the center and at the tooltip. As a result, the position and orientation of the tools can be determined by processing the RGB-D dataset, which is obtained with an RGB-D sensor (MS Kinect [22]).

(b) Interactive toolbox – determines the usage status of the disassembly tools. Each tool is individually stored in its partition, which is equipped with a sensor, i.e. a mechanical switch. By reasoning, the current action and the type of tool being used can be known from the tool that is absent from the storage.

(c) User console – the user can manually give inputs to the system through a graphical user interface (GUI) and a pedal switch. The GUI is used for manually inputting the types and the removal state of the components. The pedal switch is used for signaling the system when the execution of an action starts and stops, while the user carries out the disassembly operation by hand. As a result, the duration of the actions can be recorded.

Fig. 3 Physical setup of the training platform: (a) disassembly training platform; (b) interactive toolbox (partition with contact switch)

3.1. System architecture

The main process with the GUI runs on a central computer (Windows 10 64-bit, Core i5-6300HQ 3.2 GHz, 8 GB memory, Visual C++ 2015). For the vision system, images are obtained from the RGB-D sensor and processed with the functions in OpenCV 3.1 [23] and Kinect SDK 1.8 [24]. For the inputs at the physical platform, all switches are connected to the computer via a microcontroller unit (MCU) (Arduino UNO). The workflow is shown in Fig. 4 and the dataflow in Fig. 5.
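For illustration only, a minimal Arduino-style firmware for the MCU described above might poll the toolbox partition switches and the pedal switch and report events over the serial link; the pin numbers, wiring logic, and message format below are assumptions rather than the authors’ implementation:

// Minimal MCU sketch (assumed pin mapping, wiring, and message format).
const int TOOL_PINS[3] = {2, 3, 4};  // contact switches of toolbox partitions #1-#3
const int PEDAL_PIN    = 5;          // pedal switch

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 3; ++i) pinMode(TOOL_PINS[i], INPUT_PULLUP);
  pinMode(PEDAL_PIN, INPUT_PULLUP);
}

void loop() {
  // Assumed wiring: a partition switch opens (reads HIGH) when its tool is removed,
  // so the absent tool identifies the tool type currently in use.
  for (int i = 0; i < 3; ++i) {
    if (digitalRead(TOOL_PINS[i]) == HIGH) {
      Serial.print("TOOL:");
      Serial.println(i + 1);
    }
  }
  // Assumed wiring: pressing the pedal pulls the input LOW to mark start/stop of an action.
  if (digitalRead(PEDAL_PIN) == LOW) {
    Serial.println("PEDAL:1");
  }
  delay(50);  // simple polling interval
}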
Fig. 4 System architecture
Fig. 5 Framework of dataflow

3.2. Vision system for tool tracking

The vision system calculates the position and orientation of the tooltip from the markers detected by the RGB-D sensor. The calculation is based on the coordinate system shown in Fig. 6. In principle, the markers, Marker 1 and Marker 2, are placed on the tool at PM1 and PM2, respectively. Once PM1 and PM2 have been located, the position of the tooltip PT can be obtained by vector operations. A transformation matrix is applied to these three points in order to transform the positions in the RGB-D image frame to the workspace coordinate frame. The orientation of the tool is represented in spherical coordinates. The detailed process is as follows.

(a) Detection of the markers – the markers are detected in the RGB image by locating the regions with the specified colors, blue and neon green, which can be distinguished from the working environment. Contours of the regions with these color pixels are extracted to locate the positions (xc, yc) of PM1 and PM2. The RGB image and the depth image are registered; thus, the corresponding depth zc at (xc, yc) is known. The position vectors of the markers with respect to {C} (CPM1 and CPM2) are thereby obtained.
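A minimal OpenCV sketch of this detection step is given below for illustration; the HSV threshold bounds and the helper’s name are assumptions, and only the OpenCV calls themselves (cvtColor, inRange, findContours, moments) reflect the library cited in [23]:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Locate the centroid (xc, yc) of one color marker in a BGR frame.
// 'lower' and 'upper' are assumed HSV bounds for the marker color.
static bool findMarker(const cv::Mat& bgr, const cv::Scalar& lower,
                       const cv::Scalar& upper, cv::Point2f& center)
{
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, lower, upper, mask);  // threshold the marker color

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return false;

    // Take the largest contour as the marker and use its image moments as the centroid.
    auto largest = std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b)
        { return cv::contourArea(a) < cv::contourArea(b); });
    cv::Moments m = cv::moments(*largest);
    if (m.m00 <= 0) return false;
    center = cv::Point2f(float(m.m10 / m.m00), float(m.m01 / m.m00));
    return true;
}

Calling this helper twice, once with the bounds of each marker color, yields the pixel coordinates (xc, yc) used for the depth lookup.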
(b) Transform the markers’ positions to the workspace – the transformation matrix WCT, which represents coordinate frame {C} with respect to {W}, is applied to CPMi = [xc,i, yc,i, zc,i]T, where i = 1, 2 (see Eq. 1-2).

WPM1 = WCT CPM1 (1)
WPM2 = WCT CPM2 (2)
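With a 4x4 homogeneous transform, Eq. (1)-(2) reduce to a single matrix-vector product. The sketch below is an illustration only; it assumes the calibration result WCT is available as a cv::Matx44d, which is a choice of representation not stated in the paper:

#include <opencv2/core.hpp>

// Apply Eq. (1)-(2): map a marker position from camera frame {C} to workspace {W}.
// T_wc is the homogeneous transform of {C} expressed in {W} (calibration result).
static cv::Vec3d toWorkspace(const cv::Matx44d& T_wc, const cv::Vec3d& p_c)
{
    cv::Vec4d p = T_wc * cv::Vec4d(p_c[0], p_c[1], p_c[2], 1.0);  // homogeneous point
    return cv::Vec3d(p[0], p[1], p[2]);
}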
(c) Find position of the tooltip – the tooltip PT can be obtained by using a unit vector representing the tool’s pose and the distances between the markers (L1 and L2) predefined for each tool. PT with respect to {W} is obtained from Eq. 3.

WPT = WPM2 + (WPM1 − WPM2)(L1 + L2) / L2 (3)

(d) Find orientation of the tool – the orientation of the tool is found by considering spherical coordinates. The angles between the tool and the orthogonal axes of the workspace frame (θxW, θyW, θzW) are used. WUTool represents the pose of the tool (see Eq. 4), and each angle can be calculated by using Eq. 5.

WUTool = WPT − WPM2 = [WxTool, WyTool, WzTool]T (4)
θiW = arccos(WiTool / (L1 + L2)), i ∈ {x, y, z} (5)

Fig. 6 Coordinate system of the training platform, where {C} = image frame on the RGB-D sensor; {T} = tool frame; {W} = workspace frame; PT = position of the tooltip; PM = position of a marker; LWC = distance vector between the origins of {C} and {W}; L1 = |PT − PM1|; L2 = |PM1 − PM2|.
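Eq. (3)-(5) can be implemented directly once both marker positions are expressed in {W}; the following sketch is a transcription of those equations (the function and type names are illustrative only):

#include <opencv2/core.hpp>
#include <cmath>

// Tooltip position and orientation from the two marker positions in {W}.
// L1 = |PT - PM1| and L2 = |PM1 - PM2| are predefined per tool (Eq. 3-5).
struct ToolPose { cv::Vec3d tip; double thetaX, thetaY, thetaZ; };

static ToolPose toolPoseFromMarkers(const cv::Vec3d& pM1_w, const cv::Vec3d& pM2_w,
                                    double L1, double L2)
{
    ToolPose pose;
    // Eq. (3): extend the marker axis beyond PM1 by L1 to reach the tooltip.
    pose.tip = pM2_w + (pM1_w - pM2_w) * ((L1 + L2) / L2);

    // Eq. (4)-(5): angles between the tool axis and the workspace axes.
    cv::Vec3d u = pose.tip - pM2_w;  // WUTool, of length L1 + L2 by construction
    pose.thetaX = std::acos(u[0] / (L1 + L2));
    pose.thetaY = std::acos(u[1] / (L1 + L2));
    pose.thetaZ = std::acos(u[2] / (L1 + L2));
    return pose;
}

Because WUTool has length L1 + L2 by construction, the arccos arguments stay within [−1, 1] up to measurement noise.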
4. Experiment

4.1. Experiment setup

The objectives of this experiment are to prove that the system can record the actions and to produce a disassembly sequence plan. An LCD screen is used as a case study. This preliminary experiment focuses on the operations for removing the back cover from the remaining product. The screen was held by suction cups on the fixture. The bottom-left corner of the LCD screen was located at the origin of the workspace coordinate frame (oW); therefore, {P} and {W} were coincident (see Fig. 7).

Fig. 7 Experiment setup

According to the product structure analysis, the process dealt with 1 main component (the back cover) and 2 types of connectors (screws and snap-fits). Three types of tools were assigned to the toolbox:

• (#1) Phillips-head screwdriver – for unscrewing,
• (#2) Flat-head screwdriver – for removing snap-fits, and
• (#3) Probe – for locating features by pointing at them.

4.2. Workflow of the human operator

The disassembly process was divided into three steps. Firstly, for the back cover, the name of the main component was manually input through the GUI (see Fig. 8). The probe (#3) was picked up and used to point at the four corners of the back cover, so that the size of this component was recorded. Next, screwdriver (#1) was used to unscrew the 4 Phillips-head screws. Lastly, screwdriver (#2) was used to remove the 8 snap-fits around the back cover. It should be noted that the pedal switch was pressed while each action was being performed, to confirm the recording.

Fig. 8 Tracking system and GUI
4.3. Result example

From these steps, the system produced the following result:

Backcover(PT3 … PT3),
Unscrew(PT1, OT1) … Unscrew(PT1, OT1),
RemoveSnap(PT2, OT2) … RemoveSnap(PT2, OT2),

Firstly, the back cover was located by using the probe (#3) pointing at 8 corners of the back cover. Next, the Phillips-head screwdriver (#1) was picked and used to unscrew 5 screws on the cover. Lastly, the flat-head screwdriver (#2) was picked and used to dismantle 8 snap-fits around the cover. PTi and OTi are the position and orientation of the tooltip of tool #i. Average precision and accuracy were within 6.5 mm and 10 mm, respectively.
5. Discussion

From the preliminary experiment, all the information required for repeating the disassembly of a model of product was able to be captured. The action sequence – the type and the order of the tools – can be captured accurately.

However, with respect to the precision and accuracy of the system, marginal errors in the position and orientation of the tools occurred due to the inaccuracy of the RGB-D sensor in locating the markers in the scene. Three factors should be considered for improvement. (a) Markers’ tracking method: 3D markers should be used, as they can be located more precisely than the color areas on the tools. However, the shape of the markers should not affect the normal movement of the operators when performing tasks. (b) Performance of the RGB-D sensor: the resolution of the depth image is limited by the resolution of the infrared pattern; a sensor with higher performance should be used. (c) Calibration: the coordinate frames should be calibrated by using image inputs in addition to the information of the physical configuration.

6. Conclusions

Learning by demonstration is one of the machine learning approaches by which the intelligent agent of an autonomous system can acquire skills from the actions demonstrated by human operators. In previous work regarding cognitive disassembly automation [18], learning by demonstration was conducted along with learning by reasoning in order to resolve problems regarding the uncertainties in the product and the process. In this research, a platform for transferring disassembly skills demonstrated by expert operators to robots is developed.

In this article, the training part of the skill transfer platform, in regard to the tracking system of the disassembly tools, is developed. This platform is designed to capture the operator’s actions during the entire disassembly process. The sequence of actions can be obtained by using the data from the switches on the toolbox and the pedal. The vision-based tracking system is able to locate the tooltip of the disassembly tool by using RGB-D images. As a result, the disassembly skills, i.e. the information regarding the disassembly process at the planning and the operational levels, can be obtained. According to the design of this platform, the operators can focus on the tasks with minimal distraction from the required manual input.

For future work, at the operational level, the system will be made capable of capturing the hand movements required for performing complex tasks. In addition, the complete system, including a robot to which the skills will be transferred, will be developed.

References

[1] Bdiwi M, Rashid A, Putz M. Autonomous disassembly of electric vehicle motors based on robot cognition. 2016 IEEE International Conference on Robotics and Automation (ICRA); 16-21 May 2016.
[2] Vongbunyong S, Chen WH. Disassembly Automation: Automated Systems with Cognitive Abilities. Herrmann C, Kara S, editors. Springer International Publishing; 2015.
[3] Bailey-Van Kuren M. Flexible robotic demanufacturing using real time tool path generation. Robot and Comput Integr Manuf. 2006;22(1):17-24.
[4] Gil P, Pomares J, Puente SVT, Diaz C, Candelas F, Torres F. Flexible multi-sensorial system for automatic disassembly using cooperative robots. Int J Comput Integr Manuf. 2007;20(8):757-72.
[5] Torres F, Gil P, Puente ST, Pomares J, Aracil R. Automatic PC disassembly for component recovery. Int J Adv Manuf Technol. 2004;23(1-2):39-46.
[6] Schumacher P, Jouaneh M. A system for automated disassembly of snap-fit covers. Int J Adv Manuf Technol. 2013;1(15):2055-69.
[7] Wegener K, Chen WH, Dietrich F, Dröder K, Kara S. Robot assisted disassembly for the recycling of electric vehicle batteries. Procedia CIRP. 2015;29:716-21.
[8] Bley H, Reinhart G, Seliger G, Bernardi M, Korne T. Appropriate human involvement in assembly and disassembly. CIRP Ann - Manuf Technol. 2004;53(2):487-509.
[9] Kim H-J, Chiotellis S, Seliger G. Dynamic process planning control of hybrid disassembly systems. Int J Adv Manuf Technol. 2009;40(9-10):1016-23.
[10] Franke C, Kernbaum S, Seliger G. Remanufacturing of flat screen monitors. In: Brissaud D, Tichkiewitch S, Zwolinski P, editors. Innovation in Life Cycle Engineering and Sustainable Development; 2006. p. 139-52.
[11] Makris S, Pintzos G, Rentzos L, Chryssolouris G. Assembly support using AR technology based on automatic sequence generation. CIRP Ann - Manuf Technol. 2013;62(1):9-12.
[12] Leu MC, ElMaraghy HA, Nee AYC, Ong SK, Lanzetta M, Putz M, et al. CAD model based virtual assembly simulation, planning and training. CIRP Ann - Manuf Technol. 2013;62(2):799-822.
[13] Odenthal B, Mayer MP, Kabuß W, Kausch B, Schlick CM. An empirical study of disassembling using an augmented vision system. Lecture Notes in Computer Science, vol. 6777; 2011. p. 399-408.
[14] Makris S, Tsarouchi P, Surdilovic D, Krüger J. Intuitive dual arm robot programming for assembly operations. CIRP Ann - Manuf Technol. 2014;63(1):13-6.
[15] Tang Y, Zhou M. Learning-embedded disassembly petri net for process planning. IEEE International Conference on Systems, Man and Cybernetics; 2007.
[16] Yeh W-C. Simplified swarm optimization in disassembly sequencing problems with learning effects. Computers & Operations Research. 2012;39(9):2168-77.
[17] Zeid I, Gupta SM, Bardasz T. A case-based reasoning approach to planning for disassembly. J Intell Manuf. 1997;8(2):97-106.
[18] Vongbunyong S, Kara S, Pagnucco M. Learning and revision in cognitive robotics disassembly automation. Robot and Comput Integr Manuf. 2015;34:79-94.
[19] Vongbunyong S, Kara S, Pagnucco M. Application of cognitive robotics in disassembly of products. CIRP Ann - Manuf Technol. 2013;62(1):31-4.
[20] Paikan A, Schiebener D, Wächter M, Asfour T, Metta G, Natale L. Transferring object grasping knowledge and skill across different robotic platforms. International Conference on Advanced Robotics (ICAR); 27-31 July 2015; Istanbul.
[21] Tonggoed T, Charoenseang S. Design and development of human skill transfer system with augmented reality. Journal of Automation and Control Engineering. 2015;3(5):403-9.
[22] Khoshelham K, Elberink SO. Accuracy and resolution of Kinect depth data for indoor mapping applications. Sensors. 2012;12:1437-54.
[23] Itseez. OpenCV 3.1. 2016.
[24] Microsoft. Kinect for Windows. 2016.