
Lecture Notes in Electrical Engineering 634

Yi Wang
Kristian Martinsen
Tao Yu
Kesheng Wang   Editors

Advanced
Manufacturing
and Automation IX
Lecture Notes in Electrical Engineering

Volume 634

Series Editors

Leopoldo Angrisani, Department of Electrical and Information Technologies Engineering, University of Napoli
Federico II, Naples, Italy
Marco Arteaga, Departament de Control y Robótica, Universidad Nacional Autónoma de México, Coyoacán,
Mexico
Bijaya Ketan Panigrahi, Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, Delhi, India
Samarjit Chakraborty, Fakultät für Elektrotechnik und Informationstechnik, TU München, Munich, Germany
Jiming Chen, Zhejiang University, Hangzhou, Zhejiang, China
Shanben Chen, Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
Tan Kay Chen, Department of Electrical and Computer Engineering, National University of Singapore,
Singapore, Singapore
Rüdiger Dillmann, Humanoids and Intelligent Systems Laboratory, Karlsruhe Institute of Technology,
Karlsruhe, Germany
Haibin Duan, Beijing University of Aeronautics and Astronautics, Beijing, China
Gianluigi Ferrari, Università di Parma, Parma, Italy
Manuel Ferre, Centre for Automation and Robotics CAR (UPM-CSIC), Universidad Politécnica de Madrid,
Madrid, Spain
Sandra Hirche, Department of Electrical Engineering and Information Science, Technische Universität
München, Munich, Germany
Faryar Jabbari, Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA,
USA
Limin Jia, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Alaa Khamis, German University in Egypt El Tagamoa El Khames, New Cairo City, Egypt
Torsten Kroeger, Stanford University, Stanford, CA, USA
Qilian Liang, Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX, USA
Ferran Martin, Departament d’Enginyeria Electrònica, Universitat Autònoma de Barcelona, Bellaterra,
Barcelona, Spain
Tan Cher Ming, College of Engineering, Nanyang Technological University, Singapore, Singapore
Wolfgang Minker, Institute of Information Technology, University of Ulm, Ulm, Germany
Pradeep Misra, Department of Electrical Engineering, Wright State University, Dayton, OH, USA
Sebastian Möller, Quality and Usability Laboratory, TU Berlin, Berlin, Germany
Subhas Mukhopadhyay, School of Engineering & Advanced Technology, Massey University, Palmerston
North, Manawatu-Wanganui, New Zealand
Cun-Zheng Ning, Electrical Engineering, Arizona State University, Tempe, AZ, USA
Toyoaki Nishida, Graduate School of Informatics, Kyoto University, Kyoto, Japan
Federica Pascucci, Dipartimento di Ingegneria, Università degli Studi “Roma Tre”, Rome, Italy
Yong Qin, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Gan Woon Seng, School of Electrical & Electronic Engineering, Nanyang Technological University,
Singapore, Singapore
Joachim Speidel, Institute of Telecommunications, Universität Stuttgart, Stuttgart, Germany
Germano Veiga, Campus da FEUP, INESC Porto, Porto, Portugal
Haitao Wu, Academy of Opto-electronics, Chinese Academy of Sciences, Beijing, China
Junjie James Zhang, Charlotte, NC, USA
The book series Lecture Notes in Electrical Engineering (LNEE) publishes the latest developments
in Electrical Engineering - quickly, informally and in high quality. While original research
reported in proceedings and monographs has traditionally formed the core of LNEE, we also
encourage authors to submit books devoted to supporting student education and professional
training in the various fields and applications areas of electrical engineering. The series covers
classical and emerging topics concerning:
• Communication Engineering, Information Theory and Networks
• Electronics Engineering and Microelectronics
• Signal, Image and Speech Processing
• Wireless and Mobile Communication
• Circuits and Systems
• Energy Systems, Power Electronics and Electrical Machines
• Electro-optical Engineering
• Instrumentation Engineering
• Avionics Engineering
• Control Systems
• Internet-of-Things and Cybersecurity
• Biomedical Devices, MEMS and NEMS

For general information about this book series, comments or suggestions, please contact leontina.[email protected].
To submit a proposal or request further information, please contact the Publishing Editor in
your country:
China
Jasmine Dou, Associate Editor ([email protected])
India, Japan, Rest of Asia
Swati Meherishi, Executive Editor ([email protected])
Southeast Asia, Australia, New Zealand
Ramesh Nath Premnath, Editor ([email protected])
USA, Canada:
Michael Luby, Senior Editor ([email protected])
All other Countries:
Leontina Di Cecco, Senior Editor ([email protected])
** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex,
SCOPUS, MetaPress, Web of Science and Springerlink **

More information about this series at http://www.springer.com/series/7818


Yi Wang · Kristian Martinsen · Tao Yu · Kesheng Wang
Editors

Advanced Manufacturing
and Automation IX
Editors

Yi Wang
School of Business
Plymouth University
Plymouth, UK

Kristian Martinsen
Department of Manufacturing and Civil Engineering
NTNU
Gjøvik, Norway

Tao Yu
Shanghai Second Polytechnic University
Shanghai, China

Kesheng Wang
Department of Mechanical and Industrial Engineering
Norwegian University of Science and Technology
Trondheim, Sør-Trøndelag Fylke, Norway

ISSN 1876-1100 ISSN 1876-1119 (electronic)


Lecture Notes in Electrical Engineering
ISBN 978-981-15-2340-3 ISBN 978-981-15-2341-0 (eBook)
https://doi.org/10.1007/978-981-15-2341-0
© Springer Nature Singapore Pte Ltd. 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Preface

IWAMA, the International Workshop of Advanced Manufacturing and Automation, aims at providing a common platform for academics, researchers, practising professionals and experts from industries to interact, discuss current technology trends and advances, and share ideas and perspectives in the areas of manufacturing and automation.
IWAMA began at Shanghai University in 2010. In 2012 and 2013, it was held at
the Norwegian University of Science and Technology, in 2014 at Shanghai
University again, in 2015 at Shanghai Polytechnic University, in 2016 at
Manchester University, in 2017 at Changshu Institute of Technology and in 2018 at
Changzhou University. The sponsors organizing the IWAMA series have expanded
to many universities throughout the world, including Plymouth University,
Changzhou University, Norwegian University of Science and Technology,
SINTEF, Manchester University, Shanghai University, Shanghai Polytechnic
University, Changshu Institute of Technology, Xiamen University of Science and
Technology, Tongji University, University of Malaga, University of Firenze,
Stavanger University, The Arctic University of Norway, Shandong Agricultural
University, China University of Mining and Technology, Indian National Institute
of Technology, Donghua University, Shanghai Jiao Tong University, Dalian
University, St. Petersburg Polytechnic University, Hong Kong Polytechnic
University, Lingnan Normal University, Civil Aviation University of China and
China Instrument and Control Society. As IWAMA becomes an annual event, we
expect more sponsors from universities and industries, who will participate in the
international workshop as co-organizers.
Manufacturing and automation have assumed paramount importance and are
vital for the economy of a nation and the quality of daily life. The field of
manufacturing and automation is advancing at a rapid pace, and new technologies
are emerging. The main challenge faced by today's engineers, scientists and
academics is to keep on top of the emerging trends through continuous research and
development.


IWAMA 2019 takes place at Plymouth University, UK, 21–22 November 2019,
organized by Plymouth University, the Norwegian University of Science and
Technology and Lingnan Normal University. The programme is designed to
improve manufacturing and automation technologies for the next generation
through discussion of the most recent advances and future perspectives and to
engage the worldwide community in a collective effort to solve problems in
manufacturing and automation.
The workshop focuses on the transformation of present factories towards
reusable, flexible, modular, intelligent, digital, virtual, affordable, easy-to-adapt,
easy-to-operate, easy-to-maintain and highly reliable "smart factories". Accordingly,
IWAMA 2019 mainly covers five topics in manufacturing engineering:
1. Industry 4.0
2. Manufacturing Systems
3. Manufacturing Technologies
4. Production Management
5. Design and Optimization
All papers submitted to the workshop were subjected to strict peer review
by at least two expert referees. Finally, 84 papers were selected for inclusion
in the proceedings after a revision process. We hope that the proceedings will not
only give readers a broad overview of the latest advances and a summary of the
event, but also provide researchers with a valuable reference in this field.
On behalf of the organization committee and the international scientific committee
of IWAMA 2019, I would like to take this opportunity to express my
appreciation for all the kind support, from the contributors of high-quality keynotes
and papers and from all the participants. My thanks are extended to all the workshop
organizers and paper reviewers, to Plymouth University and NTNU for their financial
support and to the co-sponsors for their generous contribution. Thanks are also given to
Jian Wu, Jin Yuan, Yun Chen, Bo Chen and Tamal Ghosh for their hard editorial
work on the proceedings and arrangement of the workshop.

Yi Wang
Chair of IWAMA 2019
Organization

Organized and Sponsored by

PLYU (Plymouth University, UK)
NTNU (Norwegian University of Science and Technology, Norway)
Co-organized by
LNU (Lingnan Normal University, China)
SSPU (Shanghai Second Polytechnic University, China)
TU (Tongji University, China)
SHU (Shanghai University, China)
SJTU (Shanghai Jiao Tong University, China)
Honorary Chairs
Minglun Fang, China
Kesheng Wang, Norway
Jan Ola Strandhagen, Norway
General Chairs
Yi Wang, UK
Kristian Martinsen, Norway
Tao Yu, China
Local Organizing Committee
Yi Wang (Chair)
Jonathan Moizer
Stephen Childe
Tamal Ghosh
Oleksandr Semeniuta
Ivanna Baturynska
Jian Wu


International Programme Committee


Jan Ola Strandhagen, Norway
Kesheng Wang, Norway
Asbjørn Rolstadås, Norway
Per Schjølberg, Norway
Knut Sørby, Norway
Erlend Alfnes, Norway
Heidi Dreyer, Norway
Torgeir Welo, Norway
Leif Estensen, Norway
Hirpa L. Gelgele, Norway
Wei D. Solvang, Norway
Yi Wang, UK
Chris Parker, UK
Jorge M. Fajardo, Spain
Torsten Kjellberg, Sweden
Fumihiko Kimura, Japan
Gustav J. Olling, USA
Michael Wozny, USA
Wladimir Bodrow, Germany
Guy Doumeingts, France
Van Houten, Netherlands
Peter Bernus, Australia
Janis Grundspenkis, Latvia
George L. Kovacs, Hungary
Rinaldo Rinaldi, Italy
Gaetano Aiello, Italy
Romeo Bandinelli, Italy
Dawei Tu, China
Minglun Fang, China
Binheng Lu, China
Xiaoqien Tang, China
Ming Chen, China
Yun Chen, China
Henry Xinguo Ming, China
Keith C. Chan, China
Xiaojing Wang, China
Jin Yuan, China
Bo Chen, China
Shouqi Cao, China
Shili Tan, China
Ming Li, China
Cuilian Zhao, China
Chuanhong Zhou, China
Jianqing Cao, China
Yayu Huang, China
Shirong Ge, China
Jianjun Wu, China
Guijuan Lin, China
Shangming Luo, China
Dong Yang, China
Zumin Wang, China
Guohong Dai, China
Jinhui Yang, China
Sarbjit Singh, India
Vishal S. Sharma, India

Secretariat
Jian Wu
Tamal Ghosh
Contents

Design and Optimization


Performance Analysis of Ball Mill Liner Based
on DEM-FEM Coupling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Zhen Xu, Junfeng Sun, Taohong Liao, Xiangping Hu, and Xuedong Zhang
Performance of Spiral Groove Dry Gas Seal for Natural Gas
Considering Viscosity-Pressure Effect of the Gas . . . . . . . . . . . . . . . . . . 11
Xuejian Sun, Pengyun Song, and Xiangping Hu
Analysis of Residual Stress for Autofrettage High Pressure Cylinder . . . 20
Guiqin Li, Yang Li, Jinfeng Shi, Shijin Zhang, and Peter Mitrouchev
Study on the Detection System for Electric Control Cabinet . . . . . . . . . 29
Lixin Lu, Weicong Wang, Guiqin Li, and Peter Mitrouchev
Effects of Remelting on Fatigue Wear Performance of Coating . . . . . . . 37
Zhiping Zhao, Xinyong Li, Chao Wang, and Yang Ge
Design of Emergency Response Control System
for Elevator Blackout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Yan Dou, Wenmeng Li, Jiaxin Ma, and Lanzhong Guo
Effect of Cerium on Microstructure and Friction of MoS2 Coating . . . . 52
Wu Jian, Xinyong Li, Ge Yang, Lanzhong Guo, Cao Jie, and Peijun Jiao
A Machine Vision Method for Elevator Braking Detection . . . . . . . . . . 59
Yang Ge, Jian Wu, and Xiaomei Jiang
Remote Monitoring and Fault Diagnosis System and Method
for Traction Elevator Cattle Dawn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Shuguang Niu, Junjie Huang, and Zhiwen Ye
Soil Resistance Computation and Discrete Element Simulation Model
of Subsoiler Prototype Parts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Gong Liu, Zhenbo Xin, Ziru Niu, and Jin Yuan


Simulation Analysis of Soil Resistance of White Asparagus
Harvesting End Actuator Baffle Parts Based on Discrete
Element Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Haoyu Ma, Liangliang Zou, Jin Yuan, and Xuemei Liu
Simulation and Experimental Study of Static Porosity Droplets
Deposition Test Rig . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Laiqi Song, Xuemei Liu, Xinghua Liu, and Haishu Zhang
Effect of Heat Treatment on the Ductility of Inconel 718 Processed
by Laser Powder Bed Fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Even Wilberg Hovig, Olav Åsebø Berg, Trond Aukrust,
and Harald Solhaug
Comparative Study of the Effect of Chord Length Computation
Methods in Design of Wind Turbine Blade . . . . . . . . . . . . . . . . . . . . . . 106
Temesgen Batu and Hirpa G. Lemu
On Modelling Techniques for Mechanical Joints: Literature Study . . . . 116
Øyvind Karlsen and Hirpa G. Lemu
Research on Magnetic Nanoparticle Transport and Capture
in Impermeable Microvessel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Jiejie Cao and Jian Wu
Analysis of Drag of Bristle Based on 2-D Staggered Tube Bank . . . . . . 134
Xiaolei Song, Meihong Liu, Xiangping Hu, Yuchi Kang, and Baodi Zhang
Design of Large Tonnage Lift Guide Bracket . . . . . . . . . . . . . . . . . . . . 142
Lanzhong Guo and Xiaomei Jiang
Stress Analysis of Hoist Auto Lift Car Frame . . . . . . . . . . . . . . . . . . . . 151
Xiaomei Jiang, Michael Namokel, Chaobin Hu, Jinzhong Dong,
and Ran Tian
Research on Lift Fault Prediction and Diagnosis Based
on Multi-sensor Information Fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Xiaomei Jiang, Michael Namokel, Chaobin Hu, and Ran Tian
Design of Permanent Magnet Damper for Elevator . . . . . . . . . . . . . . . . 169
Xinyong Li, Jian Wu, Jianfeng Lu, Peijun Jiao, and Lanzhong Guo
Design and Simulation of Hydraulic Braking System for Loader
Based on AMESim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Junjun Liu, Lanzhong Guo, Jiaxin Ma, Yang Ge, Jian Wu,
and Zhanrong Ma

Dynamic Modeling and Tension Analysis of Redundantly
Restrained Cable-Driven Parallel Robots Considering Dynamic
Pulley Bearing Friction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Zhenyu Hong, Xiaolei Ren, Zhiwei Xing, and Guichang Zhang

Industry 4.0
Knowledge Discovery and Anomaly Identification for Low
Correlation Industry Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Zhe Li and Jingyue Li
A Method of Communication State Monitoring for Multi-node
CAN Bus Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Lu Li-xin, Gu Ye, Li Gui-qin, and Peter Mitrouchev
Communication Data Processing Method for Massage Chair
Detection System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Lixin Lu, Yujie Jin, Guiqin Li, and Peter Mitrouchev
Development of Data Visualization Interface in Smart Ship System . . . 219
Guiqin Li, Zhipeng Du, Maoheng Zhou, Qiuyu Zhu, Jian Lan, Yang Lu,
and Peter Mitrouchev
Feature Detection Technology of Communication Backplane . . . . . . . . . 227
Guiqin Li, Hanlin Wang, Shengyi Lin, Tao Yu, and Peter Mitrouchev
Research on Data Monitoring System for Intelligent Ship . . . . . . . . . . . 234
Guiqin Li, Xuechao Deng, Maoheng Zhou, Qiuyu Zhu, Jian Lan,
Hong Xia, and Peter Mitrouchev
Research on Fault Diagnosis Algorithm Based on Multiscale
Convolutional Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Xiaolong Li, Lilan Liu, Xiang Wan, Lingyan Gao, and Qi Huang
Proactive Learning for Intelligent Maintenance in Industry 4.0 . . . . . . . 250
Rami Noureddine, Wei Deng Solvang, Espen Johannessen, and Hao Yu
An Introduction of the Role of Virtual Technologies and Digital
Twin in Industry 4.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Mohammad Azarian, Hao Yu, Wei Deng Solvang, and Beibei Shu
Model Optimization Method Based on Rhino . . . . . . . . . . . . . . . . . . . . . 267
Mengyao Dong, Zenggui Gao, and Lilan Liu
Construction of Equipment Maintenance Guiding System
and Research on Key Technologies Based on Augmented Reality . . . . . 275
Lingyan Gao, Fang Wu, Lilan Liu, and Xiang Wan

A New Fault Identification Method Based on Combined
Reconstruction Contribution Plot and Structured Residual . . . . . . . . . . 283
Bo Chen, Kesheng Wang, Xiue Gao, Yi Wang, Shifeng Chen,
Tianshu Zhang, Kristian Martinsen, and Tamal Ghosh
Prediction of Blast Furnace Temperature Based on Improved
Extreme Learning Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
Xin Guan
The Economic Dimension of Implementing Industry 4.0
in Maintenance and Asset Management . . . . . . . . . . . . . . . . . . . . . . . . . 299
Tom I. Pedersen and Per Schjølberg

Manufacturing System
Common Faults Analysis and Detection System Design
of Elevator Tractor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Yan Dou, Wenmeng Li, Yang Ge, and Lanzhong Guo
Balanced Maintenance Program with a Value Chain Perspective . . . . . 317
Jon Martin Fordal, Thor Inge Bernhardsen, Harald Rødseth,
and Per Schjølberg
Construction Design of AGV Caller System . . . . . . . . . . . . . . . . . . . . . . 325
Zhang Xi, Wang Xin, and Yuanzhi Xu
A Transfer Learning Strip Steel Surface Defect Recognition
Network Based on VGG19 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
Xiang Wan, Lilan Liu, Sen Wang, and Yi Wang
Visual Interaction of Rolling Steel Heating Furnace Based
on Augmented Reality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
Bowen Feng, Lilan Liu, Xiang Wan, and Qi Huang
Design and Test of Electro-Hydraulic Control System
for Intelligent Fruit Tree Planter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Ranguang Yin, Jin Yuan, and Xuemei Liu
Lean Implementing Facilitating Integrated Value Chain . . . . . . . . . . . . 358
Inger Gamme, Silje Aschehoug, and Eirin Lodgaard
Developing of Auxiliary Mechanical Arm to Color Doppler
Ultrasound Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
Haohan Zhang, Zhiqiang Li, Guowei Zhang, and Xiaoyu Liu
The Importance of Key Performance Indicators that Can
Contribute to Autonomous Quality Control . . . . . . . . . . . . . . . . . . . . . . 373
Ragnhild J. Eleftheriadis and Odd Myklebust

Collaborative Fault Diagnosis Decision Fusion Algorithm Based
on Improved DS Evidence Theory . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Xiue Gao, Bo Chen, Shifeng Chen, Kesheng Wang, Yi Wang,
Wenxue Xie, Jin Yuan, Kristian Martinsen, and Tamal Ghosh
Hybrid Algorithm and Forecasting Technology of Network Public
Opinion Based on BP Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . 388
Ronghua Zhang, Changzheng Liu, and Hongliang Ma
Applying Quality Function Deployment in Smart Phone Design . . . . . . 396
Taylor Barnard and Yi Wang
eQUALS: Automated Quality Check System for Paint Shop . . . . . . . . . 402
Angel Dacal-Nieto, Carmen Fernandez-Gonzalez, Victor Alonso-Ramos,
Gema Antequera-Garcia, and Cristian Ríos
Equipment Fault Case Retrieval Algorithm Based
on Mixed Weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
Yonglin Tian, Chengsheng Pan, Yana Lv, and Bo Chen
Cascaded Organic Rankine Cycles (ORCs) for Simultaneous
Utilization of Liquified Natural Gas (LNG) Cold Energy
and Low-Temperature Waste Heat . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
Fuyu Liu, Xiangping Hu, Haoshui Yu, and Baosheng Zhang

Manufacturing Technology
Effect of T-groove Parameters on Steady-State Characteristics
of Cylindrical Gas Seal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
Junfeng Sun, Meihong Liu, Zhen Xu, Taohong Liao, and Xiangping Hu
Simulation Algorithm of Sample Strategy for CMM Based
on Neural Network Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
Petr Chelishchev and Knut Sørby
Digital Modeling and Algorithms for Series Topological
Mechanisms Based on POC Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
Lixin Lu, Hehui Tang, Guiqin Li, and Peter Mitrouchev
Optimization of Injection Molding for UAV Rotor Based
on Taguchi Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
Xiong Feng, Zhengqian Li, and Guiqin Li
Assembly Sequence Optimization Based on Improved
PSO Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Xiangyu Zhang, Lilan Liu, Xiang Wan, Kesheng Wang, and Qi Huang
Influence of Laser Scan Speed on the Relative Density and Tensile
Properties of 18Ni Maraging Steel Grade 300 . . . . . . . . . . . . . . . . . . . . 466
Even Wilberg Hovig and Knut Sørby

Application of Automotive Rear Axle Assembly . . . . . . . . . . . . . . . . . 473
Shouzheng Liu, Lilan Liu, Xiang Wan, Kesheng Wang, and Fang Wu
Improvement of Hot Air Drying on Quality of Xiaocaoba
Gastrodia Elata in China . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
Xiuying Tang, Chao Tan, Bin Cheng, Xuemei Leng, Xiangcai Feng,
and Yinhua Luo
Installation Parameters Optimization of Hot Air Distributor
During Centrifugal Spray Drying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
Yunfei Liu and Jingjing Xu
Wear Mechanism of Curved-Surface Subsoiler Based on Discrete
Element Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
Jinguang Li, Liangliang Zou, Xuemei Liu, and Jin Yuan
Development Status of Balanced Technology of Battery
Management System of Electric Vehicle . . . . . . . . . . . . . . . . . . . . . . . . . 504
Xiupeng Yan, Jianjun Nie, Zongzheng Ma, and Haishu Ma
Application Analysis of Contourlet Transform in Image Denoising
of Flue-Cured Tobacco Leaves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
Li Zhang, Haohan Zhang, Hongbin Liu, Sen Wang, and Xiaoyu Liu
Monte Carlo Simulation of Nanoparticle Coagulation
in a Turbulent Planar Impinging Jet Flow . . . . . . . . . . . . . . . . . . . . . . . 517
Hongmei Liu, Weigang Xu, Faqi Zhou, Lin Liu, Jiaming Deng,
Shuhao Ban, and Xuedong Liu
Structural Damage Detection of Elevator Steel Plate Using
GNARX Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
Jiaxin Ma and Yan Dou

Production Management
The Innovative Development and Application of New Energy
Vehicles Industry from the Perspective of Game Theory . . . . . . . . . . . . 535
Jianhua Wang and Junwei Ma
Survey and Planning of High-Payload Human-Robot Collaboration:
Multi-modal Communication Based on Sensor Fusion . . . . . . . . . . . . . . 545
Gabor Sziebig
Research on Data Encapsulation Model for Memory Management . . . . 552
Lixin Lu, Weixing Zhao, Guiqin Li, and Peter Mitrouchev
Research on Task Scheduling Design of Multi-task System
in Massage Chair Function Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
Lixin Lu, Leibing Lv, Guiqin Li, and Peter Mitrouchev

A Stochastic Closed-Loop Supply Chain Network Optimization
Problem Considering Flexible Network Capacity . . . . . . . . . . . . . . . . 567
Hao Yu, Wei Deng Solvang, and Xu Sun
Solving the Location Problem of Printers in a University Campus
Using p-Median Location Model and AnyLogic Simulation . . . . . . . . . . 577
Xu Sun, Hao Yu, and Wei Deng Solvang
Intelligent Workshop Digital Twin Virtual Reality Fusion
and Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
Qiang Miao, Wei Zou, Lilan Liu, Xiang Wan, and Pengfei Wu
Harvesting Path Planning of Selective Harvesting Robot
for White Asparagus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
Ping Zhang, Jin Yuan, Xuemei Liu, and Yang Li
Optimization of Technological Processes at Production Sites Based
on Digital Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
Pavel Drobintsev, Nikita Voinov, Lina Kotlyarova, Ivan Selin,
and Olga Aleksandrova
Smart Maintenance in Asset Management – Application
with Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
Harald Rødseth, Ragnhild J. Eleftheriadis, Zhe Li, and Jingyue Li
Maintenance Advisor Using Secondary-Uncertainty-Varying Type-2
Fuzzy Logic System for Offshore Power Systems . . . . . . . . . . . . . . . . . . 616
Haitao Sang
Determine Reducing Sugar Content in Potatoes Using
Hyperspectral Combined with VISSA Algorithm . . . . . . . . . . . . . . . . . . 625
Wei Jiang, Ming Li, and Yao Liu
Game Theory in the Fashion Industry: How Can H&M Use Game
Theory to Determine Their Marketing Strategy? . . . . . . . . . . . . . . . . . . 633
Chloe Luo and Yi Wang
Multidimensional Analysis Between High-Energy-Physics Theory
Citation Network and Twitter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
Lapo Chirici, Yi Wang, and Kesheng Wang
Application of Variable Step Size Beetle Antennae Search
Optimization Algorithm in the Study of Spatial Cylindrical Errors . . . . 646
Chen Wang, Yi Wang, and Kesheng Wang
A Categorization Matrix and Corresponding Success Factors
for Involving External Designers in Contract Product Development . . . 654
Aleksander Wermers Nilsen and Erlend Alfnes

Engineering Changes in the Engineer-to-Order Industry:
Challenges of Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 662
Luis F. Hinojos A., Natalia Iakymenko, and Erlend Alfnes
Impact of Carbon Price on Renewable Energy Using Power
Market System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
Xiangping Hu, Xiaomei Cheng, and Xinlu Qiu

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679


About the Editors

Yi Wang obtained his PhD from the Manufacturing Engineering Centre, Cardiff
University, in 2008. He is a lecturer at the Business School, Plymouth University, UK.
Previously, he worked in the Department of Computer Science, Southampton
University, and at the Business School, Nottingham Trent University. He holds
various visiting lectureships at several universities worldwide. Dr. Wang has special
research interests in supply chain management, logistics, operation management,
culture management, information systems, game theory, data analysis, semantics
and ontology analysis, and neuromarketing. Dr. Wang has published 26 technical
peer-reviewed papers in international journals and conferences. He co-authored two
books: Operations Management for Business and Data Mining for Zero-defect
Manufacturing.

Kristian Martinsen took his Dr. Ing. degree at the Norwegian University of
Science and Technology (NTNU) in 1995, with the topic "Vectorial Tolerancing in
Manufacturing". He has 15 years' experience from the manufacturing industry. He
is a professor at the Faculty of Engineering, Department of Manufacturing and
Civil Engineering, NTNU, and is the manager of the Manufacturing Engineering
research group in this department. He is a corporate member of the International
Academy for Production Engineering and a member of the High-Level Group of
the EU technology platform for manufacturing, MANUFUTURE. He is the manager
of the Norwegian national infrastructure for manufacturing research laboratories,
MANULAB, and the international co-ordinator for the Norwegian Centre for
Research-based Innovation SFI MANUFACTURING. He has published many
papers in international journals and conferences. His major research areas are
measurement systems, variation/quality management, tolerancing and Industry 5.0.


Tao Yu is the president of Shanghai Second Polytechnic University (SSPU), China,
and a professor at Shanghai University (SHU). He received his PhD from SHU in
1997. Professor Yu is a member of the Group of Shanghai Manufacturing
Information and a committee member of the International Federation for Information
Processing, IFIP/TC5. He is also an executive vice president of the Shanghai Science
Volunteer Association and executive director of the Shanghai Science and Art Institute
of Execution. He has managed and performed about 20 national, Shanghai and
enterprise-commissioned projects. He has published hundreds of academic papers, of
which about thirty were indexed by SCI and EI. His research interests are mechatronics,
computer integrated manufacturing system (CIMS) and grid manufacturing.

Kesheng Wang holds a PhD in production engineering from the Norwegian
University of Science and Technology (NTNU), Norway. Since 1993, he has been
appointed as a professor at the Department of Mechanical and Industrial
Engineering, NTNU. He is also an active researcher and serves as a technical
adviser at SINTEF. He was elected a member of the Norwegian Academy of
Technological Sciences in 2006. He has published 21 books, 10 book chapters and
over 270 technical peer-reviewed papers in international journals and conferences.
Professor Wang’s current areas of interest are intelligent manufacturing systems,
applied computational intelligence, data mining and knowledge discovery, swarm
intelligence, condition-based monitoring and structured light systems for 3D
measurements and RFID, predictive/cognitive maintenance and Industry 4.0.
Design and Optimization
Performance Analysis of Ball Mill Liner Based
on DEM-FEM Coupling

Zhen Xu1, Junfeng Sun2(&), Taohong Liao3, Xiangping Hu4, and Xuedong Zhang5

1 Faculty of Mechanical and Electrical Engineering, Yunnan Open University, Kunming, China
2 Faculty of Mechanical and Electrical Engineering, Kunming University of Science and Technology, Kunming 650504, China
[email protected]
3 Department of Marine Technology, Norwegian University of Science and Technology, Trondheim, Norway
4 Industrial Ecology Programme, Department of Energy and Process Engineering, Norwegian University of Science and Technology, Trondheim, Norway
5 SAIC Iveco Hongyan Commercial Vehicle Co., Ltd., Chongqing 401122, China

Abstract. In this paper, based on finite element-discrete element method
coupling and coupled bionics theory, four lifting bar models with different
surface structures, namely smooth, transverse stripe, longitudinal stripe and
convex hull, are established. Particle models with different geometric features
are established and coupled finite element-discrete element simulations are
carried out. Results show that a lifting bar with a coupled bionic structure of
soft matrix and hard bearing units can effectively reduce the equivalent stress
of the lifting bar during grinding. This coupled bionic structure can also
reduce wear and tear of the lifting bar, so the lifting bar is protected and the
grinding effect is improved. Results also indicate that, among the four kinds of
lifting bars, the transverse stripe coupled bionic structure has the best wear
resistance and grinding effect.

Keywords: Ball mill · Lifting bar · DEM-FEM · Liner wear · Equivalent stress · Coupled bionic structure

1 Introduction

As important equipment in the field of mineral processing, ball mills play a
crucial role in the normal operation of the national economy [1–3]. For ball
mills, wear and tear of the lifting bars is the main cause of failure of the
ball mill liner. According to historical data [4], wet ball mills in Chinese
metal mines consumed more than 110,000 tons of liners in 2004. Meanwhile, green
environmental protection is the mainstream trend of world economic development,
which adds new requirements on equipment efficiency to save energy. Therefore,
selecting a suitable wear-resistant

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 3–10, 2020.
https://doi.org/10.1007/978-981-15-2341-0_1
4 Z. Xu et al.

material and designing a reasonable structure of the liner is of great importance for the
ball mill.

2 Typical Biological Structure Analyses

Two biological structures, the clam shell and the ostrich toe, have excellent
wear and impact resistance [5]. These characteristics are closely related to the
surface topography, hierarchy and materials of the organism. The unit shape of
the wear-resistant part of the ostrich toe is a spherical-crown convex body.
This part of the toe is in contact with the ground and experiences frequent
abrasive wear with sand. By measuring the cross section, it is found that the
ratio of the height, width and adjacent spacing of the bottom of the convex body
is about 5:3:1. Similarly, the ratio of the intercostal groove width, rib width
and rib height can be found from the surface morphology of the clam shell, and
it is about 4:3:1.

3 Lifting Bar Structure Design

In this paper, a ball mill prototype with a cylinder size of 600 mm × 400 mm is
used as the research platform, and the size of the incoming ore is 4 mm–12 mm.
The proportional relationships between the biometric parameters discussed in
Sect. 2 are used to design lifting bars with different characteristic surfaces.
According to the relationship between the stripe and the direction of motion of
the cylinder, the stripes are divided into horizontal stripes and vertical
stripes. The three-dimensional model of the lifting bar with the stripe feature
is shown in Fig. 1. In the lifting bar model, the width of the horizontal and
vertical stripes is M = 6 mm, the height is H = 2 mm, and the space between
stripes is L = 8 mm. At the same time, according to the wear-resisting
characteristics of ostrich toes, a lifting bar with convex features on the
surface is designed. The unit of the convex hull feature has a diameter of
M = 6 mm, a height of H = 2 mm, and the space between units is L = 8 mm. The
designed lifting bar with the convex hull feature is shown in Fig. 2.

Fig. 1. Lifting bar with stripe features
Fig. 2. Lifting bar with convex hull feature

4 Simulation Parameter Definitions

The parameters of the mill in the simulation are as follows: the filling rate is
0.4 [6]; the diameters of the selected steel balls are 40 mm, 30 mm and 20 mm,
and they are matched as grinding medium using the equal-number method. The
material of the steel balls is ZGMn13, and the selected mill rotation rate is 65%.
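The 65% rotation rate can be cross-checked against the absolute speed used later in Sect. 4.3 (35.55 rpm). The sketch below assumes the rotation rate is a fraction of the mill's critical speed and uses the common approximation n_c = 42.3/√D (D in metres, n_c in rpm); neither assumption is stated in the paper.

```python
import math

def critical_speed_rpm(diameter_m: float) -> float:
    # Common ball-mill approximation: n_c = 42.3 / sqrt(D),
    # with D the inner drum diameter in metres and n_c in rpm.
    return 42.3 / math.sqrt(diameter_m)

def operating_speed_rpm(diameter_m: float, rotation_rate: float) -> float:
    # Operating speed expressed as a fraction ("rotation rate") of critical speed.
    return rotation_rate * critical_speed_rpm(diameter_m)

n = operating_speed_rpm(0.6, 0.65)  # 600 mm drum, 65% rotation rate
print(round(n, 1))  # about 35.5 rpm, close to the 35.55 rpm used in Sect. 4.3
```

The result, about 35.5 rpm, is consistent with the 35.55 rpm set for the geometric model.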

4.1 Particle Model Establishment

Table 1. The mass distribution of ore and grinding medium of each particle in the simulation

Particle size (mm) | Category        | Shape                                    | Mass (kg)
4                  | Ore             | Sphere, Tetrahedron                      | 8.568
7.5                | Ore             | Sphere, Tetrahedron, Hexahedron, Diamond | 11.424
12                 | Ore             | Sphere, Tetrahedron                      | 8.568
20, 30, 40         | Grinding medium | Sphere                                   | 216

The outlines of the particles are limited to spheres, tetrahedrons, hexahedrons
and diamonds. The mass distribution and shapes of the particles in the
simulation model are shown in Table 1.
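The equal-number method pairs the three ball sizes with equal counts. Under that reading, and with an assumed steel density of 7800 kg/m³ (the paper does not state one), the count per size needed to reach the 216 kg of grinding medium in Table 1 can be sketched as:

```python
import math

STEEL_DENSITY = 7800.0  # kg/m^3, assumed density for the ZGMn13 steel balls

def ball_mass(diameter_m: float) -> float:
    # Mass of one solid steel ball of the given diameter.
    return STEEL_DENSITY * math.pi * diameter_m ** 3 / 6.0

def equal_number_counts(total_mass_kg: float, diameters_m):
    """Equal-number method: the same count N of each ball size,
    chosen so the total charge mass matches the target."""
    per_set = sum(ball_mass(d) for d in diameters_m)  # one ball of each size
    n = total_mass_kg / per_set
    return n, [n * ball_mass(d) for d in diameters_m]

n, masses = equal_number_counts(216.0, [0.040, 0.030, 0.020])
```

With these assumptions the charge works out to roughly 530 balls of each size; the exact count depends on the density assumed.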

4.2 The Meshing of Geometry


Tetrahedral meshes are used for the liner, the lifting bar and its bionic units.
A single liner is analyzed when the amount of wear is counted.

4.3 Setting of Simulation Model


The geometric model is set to rotate at a constant speed of 35.55 rpm. The
geometric model starts rotating at 1.45 s, and the total simulation time is
2.9 s. The motion of the materials in the ball mill at different time points is
shown in Fig. 3, where Fig. 3(a) shows the static accumulation state after
blanking is completed, and Fig. 3(b) shows the motion state of the material at
2.9 s.

(a)Static accumulation state after blanking (b)The motion state of the material at 2.9s

Fig. 3. The motion of materials in the ball mill at different time points

5 Simulation Result Analysis

The static analysis of the liner is based on the DEM-FEM coupling method. The
forces obtained in the EDEM software and its simulation files are imported into
the FEM module, so the meshing in the coupling analysis is consistent with the
discrete element simulation. In such a way, a more accurate solution can be
obtained.
The wear of the lifting bar is related to the impact of the material and the
medium on the lifting surface. Therefore, the equivalent stress distribution on
the surface of the lifting bar is analyzed in detail. Figures 4, 5, 6 and 7 show
the equivalent stress distribution of the ball mill cylinder model with the four
different surface-feature lifting bars in a certain period. Results in Fig. 4
illustrate that the equivalent stress of the smooth-surface lifting bar is
mostly concentrated in the joint portion of the lifting bar and the liner
substrate during a certain period of mill operation. Second, stress
concentration occurs where the lifting surface of the bar adjoins the upper
surface. This is because the contacts between the material and the medium are
most severe in these places, and the stresses at both ends of the lifting bar
are also higher than the stress in the middle of the bar. It can be seen from
Fig. 5 that the equivalent stress on the surface of the lifting bar with the
transverse stripe feature is mainly concentrated in the adjacent part of the
lifting bar and the liner base, and there is no obvious stress concentration.
The stress between the bionic features is lower than that on the bionic
features. In terms of the stress and its distribution, the values are smaller
than those of the smooth surface feature.
Figure 6 shows the stress distribution of the lifting bar with the longitudinal
stripe feature. The stress is mainly distributed at the ends of the longitudinal
features and near the liner substrate. This is related to the geometric
dimensions of the feature ends and the size of the material. Since particles
stay between the bionic features and the liner substrate, the stress there
becomes larger. In general, the stress value of the lifting bar with the
longitudinal feature is smaller than that of the smooth surface. In Fig. 7, the
distribution of the overall stress is more uniform than for the other three
lifting bars. At the same time, the stress value on the feature units is smaller
than that on the transverse stripe bar. These results show that the
stress-relieving effect of the convex hull type bionic feature is the best among
the bionic features in the simulation. However, on the matrix between the
features, there is little difference from the vertical stripe lifting bar and
even the smooth-surface lifting bar, so the protection
Fig. 4. Equivalent stress distributions of smooth surface lifting bars and their surrounding area
Fig. 5. Equivalent stress distributions of horizontal stripes surface lifting bars
Fig. 6. Equivalent stress distributions of vertical stripes lifting bars
Fig. 7. Equivalent stress distributions of convex hull lifting bars

effect on the substrate is limited. To study the wear zone distribution of each
lifting bar during the grinding process more accurately, the “Geometry Bin”
option of the EDEM post-processing module is used. The concentrated area of the
liner and lifting bars is divided into two parts, and the areas are numbered as
shown in Fig. 8.
In the post-processing module of EDEM, the “Archard Wear” option in the geometry
section is selected in “File Export - Result Data”, since this option evaluates
the amount of wear based on the accumulated energy. The trend can also verify
the cumulative energy received by the liner and the lifting bar in this
simulation. The accumulated energy received by the liner and the lifting bar is
plotted after the ball mill models with different surface features have operated
for a period, and this result can be used to intuitively compare the amount of
wear in the same area. The relationship is shown in Fig. 9.
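The Archard-based wear estimate used here follows the classical Archard law, in which the worn volume is proportional to the normal load and sliding distance and inversely proportional to the material hardness. A minimal sketch (the numbers are illustrative assumptions, not values from the paper):

```python
def archard_wear_volume(normal_force_N, sliding_distance_m, wear_coeff, hardness_Pa):
    # Archard wear law: V = k * F_n * s / H, with worn volume V,
    # dimensionless wear coefficient k, normal load F_n,
    # sliding distance s and material hardness H.
    return wear_coeff * normal_force_N * sliding_distance_m / hardness_Pa

# Illustrative numbers only (not from the paper): a 100 N contact sliding 1 m,
# with an assumed k = 1e-4 and an assumed hardness of 2 GPa for the liner steel.
v = archard_wear_volume(100.0, 1.0, 1e-4, 2e9)
```

Dividing the worn volume by the contact area gives a wear depth, which is the quantity EDEM reports per surface element.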

Fig. 8. The divided area of the liner and the lifting bars

Fig. 9. The wear distribution of specified area in different liner and lifting bars

As shown in Fig. 9, for the liner on which the smooth-surface lifting bar is
mounted, the main wear positions are located between the upper surface of the
lifting bar and the non-lifting surface, i.e., between region 9 and region 14.
The amount of wear in region 9 is 5.19E−7 mm. This is because of the
right-angled trapezoidal liner used in this simulation: stress concentration
appears at the common edge of the non-lifting surface (region 9) and the upper
surface. At the same time, the transition region between the upper surface and
the lifting surface (region 13) undergoes relatively intense friction during the
grinding process as the material is raised to the highest point, which is why
the amount of wear there is correspondingly high. The wear of regions 10 to 12
is also relatively large because, from the time the upper surface contacts the
material to the time the material begins to drop, the material does more work on
these locations than on the lifting surface, so the amount of wear is slightly
larger than in regions 13 to 15. The wear of the other regions is significantly
lower than that of regions 9 to 14.

6 Conclusions

In this paper, the stress distribution of the lifting bar and its surrounding
area during the grinding process is analyzed. At the same time, the wear areas
of lifting bars with different surface features are compared. The results are
summarized as follows:
(1) During the grinding process, the stress of the lifting bar is mainly
concentrated in the adjacent part of the lifting bar and the liner, and the
values at the two ends are larger than in the intermediate part. The
smooth-surface lifting bar shows obvious stress concentration during the
grinding process, and the concentrated area is the widest among the four lifting
bars. A stress relaxation effect occurs in all three different bionic feature
units, which reduces the equivalent stress of the lifting bar to a certain
extent and improves its wear resistance.
(2) Although the stress on the convex hull type bionic feature unit is
significantly reduced, its effect in releasing the equivalent stress on the
lifting bar base is lower than that of the horizontal stripe. The order of the
stress-relieving effect of the bionic feature units, from high to low, is
horizontal stripe, convex hull and vertical stripe.
(3) During the grinding process, the wear of the liner surface and the lifting
surface of the right-angle trapezoidal lifting bar is significantly higher than
that of the liner between the lifting bars, and the most severely worn areas are
mainly concentrated near the common edge of the upper surface and the lifting
surface.
(4) Compared with the smooth-surface lifting bar, the liner with a bionic
feature lifting bar has better wear resistance, and the order of the wear
resistance of the liner, from high to low, is horizontal stripes, vertical
stripes and convex hull.
(5) The horizontal stripe bionic feature lifting bar can extend the replacement
period of the mill liner and avoid mill downtime caused by inconsistent
replacement periods between the liner and the lifting bar. It can also reduce
the number of liner replacements and extend the liner life cycle.

Acknowledgement. The work was fully supported by the Youth Project of Science and
Technology Department of Yunnan Province (No. 2017FD132). We gratefully acknowledge the
relevant organizations.

References
1. Zhao, M., Lu, Y., Pan, Y.: Review on the theory of pulverization and the development of
pulverizing equipment. Min. Metall. 10(2), 36–41 (2001)
2. Zhang, G.: Current status and development of crushing and grinding equipment. Powder
Technol. 4(3), 37–42 (1998)
3. Belov, Brandt: Grinding. Liaoning People’s Publishing House, Liaoning (1954)

4. Li, W.: Market and production of wear-resistant steel parts. In: Proceedings of the Annual
Meeting of Yunnan Wear-Resistant and Corrosion-Resistant Materials 2004, Kunming,
pp. 7–12 (2004)
5. Cao, Z., Wang, D.: Research progress of Marsh’s mother-of-pearl and seawater
pearls. J. South. Agric. Sci. 40(12), 1618–1622 (2009)
6. Zhang, X., Dong, W., Zhou, H., et al.: Numerical simulation of wear resistance of ball mill
lifting bars with biomimetic characteristics. Nonferrous Metals (Mineral Process.) 6, 56–62
(2017)
Performance of Spiral Groove Dry Gas
Seal for Natural Gas Considering
Viscosity-Pressure Effect of the Gas

Xuejian Sun1, Pengyun Song2(&), and Xiangping Hu3(&)

1 Mechanical and Electrical Engineering, Kunming University of Science and Technology, Kunming, Yunnan, China
2 Chemical Engineering, Kunming University of Science and Technology, Kunming, Yunnan, China
[email protected]
3 Industrial Ecology Programme, Department of Energy and Process Engineering, Norwegian University of Science and Technology, Trondheim, Norway
[email protected]

Abstract. Centrifugal compressors used for transporting natural gas are usually
equipped with dry gas seals. The working medium of the seal is usually the
delivered gas, that is, natural gas. In this paper, the natural gas viscosity-pressure
equation is derived from the Pederson mixed gas viscosity model and Lucas
viscosity-pressure model, and the real gas property of natural gas is expressed by
Redlich-Kwong equation. The gas film pressure governing equations proposed
by Muijderman for narrow grooves are modified and solved for the seal faces.
The influences of natural gas viscosity-pressure effect on the sealing charac-
teristics, such as leakage rate and opening force, of spiral groove dry gas seal are
analyzed. Results show that the viscosity-pressure effect has significant influ-
ence on spiral groove dry gas seal. This effect reduces the leakage rate but
increases the opening force, compared to the situation without considering the
viscosity-pressure effect. With the pressure up to 4 MPa, the viscosity-pressure
effect of natural gas is weak and negligible. As the pressure increases, the
viscosity-pressure effect increases. At 12 MPa, the relative deviations of leakage
rate and opening force caused by the viscosity-pressure effect are respectively
−30.6% and 1.65%. Therefore, the analyses indicate that the viscosity-pressure
effect of natural gas needs to be considered when used in high pressure situation.

Keywords: Dry gas seal · Natural gas · Analytical method · Viscosity-pressure effect

1 Introduction

In natural gas long-distance pipelines, the compressors used for transporting
natural gas are usually equipped with dry gas seals as their shaft-end seals.
The working medium of the seal is usually the delivered gas, that is, natural
gas. Typically, the natural gas in the pipeline is a mixture of different gases,
and its composition differs between natural gas sources, so the physical
properties differ as well. Viscosity is an

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 11–19, 2020.
https://doi.org/10.1007/978-981-15-2341-0_2
12 X. Sun et al.

important physical property for dry gas seal, and this property of the natural gas is
closely related to gas components, temperature and pressure. In general, when the
isothermal flow is assumed, the viscosity of the natural gas is a function of composition
and pressure.
Daliri et al. [1] analyzed the variation of viscosity with pressure to obtain
squeeze film characteristics using a modified Reynolds equation and Stokes'
microcontinuum theory. Lin et al. [2] analyzed the effects of viscosity-pressure
dependency and studied the squeeze films between parallel circular plates under
non-Newtonian couple stress fluid lubrication. According to their results,
viscosity-pressure dependency raises the load capacity and lengthens the
approaching time of the plates. As to the viscosity-pressure effect on the dry
gas seal, Song et al. [3] analyzed the effect of the viscosity-pressure
behaviour of nitrogen gas on the sealing performance using the Lucas model.
Their results show that high pressure has significant effects on the opening
force, the leakage rate and the gas pressure at the spiral groove root radius.
However, nitrogen is a pure gas, and these results do not involve the viscosity
relationship of a mixed gas such as natural gas.
In this paper, for the spiral groove dry gas seal, the Pederson mixed gas
viscosity model and the Lucas viscosity-pressure model are used to express the
natural gas viscosity-pressure effect, and the real gas property of natural gas
is expressed by the Redlich-Kwong equation. The gas film pressure governing
equations proposed by Muijderman for narrow grooves are modified and solved for
the dry gas seal faces. The dry gas sealing characteristic parameters, such as
the opening force and leakage rate, are obtained.

2 Model Description
2.1 Geometry Model
The structural model of the spiral groove dry gas seal and the geometric model
of the seal face are shown in Fig. 1. In the geometric model, r_i and r_o are
the inner and outer radii of the sealing ring, respectively, and r_g is the
radius at the root of the spiral groove; ω is the angular velocity of the
sealing ring; p_i and p_o are the inlet and outlet pressures; and α is the helix
angle.

Fig. 1. Structural model of the spiral groove dry gas seal (a) and geometric model of seal face (b)

2.2 The Model of Natural Gas Viscosity-Pressure Effect

The Lucas natural gas viscosity-pressure equation [4] has the following form:

$$\eta_o(p_o, T_o) = \eta_0 \, \frac{1 + a_1 p_r^{1.3088}}{a_2 p_r^{a_5} + \left(1 + a_3 p_r^{a_4}\right)^{-1}} \tag{1}$$

Equation (1) is substituted into the Pederson mixed gas viscosity expression [5],
which yields the model of the natural gas viscosity-pressure effect:

$$\eta_{mix} = \left(\frac{T_{c,mix}}{T_{c0}}\right)^{-1/6} \left(\frac{p_{c,mix}}{p_{c0}}\right)^{2/3} \left(\frac{\bar{M}}{M_0}\right)^{1/2} \frac{\alpha_{mix}}{\alpha_0} \, \frac{1 + a_1 p_r^{1.3088}}{a_2 p_r^{a_5} + \left(1 + a_3 p_r^{a_4}\right)^{-1}} \, \eta_0 \tag{2}$$

The quantities $p_o$, $T_o$, $T_{c,mix}$, $p_{c,mix}$ and $\bar{M}$ are expressed as follows:

$$p_o = \frac{p\,p_{c0}}{p_{c,mix}} \cdot \frac{1.0 + 0.031\,\rho_r^{1.847}}{1.0 + 7.378\times10^{-3}\,\rho_r^{1.847}\,\bar{M}^{0.5173}}, \qquad T_o = \frac{T\,T_{c0}}{T_{c,mix}} \cdot \frac{1.0 + 0.031\,\rho_r^{1.847}}{1.0 + 7.378\times10^{-3}\,\rho_r^{1.847}\,\bar{M}^{0.5173}}$$

$$T_{c,mix} = \frac{\sum_{i=1}^{C}\sum_{j=1}^{C} n_i n_j \left[\left(\frac{T_{ci}}{p_{ci}}\right)^{1/3} + \left(\frac{T_{cj}}{p_{cj}}\right)^{1/3}\right]^3 \left(T_{ci}T_{cj}\right)^{1/2}}{\sum_{i}\sum_{j} n_i n_j \left[\left(\frac{T_{ci}}{p_{ci}}\right)^{1/3} + \left(\frac{T_{cj}}{p_{cj}}\right)^{1/3}\right]^3}, \qquad p_{c,mix} = \frac{8\sum_{i=1}^{C}\sum_{j=1}^{C} n_i n_j \left[\left(\frac{T_{ci}}{p_{ci}}\right)^{1/3} + \left(\frac{T_{cj}}{p_{cj}}\right)^{1/3}\right]^3 \left(T_{ci}T_{cj}\right)^{1/2}}{\left\{\sum_{i}\sum_{j} n_i n_j \left[\left(\frac{T_{ci}}{p_{ci}}\right)^{1/3} + \left(\frac{T_{cj}}{p_{cj}}\right)^{1/3}\right]^3\right\}^2}$$

$$\bar{M} = 1.304\times10^{-4}\left[\left(\frac{\sum_{i=1}^{C} n_i M_i^2}{\sum_{i=1}^{C} n_i M_i}\right)^{2.303} - \left(\sum_{i=1}^{C} n_i M_i\right)^{2.303}\right] + \sum_{i=1}^{C} n_i M_i$$

where $a_i$ ($i = 1, \ldots, 5$) are correction factors, and $p_c$, $T_c$ are the critical pressure
and critical temperature obtained from the literature [5], respectively.
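The mixing rules for the pseudo-critical constants can be turned into a short numerical routine. The following sketch is illustrative (not the authors' code); the critical constants for CH4 and C2H6 are taken from standard tables.

```python
from itertools import product

def pedersen_critical_mix(n, Tc, pc):
    """Pseudo-critical temperature [K] and pressure [Pa] of a gas mixture
    from the Pedersen-type mixing rules quoted in Sect. 2.2: mole
    fractions n, component critical temperatures Tc and pressures pc."""
    num = den = 0.0
    for i, j in product(range(len(n)), repeat=2):
        vij = ((Tc[i] / pc[i]) ** (1.0 / 3.0) + (Tc[j] / pc[j]) ** (1.0 / 3.0)) ** 3
        den += n[i] * n[j] * vij
        num += n[i] * n[j] * vij * (Tc[i] * Tc[j]) ** 0.5
    # T_c,mix is the weighted ratio; p_c,mix carries the extra factor of 8.
    return num / den, 8.0 * num / den ** 2

# Equimolar CH4/C2H6 mixture; critical constants from standard tables.
Tc_mix, pc_mix = pedersen_critical_mix([0.5, 0.5], [190.6, 305.3], [4.599e6, 4.872e6])
```

A useful sanity check on the rules is the pure-component limit, where both expressions collapse to the component's own critical constants.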

2.3 The Real Gas Model of Natural Gas

The present research adopts the Redlich-Kwong equation [6]:

$$p = \frac{RT}{V - b} - \frac{a}{T^{0.5}\, V (V + b)} \tag{3}$$

and the gas state equation can be written as:

$$pV = ZRT \tag{4}$$

Substituting Eq. (3) into Eq. (4) yields:

$$Z = \left[-\frac{N}{2} + \sqrt{\left(\frac{N}{2}\right)^2 + \left(\frac{M}{3}\right)^3}\,\right]^{1/3} + \left[-\frac{N}{2} - \sqrt{\left(\frac{N}{2}\right)^2 + \left(\frac{M}{3}\right)^3}\,\right]^{1/3} + \frac{1}{3} \tag{5}$$

The real gas state equation is:

$$\rho = pM/(ZRT) \tag{6}$$

From Eqs. (5) and (6), the density of the natural gas is expressed as:

$$\rho_{mix} = \frac{pM/(RT)}{\left[-\frac{N}{2} + \sqrt{\left(\frac{N}{2}\right)^2 + \left(\frac{M}{3}\right)^3}\,\right]^{1/3} + \left[-\frac{N}{2} - \sqrt{\left(\frac{N}{2}\right)^2 + \left(\frac{M}{3}\right)^3}\,\right]^{1/3} + \frac{1}{3}} \tag{7}$$

with M, N, a, b, $a_{ij}$, $a_i$ and $b_i$ expressed as follows:

$$M = \frac{ap}{R^2 T^{2.5}} - \frac{bp}{RT} - \frac{p^2 b^2}{R^2 T^2} - \frac{1}{3}$$

$$N = -\frac{2}{27} + \frac{1}{3}\left(\frac{ap}{R^2 T^{2.5}} - \frac{bp}{RT} - \frac{p^2 b^2}{R^2 T^2}\right) - \frac{abp^2}{R^3 T^{3.5}}$$

$$a = \sum_{i=1}^{n}\sum_{j=1}^{n} y_i y_j a_{ij}, \qquad b = \sum_{i=1}^{n} y_i b_i, \qquad a_{ij} = \left(a_i a_j\right)^{0.5}\left(1 - k_{ij}\right)$$

$$a_i = 0.42748\,R^2 T_{ci}^{2.5}/p_{ci}, \qquad b_i = 0.08664\,R T_{ci}/p_{ci}$$

where $a_i$, $a_j$ are pure-substance parameters; $y_i$, $y_j$ are the mole fractions of the
pure substances i and j in the mixture; and $k_{ij}$ is the binary interaction
coefficient of pure substances i and j. These parameters can be found in the
literature [6].
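Equation (5) is the closed-form root of the cubic in Z that follows from Eqs. (3) and (4). Numerically it is often more robust to locate the gas-phase root iteratively; the sketch below (an illustration, not the authors' code) solves Z³ − Z² + (A − B − B²)Z − AB = 0 by Newton's method, with A = ap/(R²T^2.5) and B = bp/(RT) built from the Redlich-Kwong constants.

```python
def rk_compressibility(p, T, Tc, pc, R=8.314):
    """Gas-phase compressibility factor Z from the Redlich-Kwong EOS:
    the root of Z^3 - Z^2 + (A - B - B^2) Z - A B = 0, found by
    Newton iteration started from the ideal-gas value Z = 1."""
    a = 0.42748 * R ** 2 * Tc ** 2.5 / pc   # RK attraction parameter
    b = 0.08664 * R * Tc / pc               # RK co-volume parameter
    A = a * p / (R ** 2 * T ** 2.5)
    B = b * p / (R * T)
    Z = 1.0
    for _ in range(50):
        f = Z ** 3 - Z ** 2 + (A - B - B ** 2) * Z - A * B
        df = 3.0 * Z ** 2 - 2.0 * Z + (A - B - B ** 2)
        step = f / df
        Z -= step
        if abs(step) < 1e-12:
            break
    return Z

# Methane at 300 K, 0.1 MPa (Tc = 190.6 K, pc = 4.599 MPa, from standard tables):
Z = rk_compressibility(1.0e5, 300.0, 190.6, 4.599e6)
print(round(Z, 4))  # close to 1, since 0.1 MPa is nearly ideal
```

As expected for a real gas, the same routine gives a noticeably smaller Z at high pressure, which is exactly the regime where the ideal-gas assumption (Z = 1) used for the comparison cases breaks down.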

2.4 Modified Gas Film Pressure Governing Equations

The modified gas film pressure governing equations can be obtained by
substituting Eqs. (2) and (6) into the gas film pressure governing equations
based on the Muijderman narrow groove theory [7].
For the sealed dam area, the equation has the form:

$$\frac{dp}{dr} = \frac{6\,\eta_{mix} S_t R T}{\pi h^3} \cdot \frac{1}{\rho_{mix}\, r} \tag{8}$$

For the sealed groove area, the equation can be written as:

$$\frac{dp}{dr} = \frac{6\, g_1 \eta_{mix}\, \omega r}{h^2} + \frac{6\,\eta_{mix} S_t g_7 R T}{\pi h_1 h^2 g_5} \cdot \frac{1}{\rho_{mix}\, r} \tag{9}$$

Here $S_t$ is the mass flow rate of the gas passing through the sealing surface;
h and $h_1$ are, respectively, the film thicknesses of the non-groove and groove
areas, and they fulfil the relationship $h_1 = h + t$, where t is the groove
depth of the spiral groove; ω is the angular velocity of rotation of the sealing
ring; and $g_1$, $g_5$ and $g_7$ are the spiral groove coefficients, which can
be obtained from the literature [7].

2.5 Solution of the Gas Film Pressure Governing Equations

The boundary conditions of Eqs. (8) and (9) are:

$$p|_{r=r_i} = p_i, \qquad p|_{r=r_o} = p_o$$

The pressure distribution p(r) of the end face film is obtained, and the end
face opening force F is obtained by integrating over the entire end face:

$$F = \int_{r_i}^{r_o} p(r)\, 2\pi r \, dr \tag{10}$$

The leakage rate $S_t$ is expressed as:

$$S_t = \frac{\pi h^3 \left(p_g^2 - p_i^2\right)}{12\, \eta R T \ln\left(r_g / r_i\right)} \tag{11}$$
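Equations (10) and (11) can be evaluated directly. The sketch below is illustrative only: the film thickness and radii are loosely based on Table 2, while the viscosity, temperature and the methane-like specific gas constant are assumed values not taken from the paper.

```python
import math

def leakage_rate(h, p_g, p_i, eta, Rg, T, r_g, r_i):
    """Mass leakage rate S_t across the sealing dam, Eq. (11), for
    isothermal ideal-gas flow; Rg is the specific gas constant R/M
    of the sealed gas in J/(kg K)."""
    return (math.pi * h ** 3 * (p_g ** 2 - p_i ** 2)
            / (12.0 * eta * Rg * T * math.log(r_g / r_i)))

def opening_force(p_of_r, r_i, r_o, n=2000):
    """End face opening force F = integral of p(r) * 2*pi*r dr, Eq. (10),
    evaluated with the trapezoidal rule on n intervals."""
    dr = (r_o - r_i) / n
    total = 0.5 * (p_of_r(r_i) * 2.0 * math.pi * r_i
                   + p_of_r(r_o) * 2.0 * math.pi * r_o)
    for k in range(1, n):
        r = r_i + k * dr
        total += p_of_r(r) * 2.0 * math.pi * r
    return total * dr

# Film 3 um; groove-root and inner radii from Table 2; gas properties assumed.
St = leakage_rate(h=3e-6, p_g=4.0e6, p_i=0.1013e6, eta=1.2e-5,
                  Rg=518.3, T=300.0, r_g=0.069, r_i=0.05842)
```

Because the leakage rate scales with h³, small changes in film thickness dominate the leakage trend reported in Sect. 4.1.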

3 Analytical Model and Verification

3.1 Model Verification


The results from Eqs. (2) and (7) obtained in this paper are compared with
literature data under different pressure conditions, as illustrated in Fig. 2.
The results show that the average deviations of the natural gas compressibility
factor and viscosity from the National Institute of Standards and Technology
(NIST) database [8] are 0.344% and 1.45%, respectively.

[Fig. 2 comprises two plots versus pressure P/MPa: the compressibility factor Z
(current data, NIST database, GERG-2008 natural gas data) and the natural gas
viscosity η_mix/10⁻⁵ Pa·s (current data, Ref. [9], RealPipe data, NIST database).]

Fig. 2. The comparison between the current data and reference data [9, 10]

3.2 Property Parameters

The natural gas component parameters and the seal face geometric parameters are
listed in Tables 1 and 2; these parameters are taken from the literature [5, 11].

Table 1. Natural gas components and parameters

Component     | CH4   | C2H6  | C3H8  | i-C4H10 | n-C4H10 | CO2   | N2
Mole fraction | 0.812 | 0.043 | 0.009 | 0.0015  | 0.0015  | 0.076 | 0.057

Table 2. Basic parameters of the numerical calculation

Parameter               | Value | Parameter                     | Value
Outer radius, r_o/mm    | 77.78 | Radius of groove root, r_g/mm | 69
Inner radius, r_i/mm    | 58.42 | Spiral groove angle, α/°      | 15
Number of grooves, n    | 12    | Groove depth, h_g/μm          | 5
Film thickness, h_0/μm  | 3.0   | Groove width ratio, γ         | 1

3.3 Relative Errors

As shown in Sect. 2.5, the leakage rate S_t and the opening force F of the
natural gas with the viscosity-pressure effect and the real gas property can be
obtained from Eqs. (10) and (11). Furthermore, two additional cases are
analyzed, i.e., the ideal gas case obtained by setting Z = 1 and the case
ignoring the viscosity-pressure effect by using a constant viscosity. To express
the effect of the natural gas viscosity-pressure behaviour on spiral groove dry
gas seals, the following relative errors are used:

E1 = (leakage rate of G1 − leakage rate of G3)/(leakage rate of G3) × 100%
E2 = (leakage rate of G2 − leakage rate of G4)/(leakage rate of G4) × 100%
E3 = (opening force of G1 − opening force of G3)/(opening force of G3) × 100%
E4 = (opening force of G2 − opening force of G4)/(opening force of G4) × 100%

where G1 is the ideal gas with the viscosity-pressure effect; G2 is the real gas
with the viscosity-pressure effect; G3 is the ideal gas without the
viscosity-pressure effect; and G4 is the real gas without the viscosity-pressure
effect.

4 Results and Discussion

The internal pressure boundary condition is 0.1013 MPa, and the external
pressure p_o is 0.6 MPa, 4 MPa or 12 MPa, respectively. The sealing performance
is calculated at different film thicknesses. The results are shown in Figs. 3
and 4.

4.1 Leakage Rate


The leakage rates of G1 to G4 under different pressures and film thicknesses are
shown in Fig. 3. It can be seen from Fig. 3(a)–(c) that the leakage rate
increases with the increase of film thickness and pressure. Figure 3(d) is a
three-dimensional (3-D) map of pressure, film thickness and leakage rate, from
which it can be seen that the interaction between the leakage rate and the two
parameters, i.e. the film thickness and pressure, is obvious. When the pressure
reaches 0.6 MPa, the averages of E1 and E2 are −0.070% and −1.193%,
respectively. Negative values of E1 and E2 indicate that the viscosity-pressure
effect reduces the leakage rate. The reason is that as the pressure increases,
the viscosity increases and the gas flow decreases, which results in the
decrease in the leakage rate. The fact that │E2│ is greater than │E1│ indicates
that the viscosity-pressure effect has a stronger influence on a real natural
gas spiral groove dry gas seal than under the ideal gas assumption. When the
pressure reaches 12 MPa, the averages of E1 and E2 are −28.622% and −30.6%,
respectively. These results show that the viscosity-pressure effect has a clear
influence on the leakage rate of the dry gas seal.

Fig. 3. Leakage at different film thicknesses and different po

4.2 End Face Opening Force

The opening forces of G1 to G4 under different pressures and film thicknesses
are shown in Fig. 4. The results in Fig. 4(a)–(c) show that the opening force
increases with the increase of pressure but decreases with the increase of film
thickness. From the three-dimensional (3-D) map of pressure, film thickness and
opening force, it can be seen that the effect of pressure on the opening force
is more obvious than that of film thickness. E3 and E4 are greater than 0,
indicating that the viscosity-pressure effect of natural gas raises the opening
force. At 0.6 MPa, the opening forces of G1 to G4 almost overlap. As the
pressure increases, the relative errors of the opening forces of G1 to G4
increase. When the pressure reaches 4 MPa, the average values of E3 and E4 are
0.503% and 0.8120%, respectively. When the pressure reaches 12 MPa, the average
values of E3 and E4 are 0.901% and 1.6472%, respectively. It is shown that below
4 MPa, the effect of the natural gas viscosity-pressure behaviour on the opening
force is negligible.

Fig. 4. Opening force of the four kinds of gas at different film thicknesses



5 Conclusions

For the spiral groove dry gas seal of a centrifugal compressor conveying natural
gas, the natural gas viscosity-pressure effect is analyzed based on the narrow
groove theory of the spiral groove. The conclusions of the present research are
as follows: (1) The viscosity-pressure effect reduces the gas leakage rate but
increases the opening force. (2) Up to 4 MPa, the natural gas viscosity-pressure
effect is weak. As the pressure increases, the viscosity-pressure effect
increases. (3) At 12 MPa, the relative deviations of the leakage rate and the
opening force caused by the viscosity-pressure effect are −30.6% and 1.6472%,
respectively. The viscosity-pressure effect of natural gas needs to be
considered in high pressure situations.

Acknowledgement. The research is supported by the National Natural Science
Foundation of China (Grant No. 51465026).

Analysis of Residual Stress for Autofrettage
High Pressure Cylinder

Guiqin Li1(&), Yang Li1, Jinfeng Shi1, Shijin Zhang1, and Peter Mitrouchev2(&)

1 Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai 200072, China
[email protected]
2 University Grenoble Alpes, G-SCOP, 38031 Grenoble, France
[email protected]

Abstract. Based on the elastoplastic theory of materials, the distribution of residual stress is analyzed with Huang's model to obtain the optimum autofrettage pressure, considering the strain-hardening behavior of the plastic material and the Bauschinger effect. The BLH (bilinear hardening) simulation model is set up by applying 16 different autofrettage pressures from 400 MPa to 1200 MPa. The optimized autofrettage stress distribution is obtained by comparing the working stress of the 16 groups of experiments. Then Huang's model is simplified, and the trend of working stress with residual stress is obtained. The reliability of the simplified model is verified, which provides a basis for the design of autofrettaged high-pressure cylinders.

Keywords: Bauschinger effect · Autofrettage · Residual stress · High pressure cylinder

1 Introduction

The essence of self-enhancement (autofrettage) technology is the rational use of the residual compressive stress field. Therefore, the study of the residual stress field is the basis for autofrettage technology to improve the fatigue characteristics of ultra-high-pressure parts. For ultra-high-pressure pumps, it is an indisputable fact that the fatigue characteristics of components can be greatly improved by introducing a residual stress field. However, how can the change in fatigue characteristics due to the residual stress field be quantified? Over the years, many Chinese and foreign researchers have explored this question.
The feasibility of finite element analysis for prestressed scenarios is demonstrated by Parker [1], including the plastic deformation of metals with highly nonlinear stress-strain curves. Bhatnagar [2] presents a new concept of self-reinforced composite tube and models the autofrettage process considering the Bauschinger effect. Gibson [3] proposes a swage-like model, which applies deformation through a pressure band to investigate the effect of local load and shear stress on the residual stress field. Peng [4] considers the energy stored in the residual micro-stress field of the plastically deformed material and its influence on subsequent plastic deformation, and derives a 3D constitutive equation for large elastic deformation.

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 20–28, 2020.
https://doi.org/10.1007/978-981-15-2341-0_3

Domestic researchers have also done a lot of work on the autofrettage model of the
high pressure cylinder. Huang [5] proposes an autofrettage model considering the
material strain-hardening relationship and the Bauschinger effect, based on the actual
tensile and compressive stress-strain curve of material, plane-strain, and modified yield
criterion. Yuan [6, 7] studied the effects of end conditions of the vessel and material
parameters on the residual stress. Based on the third strength theory, Zhu [8] analyzed the theoretical relations among the equivalent stress of the total stresses at the elastoplastic juncture, the depth of the plastic zone and reverse yielding, and the load-bearing capacity of an autofrettaged cylindrical pressure vessel, demonstrating them with combined graphical and analytical methods. On the basis of a nonlinear kinematic hardening model, Fu and Yang [9] present a nonlinear combined hardening model which is used to describe the mechanical properties of an autofrettaged barrel and to calculate the residual stress distribution. Based on the energy dissipation method, Yang [10] derived the fatigue damage and fatigue life functions of composite thick-walled cylinders.

2 Residual Stress Analysis

2.1 Analysis of Residual Stress in Self-reinforced High Pressure Cylinder


According to the experimental data, the fatigue life of the high-pressure cylinder is only
500–600 h. In order to improve the fatigue life of the high-pressure cylinder, this study
intends to use the self-enhancement technology to treat the high-pressure cylinder. The
essence of self-enhancement technology is the rational use of residual compressive
stress field. Therefore, the study of residual stress field is the basis of autofrettage
technology to improve the fatigue characteristics of ultra-high pressure parts. The
material of the high-pressure cylinder is a plastic material, which exhibits the Bauschinger effect and strain-hardening behavior. Therefore, if the EPP (elastic-perfectly-plastic) model is used for the calculation, there will be a large deviation.
In this paper, the optimal autofrettage stress is solved with Huang's model, which defines the critical autofrettage stress, i.e. the optimum autofrettage stress, as the pressure at which reverse yield occurs exactly at the inner wall of the self-reinforced cylinder. According to formula (2-1):

P_{zzq} = \frac{a \sigma_E}{2} \left[ 1 - \left( \frac{r_i}{r_o} \right)^2 \right]    (2-1)

P_{zzq}: optimum autofrettage stress;
r_i: autofrettage cylinder inner diameter;
r_o: autofrettage cylinder outer diameter;
a: yield criterion parameter.

The Bauschinger effect is represented by the parameter σ_E, which can be obtained from the tensile-compressive stress-strain curve of the material. In this paper, σ_E = 1.684σ_s and a = 1.21, giving a calculated optimum autofrettage pressure of 861.167 MPa. In order to verify the reliability of this result, a finite element virtual simulation is carried out.
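As a quick numerical check, formula (2-1) can be evaluated directly. The sketch below is only an illustration: the yield strength and radii used in the example call are hypothetical placeholders, not the cylinder studied in this paper.

```python
def optimum_autofrettage_pressure(sigma_s, a, r_i, r_o):
    """Optimum autofrettage pressure per formula (2-1), in MPa.

    sigma_E = 1.684 * sigma_s encodes the Bauschinger effect, as
    measured from the tensile-compressive curve in this paper.
    """
    sigma_e = 1.684 * sigma_s  # MPa
    return a * sigma_e / 2.0 * (1.0 - (r_i / r_o) ** 2)

# Hypothetical example: sigma_s = 800 MPa, r_i/r_o = 1/3
p_opt = optimum_autofrettage_pressure(800.0, 1.21, 5.0, 15.0)
```

With the paper's own σ_s, a = 1.21 and the real cylinder geometry, the same function would reproduce the 861.167 MPa value quoted above.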

2.2 Finite Element Analysis of Residual Stress


In order to facilitate the simulation of the self-enhancement process, the reverse hardening stage has been optimized in this paper by replacing the original curve with a multi-segment polyline. The tensile-compression curve of the newly constructed self-reinforced wall material is shown in Fig. 1, where the DEFG section is the reverse hardening stage, which characterizes the influence of the Bauschinger effect on the material.

Fig. 1. Material model
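A multi-segment polyline like the one in Fig. 1 can be represented as a lookup table of (strain, stress) break points and evaluated by linear interpolation. The break-point values below are invented for illustration only; the real DEFG values come from the material tests described in this paper.

```python
import numpy as np

# Hypothetical break points of a multi-segment stress-strain polyline
# (strain, stress in MPa); not the measured curve of the paper.
strain_pts = np.array([0.000, 0.004, 0.010, 0.020, 0.040])
stress_pts = np.array([0.0, 800.0, 900.0, 950.0, 980.0])

def polyline_stress(strain):
    """Stress on the piecewise-linear hardening curve at a given strain."""
    return float(np.interp(strain, strain_pts, stress_pts))

s = polyline_stress(0.007)  # falls on the second segment
```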

2.3 Analysis of Working Stress of Non-autofrettage High Pressure Cylinder

Taking a high-pressure cylinder of a waterjet cutting machine as the research object, the model was simplified for ease of calculation and built from the external dimensions only. Since the model is relatively simple, it can be modeled directly in Abaqus to avoid unpredictable errors during model import. The working pressure of the high-pressure cylinder is 420 MPa. The stress-strain contours of the high-pressure cylinder under the working pressure are shown in Fig. 2. It can be seen that the maximum stress is 699 MPa (Fig. 3).

Fig. 2. 420 MPa stress contour
Fig. 3. Plastic deformation contour

3 FEA of Residual Stress


3.1 Stress Distribution of the BLH Model
The high-pressure cylinder is approximately subjected to a pulsating-cycle ultra-high internal pressure load. In the simulation, the pulsating cyclic load is represented by the working load of 420 MPa, which keeps the calculation on the safe side. Different autofrettage pressures were applied, divided into 16 groups from 400 MPa to 1200 MPa. Only the autofrettage pressure differs between experiments, so this paper uses Python to script the simulation process and reduce the amount of repetitive work. The distribution law obtained by the simulation experiments is shown in Fig. 4. As the autofrettage pressure increases, the Mises stress increases slowly, while the radial stress improves significantly. Analysis of the residual stress curve reveals that residual stress begins to appear when the autofrettage pressure is 550 MPa. When the autofrettage pressure reaches 600 MPa, the thick-walled cylinder begins to yield, and the residual stress increases significantly. Comparing the radial residual stress with the tangential residual stress, the radial residual stress is almost negligible.
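The batch of 16 load cases can be scripted along these lines. The sketch below only generates the pressure levels and job names; the call into the Abaqus Python interface (the commented `run_job` driver) is a hypothetical placeholder, not the paper's actual script.

```python
import numpy as np

def make_load_cases(p_min=400.0, p_max=1200.0, n=16):
    """Evenly spaced autofrettage pressures (MPa) with job names."""
    pressures = np.linspace(p_min, p_max, n)
    return [(f"autofrettage_{int(round(p))}MPa", p) for p in pressures]

cases = make_load_cases()

# for name, p in cases:
#     run_job(name, autofrettage_pressure=p)  # hypothetical Abaqus driver
```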

Fig. 4. Equivalent stress contour



The PEEQ contour of the thick-walled cylinder is shown in Fig. 5. As the autofrettage pressure increases, the equivalent plastic strain gradually increases. When the autofrettage pressure is 550 MPa, plastic strain begins to appear; below 800 MPa the strain increases slowly, and above 800 MPa it starts to increase sharply. When the autofrettage pressure increases to 1200 MPa, the cylinder begins to enter the fully plastic state.

Fig. 5. PEEQ contour

The working load of the water jet high-pressure cylinder studied in this paper is 420 MPa, and the maximum stress under the working load without autofrettage treatment is 699.6 MPa. The maximum stress after autofrettage treatment is shown in Fig. 6. It can be seen from the figure that the maximum working stress begins to decrease once the autofrettage pressure exceeds 550 MPa. When the autofrettage pressure increases to 800 MPa, the working stress drops to a minimum of 495.6 MPa; if the autofrettage pressure continues to increase, the working stress begins to rise again. Therefore, the working stress takes its lowest value near 800 MPa.

Fig. 6. Residual stress and working stress



It is found that the optimal autofrettage pressure calculated by the formula method differs considerably from that calculated with the bilinear hardening model. Therefore, in the engineering design process, it is not advisable to simulate the autofrettaged high-pressure cylinder using the BLH model.

3.2 Stress Distribution of Autofrettage Material Model


The Bauschinger effect refers to the phenomenon that plastic strain strengthening caused by forward loading during metal plastic processing leads to plastic strain softening during subsequent reverse loading. When a metal is first stretched into the plastic deformation stage, unloaded to zero and then loaded in reverse (i.e. compressed), its compressive yield limit is significantly lower than the original yield limit. The object of this paper is the high-pressure cylinder of a water jet cutting machine; its load is pulsating, with a deformation process of repeated loading, unloading and reloading. Therefore, the Bauschinger effect must be considered.
According to reference [5], the autofrettage residual stress distribution can be obtained by subtracting the corresponding unloading stress from the loading stress, and the calculation of the residual stress can be divided into two cases and three regions. The first case is completely elastic unloading: no reverse yielding occurs, so the Bauschinger effect is negligible. The second case is elastoplastic unloading, which consists of three regions: the loading-elastic/unloading-elastic zone, the loading-plastic/unloading-elastic zone, and the loading-plastic/unloading-plastic zone. The analysis shows that when reverse yield occurs exactly at the inner wall of the self-reinforced cylinder, the corresponding pressure is the optimum autofrettage pressure. This paper combines field variables with a multi-analysis-step approach to model the self-reinforced material. The simulation process can be roughly divided into three steps: autofrettage stress loading, autofrettage stress unloading, and working load loading. σ_s, E1 and E2 are measured by the tensile and compressive tests of the material. For simplified calculation, E3 = E2, while E4, E5 and E6 are obtained by interpolation.

Fig. 7. Radial distribution of tangential stress

The radial distribution of the maximum tangential residual stress is shown in Fig. 7, where green, blue, yellow and brown represent the tangential residual stress distribution at 800 MPa, 850 MPa, 900 MPa and 950 MPa, respectively. It can be seen that the maximum tangential residual stress occurs at the inner wall. Along the radial direction, the residual stress first decreases sharply with increasing thickness and, after a wall thickness of 7 mm, enters a relatively gentle phase.

Fig. 8. Max working stress change trend (autofrettage model vs. BLH model)

The working stress diagram is shown in Fig. 8. The orange curve is the maximum working stress under the ideal elastoplastic model. It can be seen that when the autofrettage stress is less than 550 MPa, the material does not show any strengthening effect; above 550 MPa, the working stress begins to drop significantly. The blue curve is the maximum working stress under the autofrettage material model. The two are approximately coincident before
750 MPa; beyond 750 MPa they begin to separate. Because of the Bauschinger effect, the self-reinforcing effect is weakened, so the influence of the Bauschinger effect must be considered in the design process; otherwise there will be significant losses.

3.3 Autofrettage Stress Optimization


It can be concluded from the calculation and simulation experiments that an autofrettage pressure of 850 MPa is the optimum. At this pressure, the distribution of equivalent residual stress and equivalent working stress along the cross section of the high-pressure cylinder is shown in Figs. 9 and 10. It can be seen from the above analysis that applying the self-enhancement technology effectively reduces the working stress of the high-pressure cylinder during operation, thereby improving its working life. When the autofrettage stress exceeds 750 MPa, the change of working stress flattens with further increase of pressure, while the residual stress keeps growing relatively fast. A large residual stress can reduce the working stress and thereby increase fatigue life; however, since the high-pressure cylinder is then under high stress for a long time, the probability of initial cracking also increases.

Fig. 9. Equivalent residual stress Fig. 10. Equivalent working stress

4 Conclusions
(1) By simplifying the water jet high pressure cylinder model, the parameters of the
self-reinforced model are determined, and the distribution of residual stress is
calculated. The value of the best self-reinforced stress is obtained.
(2) The Huang’s model is simplified to facilitate finite element analysis calculations.
Based on abaqus software, a three-dimensional finite element parametric model of
high-pressure cylinder is established. The method of simulating the self-
enhancement process of high-pressure cylinder by multi-load step and field
variable method is given. The finite element simulation of the BLH model and the
self-enhancement model is carried out, and the simulation results are obtained.
The maximum residual stress and working stress under different self-reinforced
stresses are analyzed. The optimal self-reinforced pressure is obtained by
28 G. Li et al.

comparison, and the distribution of tangential residual stress along the radial
direction is obtained under the optimal self-reinforced stress. The simulation
results show that the BLH model does not consider the Bauschinger effect and the
reverse yielding effect, and the results obtained deviate greatly from the real
situation. The results of the self-enhanced model simulation more accurately
reflect the distribution of residual stress.

References
1. Parker, A.P.: Autofrettage of open-end tubes—pressures, stresses, strains, and code
comparisons. J. Press. Vessel Technol. 123(271), 8 (2001)
2. Bhatnagar, R.M.: Modelling, validation and design of autofrettage and compound cylinder.
Eur. J. Mechan. 39, 17–25 (2013)
3. Gibson, M.C.: Determination of Residual Stress Distributions in Autofrettaged Thick-
Walled Cylinders. Cranfield University, Cranfield (2008)
4. Peng, X., Balendra, R.: Application of a physically based constitutive model to metal
forming analysis. J. Mater. Process. Technol. 145(2), 180–188 (2004)
5. Huang, X.P., Cui, W.: Effect of bauschinger effect and yield criterion on residual stress
distribution of autofrettaged tube. ASME J. Press. Vessel Technol. 128(2), 212–216 (2006)
6. Yuan, G.: Analysis of residual stress for autofrettaged ultrahigh pressure vessels. Zhongguo
Jixie Gongcheng/China Mech. Eng. 22(5), 536–540 (2011)
7. Yuan, G.: Numerical analysis of residual stress and strength of autofrettage high pressure
cylinder finishing. Mech. Des. Manuf. (03), 229–232 (2015)
8. Zhu, R.: Study on autofrettage of cylindrical pressure vessels. J. Mech. Eng. 46(6), 126–133
(2010)
9. Fu, S., Yang, G.: A nonlinear combined hardening model for residual stress analysis of autofretted thick-walled cylinder. Acta Armamentarii (07) (2018)
10. Yang, Z.Y., Qian, L.F.: Research on fatigue damage of composite thick wall cylinder. Chin.
J. Appl. Mech. 30(03), 378–383 (2013)
Study on the Detection System for Electric
Control Cabinet

Lixin Lu1, Weicong Wang1, Guiqin Li1(&), and Peter Mitrouchev2(&)

1 Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai 200072, China
[email protected]
2 University Grenoble Alpes, G-SCOP, 38031 Grenoble, France
[email protected]

Abstract. The electric control cabinet of a massage armchair, as a control unit, has a crucial influence on the normal operation of the massage armchair, so manual detection of the electric control cabinet should be replaced by an efficient intelligent detection system to improve the reliability of detection results. Combining the characteristics of the electric control cabinet with several detection methods widely used at present, this paper presents a detection system for the electric control cabinet of a massage armchair to meet the urgent demand for precise and efficient detection, with LabVIEW as the software foundation and ADAM data acquisition modules as the hardware support.

Keywords: Cabinet · Detection · LabVIEW · ADAM module

1 Introduction

Nowadays, the application of automated detection systems plays an important role in


promoting the development of enterprises [1].
Lately, the increasing popularity of massage chairs has led to the formation of an independent industrial chain, and the relevant standards and industry norms for massage chairs have officially appeared [2]. The need for massage chair testing is becoming more urgent [3]. For the detection of the power plant control cabinet, Yang [4] proposed checking the analog output signal of the power station by providing an analog input signal. For the detection of a subway control system, Lan [5] proposed a hardware system that directly connects to the tested cabinet, gives test inputs to the cabinet and records the response.
This paper takes the electric control cabinet as the detection object and designs a
detection system for electric control cabinet based on distributed network.

2 System Introduction

The system mainly consists of detection module and data analysis module. The
detection system of electric control cabinet mainly aims to confirm whether the control
function of the cabinet is normal or not. Therefore, this paper regards the electric control cabinet as a "black box", so that less attention is paid to its internal structure and electric control principle, and emphasis is placed on the output signal generated by the cabinet for a given input signal. From this perspective, all the electric control cabinets of massage armchairs share a common characteristic regardless of type: in short, the input and output signals of the electric control cabinet both consist of analog signals and control signals.

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 29–36, 2020.
https://doi.org/10.1007/978-981-15-2341-0_4
The upper computer of this system is programmed in LabVIEW. It controls the operation of the electric control cabinet through an RS-232-to-TTL module in an industrial personal computer (IPC), and collects the pressure value and analog voltage and controls peripherals through the RS485 bus and Advantech ADAM modules. As shown in Fig. 1, the control system commands each data acquisition module through the bus and achieves control of the detection system and data management based on LabVIEW.

Fig. 1. Topology of control network

3 Detection Module

Generally, different signal paths are used to enter the controller according to the signal type, roughly including digital input, digital output, analog input, analog output, counter/frequency input and other modules. As shown in Fig. 2, the most common system structure is to control the ADAM modules (I/O modules from Advantech Co., Ltd.) on an RS485 network through the IPC and RS232/RS485 conversion modules. Each ADAM module is connected to the RS485 bus and operates independently, without any data exchange between modules, to achieve data transmission and reception.

In this detection system, the electric control cabinet outputs signals including pressure value, analog voltage and digital quantities, which can be acquired through I/O modules. Therefore, it is quite appropriate to apply the distributed data acquisition and control structure based on the RS485 bus to this system.

Fig. 2. Common system structure of ADAM module
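Polling an ADAM module over the RS485 bus can be sketched as below. The ASCII framing (`#AA` followed by a carriage return) follows the general ADAM-4000 convention, but the exact command strings, address and serial settings here are assumptions to be checked against the module manual, and the serial I/O relies on the third-party `pyserial` package.

```python
def build_query(address):
    """ASCII query frame asking one ADAM module for its analog inputs.

    Assumed frame "#AA<CR>" (AA = two-digit hex module address), after
    the general ADAM-4000 ASCII convention; verify against the manual.
    """
    return "#%02X\r" % address

def read_analog_inputs(port, address, baudrate=9600, timeout=1.0):
    """Send the query over the RS485 adapter and return the raw reply."""
    import serial  # third-party pyserial package (assumed available)
    with serial.Serial(port, baudrate, timeout=timeout) as bus:
        bus.write(build_query(address).encode("ascii"))
        return bus.read_until(b"\r").decode("ascii")

# reply = read_analog_inputs("COM3", 0x01)  # hypothetical port and address
```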

As for the detection of the airbags, the output pressure of the air pump is measured with an air gauge, and the working condition of each airbag is judged from the high or low levels controlling the solenoid valves. As for the detection of the movement, a set of load resistances in the detection system is used to simulate the working condition of the electric control cabinet, so as to measure the output voltage under the simulated load and judge whether the corresponding function meets the requirements. Finally, after the detection is completed, the data are stored in the local database to generate a report, and the system is reset for the next detection.
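The pass/fail judgment on a motor channel driven into a dummy load reduces to Ohm's law on the measured voltage. The voltage window and current limit below are invented placeholders for illustration, not values from this paper.

```python
def motor_output_ok(voltage_v, load_ohm, v_range=(20.0, 26.0), i_max=3.0):
    """Judge one motor drive channel from the voltage measured across
    the simulated load resistance; the limits here are hypothetical."""
    current_a = voltage_v / load_ohm  # Ohm's law on the dummy load
    return v_range[0] <= voltage_v <= v_range[1] and current_a <= i_max

ok = motor_output_ok(24.0, 10.0)  # 24 V across a 10 ohm dummy load, 2.4 A
```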

4 Data Analysis Module

The system processes the measured data through a barometer to determine whether the
air pump function of the electric control cabinet is normal. However, due to the
interference of various external factors, the collected signals will inevitably have
various burrs. Therefore, the signal should be filtered to overcome the sudden distur-
bance caused by the abrupt disturbance or the sensor.
When the sampling rate of the ADC is higher than the actual demand and the
controller has enough actual filtering of the sensor for multiple acquisitions, the median
filtering method is generally adopted, which essentially determines the filter output
response by minimizing the absolute value of the error box.
Its mathematical principle is as follows: let a set of observations be {x_i} (1 ≤ i ≤ N), and find the best approximation x of {x_i} that minimizes the sum of the absolute errors. Then

Q = \sum_{i=1}^{N} | x - x_i |    (1)

\frac{dQ}{dx} = \frac{d}{dx} \sum_{i=1}^{N} | x - x_i | = \frac{d}{dx} \sum_{i=1}^{N} \left[ (x - x_i)^2 \right]^{1/2} = \sum_{i=1}^{N} \frac{x - x_i}{| x - x_i |} = \sum_{i=1}^{N} \operatorname{sign}(x - x_i) = 0    (2)

To satisfy Eq. (2), x should take the value of {x_i} at the middle position when ordered by size, i.e. the median. In this system, the ADC sampling rate is set to 4.4 kHz. Each time a data acquisition ends, a DMA interrupt is generated and the sensor data of the different channels are copied into the elements of the corresponding index of the array ADC1_ConvertedValue; the elements of the array are updated again every 8 samples. The array is then placed into a two-dimensional buffer as one column of a two-dimensional array. The buffer uses a first-in, first-out (FIFO) queue structure, so that the five latest 8-channel sensor values are always stored in the buffer and refreshed at a frequency of 4.4 kHz. According to the control instruction of the host computer, when the controller needs to collect and filter data for different channels, the program calls the filter. If the buffer has not yet filled, the filter directly returns the collected 8-channel values; otherwise, it performs 5 cycles of bubble sorting on each row of 5 data in the buffer, sorting all 8 rows, and returns the median value of each row. The result is stored in the global variable ADC1_filterValue after each data acquisition is completed. The exact delay is based on the acquisition frequency in the instruction, and the filter is called again after this time has elapsed, until all points are collected.
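The buffering and median extraction described above can be sketched as follows. The 5-sample, 8-channel layout mirrors the description, while the class and method names are invented for illustration; sorting each small buffer with Python's built-in sort stands in for the firmware's bubble sort.

```python
from collections import deque

class MedianFilter:
    """Rolling median over the latest `depth` samples of each channel."""

    def __init__(self, channels=8, depth=5):
        # One FIFO buffer per channel; deque(maxlen=...) drops the oldest
        # sample automatically, mirroring the firmware's rolling buffer.
        self.buffers = [deque(maxlen=depth) for _ in range(channels)]

    def update(self, sample):
        """Push one new reading per channel (e.g. on each DMA interrupt)."""
        for buf, value in zip(self.buffers, sample):
            buf.append(value)

    def output(self):
        """Median of each channel's buffer (middle element after sorting)."""
        return [sorted(buf)[len(buf) // 2] for buf in self.buffers]

flt = MedianFilter(channels=2, depth=5)
for sample in ([10, 0], [11, 0], [90, 0], [12, 1], [13, 0]):  # 90 is a spike
    flt.update(sample)
filtered = flt.output()  # the spike on channel 0 is rejected
```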
The data acquisition frequency differs between detection functions. When the data sampling rate is low, each new element placed in the buffer is the median of five continuously sampled values, which shortens the effective time interval between elements and makes the pressure sampling more accurate. When the sampling frequency is 4.4 kHz, one column of data is added to the filter buffer each time and the median is extracted in a rolling manner, which preserves the filtering effect without limiting the sampling rate. The following figures compare the collected pressure waveform before and after filtering: Fig. 3 is the original acquired waveform, and Fig. 4 shows the effect after the filter is called. It can be seen that the filter removes the pulse interference well and makes the extraction of subsequent features much smoother.

Fig. 3. Pre-filtering signal

Fig. 4. Filtered signal

5 System Development

LabVIEW, which is used as the development environment of the detection system, has its own unique advantages. Firstly, building a human-machine interface is easier and faster than in other languages. Secondly, LabVIEW provides serial communication controls that allow researchers to develop serial communication programs rapidly. Besides, many of its functions offer great convenience to users. This system combines LabVIEW with a distributed architecture to build a high-efficiency, practical detection system in an easy way.
A modularized software design is used for the programming of the control system, with the software structure shown below (Fig. 5):

Fig. 5. Software framework

For the design and development of this system, the VISA serial communication controls of LabVIEW are used to connect to the hardware, and the ADAM communication protocol is used to let the upper computer control peripheral equipment, such as reading the air pump output value collected through the air pressure sensor. Meanwhile, the ADAM modules provide current and voltage measurement functions, allowing direct measurement of the current and voltage across the analog load resistance and realizing precise, efficient data acquisition (Fig. 6).

Fig. 6. VI of communication judgement



The upper computer program is mainly divided into four parts: ADAM operation, function detection, cabinet command and custom control, each including the corresponding VI modules. A modular design is adopted: each function detection item of the electric control cabinet is implemented as a module (a subroutine). The advantage is that when new types of electric control cabinet are added, the corresponding program modules can be reused as long as the hardware support is satisfied, which is convenient for later maintenance.

6 Result

According to the demand analysis of the electric control cabinet of the massage armchair, this paper proposes a detection system structure for the electric control cabinet based on a distributed network. LabVIEW is adopted on the upper computer to control the electric control cabinet and read the feedback signals through the serial port, and to achieve distributed control over each ADAM module through the RS485 bus, so that the detection system has good real-time responsiveness and disturbance-rejection capability. In order to detect whether the electric control cabinet can drive the normal operation of a motor, load resistances are used to simulate the internal resistance of the motor under working conditions, so as to realize convenient and efficient detection procedures, guarantee the accuracy of the detection results and facilitate use. The detection system for the electric control cabinet of the massage armchair proposed in this paper provides an integrated environment for follow-up intelligent production (Fig. 7).

Fig. 7. Operation interface of system

Acknowledgements. This work is supported by the project under grant D.71-0109-18-167.

References
1. Wang, D.: Application of automated inspection in the era of “Industry 4.0”. Sci. Technol.
(12), 109 (2016)
2. Chen, J.J.: Industrial automation technology in various engineering fields. Silicon Val. 5, 129
(2010)
3. Su, D.D.: Research and Design of Adaptive Control System of Massage Chairs. Anhui
University of Technology (2017)
4. Yang, S.X., Wang, X., Wu, D.J.: The design of test system for power plant control panel.
Mob. Power Veh. (01), 19–20+35 (2011)
5. Lan, J.: Design of Automatic Test System for Electrical Cabinet of Metro Vehicle Control System. Nanjing University of Science and Technology, Nanjing (2014)
6. Mao, L., Sun, D.: A method of pressure sensor dynamic digital filter. Sens. Instrum. 24(12–1),
127–128 (2008)
7. Tian, Y.: Development of electronic control function inspection system for mining equipment.
Ind. Miner. Autom. 37(8), 165–167 (2011)
8. Ping, Y.B.: Research on Control System Based on RS-485 Network. Southwest Jiaotong
University (2003)
Effects of Remelting on Fatigue Wear
Performance of Coating

Zhiping Zhao(&), Xinyong Li, Chao Wang, and Yang Ge

School of Mechanical Engineering, Changshu Institute of Technology,


Changshu, Jiangsu, China
[email protected]

Abstract. Thermal spraying is a kind of surface strengthening technology that, with the development of science and technology, has been extensively popularized and applied. To guarantee the service life of thermally sprayed components and to investigate the effects of various remelting times, specimens of 40Cr steel substrate thermally sprayed with a Ni-based self-fluxing alloy coating were prepared. The fatigue wear performance and friction coefficient of the samples with different remelting times were investigated on a micro-vibration friction and wear test machine. Wear volumes were analyzed with a non-contact three-dimensional surface contour measuring system. The results show that a reasonable remelting time has a significant effect on the surface fatigue resistance and the coating structure: at a reasonable remelting time, the fatigue resistance of the coating surface is strongest.

Keywords: Coating · Remelting time · Wear volume · Wear properties

1 Introduction

Flame spray welding (powder welding) is a method of making a coating by heating a self-fluxing alloy powder on a surface with an oxy-acetylene flame or another heat source. With this method, a sprayed coating can be obtained on the surface of a metal substrate. The method effectively reduces pores and oxide slag in the sprayed layer. In addition, a fusion layer is generated between the metal substrate and the coating, which greatly improves the compactness and bonding strength of the coating. Moreover, the surface of the workpiece achieves excellent corrosion resistance. Owing to this performance, its wear resistance and its ability to withstand higher stresses, the process is widely used in aerospace, mechanical engineering, petrochemical and other fields.
In recent decades, many scholars [1–5] have studied the safety and reliability of the interface of powder flame spray weldments, but few studies have examined the fatigue performance and wear resistance of flame spray weldments with various remelting times. In this study, 40Cr steel was used as the metal substrate and Ni60A nickel-based self-fluxing alloy powder as the coating material. After flame spray welding, flame remelting was applied; the flame temperature was kept constant while the remelting time was varied, and experiments were carried out to study the effect of the remelting time. In a previous study, the bending behavior and torsion fatigue properties of samples after various remelting
© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 37–43, 2020.
https://doi.org/10.1007/978-981-15-2341-0_5

time treatments were worked out [6–8]. The main task of this study is to investigate the wear resistance of the treated coating surface. Through parametric study and analysis, the optimization of process parameters, the basic theory of wear resistance and the prediction of fatigue wear life can be explored.

2 Experimental Procedure
2.1 Preparation of Spraying Material and Coating Samples
The spraying material used in this study is Ni60A alloy powder, a kind of self-fluxing alloy powder. With this material, no external welding flux is needed during flame spray welding. Moreover, the alloy composition provides deoxidation and slagging behavior during remelting, which greatly improves the wetting performance. Meanwhile, a low-melting-point alloy that bonds well metallurgically can be obtained. The melting point of 1050–1100 °C ensures that only the sprayed coating layer melts during remelting, without affecting the substrate metal. The hardness of the coating layer was measured as 55–65 HRC. The powder also shows good solid-state flow, and the particles are spherical with a granularity of 200. The mass percentages of the chemical composition are 13.7% Cr, 2.96% B, 4.40% Si, 2.67% Fe, 0.60% C, and the rest Ni.
Before flame spray welding, the surface of the substrate metal was derusted and degreased, then sandblasted and roughened before flame thermal spraying. The process parameters are shown in Table 1 [9].

Table 1. Flame thermal spraying process parameters

Oxygen pressure/MPa  Acetylene pressure/MPa  Spraying distance/mm  Preheat temperature/°C  Spraying temperature/°C
0.5                  0.1                     150                   300                     500

The overall process consisted of surface pretreatment, preheating, pre-spraying and powder spraying. After spraying, the sample was machined into a standard specimen of Φ24 mm × 7.9 mm (thickness) with a coating thickness of 1 mm. The samples were then flame remelted at a constant flame temperature for four different remelting times: 2 min, 5 min, 10 min and 12 min. After remelting, the samples were cooled slowly to room temperature, surrounded by asbestos powder.

2.2 Test Method


The non-lubricated fatigue friction and wear tests at room temperature were carried out on an SRV-IV micro-vibration friction and wear tester (OPTIMEL, Germany). The motion form is reciprocating and the contact form is point contact. A Si3N4 ceramic ball of Φ10 mm was used as the counter body, with a stroke amplitude of 1 mm. A test load of 30 N at a frequency of 20 Hz was applied for 30 min. The three-dimensional shape of the wear scar of the sample coating was measured with a non-contact three-dimensional surface profilometer (ADE, USA), from which the wear volume of the coating was calculated.

3 Test Results and Analysis


3.1 Analysis of Coating Wear Test Results
By observing the three-dimensional wear morphology of each coating layer (Fig. 1 shows the three-dimensional wear morphology of the coating after remelting for different lengths of time), it is found that the wear volumes of the typical sample coatings after the different remelting treatments differ (the wear volumes are summarized in Fig. 2). The coating remelted for 2 min has the largest wear volume, 3.734 × 10⁷ μm³; next is the coating remelted for 5 min, at 3.50974 × 10⁷ μm³; then the coating remelted for 12 min, at 3.266 × 10⁷ μm³; the smallest wear volume, 3.029 × 10⁷ μm³, belongs to the coating remelted for 10 min. The wear resistance of the coatings is therefore ordered as: remelting 10 min > remelting 12 min > remelting 5 min > remelting 2 min. According to the wear test, a remelting time of 10 min gives the best wear resistance of the sample and is a reasonable remelting time. Secondly, the three-dimensional wear morphology in Fig. 1 shows that the pit bottoms of the coatings remelted for 2 min and 5 min exhibit obvious lamellar peeling inconsistent with the grinding direction, whereas the coatings remelted for 10 min and 12 min show no obvious lamellar peeling marks at the pit bottom, but deep cutting furrow marks on the worn surface. To further understand the coating strength and its influencing factors, the microscopic morphology of each coating is studied in Sect. 3.3.
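The wear-resistance ordering above follows directly from the measured wear volumes; as a minimal sketch in Python (volumes taken from this section, in units of 10⁷ μm³):

```python
# Wear volumes (in 1e7 um^3) measured for each remelting time (Sect. 3.1).
wear_volumes = {2: 3.734, 5: 3.50974, 10: 3.029, 12: 3.266}

# Smaller wear volume -> better wear resistance, so sort times ascending by volume.
ranking = sorted(wear_volumes, key=wear_volumes.get)
print(ranking)  # [10, 12, 5, 2]: the 10 min remelting wears least
```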

3.2 Wear Performance of Coating Layer


In terms of the coating layer, particle deformation, furrowing and adhesion are the three basic mechanisms generating friction force. The friction coefficient is mainly affected by factors such as coating morphology, coating adhesion and slip conditions, and its relationship with the structure also depends on factors such as the distribution of defects. Since the hardness of the ceramic ball is higher than that of the coating, the main factors causing the friction force to change are the tangential furrow force and the contact area. Figure 3 shows the friction coefficients of the coatings remelted for different durations during the friction and wear test. Each friction coefficient fluctuated as the test went on. The friction coefficients of the coatings remelted for 2 min, 5 min and 12 min fluctuated with time throughout the test, whereas the friction coefficient after remelting for 10 min began to fluctuate only after 200 s; from 800 s to 1200 s it dropped back to a low value, and then grew with fluctuations. The mean friction coefficients under the different conditions were therefore calculated: 0.661842 after remelting for 2 min, 0.7253625 after 5 min, 0.6818065 after 10 min, and 0.6893495 after 12 min. Hence the order of the friction coefficients is 2 min < 10 min < 12 min < 5 min, so the friction coefficient after remelting for 2 min is the smallest.

Fig. 1. Micro wear morphology of coating after different post-fusing: (a) 2 min; (b) 5 min; (c) 10 min; (d) 12 min

Fig. 2. Wear volume of coating after different post-fusing: (a) 2 min; (b) 5 min; (c) 10 min; (d) 12 min

Fig. 3. Friction coefficient of coating after different post-fusing: (a) 2 min; (b) 5 min; (c) 10 min; (d) 12 min

Secondly, Fig. 3 shows that the friction coefficient is relatively stable during the first 200 s of the test. Since the mean value fluctuated only slightly in this period, the friction coefficient here is mainly determined by the coating structure, and the influence of strength and defects on it is small. The mean friction coefficient of each coating layer over this period was calculated: 0.505657 after remelting for 2 min, 0.52132 after 5 min, 0.41205 after 10 min, and 0.526977 after 12 min. Hence the order of the friction coefficients is 10 min < 2 min < 5 min < 12 min. Comparing these values with the wear volumes shows that, although the friction coefficient after remelting for 2 min is smaller than that after 5 min, its wear volume is larger, indicating that there is no absolute correspondence between wear resistance and friction coefficient. It is therefore necessary to study the microscopic morphology of each coating layer.
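The windowed averaging used above (a mean over the first 200 s versus a mean over the whole test) can be sketched as follows; the sampled trace is synthetic, since the raw friction data are not tabulated in the paper:

```python
# Illustrative only: synthetic (time [s], friction coefficient) samples stand in
# for one coating's measured trace, which is not tabulated in the paper.
samples = [(t, 0.41 + 0.02 * (t // 400)) for t in range(0, 1800, 10)]

# Mean friction coefficient over the first 200 s (the stable early period).
early = [mu for t, mu in samples if t < 200]
mu_early = sum(early) / len(early)

# Mean over the whole test for comparison.
mu_total = sum(mu for _, mu in samples) / len(samples)
```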

4 Wear Fatigue Life Prediction of Coated Parts

According to the wear formula proposed by Czichos [10], the wear rate remains constant during the stable wear period, and the wear volume is a linear function of time, W = Ct, where W is the wear volume, t the wear time and C a constant. Therefore, the wear volume versus time curve in the stable wear period can be fitted to obtain the constant C, and the wear fatigue life of thermally sprayed parts under the same conditions can then be predicted. In this way, the product can be repaired or replaced in time, ensuring its reliability and economy.
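As a sketch of this procedure (the wear-time samples and the wear limit below are hypothetical, not taken from the paper), a least-squares fit of W = Ct through the origin yields C, and the time to reach an allowable wear volume follows:

```python
# Hypothetical (time [min], wear volume [1e7 um^3]) samples from the stable wear period.
data = [(5, 0.62), (10, 1.21), (15, 1.83), (20, 2.44), (25, 3.02)]

# Least-squares fit of W = C*t through the origin: C = sum(t*W) / sum(t^2).
num = sum(t * w for t, w in data)
den = sum(t * t for t, _ in data)
C = num / den

# Predicted time until a hypothetical allowable wear limit W_max is reached.
W_max = 3.0  # 1e7 um^3, assumed for illustration
t_life = W_max / C
```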

5 Conclusions
(1) Under the same test conditions, the wear volume is directly related to the overall wear resistance of the coating: a smaller wear volume means better wear resistance.
(2) The friction coefficient is directly related to the frictional wear behavior of the coating: a smaller value means lower frictional resistance, but it does not indicate the overall wear resistance of the coating.
(3) When the remelting time is reasonable, the wear resistance is best; when the remelting time is insufficient or too long, the wear resistance is poor.
(4) Fitting the wear volume versus time curve during the stable wear period yields the constant C in the formula W = Ct, from which the fatigue wear life of the thermally sprayed component can be predicted under the same conditions.

References
1. Zhang, X.C., Xu, B.S., Wang, Z.D., Tu, S.T.: Failure mode and fatigue mechanism of laser-
remelted plasma-sprayed Ni alloy coating in rolling contact. Surf. Coat. Technol. 205, 3119–
3127 (2011)
2. Berger, L.M., Lipp, J., Spatzier, J., Bretschneider, J.: Dependence of the rolling contact
fatigue of HVOF-Sprayed WC-17%Co hardmetal coatings on substrate hardness. Wear 271,
2080–2088 (2011)
3. Wang, S.Y., Li, G.L., Wang, H.D., et al.: Influence of remelting treatment on rolling contact fatigue performance of NiCrBSi coating. Trans. Mater. Heat Treat. 32(11), 135–139 (2011)
4. Wang, G., Sun, D., Wang, Y., et al.: Cavitation properties of Ni-based coatings deposited by
HVAF and plasma cladding technology. Trans. Mater. Heat Treat. 28(6), 109–113 (2007)

5. Akebono, H., Komotori, J., Shimizu, M.: Effect of coating microstructure on the fatigue
properties of steel thermally sprayed with Ni-based self-fluxing alloy. Int. J. Fatigue 30, 814–
821 (2008)
6. Zhao, Z., Li, X., Li, Y., et al.: The analysis of fatigue properties and the improvement of
process for plunger with different post-fused thermal spray. Trans. Mater. Heat Treat. 33(s1),
92–95 (2012)
7. Zhao, Z., Li, Y., Li, X.: Effects of remelting time on fatigue life and wear performance of
thermal spray welding components. Trans. Mater. Heat Treat. 34(7), 169–174 (2013)
8. Zhao, Z., Li, X., Li, Y., et al.: Effect of remelting processing on fatigue properties of Ni
based PM alloy parts with thermal spraying coating. Powder Metall. Technol. 1, 3–7 (2012)
9. Li, X., Zhao, Z.: The investigation and practice of flame thermal spray technology on piston.
J. Lanzhou Polytech. Coll. 12(2), 9–11 (2005)
10. Yang, W., Wu, Y., Hong, S., et al.: Microstructure, friction and wear properties of HVOF
sprayed WC-10Co-4Cr coating. J. Mater. Eng. 46(5), 120–125 (2018)
Design of Emergency Response Control System
for Elevator Blackout

Yan Dou1,2, Wenmeng Li3, Jiaxin Ma1,2, and Lanzhong Guo1,2(&)


1
School of Mechanical Engineering, Changshu Institute of Technology,
Changshu, People’s Republic of China
{jixiedouyan,guolz}@cslg.edu.cn, [email protected]
2
Jiangsu Elevator Intelligent Safety Key Construction Laboratory,
Changshu, People’s Republic of China
3
Zhejiang Academy of Special Equipment Science,
Hangzhou, People’s Republic of China
[email protected]

Abstract. The emergency system works together with an integrated controller that has a low-speed self-rescue function. When the elevator loses power, the inverter power supply powers the elevator controller, which drives the traction machine to move the car slowly to the nearest level position and open the door. During normal elevator operation, the single-chip microcomputer (SCM) monitors the power supply at all times and controls the system circuit to charge the battery. After an elevator power failure, the SCM must respond in time, control the inverter power supply to power the elevator controller, and output the emergency operation signal and the phase sequence short-connection signal to the controller. After the emergency operation is finished, the SCM also receives the stop signal output by the controller and then switches the system into a standby state. The emergency system is one of the lines of defense ensuring elevator safety: elevators using it can effectively reduce the occurrence of trapping accidents and make the emergency rescue process more convenient.

Keywords: Elevator power failure · Automatic leveling · Emergency response · Control system

1 Introduction

The traditional rescue method requires manual turning of the traction machine, which takes a long time. To rescue passengers trapped in the car in time, the emergency response control system came into being. When the mains supply is normal, the elevator emergency response control system charges its internal storage battery. Once the power grid is cut off or the elevator suffers an electrical fault, the system first isolates the external grid from the elevator operation control system, then inverts the electric energy stored in the battery and outputs it to the frequency converter of the control cabinet, which drives the traction machine to bring the car slowly to the level position and release the passengers. The elevator emergency response control system is thus distinguished from the time-consuming and labor-intensive manual operation: it cuts off the power and completes the rescue on its own, reduces the time passengers are trapped and ensures their safety.

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 44–51, 2020.
https://doi.org/10.1007/978-981-15-2341-0_6
At present, there are two kinds of elevator power failure emergency rescue devices on the domestic market. The first type installs an electric brake release device: when the elevator loses power, the device opens the traction brake independently of the control cabinet so that the car drifts to the nearest floor, and professional rescuers then come to open the door and release the trapped passengers. The second type installs an uninterruptible power supply in the elevator control cabinet: when the elevator loses power, it powers the control cabinet so that the car automatically levels and opens its doors to release the passengers. This kind of rescue device is more convenient, but it is not yet widely used [1] (Fig. 1).

Fig. 1. Schematic diagram of emergency device (blocks: input power supply detection, charging circuit, storage battery, DC converter, three-phase inverter, emergency control system, interface to the elevator control system, door motor, brake and traction machine)

2 The Main Content of the Subject Research

This design is an emergency response control system. The system must be able to detect the power supply state of the elevator at all times and, when power is cut off, react in time to cooperate with the inverter of the elevator controller in driving the traction machine. During normal elevator operation, the emergency response control system must not interfere with the operation control system. The rated voltage of the inverter used to drive the elevator traction machine is usually 380 V AC. The controller used in this design must provide low-speed self-rescue, automatic identification of the running direction, and power failure rescue. After an elevator power failure, the controller can identify the load condition of the car and start the emergency power supply to run the elevator slowly to the level zone and open the door in the most energy-saving way, thus safely and quickly rescuing the passengers trapped in the car.

3 Functional Requirements of the System

The elevator power outage emergency response control system (abbreviated below as the emergency system) must first be able to detect the external power grid so that it can respond in time to a power failure. For this purpose, a phase sequence relay is placed in the system, mainly for phase sequence detection and phase-loss protection. When a power failure occurs, the elevator controller stops operating as soon as power is lost. To realize automatic leveling and release of the car, a storage battery is required to transmit power to the control cabinet. With its power supply ensured, an emergency operation start signal is output to the frequency converter of the control cabinet to make it run; the traction machine is driven to move the car, and the door system is controlled to open the door and release the passengers. Since a battery is needed during a power failure, the circuit inevitably includes a charging circuit and a saturation discharging circuit for use when the supply is normal. After the emergency rescue, the emergency system must enter the standby state and wait for the power supply to return to normal, which requires the controller to output a rescue stop signal to stop the system [2].
In this experiment, a NICE3000new series elevator integrated controller is selected for the inverter part of the control cabinet. Its rated voltage is 380 V AC. After the emergency system outputs 380 V AC, the controller can perform low-speed self-rescue, controlling the car to level slowly and then open the door.
Besides the safety switches, the safety circuit also includes the door lock circuit. Only when all the door electrical interlocking switches are closed can the controller's main board receive the signal that the safety circuit is working normally. Since the safety switch of the phase sequence relay is part of the series safety switch circuit, the phase sequence relay stops working when the elevator loses power, its safety switch opens naturally, and the safety circuit becomes invalid. In the emergency operation stage, the safety circuit can be connected only when the safety switch of the phase sequence relay is closed, so that the controller can control the emergency operation of the elevator. The emergency system of this experiment must therefore output not only the two-phase 380 V alternating current and the emergency operation start signal to the control cabinet inverter, but also the phase sequence short-connection signal.

4 Control Circuit Design of the System


4.1 The Circuit Structure of the System
The microcomputer control module is needed for the system to receive and output signals. When the elevator power supply is normal, the battery module must be charged, so the circuit must contain a charging power module. In case of an elevator power failure, the system must supply 380 V AC to the control cabinet inverter, while the battery module outputs a DC voltage; an inverter module is therefore required to invert the DC voltage output by the battery module into a 380 V AC output (Fig. 2).

Fig. 2. Hardware module structure diagram of the system (blocks: phase sequence relay, microcomputer control, battery charger, storage battery, inverter and elevator controller)

The microcomputer control module takes a single-chip microcomputer as its core; its working voltage is generally 24 V DC. When the elevator runs normally, the charging power supply module supplies 24 V DC to the power detection terminal of the microcomputer control module, and the single-chip microcomputer thereby confirms that the elevator power supply is normal. The phase sequence relay module has three input points to detect the three-phase electricity, and one of its normally open contacts is connected to the power detection terminal of the microcomputer module, in series in the power supply circuit. When the elevator loses power, the phase sequence relay stops working, its normally open contact in the power supply circuit opens, the power detection terminal no longer detects 24 V DC, and the single-chip microcomputer identifies and confirms that the elevator has lost power. The criterion by which the microcomputer control module judges whether the elevator has lost power is therefore whether 24 V DC is detected at the power detection terminal [3] (Fig. 3).
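The decision rule described here (power is judged lost exactly when 24 V DC disappears from the detection terminal) can be sketched in a few lines; the function name and tolerance below are illustrative, not taken from the design:

```python
# Illustrative sketch of the MCU's power-failure decision (Sect. 4.1 logic):
# the only criterion is whether ~24 V DC is present at the detection terminal.

def elevator_power_ok(detect_terminal_volts: float, tol: float = 2.0) -> bool:
    """True if the power supply detection terminal sees approximately 24 V DC."""
    return abs(detect_terminal_volts - 24.0) <= tol

# Normal operation: phase sequence relay contact closed, 24 V reaches the MCU.
assert elevator_power_ok(24.0)
# Blackout: the relay contact opens and the terminal reads 0 V -> power failure.
assert not elevator_power_ok(0.0)
```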

Fig. 3. The microcomputer control module, the phase sequence relay module and the charging power supply module (showing the AC 220 V charging input, the R, S, T three-phase connections, the 24 V power supply detection line, the stop signal and the storage battery)

4.2 Design of Control Schematic Diagram of the System


The battery module is charged by the charging power module when the power supply is normal, and supplies the microcomputer control module and the inverter module when the power is cut off. A 48 V battery group is selected as the battery module. The charging power module outputs 55 V DC to charge the battery and 24 V DC to supply the microcomputer control module.
The detailed design of the control circuit schematic is as follows.
When the start switch of the device is pressed, both FA normally open contacts in the circuit close. When the elevator power supply is normal, the phase sequence relay works normally, and its normally open contact at the power supply detection end of the microcomputer control board closes. The KM coil connected across the three-phase supply and the neutral line is energized, so the KM normally open contact closes and the circuit is switched on (Fig. 4).

Fig. 4. Control circuit diagram of the system



The charging power supply outputs 24 V DC to the power supply detection terminal of the microcomputer control board. The MCU recognizes that the elevator power supply is normal, so the K1 and K3 terminals output no signal and the K1 and K3 coils are not energized. The K1 and K3 normally closed contacts remain closed to maintain circuit connectivity, while their normally open contacts remain open in the standby state, so no start signal or phase sequence short-connection signal is issued; the K3 normally closed contact stays closed so that the power supply detection terminal continues to monitor the supply.
The 48 V output terminal of the charging power supply charges the battery, with QF1 (a 48 V air switch) switched on to protect the inverter against a short circuit. When, after a period of charging, the battery approaches saturation, a signal is sent from the K2 terminal of the microcomputer control board, the action coil of the K2 relay is energized, and the K2 single-pole double-throw switch disconnects the charging circuit, so that the battery discharges through a series resistor; after a period of time it is charged again. Because the KM contactor coil is energized, the KM normally closed contacts at the output end of the inverter are open, isolating the inverter from the power supply [4, 5].
When the elevator loses power, the coil of the KM contactor is de-energized, so the KM normally open contacts open and the circuit is disconnected. The phase sequence relay stops working, its normally open contact in the power supply circuit opens, and no 24 V DC reaches the power supply detection terminal. The single-chip microcomputer identifies and confirms the elevator blackout, the K1 and K3 terminals output signals, and the action coils of the K1 and K3 relays are energized. Their normally closed contacts open to isolate the system from the mains, preventing a sudden restoration of power from actuating the KM contactor during emergency operation.
During this period, the microcomputer module is supplied with 24 V DC by the battery, with the QF2 (24 V air switch) closed. The K1 and K3 normally open contacts close, the battery outputs 48 V DC to the inverter through the closed QF1, and the inverter outputs 380 V AC (the KM normally closed contacts at the inverter output have reclosed because the coil lost power) to the NICE3000new frequency converter of the elevator control cabinet. The K1 normally open contacts close and output the emergency operation start signal and the phase sequence short-connection signal; in the safety circuit, the normally open contact of the phase sequence relay is thereby closed, and the safety circuit works normally. The controller then drives the traction machine to move the car slowly to the nearest level position and release the passengers. During this period, because the K3 normally closed contacts are open, the microcomputer module will not detect a power supply signal even if the phase sequence relay acts, which ensures the stability of the emergency operation.
After the rescue, the controller outputs an emergency stop signal to the MCU for identification. The MCU de-energizes the K1 and K3 relays, and the system enters the standby state to wait for the elevator power supply to be restored.
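The sequence described in this subsection (normal operation, blackout, emergency run, standby) can be summarized as a small state machine; the state and event names below are illustrative, not taken from the schematic:

```python
# Illustrative state machine for the emergency-control sequence of Sect. 4.2.
TRANSITIONS = {
    ("NORMAL", "24v_lost"): "EMERGENCY",      # MCU detects blackout; K1/K3 energize
    ("EMERGENCY", "stop_signal"): "STANDBY",  # controller reports the car leveled
    ("STANDBY", "24v_restored"): "NORMAL",    # mains restored; resume charging
}

def step(state: str, event: str) -> str:
    # Unknown events leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

s = "NORMAL"
for ev in ["24v_lost", "stop_signal", "24v_restored"]:
    s = step(s, ev)
print(s)  # NORMAL
```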

4.3 Schematic Design of Wiring Between System and Control Cabinet


When the elevator loses power, the emergency system provides the 380 V AC working voltage to the control cabinet. The single-chip microcomputer provides the emergency operation signal to terminal X20 of the control cabinet main board and, at the same time, provides the phase sequence short-connection signal to the phase sequence relay contacts of the safety circuit, and the elevator enters emergency operation. After emergency operation ends, the elevator is at the level position. The leveling sensor photoelectric switch X1 receives the leveling signal, which is output to the emergency system as the stop signal; the system then enters the standby state and waits for the power supply to resume (Fig. 5).

Fig. 5. Wiring diagram of the emergency system and the control cabinet

During emergency operation, 380 V AC power is output from the inverter to the elevator control cabinet. After the K1 relay acts, 24 V DC is detected at terminal X20 of the integrated controller, contacts 11 and 14 of the phase sequence relay in the safety circuit are short-connected, and the frequency converter drives the traction machine.

5 Summary

The emergency system is used in conjunction with the frequency converter in the elevator control cabinet. When an abnormal elevator power supply is detected, the system provides 380 V AC to the controller and sends a start signal, and the elevator car slowly levels and opens its doors. After the operation, the controller feeds back a stop signal to the system, which then waits for the power supply to be restored. Nowadays, most of the converters in elevator control cabinets are integrated controllers, which come in rich varieties, offer powerful functions and include the low-speed self-rescue function.

References
1. Wang, G., Zhang, G., Yang, R., et al.: Robust low-cost control scheme of direct-drive gearless
traction machine for elevators without a weight transducer. IEEE Trans. Ind. Appl. 48(3),
996–1005 (2012)
2. Weiss, G., Felicito, N.R., Kaykaty, M., et al.: Weight-transducerless starting torque
compensation of gearless permanent-magnet traction machine for direct-drive elevators.
IEEE Trans. Ind. Electron. 61(9), 4594–4604 (2014)
3. Rashad, E.M., Radwan, T.S., Rahman, M.A.: A maximum torque per ampere vector control
strategy for synchronous reluctance motors considering saturation and iron losses. In:
Conference Record of the Industry Applications Conference, 2004. IAS Meeting, vol. 4,
pp. 2411–2417. IEEE (2004)
4. Wang, A., Wang, Q., Jiang, W.: A novel double-loop vector control strategy for PMSMs
based on kinetic energy feedback. J. Power Electron. 15(5), 1256–1263 (2015)
5. Zhang, Y.B., Pi, Y.G.: Fractional order controller for PMSM speed servo system based on
Bode’s ideal transfer function. 173(6), 110–117 (2014)
Effect of Cerium on Microstructure
and Friction of MoS2 Coating

Wu Jian1,2,3(&), Xinyong Li1,2, Ge Yang1,2, Lanzhong Guo1,2,


Cao Jie1,2, and Peijun Jiao1,2
1
School of Mechanical Engineering, Changshu Institute of Technology,
Changshu, China
[email protected]
2
Jiangsu Elevator Intelligent Safety Key Construction Laboratory,
Changshu, China
3
Department of Mechanical and Industrial Engineering,
Norwegian University of Science and Technology, Trondheim, Norway

Abstract. MoS2 coatings with different cerium contents (0.5%, 1%, 2%, 4%) were prepared on a titanium alloy (Ti811) substrate by a mixing method. The surface microstructure and metallographic structure of the MoS2 coatings were characterized by scanning electron microscopy (SEM), and the friction coefficients and wear morphologies of the four kinds of MoS2 coating were analyzed. The results show that increasing the cerium content can refine the microstructure of the MoS2 coating, thereby inhibiting the growth of the crystal grains, improving the wear resistance and lowering the friction coefficient. The minimum friction coefficient, 0.055, occurs at 1% Ce. With further increase of the Ce content, hydrogenation occurs, resulting in crystallization of the MoS2 coating, which reduces the friction coefficient and affects the wear resistance.

Keywords: MoS2 · Cerium · Microstructure · Friction coefficient · Morphology of wear

1 Introduction

Tribology [1] is an applied discipline, developed since the 1940s, that focuses on the friction, wear, and lubrication caused by relative motion between contact surfaces. Fretting wear can not only cause occlusion, looseness, and increased noise between contact surfaces, but may also cause cracks on the surface of the test piece, greatly reducing the service life of the device [2].
As a convenient surface treatment technology, the bonded solid lubricant coating
(such as MoS2) can reduce the friction and wear of parts [3–5]. Alnabulsi, Luo et al.
studied the properties of fretting friction with MoS2 and analyzed the mechanism of
friction [6, 7]. Zhu et al. tested the frictional properties of the lateral and radial loads on
the MoS2 coating surface and compared the damage characteristics of the MoS2 coating
under two different loads [8].
In this paper, MoS2 coatings with different cerium contents (0.5%, 1%, 2%, and 4%) were prepared on a titanium alloy (Ti811) substrate by a mixing method. The surface

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 52–58, 2020.
https://doi.org/10.1007/978-981-15-2341-0_7
microstructure and metallographic structure of the MoS2 coatings were characterized by scanning electron microscopy (SEM), and the friction coefficient and wear texture of the four MoS2 coatings were analyzed.

2 The Material and Layer Process

2.1 Matrix
The matrix material is the titanium alloy Ti811, which is common in aerospace. It is characterized by good thermal stability, strong corrosion resistance, and high specific strength, and is mainly used for compressor components of high-performance aircraft engines. In this study, the Ti811 samples were machined to φ22 mm × 7 mm, double annealed, and mechanically polished to a surface roughness of Ra = 0.6 μm. The chemical composition and properties of Ti811 are shown in Tables 1 and 2.

Table 1. The chemical components of Ti811 (wt%)


Al Mo V Fe C N H O Ti
Ti811 7.9 1.0 0.99 0.05 0.1 0.01 0.001 0.06 89.889

Table 2. The properties of Ti811

σb (MPa) σ0.2 (MPa) HRC δ (%) ψ (%)
Ti811 931 890 32–38 23 46

2.2 Coating Preparation Process


The coating layer was prepared from a solid lubricant, an adhesive, a curing agent, a diluent, and Ce. The coating process is as follows:
(a) The solid lubricant is a 1:1 mixture of MoS2 and graphite. The adhesive is a mixed resin of 601 and 618 at a ratio of 3:1. The curing agent is MHHPA, which has a relatively high curing temperature and the best solubility. The diluent is a mixed solvent of xylene, ethylbenzene, and N-methyl pyrrolidone. The additive is the rare earth element cerium (10 μm), which has good modification properties. The amount of cerium added to each coating is shown in Table 3.
(b) Weigh the components according to the above ratios, mix and stir, then transfer to the grinder container for 24–48 h. Use the diluent to adjust the viscosity, and use ultrasonic treatment to make the paint mix more evenly. After the treatment is finished, let the paint stand for a while before spraying.
(c) The surface roughness of the sample after grinding is about 0.6 μm. The surface of the substrate was ultrasonically cleaned, dried, and sprayed. The basic spraying parameters are as follows: temperature 18–25 °C, humidity less than 80%, spray pressure 0.2–0.4 MPa, spray angle 70°–90°, coating thickness 8–15 μm.
(d) Place the sample at room temperature for 1–2 h, then put it into the oven and heat it gradually: cure at 130 °C for 0.5 h, then at 220 °C for 0.5–1 h. The four coated samples are shown in Fig. 1.

Table 3. Coating ratio of Ce


C1 C2 C4 C6
Ce % 0.5 1 2 4

Fig. 1. Pictures of the four samples

3 Experiment

The fretting friction and wear test is carried out on a self-made fretting friction and wear tester. The schematic diagram of the test device is shown in Fig. 2. A motor drives the table to swing, and a three-axis micro force sensor mounted on the upper test ball measures the friction during sliding (Fig. 3).

Fig. 2. The schematic of wear tester Fig. 3. Photo of wear tester



In this test, the test parameters are: motor speed 60 r/min, test load 50 N, reciprocating amplitude 200 μm, and 2000 test cycles. Each group of tests is repeated 3 times and the average value is taken.

4 Results and Analysis

4.1 The Microstructure of Coating Layers


The structures of the four MoS2 lubricated coatings under the metallographic microscope (200×) are shown in Fig. 4. The coating consists of black, green, and gray phases, of which the black phase is cerium. With 0.5% cerium (Fig. 4a), the coating structure is flat, with only very few particles bundled together; the coating is relatively uniform and dense, and the layered structure is obvious. With 1% cerium (Fig. 4b), the coating structure is uniform, the layered structure is uniform and compact, and the gray flat particles are obviously refined. With a further increase of cerium, the coating structure is no longer uniform, agglomeration begins to appear, and surface protrusions of the coating increase, as shown in Fig. 4c and d.

Fig. 4. Microstructure topography of different MoS2 coatings

4.2 The Friction Coefficient Curves of Coating Layers


Figure 5 shows the friction coefficients of the four coatings under the same test conditions. The friction coefficients of the different coatings follow a similar trend. At the beginning of the test, the friction coefficient may change significantly due to the unevenness of the coating surface. As the test progresses, the friction coefficient gradually stabilizes, rising and falling within a fixed range. The coefficient of friction of the 0.5% sample is the highest, 0.062–0.064, and that of the 1.0% sample is the lowest, 0.056–0.058. As the content is increased further, the coefficient of friction does not decrease and remains between 0.056 and 0.064.
Combined with the microstructure, MoS2 containing Ce can reduce the surface friction coefficient. The friction coefficient is related to the distribution of Ce in MoS2: the more uniformly Ce is distributed inside MoS2, the lower the friction coefficient. However, if the Ce content is increased further, Ce accumulates in MoS2, which in turn reduces the friction-reducing characteristics. A Ce content of 1% allows Ce to be uniformly distributed in MoS2, thus giving the lowest coefficient of friction.

Fig. 5. Friction coefficient curve of four samples

4.3 Wear Morphology of Layers


The prepared samples were tested using the self-made friction and wear test apparatus, and the results are shown in Fig. 6. Figure 6a shows the friction and wear morphology of the MoS2 coating with 0.5% Ce. The coating surface has obvious wear marks, the wear scar surface has heavy furrows and spalling, and the base material is clearly visible. Figure 6b shows the morphology of the coating with 1.0% Ce: the wear profile is relatively light and the coating surface is smooth and complete. Figure 6c shows the morphology with 2.0% Ce: the wear marks are deeper, the furrows are more obvious, and the central part is seriously worn. Figure 6d shows the morphology with 4% Ce: the wear scars are more serious than at 2%, the surface has heavier furrow marks, and the lubrication film is lifted.

Considering the four wear micrographs, the MoS2 coating with Ce can alleviate wear. A coating with only a small amount of Ce does not give the MoS2 coating a good anti-wear effect. As the Ce content increases, its distribution in the MoS2 coating becomes more and more uniform, and the friction characteristics are effectively improved. As the Ce particle content increases further, Ce particles begin to build up inside the coating, forming hard particles. These hard particles fall off during the movement, increasing the damage to the surface of the friction pair, so the surface does not form a good lubricating film.

Fig. 6. Different coating wear morphology of four samples

5 Conclusion
(1) Cerium can improve the structure of the dry film coating. The MoS2 coating has a flat particle shape. After adding 1% Ce, the coating structure is refined and uniform, and there is no obvious particle agglomeration.
(2) The addition of cerium to the MoS2 dry film can improve the friction and wear properties of the coating, making it more wear-resistant and friction-reducing. Fretting friction tests of dry films with different cerium contents show that the coating with 1% Ce has the best wear resistance.

Acknowledgments. The work is supported by Jiangsu Government Scholarship for Overseas


Studies, Open Project of Jiangsu Elevator Intelligent Safety Key Construction Laboratory
(JSKLESS201703), and The Doctoral Science Foundation of Changshu Institute of Technology
(No. KYZ2015054Z).

References
1. Stachowiak, G., Batchelor, A.W.: Engineering tribology. Butterworth-Heinemann, Oxford
(2013)
2. Zhu, M., Xu, J., Zhou, Z.: Alleviating fretting damages through surface engineering design.
China Surf. Eng. 20(6), 5–10 (2007)
3. Asl, K.M., Masoudi, A., Khomamizadeh, F.J.M.S., et al.: The effect of different rare earth
elements content on microstructure, mechanical and wear behavior of Mg-Al–Zn alloy. Mater.
Sci. Eng.: A 527(7–8), 2027–2035 (2010)
4. Yan, M., Zhang, C., Sun, Z.J.A.S.S.: Study on depth-related microstructure and wear property
of rare earth nitrocarburized layer of M50NiL steel. Appl. Surf. Sci. 289, 370–377 (2014)
5. Li, B., Xie, F., Zhang, M., et al.: Study on tribological properties of Nano-MoS2 as additive in
lubricating oils. Lubr. Eng. 39(9), 91–95 (2014)
6. Alnabulsi, S., Lince, J., Paul, D., et al.: Complementary XPS and AES analysis of MoS3 solid
lubricant coatings. Microsc. Microanal. 20(S3), 2060–2061 (2014)
7. Luo, J., Zhu, M., Wang, Y., et al.: Study on rotational fretting wear of bonded MoS2 solid
lubricant coating prepared on medium carbon steel. Tribol. Int. 44(11), 1565–1570 (2011)
8. Zhu, M., Zhou, H., Chen, J., et al.: A comparative study on radial and tangential fretting
damage of molybdenum disulfide-bonded solid lubrication coating. Tribology 22(1), 14218
(2002)
A Machine Vision Method for Elevator
Braking Detection

Yang Ge1,2(&), Jian Wu1,2(&), and Xiaomei Jiang1,2


1
School of Mechanical Engineering, Changshu Institute of Technology,
Changshu 215500, Jiangsu, China
[email protected], [email protected]
2
Jiangsu Key Laboratory for Elevator Intelligent Safety,
Changshu Institute of Technology, Changshu 215500, Jiangsu, China

Abstract. The brake is one of the most important safety devices in an elevator. An elevator brake force detection method based on machine vision is presented in this paper. Fourier transform theory is used to transform an image from the spatial domain to the frequency domain; the power spectrum of the image is then calculated, the main direction of the image is determined, and finally the rotation angle between two images is calculated. The experimental results show that the proposed method has very high detection accuracy.

Keywords: Machine vision · Elevator braking · Fourier transform · Power spectrum · Rotation angle

1 Introduction

As special equipment, an elevator can quickly carry out vertical transportation, providing great convenience. An elevator has a safety system to ensure safe operation, and the brake is an important device of this system. It can effectively prevent and control accidents such as slipping and falling, which greatly improves the safety of elevator operation and enables the elevator to stop exactly at the position set by the program.
Brake force is an important technical index of a brake, whose structure is shown schematically in Fig. 1. Field measurement of braking force basically relies on manual measurement. The general practice is to load the car with a certain load; after the elevator runs smoothly, the traction machine power supply is cut off and the brake starts to act at the same time; the distance the car runs until the elevator stops completely is the braking distance. The manual measurement procedure is tedious and its accuracy is poor. This paper presents a machine vision elevator braking force detection method, which can automatically calculate the braking distance, simplify the braking detection process, and improve the detection accuracy.
As shown in Fig. 1, the braking distance is the running distance of lift car during
test, which can be calculated by knowing the diameter of traction wheel and the
rotation angle of traction wheel during a test. Therefore, the main task of this paper is to
propose a machine vision method to detect the rotation angle of the traction wheel
during the test. After obtaining the rotation angle of traction wheel, the length of wire
© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 59–66, 2020.
https://doi.org/10.1007/978-981-15-2341-0_8

Fig. 1. Simplified diagram of the elevator brake

rope movement can be calculated using the diameter of the traction wheel, and that is the braking distance. Calculating the rotation angle between two images belongs to image registration technology. Image registration refers to the geometric alignment of two or more images of the same scene taken at different times, fields of view, or imaging modes.
Image registration technology is widely used in medical image processing, remote sensing image processing, and computer vision. The main image registration methods are methods based on image grayscale [1], e.g. mutual information; methods based on the fast Fourier transform [2]; and methods based on image features, e.g. feature points and edge detection [3]. Ofverstedt et al. [4] proposed an affine registration framework based on a combination of intensity and spatial information, which is symmetric and requires no intensity interpolation; this method shows stronger robustness and higher accuracy than commonly used similarity measures. Zhu et al. [5] proposed a robust non-rigid feature matching method based on geometric constraints, in which non-rigid feature matching is transformed into a maximum likelihood (ML) estimation problem; experimental results show that the method performs well. Alahyane et al. [6] proposed a new fluid image registration method based on the lattice Boltzmann method (LBM). Chen et al. [7] proposed a multistage optimization method based on a two-step Gauss-Newton method to minimize the continuously differentiable functions obtained by discretizing the image registration model. Wu et al. [8] proposed an aerial image registration algorithm based on a Gaussian mixture model (GMM) and solved the transformation matrix between two aerial images.
This paper presents a method for detecting the rotation angle of traction wheel
using a Fourier transform. This paper is organized as follows. Section 1 is an intro-
duction. Section 2 introduces the basic theory used in this paper. Section 3 verifies the
proposed method by an example. Finally, the conclusions are given in Sect. 4.

2 Basic Theory

Fourier transform is employed for detecting the rotation angle of two images.

2.1 Two Dimensional Discrete Fourier Transform


Let f(x, y) denote an image in the spatial domain of size M × N, and let F(u, v) denote the Fourier transform of the image, as shown in Formula (1):

F(u, v) = \frac{1}{\sqrt{MN}} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y) \exp\left[-j 2\pi \left(\frac{xu}{M} + \frac{yv}{N}\right)\right]   (1)

where u = 0, 1, …, M − 1 and v = 0, 1, …, N − 1. Its phase spectrum \Phi and power spectrum P can be calculated by Formulas (2) and (3):

\Phi(u, v) = \begin{cases} \arctan \dfrac{\mathrm{Im}[F(u, v)]}{\mathrm{Re}[F(u, v)]}, & \mathrm{Re}[F(u, v)] \neq 0 \\ 0, & \mathrm{Re}[F(u, v)] = 0 \end{cases}   (2)

P(u, v) = \sqrt{(\mathrm{Re}[F(u, v)])^2 + (\mathrm{Im}[F(u, v)])^2}   (3)

where Re[·] and Im[·] denote the real and imaginary parts of the function.
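Formulas (1)–(3) can be checked numerically. The sketch below is an illustration, not part of the paper's implementation: NumPy's `fft2` with `norm="ortho"` matches the 1/√(MN) normalization of Formula (1), and `np.angle` and `np.abs` give the phase and power spectra of Formulas (2) and (3).

```python
import numpy as np

# Small test image f(x, y), size M x N
f = np.arange(12, dtype=float).reshape(3, 4)

# Formula (1): 2-D DFT with the 1/sqrt(MN) normalization ("ortho")
F = np.fft.fft2(f, norm="ortho")

# Formula (2): phase spectrum. np.angle uses atan2, which agrees with
# arctan(Im/Re) up to the quadrant and handles Re = 0 gracefully.
phase = np.angle(F)

# Formula (3): power spectrum sqrt(Re^2 + Im^2), i.e. the magnitude
P = np.abs(F)

# Parseval check: energy is preserved under the orthonormal DFT
assert np.isclose(np.sum(f**2), np.sum(P**2))
```

The orthonormal normalization makes the energy check (Parseval's theorem) hold exactly, which is a convenient sanity test of the transform.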

2.2 Fourier Spectrum Analysis of Images


According to the corresponding relationship between image spatial domain and fre-
quency domain, the central brightness part of the spectrum map corresponds to the low-
frequency energy of the whole image, while the bright straight line reflects main texture
direction of the whole image.
According to the Fourier transform, a modulated image can be represented by f_1(x, y) = f(x, y) e^{j(mx + ny)}, whose Fourier transform is F_1(u, v) = F(u − m, v − n). The power spectrum P(u, v) therefore also changes, so the Fourier transform is sensitive to such transformations. As shown in Fig. 2, the four images are, respectively, the original image and its Fourier spectrum, and the original rotated counterclockwise by 30° and its Fourier spectrum. In each spectrum diagram there are two bright lines passing through the center; the directions of these lines are the texture directions of the image. As can be seen from Fig. 2, the Fourier transform is sensitive to rotation: image rotation in the spatial domain is directly reflected in the spectrum diagram after the Fourier transform. Therefore, the rotation angle of an image can be detected from its Fourier spectrum diagram.
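The modulation property quoted above, f_1(x, y) = f(x, y)e^{j(mx+ny)} giving F_1(u, v) = F(u − m, v − n), can be verified directly with NumPy. The test image and the integer frequencies m, n below are arbitrary illustrative choices.

```python
import numpy as np

M, N = 8, 8
rng = np.random.default_rng(0)
f = rng.random((M, N))                       # arbitrary test image

# Modulate the image with integer frequencies m, n, scaled to the DFT
# grid so the resulting spectrum shift is exact
m, n = 2, 3
x, y = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
f1 = f * np.exp(2j * np.pi * (m * x / M + n * y / N))

F = np.fft.fft2(f)
F1 = np.fft.fft2(f1)

# Shift property: F1(u, v) = F(u - m, v - n), indices taken modulo M, N
assert np.allclose(F1, np.roll(F, shift=(m, n), axis=(0, 1)))
```

Because the DFT is periodic in u and v, the shifted spectrum is obtained exactly by `np.roll`, which is why integer m, n are assumed.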

Fig. 2. Image and its Fourier spectrum

2.3 Calculation of Rotation Angle


By observing the spectrum diagram in Fig. 2, it can be found that one or more bright
lines will appear in the spectrum diagram after the Fourier transform of the image. In
this paper, a rotation angle between two images can be calculated by locating the
direction of the line with the highest superposition of brightness in the spectrum image.
As shown in Fig. 3, a circular region is determined in the spectrum graph, with (M/2, N/2) as the circle center and R = min(M/2, N/2) as the radius.
Let the right horizontal radius define 0°, and denote the angle of a radius through the circle center as θ₀, θ₀ ∈ [−π, π]. The pixel values of the points along each radius in the circular area are accumulated; the radius direction corresponding to the maximum accumulated result is the main direction of the image.
The relationship between any point on a radius and the image coordinates (x, y) can be expressed by Formula (4):

x = r cos(θ₀),  y = r sin(θ₀)   (4)

where r denotes the distance from the point on the radius to the circle center, r ∈ [0, R]. In calculating the superimposed pixel values of the points on a radius, θ₀ and r are

Fig. 3. A circular area in the spectrum graph

discretized, and the corresponding coordinates are calculated by exhaustive enumeration; when a result is a decimal, it is rounded. The pixel values at these coordinates are then added up, and the θ₀ value corresponding to the maximum sum is the main direction of the image.
After the main directions θ₁ and θ₂ of the two images are determined, the rotation angle between the two images can be calculated by Formula (5):

θ = θ₂ − θ₁   (5)

The rotation angle of Fig. 2 calculated using the proposed method is 30.2755°, while the actual rotation angle is 30°. It can be seen that the error of the proposed machine vision method is very small, which can meet the requirements of many engineering calculations.
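The whole procedure of Sects. 2.2–2.3 — radial accumulation over the centered power spectrum per Formula (4), then the angle difference of Formula (5) — can be sketched as follows. This is a simplified re-implementation under assumed details (720 angular steps, nearest-pixel rounding, synthetic stripe images), not the authors' code.

```python
import numpy as np

def main_direction(img, n_theta=720):
    """Main texture direction of an image: accumulate the centered Fourier
    power spectrum along radii through the center (Formula (4)) and keep
    the angle with the largest sum."""
    M, N = img.shape
    P = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cx, cy = M // 2, N // 2
    R = min(M // 2, N // 2)
    r = np.arange(R)
    best_theta, best_sum = 0.0, -1.0
    for theta in np.linspace(-np.pi, np.pi, n_theta, endpoint=False):
        # Points on the radius, rounded to pixel indices
        xs = np.round(cx + r * np.cos(theta)).astype(int)
        ys = np.round(cy + r * np.sin(theta)).astype(int)
        s = P[xs, ys].sum()
        if s > best_sum:
            best_sum, best_theta = s, theta
    return best_theta

def rotation_angle(img1, img2):
    """Formula (5): difference of the two main directions."""
    return main_direction(img2) - main_direction(img1)

def stripes(alpha, size=128, k=20):
    """Synthetic stripe image whose texture direction is alpha (radians)."""
    x, y = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    return np.cos(2 * np.pi * k * (x * np.cos(alpha) + y * np.sin(alpha)) / size)

# The spectrum peaks are symmetric about the center, so directions are
# recovered modulo pi; the two patterns below differ by 30 degrees.
theta = rotation_angle(stripes(0.0), stripes(np.pi / 6)) % np.pi
print(np.degrees(theta))  # close to 30
```

The angular resolution is 360°/720 = 0.5° here; finer grids trade accuracy against the cost of the exhaustive sweep over θ₀.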

3 Experiment and Result Analysis

As shown in Fig. 4, these are the first and last frames of a video of an elevator brake test from its beginning to the complete stop. Due to space limitations, not all frames are listed; only the rotation angle between the first and last frames is calculated. It should be noted that the traction wheel rotates less than one revolution during this period. For the convenience of measuring the traction wheel rotation angle and determining the main direction of the images, a black reflective strip was attached to the traction wheel during the experiment.

The first frame

The last frame

Fig. 4. Brake braking force test

In order to reduce the influence of the image background and improve the calculation accuracy of the rotation angle, only the central part of each image is cropped out, as shown in Fig. 5.

The first frame The last frame

Fig. 5. Cutting processing of images

Using the proposed method, the images are Fourier transformed and their spectrum diagrams drawn, as shown in Fig. 6.

Fig. 6. Brake detection picture and its Fourier spectrum

As shown in Fig. 6, there is a line of higher brightness through the center of the image; that is the main direction of the image. The calculated rotation angle of the traction wheel is 48.2197°, while the actual measured rotation angle is 47.5°, an error of 0.7197°. When the traction wheel diameter is known, Formula (6) can be used to calculate the braking distance:

L = πθD/360   (6)

where L denotes the braking distance, θ the rotation angle in degrees, and D the traction wheel diameter. The traction wheel diameter is 600 mm; substituting into Formula (6), the braking distance is calculated as 252.4778 mm.
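Formula (6) and the numbers above can be checked in a few lines; the function name is only for illustration.

```python
import math

def braking_distance(theta_deg, diameter_mm):
    """Formula (6): arc length L = pi * theta * D / 360 for a rotation
    of theta degrees of a wheel with diameter D."""
    return math.pi * theta_deg * diameter_mm / 360.0

# Detected rotation angle 48.2197 deg, traction wheel diameter 600 mm
L = braking_distance(48.2197, 600.0)
print(f"{L:.2f} mm")  # ~252.48 mm, matching the value in the text
```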

4 Conclusion

An elevator brake force detection method based on machine vision is presented in this
paper. The Fourier transform theory is used to transform the image from spatial domain
to frequency domain, then power spectrum of the image is calculated, the main
direction of the image is determined, and finally, rotation angle between two images is
calculated. The proposed method can reduce the workload of elevator brake force
detection and improve detection accuracy.

References
1. Barbara, Z., Flusser, J.: Image registration methods: a survey. Image Vis. Comput. 21(11),
977–1000 (2003)
2. Reddy, B.S., Chatterji, B.N.: An FFT-based technique for translation, rotation, and scale-
invariant image registration. IEEE Trans. Image Process. 5(8), 1266–1271 (1996)
3. Wang, K., Shi, T., Liao, G.: Image registration using a point-line duality based line matching
method. J. Vis. Commun. Image Represent. 24(5), 615–626 (2013)
4. Ofverstedt, J., Lindblad, J., Sladoje, N.: Fast and robust symmetric image registration based
on distances combining intensity and spatial information. IEEE Trans. Image Process. 28(7),
3584–3597 (2019)
5. Zhu, H., Zou, K., Li, Y., et al.: Robust non-rigid feature matching for image registration using
geometry preserving. Sensors 19(12), 1746–1752 (2019)
6. Alahyane, M., Hakim, A., Laghrib, A., et al.: A lattice Boltzmann method applied to the fluid
image registration. Appl. Math. Comput. 349, 421–438 (2019)
7. Chen, K., Grapiglia, G.N., Yuan, J., et al.: Improved optimization methods for image
registration problems. Numer. Algorithms 80(2), 305–336 (2019)
8. Wu, C., Wang, Y., Karimi, H.R.: A robust aerial image registration method using Gaussian
mixture models. Neurocomputing 144, 546–552 (2014)
Remote Monitoring and Fault Diagnosis
System and Method for Traction Elevator

Shuguang Niu1,2(&), Junjie Huang1,2, and Zhiwen Ye1,2


1
Key Laboratory of Elevator Intelligent Safety in Jiangsu Province,
Changshu, China
[email protected]
2
Changshu Institute of Technology, Changshu, Jiangsu, China

Abstract. In the remote monitoring and fault diagnosis system for a traction elevator, a temperature sensor is installed on the traction machine brake, acceleration sensors are installed on both the traction machine and the lift car, a photoelectric sensor is installed on the car door, and the microprocessor and elevator monitoring center are installed at the top of the elevator. The sensors are connected to the microprocessor, which is connected to the elevator monitoring center. The elevator running process is monitored in real time by the sensors, the microprocessor, and the elevator monitoring center, so that the running state of the elevator can be predicted by the system, hidden faults can be found in time, and maintenance activities can be arranged reasonably, thus avoiding accidents.

Keywords: Remote monitoring  Fault diagnosis  Sensor  Monitoring center

1 Introduction

With the rapid growth of the number of elevators, high loads, large volumes, and long service cycles have become common, and the number of old elevators is surging. Due to the expansion of the number and scope of elevators in use, faults, which have become an important hidden danger to production safety, have entered a stage of frequent occurrence: the time between faults has been significantly shortened. According to statistics on elevators running for 5–10 years, an elevator suffers an average of 36.5 mechanical and electrical failures every year, and 3.3 accidents that cause great harm to equipment and personal safety, such as jacking and clamping. At present, there are nearly 700 elevator manufacturers in China, and thousands of elevator installation, transformation, and maintenance units with hundreds of thousands of employees. In order to get a share of the limited market, practices such as vicious maintenance competition, bad money driving out good, inadequate maintenance work, and grabbing of resources have taken hold, resulting in a chaotic maintenance market and poor-quality maintenance that directly affect the safe operation of elevators. As a result, elevator accidents and malfunctions occur from time to time.

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 67–71, 2020.
https://doi.org/10.1007/978-981-15-2341-0_9

Aiming at elevator safety, the traditional method is as follows: first set up an elevator monitoring center and apply computer control technology and remote elevator monitoring based on network communication technology; then use sensors to collect elevator operation data and analyze abnormal data through a microprocessor. Such a system can monitor the elevators in the network 24 h a day without interruption, analyze and record the running condition of each elevator in real time, and calculate the failure rate automatically from the fault records. Using GPRS network transmission, public telephone line transmission, LAN transmission, and RS-485 communication, it forms a comprehensive elevator management platform that can realize elevator fault alarms, rescue of trapped people, daily management, quality assessment, hidden danger prevention, and other functions. This method solves the problem of knowing when, where, and what happened, and the development process of an accident, at the first moment; however, it is treatment after the occurrence of a fault and cannot reduce the occurrence of faults.

2 Classification According to the Test Data

Related failures: equipment parameters intermittently or permanently exceed the range specified in technical standards.
Unrelated failures: due to failure of the test instrument or other reasons, the collected data exceed the specified values.
Fatal failures: failures that can lead to significant loss of personal safety and property. The cause of failure is identified and the change of the performance deviation is predicted; a benchmark is established according to the failure trend and the predicted equipment life, so that a corresponding warning can be issued.
By means of vibration detection, modal analysis is carried out: the physical coordinates in the system of linear constant-coefficient differential equations are transformed into modal coordinates, from which the modal parameters of the system are derived.
The fundamental equation of modal analysis is

M\ddot{x} + C\dot{x} + Kx = f(t)

where M, C, K are the mass matrix, damping matrix, and stiffness matrix of the vibration system, and x, \dot{x}, \ddot{x} are the displacement, velocity, and acceleration vectors of the system.
For an undamped system, the free vibration equation is

M\ddot{x} + Kx = 0
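For a concrete illustration of how the modal parameters follow from these equations: substituting x = φe^{jωt} into the undamped equation gives the generalized eigenproblem Kφ = ω²Mφ, whose eigenvalues are the squared natural frequencies. The 2-DOF matrices below are made-up numbers, not data from the elevator system.

```python
import numpy as np

# Undamped free vibration M x'' + K x = 0; x = phi * exp(j w t) yields
# the generalized eigenproblem K phi = w^2 M phi.
M = np.diag([2.0, 1.0])            # mass matrix (illustrative values)
K = np.array([[6.0, -2.0],
              [-2.0, 4.0]])        # stiffness matrix (illustrative values)

# Solve the equivalent standard eigenproblem (M^-1 K) phi = w^2 phi
w2, phi = np.linalg.eig(np.linalg.solve(M, K))
order = np.argsort(w2.real)
natural_freqs = np.sqrt(w2.real[order])   # modal frequencies in rad/s
print(natural_freqs)                      # sqrt(2) and sqrt(5)
```

The columns of `phi` are the mode shapes; in a condition-monitoring setting, a drift of the measured natural frequencies away from such a benchmark is what triggers the warning described above.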

The traction elevator operation monitoring system can monitor the elevator status in real time and issue fault warnings or alarms, effectively ensuring the safety of elevator operation and reducing the possibility of elevator failure. The traction elevator operation monitoring method is as follows. In the remote monitoring and fault diagnosis system for a traction elevator, temperature sensors are installed on the traction wheel and motor brake, acceleration sensors are installed on the traction machine and the lift car, and a photoelectric sensor is installed on the car door; the microprocessor and elevator monitoring center are installed at the top of the elevator. The sensors are connected to the microprocessor, and the microprocessor is connected to the elevator monitoring center. The microprocessor includes a central processing unit, a signal acquisition module, a power module, and a storage module. The signal acquisition module receives the signal data transmitted by the acceleration sensors, photoelectric sensor, and temperature sensors, and is connected to the central processing unit, which analyzes the signal data and determines whether the parameters of each sensor are within the specified ranges. The power module and storage module are connected to the central processing unit (Fig. 1).

Fig. 1. Structure and layout

The central processing unit includes a first judge. After the motor and the motor brake
are started, the first judge determines whether the acceleration signal data of the
acceleration sensor exceed the specified range while the photoelectric sensor has a
signal; if they do, the central processing unit sends an alarm signal. The central
processing unit also includes a second judge. After the motor and the motor brake are
started, when there is a signal from the photoelectric sensor, the system calculates the
real-time speed and position of the elevator car from the acceleration signal data and
determines whether the slip difference between the traction wheel and the traction rope
exceeds the specified range. If it does, the central processing unit sends an alarm
signal; otherwise, the signal data are stored in the corresponding storage module.
70 S. Niu et al.

The second judge is also used when the photoelectric sensor has no signal but the
acceleration sensor reports signal data: it activates the elevator's safety clamps and the
central processing unit sends out an alarm signal. The central processing unit further
includes a third judge, which determines whether the temperature exceeds the specified
range according to the data collected by the temperature sensor. The acceleration sensor
is a MEMS triaxial acceleration sensor. The monitoring center includes computers that
receive the data from the microprocessors and display them graphically on screen.
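The three-judge logic described above can be condensed into a short sketch. The threshold values and function names below are hypothetical illustrations; the actual specified ranges are configured per installation:

```python
def evaluate(accel, temp, door_signal, motor_running,
             accel_range=(-1.5, 1.5), temp_max=80.0):
    """Three-judge fault logic; thresholds here are illustrative only."""
    alarms = []
    if motor_running and door_signal:
        # First judge: acceleration must stay within the specified range.
        if not (accel_range[0] <= accel <= accel_range[1]):
            alarms.append("acceleration out of range")
    if motor_running and not door_signal and accel != 0.0:
        # Second judge, no-door-signal branch: the car is moving while the
        # photoelectric sensor is silent -> activate the safety clamps.
        alarms.append("unexpected motion: safety clamps activated")
    # Third judge: traction wheel / motor brake temperature check.
    if temp > temp_max:
        alarms.append("temperature out of range")
    return alarms
```

For example, an out-of-range acceleration while the door sensor reports a signal yields only the first judge's alarm.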

3 Remote Monitoring and Fault Diagnosis Methods for Traction Elevators Include the Following Steps

Install the temperature sensor on the traction wheel and the motor brake, the
acceleration sensor on the car cage and the traction machine, and the photoelectric
sensor on the car door. The elevator is also fitted with a microprocessor used to
analyze abnormal data and realize predictive early fault diagnosis. The central
processing unit of the elevator microprocessor is connected to each sensor and collects
the temperature and acceleration data. It then determines whether the data are within
the specified ranges, so as to monitor and analyze the health of the elevator, and the
microprocessor sends the monitoring and analysis results to the monitoring center.
When the motor and the motor brake are started and the photoelectric sensor has a
signal, the first judge of the central processing unit sends an alarm signal if the
acceleration signal data of the acceleration sensor exceed the specified range. The
second judge of the central processing unit calculates the instantaneous speed and
position of the elevator car from the acceleration signal data and determines whether
the slip difference between the traction wheel and the traction rope exceeds the
specified range. If the specified range is exceeded, the central processing unit sends
an alarm signal; otherwise, the signal data are stored in the storage module. When the
photoelectric sensor has no signal but the acceleration sensor reports signal data, the
elevator's safety clamp is activated and the central processing unit sends an alarm
signal at the same time. The third judge of the central processing unit determines
whether the temperature data are out of the specified temperature range (Fig. 2).

Fig. 2. Fault diagnosis
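The second judge's calculation, integrating the measured acceleration to obtain the car's real-time speed and position and comparing it against the traction-wheel speed to detect rope slip, can be sketched as follows. The sampling interval, the wheel-speed source and the slip threshold are illustrative assumptions, not values from the text:

```python
def detect_slip(accel_samples, wheel_speeds, dt=0.01, slip_limit=0.1):
    """Integrate car acceleration to speed/position and flag when the
    difference between wheel speed and car speed exceeds the limit.
    dt and slip_limit are illustrative, not values from the system."""
    v = 0.0  # car speed, m/s
    x = 0.0  # car position, m
    for a, vw in zip(accel_samples, wheel_speeds):
        v += a * dt  # simple Euler integration of acceleration
        x += v * dt
        if abs(vw - v) > slip_limit:
            return True, v, x  # slip out of range -> alarm
    return False, v, x
```

A wheel speed that tracks the integrated car speed produces no alarm; a spinning wheel over a stationary car trips the slip check immediately.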


Remote Monitoring and Fault Diagnosis System and Method 71

4 Conclusion

By installing sensors and a microprocessor on running elevators, the system monitors
elevator status in real time, gives fault warnings or alarms, and predicts how the
elevators run, so that hidden troubles can be found in time and maintenance activities
can be arranged reasonably. This effectively ensures the safety of elevator operation,
reduces the possibility of elevator failure, and thus avoids accidents, which is of
great significance.

Acknowledgements. This work is funded by Changshu Science and Technology Bureau (project no. CQ201702).

Soil Resistance Computation and Discrete
Element Simulation Model of Subsoiler
Prototype Parts

Gong Liu1, Zhenbo Xin2, Ziru Niu2, and Jin Yuan1(&)


1
College of Mechanical and Electronic Engineering,
Shandong Agricultural University, Tai’an 271018, China
[email protected]
2
Shandong Provincial Key Laboratory of Horticultural Machinery
and Equipment, Tai’an 271018, China

Abstract. Subsoiling breaks the plough pan, thickens the tillage layer, and
increases crop yield. However, subsoiling faces the problems of large resistance
and serious wear of the soil-engaging parts, which greatly increase its cost. In
order to reduce tillage resistance, the active lubrication method is selected for
study. According to the structure of the curved subsoiler shank, a simple sample
is designed and processed. A theoretical method for calculating the working
resistance of the sample is proposed. The discrete element software EDEM is
used to simulate the working condition of the sample, which is also tested in an
indoor soil trough and in outdoor fields. By comparing the simulation results
with the real tests, the correctness of the theoretical calculation and of the
simulation model is verified, and the drag reduction effect of the active
lubrication operation mode is demonstrated.

Keywords: Subsoiling · Drag reduction · Discrete element simulation test

1 Introduction

Subsoiling is an important part of conservation tillage and has been widely promoted
and applied worldwide together with reduced-till and no-till practices. Subsoiling
breaks the plough pan, thickens the tillage layer, and stores and conserves soil
moisture, which has a good effect on increasing field crop yield [1]. At present,
subsoiling faces problems such as large tillage resistance and serious wear of
machinery and tools. Reducing operating resistance, energy consumption and cost is
therefore of great significance for the promotion and popularization of subsoiling
and for improving farmland quality in China.
To address the problem of large tillage resistance, researchers at home and abroad
have studied many ways to optimize subsoiler design, such as vibration drag reduction
and bionic drag reduction. Different from the above ideas, active lubrication as a
drag reduction method has not yet been fully studied in the literature.

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 72–80, 2020.
https://doi.org/10.1007/978-981-15-2341-0_10

In order to study the drag reduction effect of the active lubrication mode, a sample
was designed and processed in this paper. The theoretical analysis of the sample and
the EDEM discrete element simulation model were established. The correctness of the
theoretical analysis and simulation model was verified by experiments. The drag
reduction effect of the active lubrication drag reduction mode was verified.

2 Design and Process a Sample

In this paper, the shank of a curved-surface subsoiler is selected as the prototype of
the sample. The prototype is optimized by a bionic method: mimicking the corrugated
body surface of the earthworm and the ripples on the head and back of the scorpion,
three back holes were designed on the side of the sample and arranged vertically in a
straight line [2]. This arrangement of back holes is the optimal drag-reducing
arrangement obtained experimentally by Bingxue Kou. In this paper, an internal
pipeline is designed in the sample, and orifices are placed in the foremost back hole
in the forward direction of the sample, as shown in Fig. 1. The orifices are evenly
distributed in the back hole and connected to the pipe inside the sample.

Fig. 1. Sample design Fig. 2. Sample Fig. 3. Orifices

According to the sketch designed above, the test sample is processed. 65 Mn steel is
selected as the material, yielding the sample shown in Fig. 2 and the orifices shown
in Fig. 3.

3 Force Analysis

In this paper, building on previous experience and calculation methods and combining
them with Kostritsyn's calculation method, a theoretical force analysis of the sample
is carried out. When Kostritsyn studied soil cutting forces, he divided subsoiler-type
cutters into two basic shapes: the sharp-angled front edge is called the wedge edge,
and the parallel blade is called the side edge [3]. The stress state of the sample is
analyzed as shown in Fig. 4.

Fig. 4. Stress condition of sample

Combined with Jingfeng Bai's calculation method for subsoilers, the formula for
calculating the resistance of the sample can be obtained as follows [4]:

F1 = 2(N1 sin(α/2) + μ N1 cos(α/2) + μ N2)  (1)

N1 = Kel + 2 S1 B cos(α/2 + θ) / d′  (2)

N2 = Kel S2 cos(α/2)  (3)

where
F1—total resistance of the sample (N)
T—normal force on the knife (N)
P—drag component acting in the normal direction (N)
N1—normal force of the soil on the wedge edge (N)
N2—normal force of the soil on the side edge (N)
α—wedge edge angle of the sample (50°)
μ—coefficient of friction between sample and soil (0.6)
Kel—stress due to elastic deformation of the soil (4500 N/m²)
S1—wedge edge area of the sample (0.003542 m²)
S2—side edge area of the sample (0.0106 m²)
d′—sample width (0.015 m)
h—sample working depth (0.15 m)
L0—average deformation of the soil
θ—angle of friction between soil and metal (40°).
When the subsoiler works, taking into account the complexity of the soil environment
and the effect of operating speed on resistance, the total resistance is considered to
include an additional term with an undetermined coefficient, calculated as:

F0 = k h d′ v²  (4)

where
k—undetermined coefficient
v—operating speed (0.5 m/s).
According to previous tests, the undetermined coefficient is taken as 120, and the
average deformation of the soil is L0 = 0.0089 m.
Substituting the corresponding data into formulas (1) to (4), the force on the sample
is obtained:

F2 = F1 + F0 = 911.55 N  (5)

In summary, the total resistance of the sample is 911.55 N, directed opposite to the
direction in which the sample advances.
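With the parameter values listed above, Eqs. (3) and (4) can be evaluated directly; the sketch below leaves the wedge-edge normal force N1 as an input to Eq. (1), since its value follows separately from Eq. (2):

```python
import math

# Parameter values as listed in the text
alpha = math.radians(50)  # wedge edge angle
mu = 0.6                  # sample-soil friction coefficient
K_el = 4500.0             # stress due to elastic soil deformation, N/m^2
S2 = 0.0106               # side edge area, m^2
d = 0.015                 # sample width d', m
h = 0.15                  # working depth, m
k = 120.0                 # undetermined coefficient
v = 0.5                   # operating speed, m/s

N2 = K_el * S2 * math.cos(alpha / 2)  # Eq. (3): side-edge normal force, ~43.2 N
F0 = k * h * d * v ** 2               # Eq. (4): speed-dependent term

def total_resistance(N1):
    """Eqs. (1) and (5): total resistance for a given wedge-edge force N1."""
    F1 = 2 * (N1 * math.sin(alpha / 2) + mu * N1 * math.cos(alpha / 2) + mu * N2)
    return F1 + F0
```

Note that with these parameter values the speed-dependent term F0 is small compared with the contact-force terms.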

4 Discrete Element Method Simulation

4.1 The Establishment of Discrete Element Model


This paper uses the 3D CAD software SolidWorks 2018 for 3D modeling of the sample.
After 1:1 modeling against the real object, the model is exported in the IGS
intermediate format. The discrete element software EDEM 2018 is used for the
simulation.
In order to simulate the soil environment of the sample operation, it is very important
to establish the soil model accurately. The contact model is an important basis of the
discrete element method; it is essentially the elastoplastic result of the contact
mechanics of particles and solids under a quasi-static state [5, 6]. The contact model
determines the forces and torques experienced by the particle model. Due to the
complexity of the soil environment, selecting an appropriate soil model is a key step
[7]. In this paper, soil parameters were obtained through field testing of the soil
environment at the Huanghuaihai regional maize technology innovation center, and the
microscopic parameters for the EDEM simulation were determined by combining these data.

4.2 Determination of Discrete Element Simulation Parameters


In discrete element simulation, the accuracy of the parameters affects the simulation
results [8]. Based on the relevant literature, the microscopic soil parameters required
for the simulation were referenced and fine-tuned, as shown in Table 1 [9].

Table 1. Basic parameters of the discrete element model


Parameters Value
Density of soil particles/(kg/m³) 1350
Poisson's ratio of soil particles 0.4
Shear modulus of soil particles/Pa 1.09 × 10^6
Density of 65 Mn steel/(kg/m³) 7830
Poisson's ratio of 65 Mn steel 0.3
Shear modulus of 65 Mn steel/Pa 7.9 × 10^6
Coefficient of restitution between the soil and soil 0.3
Coefficient of static friction between the soil and soil 0.5
Coefficient of rolling friction between the soil and soil 0.2
Coefficient of restitution between the soil and 65 Mn steel 0.5
Coefficient of static friction between the soil and 65 Mn steel 0.6
Coefficient of rolling friction between the soil and 65 Mn steel 0.2
Particle radius/mm 3
Number of particles 214500
Bonding radius/mm 3.5

4.3 Simulation
In the Geometries branch of the EDEM 2018 pre-processing module, the sample in IGS
format is imported into the software, and its working depth is set to 150 mm by
adjusting its position relative to the soil trough. The forward speed of the subsoiler
sample is set to 0.5 m/s. The initial state of the soil model simulation in EDEM is
shown in Fig. 5.

Fig. 5. Initial simulation state of simulation

In the EDEM solver module setup, drawing on previous experience and the related
literature, this paper sets the fixed time step to 33% (of the Rayleigh time step) and
saves data every 0.0001 s. In the simulation process, lubricated and unlubricated
operations are modeled by modifying the coefficient of rolling friction between the
soil and the sample: the larger friction coefficient represents operation without
lubrication, and the smaller one represents operation with the lubricating medium
applied. After performing multiple simulations and analyzing the resistance data in
the steady state after the sample has fully entered the soil, the working resistance
of the sample is obtained as shown in Table 2.

Table 2. Simulated resistance


Lubrication No lubrication
Coefficient of rolling friction 0.2 0.6
Average resistance (N) 830.59 961.36

It can be seen from the table that when the coefficient is 0.6, the working resistance
of the sample is 961.36 N, corresponding to operation without lubrication. When the
coefficient is 0.2, the working resistance is 830.59 N, corresponding to operation
with lubrication.
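The drag-reduction effect implied by Table 2 is the relative reduction of the no-lubrication resistance:

```python
def drag_reduction_rate(f_no_lube, f_lube):
    """Relative reduction of working resistance, in percent."""
    return (f_no_lube - f_lube) / f_no_lube * 100

# Table 2 values: rolling friction 0.6 (no lubrication) vs 0.2 (lubrication)
rate = drag_reduction_rate(961.36, 830.59)
```

With the Table 2 values this evaluates to roughly 13.6%, i.e. lubrication cuts the simulated draft by about one seventh.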

5 Test

5.1 Design and Build Test System


In order to test the sample, a simple test system was designed, as shown in Fig. 6.
Traction is provided by an electric capstan that drives the working platform forward.
A water tank, a water pump, a battery, a tension sensor and a pressure sensor are
mounted on the platform. The sensors are connected to a paperless recorder, which
provides instant display and recording of the test data.

Fig. 6. Assembly of test system

In order to carry out the sample experiments conveniently and quickly, and to adjust
the experimental scheme in real time, a simple indoor soil trench test bench was
built. The design drawing is shown in Fig. 7, the test bench in Fig. 8, and the
parameter measurement in Fig. 9.

Fig. 7. Design drawing Fig. 8. Test bench Fig. 9. Measure parameters

After constructing the test bench, installing the rack and filling the soil tank, the
physical parameters of several fields in the Huanghuaihai area were collected. Water
was added and the soil compacted so that the soil physical parameters in the tank
matched the field parameters.

5.2 Test and Analysis


In order to fully test the sample, experiments were carried out on the indoor soil
tank test bench, in the experimental field of the south campus of Shandong
Agricultural University, and at the Huanghuaihai corn innovation center, as shown in
Figs. 10, 11 and 12.

Fig. 10. Indoor soil groove  Fig. 11. South school test field  Fig. 12. Huanghuaihai corn innovation center

The basic parameters of the soil tank and the fields during the test are shown in Table 3.

Table 3. Physical properties of the test site


Physical parameters               Depth (mm)  Indoor soil groove  South school test field  Huanghuaihai corn innovation center
Soil compactness (kPa)            100         71.21               65.87                    79.30
                                  200         172.64              156.90                   170.25
                                  300         285.65              321.88                   347.10
Average soil moisture content (%) 100         18.63               13.63                    23.20
                                  200         19.11               17.23                    28.31
                                  300         19.85               21.51                    27.78

Fig. 13. Laboratory test result  Fig. 14. South school test result  Fig. 15. Test result of Huanghuaihai corn innovation center

Figures 13, 14 and 15 show the results of the multi-site tests. Figure 13 shows the
lubrication drag reduction test on the indoor soil test bench: the average resistance
with liquid lubrication is 750.73 N, the average working resistance without liquid
lubrication is 889.55 N, and the drag reduction rate is 15.61%. Figure 14 shows the
test in the school test field, where the soil moisture content was low and the soil
dry: with liquid lubrication, the average operating resistance of the sample is
574.49 N; without it, 778.45 N, giving a drag reduction rate of 26.2%. Figure 15
shows the test at the Huanghuaihai corn innovation center, whose field had recently
been irrigated and whose soil moisture content was therefore high: the average
resistance with liquid lubrication is 835.11 N, the average working resistance
without is 924.40 N, and the drag reduction rate is 9.63%.
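The reported drag-reduction rates follow from the same relative-reduction formula applied to the measured averages. Note that the last pair evaluates to about 9.66%, slightly above the quoted 9.63%, which suggests rounding in the reported averages:

```python
def reduction(no_lube, lube):
    """Drag reduction rate in percent from the two average resistances (N)."""
    return (no_lube - lube) / no_lube * 100

# (no lubrication, lubrication) average resistances from Figs. 13-15
tests = {
    "indoor soil trough": (889.55, 750.73),       # quoted as 15.61 %
    "south campus test field": (778.45, 574.49),  # quoted as 26.2 %
    "Huanghuaihai corn center": (924.40, 835.11), # quoted as 9.63 %
}
rates = {site: reduction(*pair) for site, pair in tests.items()}
```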

6 Discussion

Comparing the test data with the theoretical calculation, the error between the
theoretical result and the indoor test result is 2.4%, and the error with respect to
the Huanghuaihai field result is 1.4%. The theoretical calculation method is therefore
considered feasible. Meanwhile, comparing the simulation with the experiments, the
error of the simulated resistance is also within the acceptable range. The simulation
model is therefore considered correct, and adjusting the rolling friction coefficient
in the simulation to represent the lubrication condition is feasible.
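The quoted errors can be reproduced from the theoretical value of Eq. (5) and the measured no-lubrication averages:

```python
F_THEORY = 911.55  # N, total resistance from Eq. (5)

def rel_error(measured, theory=F_THEORY):
    """Relative error of a measured resistance vs. the theory, in percent."""
    return abs(theory - measured) / theory * 100

err_indoor = rel_error(889.55)  # indoor soil trough, ~2.4 %
err_field = rel_error(924.40)   # Huanghuaihai field, ~1.4 %
```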

7 Conclusions

This paper chose the active lubrication drag reduction method for research and
processed a sample. A theoretical method for calculating the sample resistance is
proposed, and a simulation model of the sample's working resistance is established.
The correctness of the theoretical calculation method and of the simulation model is
verified by experiments, and the drag reduction effect of the active lubrication
operation mode is confirmed.

Acknowledgements. This work was supported by the National Key R&D Program of China
(2017YFD0701103-3) and the Key research development plan of Shandong Province
(2018GNC112017, 2017GNC12108).

References
1. Zhang, J., Zhai, J., Ma, Y.: Design and experiment of biomimetic drag reducing deep
loosening shovel. J. Agric. Mach. 45(04), 141–145 (2014)
2. Kou, B.: Resistance reduction by bionic coupling of earthworm lubrication function. Master’s
thesis, Jilin University, pp. 19–24 (2011)
3. Gill, W.R., Vanden Berg, G.E.: Soil Dynamics in Tillage and Traction. China Agricultural
Machinery Publishing House, Beijing (1983)
4. Bai, J.: Analysis of Anti-Drag Performance for Vibrating Bionic Subsoiler. Northwest
Agriculture and Forestry University, Xianyang (2015)
5. Deng, J., Hu, J., Li, Q., Li, H., Yu, T.: Simulation and experimental study of deep loosening
shovel based on EDEM discrete element method. Chin. J. Agric. Mech. 37(04), 14–18 (2016)
6. Deng, J.: Simulation and Experimental Study of the Subsoiler Tillage Resistance Based on
Discrete Element Method. Heilongjiang Bayi Agricultural University, Daqing (2015)
7. Hu, J.: Simulation analysis of seed-filling performance of magnetic plate seed-metering device
by discrete element method. Trans. Chin. Soc. Agric. Mach. 45(2), 94–98 (2014)
8. Wang, X.: Calibration method of soil contact characteristic parameters based on DEM theory.
J. Agric. Mach. 48(12), 78–85 (2017)
9. Liu, X., Du, S., Yuan, J., Li, Y., Zou, L.: Analysis and experiment on selective harvesting
mechanical end-effector of white asparagus. Trans. Chin. Soc. Agric. Mach. 49(04), 110–120
(2018)
Simulation Analysis of Soil Resistance of White
Asparagus Harvesting End Actuator Baffle
Parts Based on Discrete Element Method

Haoyu Ma1, Liangliang Zou2, Jin Yuan1(&), and Xuemei Liu2


1
College of Mechanical and Electronic Engineering,
Shandong Agricultural University, Tai’an 271018, China
[email protected]
2
Shandong Provincial Key Laboratory of Horticultural Machinery
and Equipment, Tai’an 271018, China
[email protected]

Abstract. In this paper, the baffle in the end effector for white asparagus
harvesting is taken as the research object, and five different baffles are
designed. The discrete element method is used to analyze the resistance of the
different baffles as they enter the soil. The simulation results are as follows.
The resistance of the five baffles depends on the depth and speed of entry into
the soil and is a quadratic function of these two factors. When the depth of the
baffle in the soil is less than 6 cm and its entry speed is 0.2 m/s, the
resistance of the inclined cylindrical baffle is minimal; at speeds of 0.4 m/s
and 0.6 m/s, the rectangular inclined sheet baffle has the least resistance.
When the depth of the baffle in the soil is greater than 6 cm, the resistance of
the triangular plate baffle is minimal regardless of speed. This study can
provide a theoretical basis for optimizing the structural parameters of white
asparagus harvesting end effectors.

Keywords: Discrete element simulation · White asparagus harvest · Soil resistance · End effector

1 Introduction

White asparagus is a perennial herb with high nutritional value and anti-cancer health
effects [1]. China is a major grower of white asparagus, but the harvest is still
mainly manual, and there is no machinery for harvesting white asparagus in China.
White asparagus is harvested in the morning or evening within a relatively
concentrated time window, so the manual harvesting workload is large and the
efficiency is low. The requirement for high-efficiency, low-damage harvesting has
become the bottleneck restricting the development of China's asparagus industry [2].
The mechanical harvesting of white asparagus can be divided into the following
steps: the harvesting mechanism is inserted into the soil, the root of the white asparagus
is cut, and the whole white asparagus is taken out [3]. The harvesting machine has a
special baffle structure or clamping structure that prevents white asparagus from falling

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 81–88, 2020.
https://doi.org/10.1007/978-981-15-2341-0_11

during the harvesting process. These structures increase the resistance to the harvesting
mechanism when it enters the soil, thereby increasing energy consumption.
EDEM is software that uses discrete element technology to simulate and analyze the
interaction of particles and particle clusters. It synthesizes macroscopic objects
from microscopic particles and imparts mechanical properties between the particles.
It adopts an advanced DEM algorithm and can simulate particle assemblies reliably.
EDEM has numerous applications in industry and agriculture, such as the study of
viscous and non-viscous soils, crop harvesting and screening, and the design and
optimization of various knives and shovels [4–6].
In this paper, the five baffle structures of a self-designed white asparagus
harvesting end effector are analyzed by the discrete element method. Through
simulation, the variation of the resistance of the different baffles with the speed
and depth of entry into the soil is obtained, and the baffle that receives the least
soil resistance is determined. This study can provide a theoretical basis for
optimizing the structural parameters of white asparagus harvesting end effectors.

2 End Effector Baffle Structure Design

In this paper, five baffle structure models are designed in SolidWorks, shown in
Fig. 1 and denoted A, B, C, D and E, respectively. All baffles are attached to the
same shelf. The A baffle is composed of two triangular hollow sheets. The B baffle is
welded from two stainless steel cylinders at the same angle as the A baffle. The C
baffle is formed by bending two stainless steel cylinders. The A, B and C baffles
have the same length in the horizontal direction. The D and E baffles are each welded
at the same angle from a stainless steel plate and a cylinder. The steel plates of
the A and D baffles are 2 mm wide and 1 mm thick. The stainless steel cylinders in
the B, C and E baffles are 2 mm in diameter. The vertical distance between the
uppermost and lowermost portions of the five baffles is the same. In the following
sections, the five baffles are referred to by their corresponding uppercase letters.

A B C D E

Fig. 1. Five baffle models



3 Simulation Parameter Setting

In EDEM, the interaction forces between particles, between particles and boundaries,
and the forces on the particles themselves are generally analyzed using different
contact models [7]. In this study, the 'Hertz-Mindlin with bonding' model in EDEM 2018
was used to set up the soil model for simulation. The parameters that need to be set
for this model are the unit-area stiffnesses, the critical stresses and the bond
radius [8]. The model can simulate fracture and breakage problems, in which the bonds
between particles are destroyed by external forces; it is used here to simulate soil
bonding and fracture. The simulation parameters of the model are shown in Table 1.

Table 1. Model simulation parameter settings


Parameter Numerical value
Poisson's ratio of soil particles 0.4
Shear modulus of soil particles/Pa 1.09 × 10^6
Soil particle density (kg·m⁻³) 2600
Poisson's ratio of stainless steel 0.3
Shear modulus of stainless steel/Pa 8 × 10^10
Density of stainless steel (kg·m⁻³) 7800
Coefficient of restitution between soil and soil 0.3
Rolling friction coefficient between soil and soil 0.3
Static friction coefficient between soil and soil 0.5
Coefficient of restitution between soil and stainless steel 0.3
Rolling friction coefficient between soil and stainless steel 0.13
Static friction coefficient between soil and stainless steel 0.6

The particle radius is 1.25 mm, with the particle size ratio fluctuating between 0.8
and 1.2. The parameters of the contact model are shown in Table 2. The baffle speed
is set to 0.2 m/s, 0.4 m/s and 0.6 m/s, directed along the gravitational
acceleration. The depth of the baffle into the soil is 12 cm, and the time step is
set to 30%. The discrete element models during the simulation are shown in Fig. 2.

Table 2. Model parameter settings


Parameter Numerical value
Normal stiffness per unit area 2 × 10^7 N/m²
Tangential stiffness per unit area 1 × 10^7 N/m²
Critical normal stress 7 × 10^6 Pa
Critical tangential stress 4 × 10^6 Pa
Bonding radius 2.2 mm
Bonding time 0 s
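In the Hertz-Mindlin with bonding model, a bond breaks once the stress it carries exceeds the critical values of Table 2. The sketch below illustrates that criterion using the listed parameters; the torque contributions of the full EDEM criterion are omitted for brevity, so this is a simplified illustration rather than the exact EDEM formula:

```python
import math

def bond_broken(normal_force, tangential_force, bond_radius=2.2e-3,
                sigma_crit=7e6, tau_crit=4e6):
    """Simplified bond-breakage check: compare bond stresses
    (force / bond cross-section) against the critical stresses.
    Bending and torsion torque terms are omitted."""
    area = math.pi * bond_radius ** 2  # bond cross-sectional area, m^2
    sigma = normal_force / area        # normal stress in the bond, Pa
    tau = tangential_force / area      # tangential (shear) stress, Pa
    return sigma > sigma_crit or tau > tau_crit
```

With a 2.2 mm bond radius, a normal force of roughly 100 N already reaches the 7 MPa critical normal stress, so bonds around the advancing baffle fail readily.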

A B C D E

Fig. 2. Simulation model of five kinds of baffles

4 Analysis of Simulation Results


4.1 Analysis of the Resistance of Different Models in the Same Speed
Through the EDEM post-processor, the resistance of each model along the Z-axis is
obtained and exported to an Excel spreadsheet. The force curves of the five models at
the different speeds are plotted in Figs. 3, 4 and 5.

Fig. 3. Force diagram when the speed is 0.2 m/s

When the baffles have just entered the soil, the forces on the A, B and C baffles are
approximately equal and significantly larger than those on the D and E baffles. When
the baffles are halfway into the soil, the force on the B model increases
significantly, reaching 13 N, while the other four baffles experience forces between
10 N and 11 N, differing by no more than 1 N. Therefore, at a speed of 0.2 m/s, the D
and E baffles can be used to reduce the resistance of the end effector entering the
soil.

Fig. 4. Force diagram of each baffle at a speed of 0.4 m/s

In the first half of the simulation, the B and C baffles experience large forces, the
A baffle is intermediate, and the D and E baffles are less stressed. In the second
half, the force on the B baffle increases significantly, and the resistance of the D
and E baffles exceeds that of the A baffle within 0.2 s. The B and E baffles
experience a maximum force of 17 N, while the maximum force on the A baffle is only
14 N. Therefore, at a speed of 0.4 m/s, the A baffle can be selected.

Fig. 5. Force diagram of each baffle at a speed of 0.6 m/s

When the speed is 0.6 m/s, the overall trend of the force on each baffle is the same
as at 0.4 m/s, but the spacing between the force curves clearly increases, and this
trend grows with the depth of entry into the soil. The force on the B baffle reaches
23.5 N, while that on the A baffle is only 18 N. Analyzing the force curves at the
three speeds shows that when the depth of the baffle in the soil is less than 6 cm,
the resistance of the D and E baffles is approximately the same and is always smaller
than that of the other baffles.

4.2 Analysis of the Resistance of the Same Baffle at Different Speeds


In order to determine the resistance of each baffle when the depth of the baffle entering
the soil is greater than 6 cm, the force diagrams of the different baffles are plotted. As
shown in Fig. 6. The trend of the resistance of each baffle as a function of speed is
analyzed by Fig. 6.

Fig. 6. Force diagram of different speeds of each baffle

When the depth of the baffle in the soil is greater than 6 cm, the resistance of the
B baffle increases rapidly with speed: its maximum resistance is 24 N, versus about
20 N for the other baffles. Between soil depths of 6 cm and 12 cm, the resistances of
the remaining four baffles differ by no more than 1 N, but their trends differ: as
speed and depth increase, the slopes of the force curves of the C, D and E baffles
become steeper and steeper, whereas the force on the A baffle increases more slowly.
The single-variable analysis above shows that the relative magnitudes of the baffle
resistances change at a soil depth of 6 cm. Therefore, the forces at soil depths of
6 cm and 12 cm are used to determine the baffle selection at different depths and
speeds. When the depth is less than 6 cm, the force curves of the D and E baffles do
not intersect those of the other baffles and always lie below them; Table 3 can be
used to select the baffle at each speed. When the depth is greater than 6 cm, the
resistance of the A baffle increases most slowly, and Table 4 shows that its
resistance is the smallest, so the A baffle is the least stressed regardless of
speed.

Table 3. Forces (N) on baffles at different velocities at a depth of 6 cm

Speed (m/s)  A      B      C     D     E
0.2          5.00   5.24   5.58  4.41  4.05
0.4          7.21   8.17   7.52  5.64  6.64
0.6          10.01  10.49  9.98  7.91  7.95

Table 4. Forces (N) on baffles at different velocities at a depth of 12 cm

Speed (m/s)  A      B      C      D      E
0.2          10.47  12.67  10.92  10.71  10.48
0.4          14.30  16.60  15.39  15.27  16.76
0.6          18.15  23.42  18.39  19.37  20.44
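The selection rule derived from Tables 3 and 4 can be sketched programmatically; the force values below are transcribed from the tables, and the helper name is my own:

```python
# Forces (N) on baffles A–E, transcribed from Tables 3 and 4.
forces = {
    6:  {0.2: {"A": 5.00, "B": 5.24, "C": 5.58, "D": 4.41, "E": 4.05},
         0.4: {"A": 7.21, "B": 8.17, "C": 7.52, "D": 5.64, "E": 6.64},
         0.6: {"A": 10.01, "B": 10.49, "C": 9.98, "D": 7.91, "E": 7.95}},
    12: {0.2: {"A": 10.47, "B": 12.67, "C": 10.92, "D": 10.71, "E": 10.48},
         0.4: {"A": 14.30, "B": 16.60, "C": 15.39, "D": 15.27, "E": 16.76},
         0.6: {"A": 18.15, "B": 23.42, "C": 18.39, "D": 19.37, "E": 20.44}},
}

def least_stressed(depth_cm, speed_ms):
    """Return the baffle with the smallest force at the given depth and speed."""
    row = forces[depth_cm][speed_ms]
    return min(row, key=row.get)
```

Reading the tables this way reproduces the selection stated in the conclusions: D or E at shallow depth, A at 12 cm for every speed.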

5 Conclusions

The resistance of each of the five baffles depends on the depth of penetration into the soil and on the travel speed, and is a quadratic function of these two factors. When the depth of entry into the soil is less than 8 cm and the speed is 0.2 m/s, the E baffle experiences the least resistance; at speeds of 0.4 m/s and 0.6 m/s, the D baffle experiences the least resistance. When the depth of entry is greater than 8 cm, the A baffle experiences the least resistance regardless of the speed.

Acknowledgements. This work is supported by the Key R&D Plan of Shandong Province (2017GNC12110, 2017GNC12108), by the National Natural Science Foundation of China (51675317) and by the National Key R&D Program of China (2017YFD0701103-3).

Simulation and Experimental Study of Static
Porosity Droplets Deposition Test Rig

Laiqi Song1, Xuemei Liu2, Xinghua Liu2, and Haishu Zhang1

1 College of Mechanical and Electronic Engineering, Shandong Agricultural University, Tai’an 271018, China
2 Shandong Provincial Key Laboratory of Horticultural Machinery and Equipment, Tai’an 271018, China
[email protected]

Abstract. In field spraying, the migration and deposition of pesticide droplets in the plant canopy are key to ensuring high application efficiency. This paper takes cotton as an example to study the spatial droplet deposition distribution based on the static porosity of cotton. From the point of view of droplet transport, the cotton canopy was regarded as a porous medium. The canopy was divided into three layers, and different cotton plant forms were represented through combinations of different porosities. A static porosity test device for cotton was set up and spray deposition tests were conducted. At the same time, the spatial distribution of droplets under different porosities was obtained through CFD simulation. The results show that the static porosity droplet deposition test device can be used to analyze the spatial distribution of droplet deposition.

Keywords: Static porosity · CFD simulation · Droplet deposition

1 Introduction

In field sprayer operation, droplet penetration and deposition are important indicators of the performance of the machine and the appropriateness of its parameters. The deposition and infiltration of droplets in the crop canopy have a major impact on the control of pests and diseases as well as on pesticide pollution. In order to improve the control effect, avoid environmental pollution and reduce the amount of pesticide used, it is particularly important to study the deposition distribution of droplets in plant canopies.
At present, droplet penetration and deposition in crops are mainly studied by computational fluid dynamics (CFD) simulation and by field tests on real plants. Dekeyser et al. used artificial trees to compare the deposition quality of seven orchard spray application techniques [1]. Tonggui Wu et al. proposed a three-dimensional structural index that represents the optical porosity of multi-row tree windbreaks and applied it to wide tree windbreaks, revealing the relationship between the protective effect of tree windbreaks and optical porosity [2]. Cian James Desmond et al. found that an accurate assessment of canopy porosity, and specifically of its variability with height, improves simulation quality regardless of the turbulence closure model used or the level of canopy geometry included [3]. Hong et al. established a comprehensive CFD model to predict the displacement of pesticide spray droplets discharged from an air-assisted sprayer, their deposition onto tree canopies, and off-target deposition and airborne drift in an apple orchard [4, 5]. Endalew et al. proposed an integrated CFD approach for air-assisted spraying that simulates the orchard canopy targets, the assisting airflow, and the complex interaction of air and spray flows, together with a stochastic model for droplet deposition on leaves; the validated model supports improving sprayer design and calibrating operating parameters to increase spray efficiency and reduce environmental impact [6–8].

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 89–97, 2020.
https://doi.org/10.1007/978-981-15-2341-0_12
As one of China’s important economic crops, cotton needs to be sprayed with pesticides during its growth. As the plant reaches the middle and late growth stages, leaf occlusion becomes severe and the internal porosity is small, which is not conducive to uniform droplet deposition. In this paper, cotton is used as the research object, and the spatial droplet deposition based on the static porosity of cotton is studied.

2 Construction of the Space Droplet Deposition Test Device Based on Static Porosity

This paper discusses only the static porosity of the plant population; the porosity is assumed not to change during spraying. The canopy profile of a group of cotton plants is taken to be identical to that of an individual plant, with the same porosity. The static porosity test device for cotton was established as follows.
(1) Layering of cotton plants. Four cotton plants with some interaction between branches and leaves were selected as the droplet deposition test area. Along the vertical direction, the plants were divided into three layers: the top layer, the middle layer and the lower layer. The height of each layer above the ground is shown in Fig. 1.

Fig. 1. Schematic diagram of cotton plant stratification

(2) Porosity measurement of cotton plants. Assume that the porosity of each layer of
cotton plants in (1) is 60%, 50% and 35%, respectively.

(3) Arrangement of the porosity of each layer in the test device. The porosity of each layer is realized by placing discs of 9 cm diameter in a square area of 1 m × 1 m. According to the porosities assumed above, a porosity of 60% corresponds to 64 discs (actual calculated porosity 59.28%), 50% corresponds to 81 discs (48.47%), and 35% corresponds to 100 discs (36.38%). The discs in each layer are evenly distributed over the square area.
(4) Construction of space droplet deposition test device based on static porosity.
Aluminum profiles were used to build a three-layer disc placement platform to
ensure that the heights of each layer from the ground were 0.9 m, 0.59 m and
0.28 m respectively. Finally, a static porosity similarity test device for cotton
population plants was obtained, as shown in Fig. 2.

Fig. 2. Test device for static porosity of cotton plants
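The disc counts chosen in step (3) can be checked against the target porosities. A minimal sketch, assuming 9 cm discs placed without overlap in a 1 m × 1 m area (the helper name is my own):

```python
import math

def disc_porosity(n_discs, disc_diameter_m=0.09, area_m2=1.0):
    """Fraction of the square area NOT covered by the discs."""
    covered = n_discs * math.pi * (disc_diameter_m / 2) ** 2
    return 1.0 - covered / area_m2
```

With 64, 81 and 100 discs this reproduces the actual porosities of 59.28%, 48.47% and 36.38% quoted in the text.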

3 Static Porosity Space Droplet Deposition Distribution Test

3.1 Experimental Method and Environment

(1) The test site is the Spray Performance Laboratory of Shandong Agricultural University; the effect of natural wind is not considered during the test. The setup is shown in Fig. 3. The porosity is adjusted by changing the spacing between the wires.

Fig. 3. Static porosity deposition test device and test process



(2) Set the sampling points. According to the porosity of each layer in the prediction model, different sampling points were selected on the front of the discs. Water-sensitive paper of 3 cm × 4 cm was used as the droplet deposition carrier; each paper was fixed to a disc with a paper clip, placed randomly at a sampling point, and numbered on the back.
(3) Start the spray test. Water was used instead of pesticide. The nozzle flow rate was adjusted to 0.37 L/min by adjusting the duty cycle. Once the nozzle sprayed droplets in a stable state, the mobile platform was driven through the spray zone at a speed of 0.65 m/s.
(4) Collect the water-sensitive paper. After spraying, the water-sensitive papers were allowed to dry and then placed in sealed bags for post-processing. The above steps were repeated, adjusting the porosity combination of the device according to the test scheme; each test combination was repeated three times.
(5) The DepositScan software was used to process the water-sensitive papers, and the analysis results were exported to Excel to obtain the deposition per unit area on each paper. The average deposition per unit area of each layer is calculated according to Eq. (1), and the total deposition of the whole layer according to Eq. (2):

q̄ = (q1 + q2 + q3 + ⋯ + qm) / m   (1)

Q = q̄ × S   (2)

where
m — number of sampling points per layer
q̄ — average deposition over all sampling points in the layer (µL/cm²)
S — total area of the layer (cm²)
Q — total deposition of the layer (µL).
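Equations (1) and (2) can be sketched as a small helper (function and argument names are my own):

```python
def layer_deposition(samples_ul_per_cm2, layer_area_cm2):
    """Average deposition per unit area (Eq. 1) and total per layer (Eq. 2)."""
    m = len(samples_ul_per_cm2)
    q_bar = sum(samples_ul_per_cm2) / m   # µL/cm², Eq. (1)
    Q = q_bar * layer_area_cm2            # µL, Eq. (2)
    return q_bar, Q
```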

3.2 Combination of Test Schemes


In order to study the deposition characteristics of droplets under different porosity,
according to the relationship between cotton plant type and porosity, the three kinds of
porosity assumed according to different plant types are combined to obtain test devices
with different static porosity. The combination of the three types of porosity on the
droplet deposition device is shown in Table 1.

Table 1. Combinations of the three porosities on droplet deposition devices

Porosity assemblage  Top porosity  Middle porosity  Lower porosity  Middle and lower porosity  Overall porosity
1                    59.28%        36.38%           59.28%          22.79%                     16.40%
2                    59.28%        48.47%           36.38%          33.06%                     18.25%
3                    59.28%        59.28%           48.47%          53.30%                     26.01%
4                    48.47%        59.28%           36.38%          29.47%                     14.95%
5                    48.47%        36.38%           48.47%          21.85%                     17.03%
6                    48.47%        48.47%           59.28%          37.78%                     28.90%
7                    36.38%        48.47%           48.47%          12.48%                     11.18%
8                    36.38%        36.38%           36.38%          22.42%                     12.16%
9                    36.38%        59.28%           59.28%          25.59%                     16.62%

For the above nine sets of porosity combinations, according to the relationship
between different cotton canopy morphology and porosity, the corresponding canopy
shapes of the nine devices are shown in Fig. 4.

Fig. 4. Schematic diagram of canopy morphology corresponding to nine groups of devices

According to Fig. 4, combination 1 represents a fusiform plant, combination 2 represents a tower plant, combination 9 represents an inverted-tower plant, and combination 8 represents a cylindrical plant. The other porosity combinations can be considered transitional forms between these four basic types.

4 CFD Simulation of Droplet Deposition in the Static Porosity Droplet Tester
4.1 Spray Calculation Simulation of Static Porosity Tester
The simulation conditions should match the actual spraying experiment so that the simulation results are comparable with the experimental results. Therefore, according to the actual situation, the assumptions of the discrete phase model, the droplet termination mode and the boundary conditions in the simulation must be analyzed.
For droplets in the spray test, the low volume concentration in air allows them to be treated as a discrete phase. The Lagrangian discrete phase model is used to describe and track the spatial motion of droplets in the static porosity similarity test device. A flat-fan atomization nozzle model is placed 495 mm above the top layer to represent the nozzle in the actual spray process. The particle packet number is 200 and the particle type is inertial particle; the droplet size distribution is not considered. The atomization half angle is 55° and the spray flow rate is 0.006167 kg/s.
Boundary conditions: all sides of the calculation domain are set to free outflow, with the discrete phase boundary condition set to escape. The bottom surface of the device model represents the ground; its boundary condition is set to wall and the discrete phase condition to trap. The upper surfaces of the discs in each layer are likewise set to wall with the discrete phase condition set to trap. The CFD simulation process of the droplet deposition test device is shown in Fig. 5.

Fig. 5. CFD simulation process of static porosity droplet deposition test device

In the solution process, droplets are first sprayed at the initial position for 50 steps (0.5 s in total) so that they fall to the ground. The calculation domain then moves for 145 steps (1.45 s in total) at a speed of 0.65 m/s, after which particle injection is complete and the motion stops. The remaining particles that have not yet settled continue to be tracked until all particles have settled, which completes the calculation. Over the whole spray movement, the total spraying time of the droplet particles is 1.95 s.
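The timing above can be checked with a short calculation; note that the 0.01 s step length is an inference from 50 steps totalling 0.5 s, not stated explicitly in the text:

```python
dt = 0.5 / 50                  # inferred step length: 0.01 s per step
settle_time = 50 * dt          # initial spraying while the domain is stationary
travel_time = 145 * dt         # domain moving at 0.65 m/s
total_time = settle_time + travel_time
travel_distance = 0.65 * travel_time  # metres covered by the moving domain
```

This recovers the 1.95 s total spraying time, and implies the domain travels roughly 0.94 m during the pass.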

4.2 Simulation and Experimental Analysis of Spatial Droplet Deposition Distribution Under Static Porosity
In order to analyze the deposition and penetration of droplets in the test device, the amount of spray in the simulation needs to be calculated. During the whole spray movement, the total spraying time of the droplet particles was 1.95 s and the nozzle flow rate was 0.37 L/min. Equation (3) gives the total spray volume:

Q = q × (t/60) × 10⁶   (3)

where
Q — total spray volume during the spraying process (µL)
q — single-nozzle flow rate (L/min)
t — duration of the spraying process (s)

The calculation shows that the total injection volume in the simulation is 12025 µL. The droplet deposition of each layer in the simulation is shown in Table 2.

Table 2. Droplet deposition of each layer in the simulation process

Porosity assemblage  Droplet deposition per layer (µL)
                     Upper layer  Middle layer  Lower layer
1                    4266.68      3472.28       881.60
2                    4145.70      3342.44       2176.58
3                    3855.67      2732.95       1132.56
4                    5098.16      2562.66       1497.09
5                    5094.98      2794.13       1237.62
6                    5094.58      2292.17       1186.24
7                    5587.17      2562.66       1090.65
8                    5570.52      2232.16       1277.60
9                    5557.00      2110.45       697.60

The deposition data obtained in the laboratory tests are processed, and the deposition of each layer is expressed as a percentage of the total spray volume. The percentages of deposition in each layer from simulation and experiment, together with the corresponding errors, are shown in Table 3.

Table 3. Percentage droplet deposition of each layer in simulation (SL) and experiment (Exp), with relative error (ERR)

S/N  Upper layer (%)    Middle layer (%)   Lower layer (%)
     SL    Exp    ERR   SL    Exp    ERR   SL    Exp    ERR
1 35.48 30.48 14.09 28.88 26.38 8.80 7.33 5.27 28.10
2 34.48 29.78 13.63 27.79 25.38 8.69 18.10 15.12 16.44
3 32.06 26.96 15.91 22.72 23.20 11.13 9.41 8.11 13.86
4 42.40 38.80 8.49 21.31 19.55 5.26 12.45 12.14 2.51
5 42.37 38.27 9.68 23.23 24.15 8.96 10.29 10.27 0.21
6 42.37 38.17 9.91 19.06 15.89 10.13 9.86 9.99 13.16
7 46.46 43.16 7.10 21.31 17.02 10.74 9.07 7.63 15.82
8 46.32 43.52 6.04 18.56 14.24 12.53 10.62 9.87 7.07
9 46.21 42.21 8.65 17.55 12.63 10.92 5.80 4.47 23.01

As Table 3 shows, the simulation results are basically consistent with the test results. Only in combinations 1 and 9 do the errors of the lower-layer deposition reach 28.10% and 23.01%, respectively, which is mainly attributed to errors of the test system; the remaining errors are within 16.5%. This indicates that the simulation basically reflects the actual spray situation, and that the droplet deposition test device with similar static porosity can be used to analyze the spatial distribution of droplet deposition.
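The percentages and errors in Table 3 follow from the simulated depositions in Table 2 and the total spray volume; a minimal check for porosity assemblage 1 (helper names are my own):

```python
# Total spray volume: nozzle flow (L/min) × spray time (s) → µL, ≈ 12025 µL
total_ul = 0.37 / 60 * 1.95 * 1e6

def layer_percent(deposit_ul):
    """Deposition in a layer as a percentage of the total spray volume."""
    return deposit_ul / total_ul * 100

def relative_error(sim_pct, exp_pct):
    """Relative error (%) between simulated and experimental percentages."""
    return abs(sim_pct - exp_pct) / sim_pct * 100
```

For assemblage 1 this reproduces the 35.48% upper-layer simulation value and the 14.09% error against the experimental 30.48%.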

5 Conclusions

This paper describes the density of the plant canopy in terms of porosity and designs a space droplet deposition test device that reproduces the static porosity of the plant canopy. Taking cotton as an example, a static porosity deposition test device was set up and spray deposition tests were carried out. At the same time, the spatial distribution of droplets under different porosities was obtained through CFD simulation. The results show that the static porosity droplet deposition test device can be used to analyze the spatial distribution of droplet deposition.

Acknowledgements. This work was supported by the National Natural Science Foundation of
China (51475278) and the Agricultural Machinery Equipment R&D Innovation Project of
Shandong Province (2018YF002).

References
1. Dekeyser, D.: Spray deposition assessment using different application techniques in artificial
orchard trees. Crop Protect. 64(10), 187–197 (2014)
2. Wu, T.: Relationships between shelter effects and optical porosity: a meta-analysis for tree
windbreaks. Agric. For. Meteorol. 259(9), 75–81 (2018)

3. Desmond, C.J.: A study on the inclusion of forest canopy morphology data in numerical
simulations for the purpose of wind resource assessment. J. Wind Eng. Ind. Aerodyn. 126
(03), 24–37 (2014)
4. Hong, S.-W.: CFD simulation of airflow inside tree canopies discharged from air-assisted
sprayers. Comput. Electron. Agric. 149(06), 121–132 (2018)
5. Hong, S.-W., Zhao, L., Zhu, H.: CFD simulation of pesticide spray from air-assisted sprayers
in an apple orchard: tree deposition and off-target losses. Atmos. Environ. 175(03), 109–119
(2018)
6. Endalew, A.M.: Modelling pesticide flow and deposition from air-assisted orchard spraying in orchards: a new integrated CFD approach. Agric. For. Meteorol. 150(10), 1383–1392 (2010)
7. Endalew, A.M.: A new integrated CFD modelling approach towards air-assisted orchard
spraying—part I: model development and effect of wind speed and direction on sprayer
airflow. Comput. Electron. Agric. 71(02), 128–136 (2009)
8. Endalew, A.M.: A new integrated CFD modelling approach towards air-assisted orchard
spraying—Part II: validation for different sprayer types. Comput. Electron. Agric. 71(02),
137–147 (2009)
Effect of Heat Treatment on the Ductility
of Inconel 718 Processed by Laser Powder
Bed Fusion

Even Wilberg Hovig1, Olav Åsebø Berg1, Trond Aukrust2, and Harald Solhaug3

1 SINTEF Manufacturing AS, Trondheim, Norway
[email protected]
2 SINTEF Industry, Oslo, Norway
3 Hydro Innovation & Technology, Finspång, Sweden

Abstract. Inconel 718 is a precipitation-hardening alloy, with a typical heat treatment consisting of solution annealing before a two-stage ageing process. Depending on the heat treatment procedure, strength or ductility can be enhanced, typically at the cost of the other. This study aims to determine a heat treatment procedure suitable for applications that require high ductility. Tensile tests of Inconel 718 processed by laser powder bed fusion additive manufacturing have been carried out on specimens subjected to different heat treatment procedures. The results show that solution annealing at 1010 °C or above, followed by two-stage ageing at 720 °C/8 h with furnace cooling to 620 °C over 2 h and final holding at 620 °C/10 h, produces an elongation at break above 30% tested at room temperature and above 24% tested at 650 °C.

Keywords: Additive manufacturing · Powder bed fusion · Selective laser melting · Inconel 718 · Ductility · Elongation at break

1 Introduction

Inconel 718 is a high-temperature, high-strength, corrosion-resistant nickel-base superalloy that has been widely used in the aerospace and energy industries [1]. The microstructure consists of a γ matrix and is preferentially strengthened by precipitation of the γ″-Ni3Nb phase [2]. Other phases, such as the brittle γ′, δ, Laves, and σ phases, will form if the material is exposed to high temperatures for extended periods of time [2]. The formation of these phases is what effectively limits the service temperature of the alloy.
Inconel 718 has proven to be a suitable material for laser powder bed fusion (LPBF) additive manufacturing, with excellent material properties and high relative density [1]. Heat treatment for cast and wrought Inconel 718 has been thoroughly researched and standardized, but the recommended heat treatment for LPBF Inconel 718 in e.g. ASTM F3301-18 refers to SAE AMS2774E, which was developed for conventionally manufactured Inconel 718 and does not take into account the unique properties of LPBF materials. The microstructure of LPBF Inconel 718 is typically fine-grained compared to its cast and wrought counterparts [3], with traces of microsegregation, MC carbides and Laves phase as a result of the LPBF process [4–8].

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 98–105, 2020.
https://doi.org/10.1007/978-981-15-2341-0_13
Several studies have investigated the influence of different heat treatment procedures on the microstructure and mechanical properties of Inconel 718 processed by LPBF. Aydinöz et al. [4] investigated, amongst others, the effect of solution annealing (1000 °C/1 h, air cooling) and ageing (720 °C/8 h, furnace cooling to 621 °C/2 h, 621 °C/8 h), and reported an elongation at break of approximately 10% at room temperature. Chlebus et al. [5] conducted experiments with an identical ageing scheme but different annealing temperatures prior to ageing; the reported elongation at break for specimens built parallel to the build direction and solution annealed at 1100 °C is 19 ± 2% at room temperature. Hovig et al. [2] report an elongation at break of 5 ± 2% at room temperature for specimens manufactured parallel to the build direction, solution annealed at 980 °C/1 h, and aged following the same scheme as Aydinöz et al. and Chlebus et al.; hot isostatic pressing (HIP) at 1160 °C prior to ageing increased the elongation at break to 9 ± 4%. Schneider et al. [9] compared the mechanical properties of 10 different heat treatment variations, with reported elongation at break ranging from 15.44 ± 2.00% to 34.34 ± 1.52% depending on the heat treatment condition. Amongst the aged conditions, stress relief at 1066 °C/1.5 h and solution treatment at 954 °C/1 h, followed by ageing at 720 °C/8 h and 620 °C/10 h, produced the highest elongation at break, with a reported value of 21.96 ± 0.37%. Common to all the mentioned studies are yield strengths upwards of 1000 MPa, in some cases exceeding 1300 MPa.
When tensile tests are conducted at elevated temperatures, the yield strength, UTS, and elongation at break are reduced compared to testing at room temperature. Trosch et al. [3] compared the tensile properties of cast, forged and additively manufactured Inconel 718 tested at room temperature, 450 °C and 650 °C. The material was heat treated with solution treatment at 980 °C/1 h followed by ageing at 720 °C/8 h and 620 °C/8 h. The elongation at break dropped from 20.4% at room temperature to 14.2% at 650 °C for the LPBF specimens built parallel to the build direction. The elongation at break of the LPBF specimens was reported to drop further at 650 °C, relative to room temperature, than that of the forged and cast samples, which is attributed to the presence of δ-phase within the grains.
Zhang et al. [10] conducted a study in which three different levels of δ-phase were present in the material. The material was tested at 950 °C, and the condition without δ-phase displayed the highest elongation at break. As the δ-phase content increases, the elongation at break is significantly reduced: increasing the level of δ-phase from 0% to 3.79% reduces the elongation at break by 20%, and increasing it further from 3.79% to 8.21% reduces it by an additional 20%. The yield strength and UTS were not as greatly affected by the variation in δ-phase content.
This study focuses on understanding the mechanisms that influence the elongation
at break, in order to increase the ductility while maintaining acceptable strength.

2 Experimental Method

Twenty tensile specimens were manufactured out of Inconel 718 powder using an EOS
GmbH EOSINT M280. The processing parameters are denoted as In718 Performance
2.1 by the vendor, and argon shielding gas was used with a Grid Nozzle type 2200
5501. The specimen geometry is shown in Fig. 1, with the specimen orientation with
respect to the build direction indicated.

Fig. 1. Specimen geometry. All dimensions in mm, except for roughness (µm).

The chemical composition of the powder feedstock as given by the material vendor
is shown in Table 1.

Table 1. Chemical composition of the Inconel 718 powder feedstock as supplied by the material
vendor.
Ni Cr Nb Mo Ti Al Co Cu C Fe
wt–% 50–55 17–21 4.75–5.5 2.8–3.3 0.65–1.15 0.2–0.8 <1.0 <0.3 <0.08 Bal.

Prior to machining, eight specimens were stress relieved at 980 °C followed by furnace cooling to room temperature. After machining, the specimens were heat treated according to the recommendation of the material supplier (AMS 5664). The recommended heat treatment consists of solution annealing at 1065 °C/1 h followed by air cooling. After solution annealing, precipitation ageing was carried out at 760 °C/10 h, followed by furnace cooling to 650 °C over 2 h, holding at 650 °C/8 h, and air cooling to room temperature. This is later referred to as heat treatment condition 1 (HT1).
Twelve specimens were solution annealed and then aged according to AMS 5663: 720 °C/8 h, followed by furnace cooling to 620 °C over 2 h, and holding at 620 °C/10 h. Prior to ageing, six specimens were solution annealed at 1010 °C/1 h, while the other six were solution annealed at 1065 °C/1 h. These are denoted HT2.a and HT2.b, respectively.
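As a sketch, the three heat-treatment schedules can be summarized as data; the step labels and dictionary layout are my own, while the temperatures and hold times come from the text (intermediate furnace-cooling segments are omitted for brevity):

```python
# Schedules as (step, temperature °C, hold time h); None = hold time not stated.
HEAT_TREATMENTS = {
    "HT1": [
        ("stress_relief", 980, None),     # furnace cool to room temperature
        ("solution_anneal", 1065, 1.0),   # air cool
        ("age_stage_1", 760, 10.0),
        ("age_stage_2", 650, 8.0),        # after furnace cooling to 650 °C
    ],
    "HT2.a": [
        ("solution_anneal", 1010, 1.0),
        ("age_stage_1", 720, 8.0),
        ("age_stage_2", 620, 10.0),       # after furnace cooling to 620 °C
    ],
    "HT2.b": [
        ("solution_anneal", 1065, 1.0),
        ("age_stage_1", 720, 8.0),
        ("age_stage_2", 620, 10.0),
    ],
}

def total_ageing_hours(name):
    """Sum the hold times of the ageing stages for a schedule."""
    return sum(h for step, _, h in HEAT_TREATMENTS[name] if step.startswith("age"))
```

Laid out this way, all three schedules share an 18 h total ageing hold; the conditions differ in the solution-annealing temperature, the ageing temperatures, and whether a stress relief precedes them.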

The tensile specimens were tested in an MTS3 tensile machine with a 100 kN load
cell at a displacement rate of 0.6 mm/min, at either room temperature, 550 °C, 600 °C,
or 650 °C (induction heating).

3 Results and Discussion

All the Inconel 718 samples display a microstructure typical of LPBF. The microstructure is cellular, with fine grains in a narrow area close to the laser scan trajectory and equiaxed grains growing between the laser paths. This complies well with the findings of e.g. Rodgers et al. [11].
In the stress relieved condition, the microstructure is decorated with needle-shaped δ-phase in the interdendritic region, additional δ-phase on the grain boundaries, and Al-oxides distributed across the microstructure, as shown in Fig. 2(a).

3.1 Heat Treatment Condition 1


The tensile properties for tensile testing at 550 °C and 600 °C are tabulated in Table 2.

Table 2. Mechanical properties of Inconel 718 tested at 550 °C and 600 °C, heat treated according to HT1. The values in parentheses are the expected values reported by the material supplier, tested at 650 °C [12].

           Rp0.2 (MPa)        UTS (MPa)          Elongation at break (%)
550 °C
Mean       1101 (1010 ± 50)   1267 (1210 ± 50)   11.49 (20 ± 3)
Std. Dev.  4.61               9.01               0.96
600 °C
Mean       1076 (1010 ± 50)   1220 (1210 ± 50)   11.53 (20 ± 3)
Std. Dev.  2.59               9.55               0.33

The tensile strength properties in HT1 are about 30–40 MPa lower when tested at 600 °C than at 550 °C. The elongation at break in HT1 did not seem to be affected by the testing temperature. At 11.5%, the elongation at break is comparable to values reported in the literature [4], but it is not satisfactory compared to the material data sheet provided by the material supplier, which suggests an elongation at break of 20 ± 3% [12]. The material in HT1 was, however, subjected to a stress relief prior to the recommended heat treatment; the influence of the stress relief on the microstructure is discussed later in this section.
Figure 2(b) shows the microstructure of Inconel 718 after HT1. δ-phase was observed on the grain boundaries, an Al-rich spherical phase was observed within the grains, and evidence of Laves phase was observed on the grain boundaries. Some δ-phase was observed within the grains as well.

Fig. 2. SEM micrographs of stress relieved (a) and HT1 (b) Inconel 718. The arrows indicate (1) interdendritic δ-phase, (2) grain boundary δ-phase, (3) Al-oxides, and (4) irregular particles, believed to be Laves phase, carbides, or inclusions due to powder impurity or introduced during sample preparation.

3.2 Heat Treatment Condition 2


The tensile properties for tensile testing at room temperature and 650 °C under argon
atmosphere for heat treatment condition HT2.a and HT2.b are tabulated in Tables 3 and 4
respectively.

Table 3. Mechanical properties of Inconel 718 heat treated according to HT2.a tested at room
temperature and 650 °C.
Rp0.2 (MPa) UTS (MPa) Elongation at break (%)
Room temperature
Mean 1160 1402 32.2
Std. Dev. 10.10 10.10 0.50
650 °C
Mean 978 1098 25.6
Std. Dev. 0.9 3.0 2.8

Table 4. Mechanical properties of Inconel 718 heat treated according to HT2.b tested at room
temperature and 650 °C.
Rp0.2 (MPa) UTS (MPa) Elongation at break (%)
Room temperature
Mean 1188 1425 33.0
Std. Dev. 4.5 4.46 0.3
650 °C
Mean 1047 1164 24.1
Std. Dev. 16.8 7.93 0.5

The elongation at break in HT2.a and HT2.b far exceeds the reported values in the
literature [2, 4, 5, 9], while at the same time keeping a satisfactory yield strength. The
heat treatment procedure was based on findings by Schneider et al. [9] in order to
maximize the elongation at break. In HT2.a and HT2.b the material was not subjected
to a stress relief prior to solution annealing, and the ageing temperature was lower
compared to HT1. HT2.b was solution annealed at a slightly higher temperature
(1065 °C) compared to HT2.a (1010 °C). The higher temperature seems to increase
both strength and ductility. Figure 3 shows the microstructure after HT2.a and HT2.b
heat treatments, respectively. δ-phase was observed both on the grain boundaries and within the grains. The Al-rich phase is also observed, but no Laves-phase was detected. Solution treatment at a higher temperature reduces the amount of δ-phase observed.

Fig. 3. Microstructure of Inconel 718 with heat treatment HT2.a (a) and HT2.b (b). The arrows indicate (1) δ-phase (white) and (2) Al-oxide with Ni3Nb phase (black dots).

4 Discussion

Based purely on the tensile results in this study, it is apparent that HT2 is superior to HT1 with respect to elongation at break. There is one main difference between the two heat treatment procedures: HT1 was stress relieved prior to solution annealing and ageing.
The elongation at break after HT1 of 11.5% at 550 °C and 600 °C is comparable to
the findings of Aydinöz et al. [4], where the material was solution annealed at 1000 °C
prior to ageing. Similar results are reported by Hovig et al. [2], where the material was
solution annealed at 980 °C prior to ageing. In the reported cases from the literature
with solution annealing temperatures above 1000 °C the elongation at break for tensile
specimens built parallel to the build direction in LPBF is in the region of 20% or higher
[5, 9]. The elongation at break after HT2.a and HT2.b is 25.6% and 24.1% at 650 °C,
which compares favourably to previous studies with similar solution annealing tem-
peratures [5, 9].
In an effort to explain why a change in solution annealing temperature can more
than double the elongation at break, an isothermal transformation diagram (TTT dia-
gram) is consulted. The TTT diagram for Inconel 718 published in Hovig et al. [2]

shows that unwanted phases such as δ and σ form at temperatures between 650 °C and 1000 °C. δ-phase can form if the material is held at temperatures between 900 °C and 1000 °C in excess of one hour. Grain-boundary δ can form after just minutes if held at the same temperature. In order to form σ-phase the material must be held at temperatures in excess of 660 °C for extended periods of time (>1000 h). The high amount of δ-phase in the stress relieved condition (Fig. 2(a)) is likely a result of the slow cooling rate. Solution treatment at a sufficiently high temperature reduces the content of δ-phase [13], which can be seen in the reduction of δ in HT2.b compared to HT2.a (Fig. 3).
In other words, when solution annealing (or stress relief) is performed at temperatures below 1000 °C, unwanted phases such as δ are allowed to form. The same would be the case if the cooling rate from temperatures above 1000 °C is not sufficiently high. This is supported by the microscope image in Fig. 2(a), where a high density of both interdendritic and grain-boundary δ is evident. Most of the interdendritic δ-phase dissolves during the solution annealing that follows the stress relief in HT1, but the microscope analysis still shows traces of δ-phase within the grains (Fig. 2(b)). There is still evidence of δ-phase, especially on the grain boundaries, in HT2 as well (Fig. 3), but to a far lesser extent. Zhang et al. [10] demonstrated how an increased δ-phase content reduces the elongation at break, which supports the findings of this study. In order to increase the elongation at break, the amount of δ-phase, especially within the grains, should be reduced through careful selection of heat treatment parameters.

5 Summary and Conclusions

In an effort to increase the ductility of Inconel 718 processed by LPBF the effects of
different heat treatment procedures have been investigated. The following conclusions
can be drawn from this work:
LPBF Inconel 718 subjected to stress relief or solution annealing at temperatures below 1000 °C, or with insufficient cooling rates from higher temperatures, exhibits an unsatisfactory elongation at break. This is attributed to excessive formation of δ-phase.
The heat treatments denoted HT2.a and HT2.b resulted in elongations at break of 32.2% and 33.0% at room temperature and 25.6% and 24.1% at 650 °C, respectively. This heat treatment procedure more than doubled the elongation at break at high temperature compared to HT1, while maintaining satisfactory strength.

Acknowledgements. The authors would like to thank Tronrud Engineering, Hønefoss, Norway
for supplying the Inconel 718 test specimens and performing the stress relief on the samples
denoted HT1. Furthermore, thanks are due to SWEREA KIMAB, Stockholm, Sweden for heat
treatment and tensile testing, and to Hydro Innovation & Technology, Finspång, Sweden for
microstructure investigations.
This study was financed by the Norwegian Research Council, grant number 256623.

References
1. Wang, X., Gong, X., Chou, K.: Review on powder-bed laser additive manufacturing of
Inconel 718 parts. Proc. Inst. Mech. Eng. Part B: J. Eng. Manuf. 231(11), 1890–1903 (2016)
2. Hovig, E.W., Azar, A.S., Grytten, F., Sørby, K., Andreassen, E.: Determination of anisotropic mechanical properties for materials processed by laser powder bed fusion (2019)
3. Trosch, T., Strößner, J., Völkl, R., Glatzel, U.: Microstructure and mechanical properties of
selective laser melted Inconel 718 compared to forging and casting. Mater. Lett. 164, 428–
431 (2016)
4. Aydinöz, M.E., Brenne, F., Schaper, M., Schaak, C., Tillmann, W., Nellesen, J., Niendorf,
T.: On the microstructural and mechanical properties of post-treated additively manufactured
Inconel 718 superalloy under quasi-static and cyclic loading. Mater. Sci. Eng., A 669, 246–
258 (2016)
5. Chlebus, E., Gruber, K., Kuźnicka, B., Kurzac, J., Kurzynowski, T.: Effect of heat treatment
on the microstructure and mechanical properties of Inconel 718 processed by selective laser
melting. Mater. Sci. Eng., A 639, 647–655 (2015)
6. Li, X., Shi, J.J., Wang, C.H., Cao, G.H., Russell, A.M., Zhou, Z.J., Li, C.P., Chen, G.F.:
Effect of heat treatment on microstructure evolution of Inconel 718 alloy fabricated by
selective laser melting. J. Alloy. Compd. 764, 639–649 (2018)
7. Moussaoui, K., Rubio, W., Mousseigne, M., Sultan, T., Rezai, F.: Effects of selective laser
melting additive manufacturing parameters of Inconel 718 on porosity, microstructure and
mechanical properties. Mater. Sci. Eng. A 735, 182–190 (2018)
8. Popovich, V.A., Borisov, E.V., Popovich, A.A., Sufiiarov, V.S., Masaylo, D.V., Alzina, L.:
Impact of heat treatment on mechanical behaviour of Inconel 718 processed with tailored
microstructure by selective laser melting. Mater. Des. 131, 12–22 (2017)
9. Schneider, J., Lund, B., Fullen, M.: Effect of heat treatment variations on the mechanical
properties of Inconel 718 selective laser melted specimens. Addit. Manuf. 21, 248–254
(2018)
10. Zhang, S.-H., Zhang, H.-Y., Cheng, M.: Tensile deformation and fracture characteristics of
delta-processed Inconel 718 alloy at elevated temperature. Mater. Sci. Eng. A 528(19),
6253–6258 (2011)
11. Rodgers, T.M., Madison, J.D., Tikare, V.: Simulation of metal additive manufacturing
microstructures using kinetic Monte Carlo. Comput. Mater. Sci. 135, 78–89 (2017)
12. EOS GmbH: EOS NickelAlloy IN718 (2014). http://ip-saas-eos-cms.s3.amazonaws.com/public/4528b4a1bf688496/ff974161c2057e6df56db5b67f0f5595/EOS_NickelAlloy_IN718_en.pdf. Accessed 16 May 2019
13. Zhang, H.Y., Zhang, S.H., Cheng, M., Zhao, Z.: Microstructure evolution of IN718 alloy
during the delta process. Proc. Eng. 207, 1099–1104 (2017)
Comparative Study of the Effect of Chord
Length Computation Methods in Design
of Wind Turbine Blade

Temesgen Batu1,2 and Hirpa G. Lemu3

1 School of Mechanical and Industrial Engineering, Wollo University, Dessie, Ethiopia
[email protected]
2 Department of Mechanical Engineering, Addis Ababa Science and Technology University, Addis Ababa, Ethiopia
3 Faculty of Science and Technology, University of Stavanger, Stavanger, Norway
[email protected]

Abstract. Design of the wind turbine blade is the most important step in developing efficient non-conventional energy converters to tackle today's energy crisis scenario. In horizontal axis wind turbines in particular, the blades are the most important parts of the turbine and need an optimized design, since they have a direct relation with output performance. In addition, the turbine blade is a critical part in terms of manufacturing cost, accounting for about 15–20% of the total wind turbine plant cost. One of the main design parameters in the geometry of wind turbine blades is the chord length. This parameter affects not only the performance, but also the blade structural stiffness. A closer review of the literature shows that there is no commonly accepted formula for calculation of the chord length distribution. Thus, this paper focuses on a comparative study of the works of different researchers on the formulas used for calculation of the chord length distribution along the blade. Upon evaluating the available formulas, some methods combining the works of different researchers for designing horizontal axis wind turbine blades are also included. The comparison covers chord length distribution, manufacturing complexity of the blade, weight of the blade and power output obtained. Each method is compared for the same airfoil and input parameters such as radius, wind speeds, number of blades, tip speed ratio and materials. The effect of the chord length of the blade on the performance of the wind turbine is computed using dedicated software (Qblade) and analyzed.

Keywords: Horizontal axis wind turbine · Wind turbine blade · Chord length distribution · Qblade

1 Introduction

Currently, wind turbine industries are highly interested in studies on ways to increase the efficiency of wind turbines. This is because more and more electric power needs to come from renewable sources due to increasing environmental concerns

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 106–115, 2020.
https://doi.org/10.1007/978-981-15-2341-0_14

and the rising oil prices. In addition, concerns over dwindling fossil-fuel resources have forced the search for alternative energy sources such as wind energy, which has proven to be an important source of nonpolluting and renewable energy; special focus is given to wind energy due to its advantages [1] compared with fossil fuels. The reliability, sustainability, and risks related to wind energy applications have been studied by BoroumandJazi et al. [2]. As reported in [3], wind turbines as electric power generators are also found to be attractive even for moderate wind speeds. When designing a wind turbine, the blade design is extremely important.
For the design of horizontal axis wind turbine blades, in particular, the required information or parameters are the variation of the rotor blade chord and blade twist as a function of the blade radius, as well as the airfoil section shapes used for the rotor and their corresponding aerodynamic characteristics. The chord length is the basic parameter with a great effect on the performance of wind turbines. Especially, the chord length distribution is required to give maximum power output with low blade weight. The chord length distribution varies from root to tip as a function of each segment of the blade, as illustrated in Fig. 1.

Fig. 1. Chord length distribution on the Blade Element Model

The chord length affects the performance of the wind turbine blade as well as the blade structural stiffness. In the design process of the blade, the blade parameters of each section are obtained first, and then the main parameters such as chord length and torsion angle are calculated. Several researchers have used different formulas for calculating the chord length distribution [4–6]. El-Okda [7] collected a total of ten different methods for the calculation of the chord length distribution along the rotor blade, with and without considering wake rotation, under design methods of horizontal axis wind turbine rotor blades, and concluded that the Schmitz method for optimum flow angle and chord distribution is the simplest and most straightforward result from BEM optimization analysis.
Due to the observed variations of approaches to calculate the chord length distri-
bution, this paper focuses on a comparison study of a total number of 19 different chord
length calculation methods used in designing horizontal axis wind turbine blade. The
methods are evaluated based on output power, torque, manufacturing cost, and weight
of the blade for the same material. Moreover, the Qblade software is used to compare
the performance of each chord length calculation formula in blade designing for the

same airfoil having the same input parameters like angle of attack, lift coefficients,
wind speeds and other parameters which are common for all the considered cases.

2 Chord Length Distribution Calculation Methods

To study the chord length calculation methods for the design of horizontal axis wind turbine (HAWT) blades, the following parameters are used for each of the methods: rated wind speed V_rated = 12.62 m/s, blade pitch fixed at an angle of θ_cp = −2°, rotor radius R = 4.95 m, hub radius r_hub = 0.15 m and number of blades B = 3. According to the range of tip speed ratios of high-speed wind turbines, λ = 7 is selected for all considered formulas. The NACA 4415 airfoil is selected because it has desirable features at the effective angle of attack of each radial segment of the blade, i.e. where the ratio C_L/C_D is a maximum. So, for NACA 4415, α = 7° is found to be the optimum angle of attack according to the design data in the Qblade software, with the corresponding lift coefficient C_L,opt(7°) = 1.20423 and drag coefficient C_D,opt(7°) = 0.01162 [8]. The rated tip speed ratio (λ_r) is given as a function of the arbitrary radius (r) as follows:
λ_r = λ · (r / R)                                                        (1)
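Equation (1) can be evaluated directly at each radial station. A minimal Python sketch with the design values from this section (the station spacing is chosen here only for illustration):

```python
# Local tip speed ratio, Eq. (1): lambda_r = lambda * r / R
R = 4.95        # rotor radius [m]
r_hub = 0.15    # hub radius [m]
tsr = 7.0       # design tip speed ratio (lambda)

def local_speed_ratio(r, R=R, tsr=tsr):
    """Return lambda_r at radius r."""
    return tsr * r / R

# Evaluate at a few radial stations between hub and tip
stations = [r_hub + (R - r_hub) * i / 4 for i in range(5)]
ratios = [local_speed_ratio(r) for r in stations]
print([round(x, 3) for x in ratios])   # lambda_r grows linearly from hub to tip
```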

Table 1. Different formulas for calculating chord distribution and twist angle

1. Raju [9] and Edon [10]:
   C(r) = (R / (B·C_L·λ)) · f(λ_r), where f(λ_r) is a polynomial in 1/λ_r (terms up to 1/λ_r⁴) with the coefficients 1.868, 5.957, 3.1, 0.5433 and 0.2917
2. Ingram [6]:
   C(r) = 8πr·cos φ / (3·B·λ_r),  φ = 90° − (2/3)·tan⁻¹(1/λ_r)

3. Burton et al. [11]:
   C(r) = (8R/(9·λ_0.8)) · (2 − λ_r/λ_0.8) · (2π/(C_L·λ·B)), where λ_0.8 is the value of λ_r at r = 0.8R

4. Jamieson [12]:
   C(r) = 16πR² / (9·B·C_L·λ²·r)

5. Hau [5], Schubel and Crossley [13]:
   C(r) = (2πr/B) · (8/(9·C_L)) · (U/U_rel), where U_rel = √(U² + V_w²)

6. Hansen [4]:
   C(r)_opt = 8πr·a·sin²φ / ((1 − a)·B·C_n),  φ = tan⁻¹(2/(3λ_r)),
   C_n = C_L·cos φ + C_D·sin φ,  a = 0.2

7. Duran [14]; Manwell et al. [15]:
   C(r) = (8πr/(B·C_L)) · (1 − cos φ),  φ(r) = (2/3)·tan⁻¹(2/(3λ_r))

8. Manwell et al. [15], without considering wake rotation:
   C(r) = 8πr·sin φ / (3·B·C_L·λ_r),  φ = tan⁻¹(2/(3λ_r))

9. Gundtoft [16]:
   C(r) = 16πR / (9·B·C_L·λ·√(λ_r² + 4/9)),  φ = tan⁻¹(2/(3λ_r)),  a = 1/3

10. DNV [17]:
    C(r) = (16πr/(B·C_L)) · sin²((1/3)·tan⁻¹(1/λ_r)),  φ(r) = (2/3)·tan⁻¹(2/(3λ_r))
11.* Maalawi [18]:
    C(r) = 8πFr·sin φ / [B·C_L·(λ_r + tan φ)·(1 − λ_r·tan φ·C_D/C_L)],
    F = (2/π)·cos⁻¹[exp(−B(R − r)/(2r·sin φ))]

12.* Shen et al. [19] and Maalawi [18]:
    C(r) as in no. 11, with F = F_hub if r ≤ 0.8R and F = F_tip if r > 0.8R,
    F_tip = (2/π)·cos⁻¹[exp(−g·B(R − r)/(2r·sin φ))],
    F_hub = (2/π)·cos⁻¹[exp(−g·B(r − r_hub)/(2r·sin φ))],
    φ(r) = (2/3)·tan⁻¹(2/(3λ_r)),  g = e^(−0.125(Bλ − 21)) + 0.1

13.* Maalawi [18] and El-Okda [7]:
    C(r) as in no. 11, with F = F_tip·F_hub,
    F_tip = (2/π)·cos⁻¹[exp(−B(R − r)/(2r·sin φ))],
    F_hub = (2/π)·cos⁻¹[exp(−B(r − r_hub)/(2r·sin φ))]

14.* Maalawi [18] and El khchine [20]:
    C(r) as in no. 11, with F = F_hub if r ≤ 0.8R and F = F_tip if r > 0.8R,
    F_tip and F_hub as in no. 13

15. Biadgo and Aynekulu [21]:
    C(r) = 8πFr·sin φ·(cos φ − λ_r·sin φ) / [B·C_L·(sin φ + λ_r·cos φ)],
    φ(r) = (2/3)·tan⁻¹(2/(3λ_r)),  F = (2/π)·cos⁻¹[exp(−B(R − r)/(2r·sin φ))]

16. El-Okda [7] and Biadgo and Aynekulu [21]:
    C(r) as in no. 15, with F = F_tip·F_hub (F_tip and F_hub as in no. 13)

17. El khchine [20] and Biadgo and Aynekulu [21]:
    C(r) as in no. 15, with F = F_hub if r ≤ 0.8R and F = F_tip if r > 0.8R,
    F_tip and F_hub as in no. 12 (with the factor g),  φ = (2/3)·tan⁻¹(1/λ_r),
    g = e^(−0.125(Bλ − 21)) + 0.1

18. Shen et al. [19] and Biadgo and Aynekulu [21]:
    C(r) as in no. 15, with φ(r) = (2/3)·tan⁻¹(2/(3λ_r)),
    F = F_hub if r ≤ 0.8R and F = F_tip if r > 0.8R (F_tip and F_hub as in no. 13)

19. Raut et al. [22]:
    C(r) = 16πR / (9·N·C_L·λ·√(4/9 + (λ_r + 2/(9λ_r))²)), where N is the number of blades

* The formulas given in 11–14 include tip losses, F, and drag is not omitted.

2.1 Chord Length Distribution Comparison


The chord length for each blade is calculated using VBA in Excel, and the result of each chord length calculation is given graphically as C/R vs r/R (Fig. 2). As can be observed in Fig. 2(a)–(d), there are large differences in the C/R vs r/R relation among the considered formulas.

Fig. 2. Comparative graphical distribution of C/R vs r/R for references in Table 1: (a) 1–8,
(b) 15, 17 and 18, (c) 11, 12 and 14, (d) 13 and 16

Figure 2(a) indicates that the chord length distributions of the researchers listed in the figure give small values compared to the other methods. The method used by Burton et al. [11] gives a chord length that has a linear distribution, i.e. all values are the same along the blade, like a beam. As a result, the chord distribution is constant everywhere, which reduces the power output of the wind turbine. The Gundtoft [16] method (Fig. 2(b)) also has an approximately linear chord distribution. A linear chord distribution is not recommended because, with equal chord lengths along the blade, sections located at larger radius will experience higher normal force and torque. This can cause failure of wind turbine parts such as the hub and also affects the performance of the wind turbine.
The chord length distribution formulas for reference numbers 11–18 in Table 1 depend on the tip-hub loss factor (F); the formulas 11 to 14 include tip losses, F, and drag is not omitted. Comparing the results of those four chord length distributions, the values for the combined relation of Maalawi [18] and El-Okda [7] are very high, while the other three formulas give almost the same results. On the other hand, small chord length distributions are obtained by combining the work of Maalawi [18] with El khchine [20], as shown in Fig. 2(c) and (d).
The formulas 15 to 18 include tip losses, F, but the drag force is omitted. When the results of those four chord length distributions are compared, the combined work of El-Okda [7] and Biadgo and Aynekulu [21] gives very high values while the other three are almost the same. By combining the formulas used by Shen et al. [19] and Biadgo and Aynekulu [21], however, a small chord length distribution is obtained (Fig. 2(b) and (d)).
Almost all formulas from 11–18 give a high chord length near the root (r/R ≤ 0.2) portion of the blade, which makes the manufacturing cost of this type of blade design very high. Primarily, the mold to be used for fabricating the blades would have a complicated shape and thus be more expensive. Secondly, a large increase in the chord over 0.1 ≤ r/R ≤ 0.4 adds considerable weight to the blade where there is very little contribution to power generation. Finally, the size of the inboard rotor section adds considerable weight to the rotor blade, which affects the cost of every major component that makes up a wind turbine. For example, an increase in blade weight requires a stronger drive shaft, gearbox, tower and foundation, which ultimately add to the cost of a new wind turbine [8].

3 Blade Design and Simulation Using Qblade Software

3.1 Qblade Software


Qblade is a recently developed open-source tool, created at the Berlin Institute of Technology (TU Berlin), to assist the design and simulation of wind turbine blades in particular [23, 24]. It is intended to serve as a single tool comprising the necessary functionality for aerodynamic wind turbine design and simulation. It is developed as an independent tool with no need for importing data from other sources and converting it to other forms. To enable this, Qblade is equipped with functionalities or modules that are accessible from its graphical user interface [24]. Essentially, the program is a collection of methods and tools used to create early-stage designs of a wind turbine blade. As part of this study, blades using the chord lengths calculated by all formulas (Table 1) are designed and simulated for a HAWT in the Qblade software (as illustrated in Fig. 3).

3.2 Blade Design and Modelling in Qblade Software


Blade design starts from the selection of the airfoil shape. For this study, NACA 4415 was selected for all methods, and the model of this profile in Qblade gave a maximum value of the lift to drag ratio (Cl/Cd ≈ 104) at an angle of attack of 7°.
While designing the wind turbine blade, the airfoil was divided into 20 segments and the calculated chord length was inserted. The blade was designed for each method by using the calculated chord length and the twist angle formulas, which are almost similar for all of the methods, and optimized using the software. The results indicate that there is a large growth in the chord length near the root (r/R ≤ 0.2) portion of the blade for most of them. It is thus observed that the chord length must be reduced or the twist angle increased. The blades are modeled using the Qblade software for each of the methods, and blade shapes for selected formulas are given in Fig. 4. As can be observed, the shapes of many of them are very different from each other.
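The per-station workflow described above (20 segments, a chord from a chosen formula, and a twist angle) can be scripted. The sketch below uses method no. 8 of Table 1 and the common BEM relation twist = φ − α; both that choice of method and the uniform station spacing are our assumptions for illustration, not the exact Qblade optimization.

```python
import math

# Design values from Sect. 2; ALPHA is the design angle of attack [deg]
B, CL, R, R_HUB, TSR, ALPHA = 3, 1.20423, 4.95, 0.15, 7.0, 7.0

def station_geometry(r):
    """Chord [m] and twist [deg] at radius r, using method no. 8 of Table 1
    and the common BEM relation twist = phi - alpha."""
    lam_r = TSR * r / R
    phi = math.atan(2.0 / (3.0 * lam_r))            # flow angle [rad]
    chord = 8 * math.pi * r * math.sin(phi) / (3 * B * CL * lam_r)
    twist = math.degrees(phi) - ALPHA
    return chord, twist

n = 20                                              # segments, as in the Qblade setup
rows = []
for i in range(n):
    r = R_HUB + (R - R_HUB) * i / (n - 1)
    c, t = station_geometry(r)
    rows.append((round(r, 3), round(c, 4), round(t, 2)))

for row in rows[:3]:
    print(row)   # (radius [m], chord [m], twist [deg])
```

Even for this comparatively well-behaved method, the root station comes out with a large chord and a steep twist, consistent with the root-growth behaviour discussed above.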

Fig. 3. Design of a blade using the Qblade software

Fig. 4. Blade shapes of selected methods (formulas, in reference to Table 1)

It is further observed from Fig. 4 that some portions of the shapes are beam-like, which may pose a great risk to the life of the wind turbine parts. To model the twist angle, each method is optimized for the same optimum lift/drag ratio. Furthermore, many of the models have a tapered shape. As the bending stresses are reduced toward the tip of a cantilevered beam, blades tapered towards the tip are expected to be lighter than straight blades. The chord lengths obtained by the combined methods in formula no. 13 (Maalawi [18] and El-Okda [7]), the combined methods in formula no. 16 (El-Okda [7] and Biadgo and Aynekulu [21]) and that of formula no. 9 (Gundtoft [16]) are observed to be very high. Thus, it was decided to disregard these shapes from further study of their performance, as their shapes are not suitable for designing the blade.

3.3 Blade Element Analysis and Different Comparison


Due to the variations in chord length distribution and the shape of the blade, different results are obviously possible in terms of torque, thrust load, and power coefficient. As these output parameters are interconnected and the main target of wind turbine operation is improving the power output, the comparison of the selected formulas (Table 2) is conducted based on the power output harvested in the time domain.
The chord values obtained by some methods are very far from those practically used in recent wind turbine blades. Especially formulas 11–14 (refs. [7, 18–20]), which use tip or hub loss factors, give very high chord length distributions around the blade root.
Power Output Comparison: Performing an unsteady simulation for the HAWT in the time domain using Qblade for all methods, the power output and power coefficient obtained from each method are given in Table 2; the values are valid for a wind speed v_hub = 12.62 m/s and a hub height of 10 m.

Table 2. Comparison of output power (method numbers refer to Table 1).

Method no.       1      2      3      4      5  6      7      8      10
Power [kW]       23.65  51.25  18.94  50.54  –  10.65  50.44  50.06  50.44
Power coeff. Cp  0.231  0.499  0.185  0.493  –  0.104  0.492  0.488  0.492
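For orientation, power and power coefficient are linked through P = Cp · ½ρπR²v³. A quick steady-state check in Python (the air density of 1.225 kg/m³ is our assumption; the tabulated Qblade values come from unsteady simulations and will not match this estimate exactly):

```python
import math

rho = 1.225          # assumed air density [kg/m^3]
R, v = 4.95, 12.62   # rotor radius and hub wind speed from the text

def power_from_cp(cp):
    """Steady-state estimate P = Cp * 0.5 * rho * pi * R^2 * v^3 [W]."""
    return cp * 0.5 * rho * math.pi * R**2 * v**3

print(round(power_from_cp(1.0) / 1e3, 1), "kW available")        # ~94.8 kW in the rotor disc
print(round(power_from_cp(0.499) / 1e3, 1), "kW at Cp = 0.499")  # ~47.3 kW
```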

Weight Comparison: The weight of the blade, which is strongly related to the chord length, affects the power output of the wind turbine. As given in Table 3, the weight of each model of the wind blade is compared using the same material for each method. The materials used for the shell and the internal core are polyurethane 20GF 65D and foam, respectively. As the tabulated values show, the methods used in different studies give different chord length distributions, and hence the power outputs obtained from them show large variations. The analysis also shows that the weight of the blade increases with increasing chord length. On the other hand, a low-weight blade is favored for high power output.

Table 3. Comparison of weight of each model for polyurethane 20GF 65D/foam material

Method no.    1        2         3        4        5         6        7        8
Weight [kg]   28.353   119.032   1.601    313.453  5217.690  3.231    42.261   106.198

Method no.    10       11        12       13       14        16       18       19
Weight [kg]   42.261   4682.62   4682.78  4509.87  273.925   1216.79  1174.76  0.4809

Observing the conducted analysis closely, the formulas used by Duran [14] and Manwell et al. [15], Ingram [6], Manwell et al. [15] without considering wake rotation, and Jamieson [12] may be recommended from the point of view of low blade weight, the best blade shapes that need no design modification, the power output obtained, and relative ease of manufacturing near the root.

4 Conclusion

The wind turbine blade is considered the most critical component in the wind turbine system because both the manufacturing cost of the blades and their maintenance costs represent significant portions of the total manufacturing and maintenance costs, respectively. The chord length of the blade is a basic parameter of the wind turbine blade that highly influences the manufacturing cost, the complexity of the shape of the blades, the general performance of the wind turbine, the reduction of blade weight, etc. The present work compares the methods used to calculate the chord length based on chord length distribution, manufacturing complexity of the blade, weight of the blade and power output obtained. The comparison of each method is based on the same airfoil and input parameters such as radius, wind speeds, number of blades, tip speed ratio, the same material, etc.
The results indicate that for many of the methods the chord length distribution is very large. The shapes of the blades obtained from some of them are complicated, especially around the root, and some of the models have unexpected shapes which would make the manufacturing of the wind turbine design very costly. Since fabricating blades of irregular shape is expensive, the observed large increase in the chord in the range 0.1 ≤ r/R ≤ 0.4 would add considerable weight to the blade where there is very little contribution to the wind turbine power generation. The weight of the blade for each method is also very different. This variation affects the cost of every major component of the turbine, such as the drive shaft, gearbox, tower and foundation, which ultimately add to the purchase cost. In general, the chord length distribution variations affect the whole wind turbine performance.

References
1. Mahmoud, N.H., El-Haroun, A.A., Wahba, E., Nasef, M.H.: An experimental study on
improvement of Savonius rotor performance. Alex. Eng. J. 51, 19–25 (2012)
2. BoroumandJazi, G., Rismanchi, B., Saidur, R.: Technical characteristic analysis of wind
energy conversion systems for sustainable development. Energy Convers. Manag. 62, 87–94
(2013)
3. Karamanis, D.: Management of moderate wind energy coastal resources. Energy Convers.
Manag. 52, 2623–2628 (2011)
4. Hansen, M.O.L., Johansen, J.: Tip studies using CFD and comparison with tip loss models. Wind Energy 7(4), 343–356 (2004)
5. Hau, E.: Wind Turbines, Fundamentals, Technologies, Application, and Economics.
Springer, Heidelberg (2006)

6. Ingram, G.: Wind turbine blade analysis using the blade element momentum method. http://www.dur.ac.uk/g.l.ingram/download/wind_turbine_design.pdf. Accessed 18 Aug 2019
7. El-Okda, Y.M.: Design methods of horizontal axis wind turbine rotor blades. Int. J. Ind.
Electron. Drives 2(3), 135–150 (2015)
8. Corke, T., Nelson, R.: Wind Energy Design. CRC Press, Boca Raton (2018)
9. Raju, B.K.: Design optimization of a wind turbine blade. Master thesis, The University of
Texas at Arlington (2011)
10. Edon, M.: Meter wind turbine blade design. Internship Report, Folkecenter for Renewable
Energy (2007)
11. Burton, T., Sharp, D., Jenkins, N., Bossanyi, E.: Wind Energy Handbook. Wiley, New York
(2001)
12. Jamieson, P.: Innovation in Wind Technology. Wiley, London (2011)
13. Schubel, P.J., Crossley, R.J.: Wind turbine blade design. Energies 5(9), 3425–3449 (2012)
14. Duran, S.: Computer-aided design of horizontal-axis wind turbine blades. Master thesis,
Middle East Technical University (2005)
15. Manwell, J.F., McGowan, J.G., Rogers, A.L.: Wind Energy Explained: Theory, Design and
Application, 2nd edn. Wiley, London (2009)
16. Gundtoft, S.: Wind turbines. Technical report, University of Aarhus, Denmark (2009)
17. DNV/Risø: Guide Lines for Design of Wind Turbines. Wind Energy Department, Risø, 2nd
edn. ISBN 87-5502870-5 (2002)
18. Maalawi, K.: Special Issues on Design Optimization of Wind Turbine Structures (2011). http://www.intechopen.com/books/wind-turbines/special-issues-on-designoptimization-of-wind-turbinestructures. Accessed August 2018
19. Shen, W.Z., Mikkelsen, R., Sørensen, J.N., Bak, C.: Tip loss corrections for wind turbine
computations. Wind Energy 8(4), 457–475 (2005). https://doi.org/10.1002/we.153
20. El khchine, S., Sriti, M.: Improved blade element momentum theory (BEM) for predicting
the aerodynamic performances of horizontal axis wind turbine blade (HAWT). Tech. Mech.
38(12), 191–202 (2018)
21. Biadgo, A.M., Aynekulu, G.: Aerodynamic design of horizontal axis wind turbine blades.
FME Trans. 45, 647–660 (2017). https://doi.org/10.5937/fmet1704647m
22. Raut, S., Shrivas, S., Sanas, R., Sinnarkar, N., Chaudhary, M.K.: Simulation of micro wind
turbine blade in Q-blade. Int. J. Res. Appl. Sci. Eng. Technol. 5(IV), 256–262 (2017)
23. Qblade website. http://www.q-blade.org/. Accessed 18 Aug 2019
24. Marten, D., Wendler, J., Pechlivanoglou, G., Nayeri, C.N., Paschereit, C.O.: QBLADE: an
open source tool for design and simulation of horizontal and vertical axis wind turbines. Int.
J. Emerg. Technol. Adv. Eng. 3(3), 264–269 (2013)
On Modelling Techniques for Mechanical
Joints: Literature Study

Øyvind Karlsen and Hirpa G. Lemu

Faculty of Science and Technology, University of Stavanger, Stavanger, Norway


{oyvind.karlsen,hirpa.g.lemu}@uis.no

Abstract. Most mechanical joints are in one way or another exposed to tear, wear, corrosion and fatigue, and are likely to fail over time. If the contacting surfaces in a connection are exposed to tangential loading due to vibrations, small-amplitude displacements called fretting can be induced at the surface and may result in crack nucleation and possible propagation. The review and analysis reported herein are based on a wide range of literature published during the last 25 years, and examine the methods, techniques and tools used in those studies, including laboratory and full-scale testing, especially around fretting fatigue issues in interference fit joints and pre-tension issues in pre-loaded bolts. In modelling of mechanical joints, the main techniques can be categorized as either stochastic (or probabilistic), where the random variation is usually based on fluctuations observed in historical data for a selected period of time using standard time-series techniques, or deterministic, with widely applied analytical and numerical models and methods. Analytical techniques typically include models such as Newton's laws of motion, Laplace transformation methods, Lagrange differential equations, D'Alembert's principle of virtual work, Hamilton's principle, Fourier series and transforms, Duhamel's integral and Lamé's equations. Today, computers are widely used to find approximate solutions to complex engineering problems by combining the above-mentioned methods. A large number of these combined methods, also referred to as numerical methods, fall under finite element analysis (FEA). One way to reduce fretting issues in mechanical connections could be to increase the research and investigation effort on expanding pin solutions, which differ from the traditional press and shrink fit methods.

Keywords: Fretting fatigue · Interference fitted joints · Joint modelling · Expanding pin

1 Introduction

In most mechanical systems, there are needs for power transmission and connection
techniques using various shapes and dimensions, and the most familiar transmission
systems might be of electrical or mechanical type. Among those, spline, dovetail,
riveted, interference fitted, pre-stressed and open-hole pin/bolt connections are the
typical mechanical joint types [1].

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 116–125, 2020.
https://doi.org/10.1007/978-981-15-2341-0_15
This literature study aims to look deeper into which contact modelling techniques and tools have been used in previous studies of the damage mechanisms that occur in connections in mechanical systems. The goal of using joints and other fasteners in many engineering and mechanical structures is to transfer moments and forces through frictional contact surfaces into substructures. When there is a lack of information about certain parameters, such as contact stiffness, damping, surface roughness quality, preload etc., the frictional contact interfaces introduce uncertainty in the dynamic properties of assembled structures, and measurement, quantification and modelling of this uncertainty then become important [2].
Different mechanical connection techniques are used for joints of mechanical
systems including the following:
– Interference fit, also known as press fit or friction fit, is a fastening method where two parts, typically of circular cross-section, are pushed together (one into the other) and kept in place by the pressure between them.
– Shrink fit is an interference fit technique where typically the inner part is being
cooled down (shrinks), or the inner surface of the outer part is being heated up, to
make the fit possible. After assembly, the temperature-affected part will adjust its
size back to original size, and there will be an interference fitted connection.
– The mechanical radial expansion by companies like Bondura and Expander creates the interference fit slightly differently from both the "normal" press fit and shrink fit techniques, and one of its advantages is the possibility to easily control and adjust the fit level over time.
– A typical open-hole connection has a cylindrical pin in a joint with an operation tolerance equal to the installation tolerance. In this type of connection, the pin is "loose" within the bore and can move relative to the bore during operation, as in movable joints in cranes or heavy machinery.
– A pre-loaded bolt typically consists of a shank, a bolt head and a threaded part with a nut. The bolt is typically tightened by torqueing or hydraulic tensioning of the shank, to reach a pre-defined tension force in the bolt shank.
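To make the tightening step concrete, the widely used short-form torque–tension relation T = K·F·d can be sketched as follows. The nut factor K ≈ 0.2 (dry steel threads) and the M20 example values are illustrative assumptions, not values taken from the reviewed studies.

```python
def tightening_torque(preload_n: float, nominal_dia_m: float,
                      nut_factor: float = 0.2) -> float:
    """Short-form torque-tension relation T = K * F * d.

    K is an empirical 'nut factor' lumping thread and under-head
    friction; ~0.2 is a common assumption for dry steel threads.
    """
    return nut_factor * preload_n * nominal_dia_m

# Example: a hypothetical M20 bolt (d = 0.020 m) tightened to 100 kN preload.
torque = tightening_torque(100e3, 0.020)
print(f"Required torque: {torque:.0f} N*m")  # 400 N*m
```

Because K can easily vary by 20–30% with lubrication and surface condition, torque control alone gives only an approximate preload, which is one motivation for the stochastic treatment discussed in the next section.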

2 Modelling Approaches, Typical Test Methods and Models

A research problem is always a statement regarding a concern, or area of concern, for someone. This could be anything from a practical problem in industry to a theoretical issue never resolved before. Such problems can be resolved or highlighted from new points of view by a variety of different approaches. An investigation of a research problem will always include a model of the problem in one way or another; this article focuses on the different modelling approaches in general and the models the cited authors have used in the corresponding investigations.
An overview and discussion of the different models and tools applied for different research problems is given below. In modelling of mechanical joints, the main techniques can be categorized as either stochastic (or probabilistic) or deterministic
techniques. In addition, the widely used modelling approaches can be analytical, numerical, experimental or a combination of the above-mentioned.

2.1 Stochastic Models and Methods

A stochastic model is a tool for estimating probability distributions of potential outcomes by allowing for random variation in one or more inputs over time. The random variation is usually based on fluctuations observed in historical data for a selected period using standard time-series techniques.
Uncertain factors are built into the model, and the model produces many answers,
estimations, and outcomes—like adding variables to a complex mathematical
problem—to see their different effects on the solution. The same process is then
repeated many times under various scenarios [2, 3]. Parameters with a high impact of variation in a bolted mechanical joint are typically relaxation, scatter, geometric variables and friction. A stochastic model can be analytical when it applies probability-based mathematical equations. The classical analytical method to model bolted joints is based on the well-known VDI 2230 (The Association of German Engineers) standard [4].
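As an illustration of the stochastic approach described above, the sketch below propagates assumed scatter in applied torque and friction (nut factor) through the short-form torque–tension relation F = T/(K·d) by Monte Carlo sampling. All distributions and the M20 geometry are hypothetical, chosen only to show the technique.

```python
import random

def simulate_preload(n_samples=10_000, seed=42):
    """Monte Carlo sketch of preload scatter in a torqued bolt.

    Preload F = T / (K * d); applied torque T and nut factor K are
    treated as random inputs (hypothetical normal distributions).
    Returns (mean, standard deviation) of the sampled preloads in N.
    """
    rng = random.Random(seed)
    d = 0.020                                # nominal diameter, m (assumed M20)
    preloads = []
    for _ in range(n_samples):
        torque = rng.gauss(400.0, 20.0)      # N*m, wrench scatter (assumed)
        nut_factor = rng.gauss(0.20, 0.03)   # friction scatter (assumed)
        preloads.append(torque / (nut_factor * d))
    mean = sum(preloads) / n_samples
    var = sum((p - mean) ** 2 for p in preloads) / n_samples
    return mean, var ** 0.5

mean_f, std_f = simulate_preload()
# Typically around 100 kN mean with roughly 15% scatter for these inputs.
print(f"mean preload ~ {mean_f / 1e3:.0f} kN, std ~ {std_f / 1e3:.0f} kN")
```

Running many such samples under varied scenarios yields the preload distribution rather than a single worst-case value, which is exactly the contrast with the deterministic approach described next.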

2.2 Deterministic Method
Deterministic modelling gives exactly the same results for a particular set of inputs, no matter how many times the model is re-calculated. Here, the mathematical properties are known, none of them is random, and there is only one set of specific values and only one answer or solution to a problem. Contrary to stochastic models, the uncertain factors of deterministic models are external to the model.
The VDI 2230 standard, also called the Extreme Value Method, is often applied when a deterministic approach to a bolt problem in a mechanical joint is required; it is the most popular analytical method and addresses most mechanical bolt connection problems very well [4].
Analytical Models and Methods. Analytical methods use exact theorems to derive formulas that provide solutions to mathematical problems, with or without the use of numerical methods. Analytical approaches provide closed-form solutions; in other words, they provide exact and explicit solutions of mathematical models. In the analytical modelling technique, analytical dynamics treats the system as a whole, dealing with scalar quantities such as the kinetic and potential energies of the system. The technique uses mathematical models, formulas and techniques that can typically be:

• Newton's three laws of motion, which are the foundations of classical mechanics, with Newton's second law in particular describing the motion of macroscopic objects.
• Laplace transformation method, which can be used for calculating the response of a system to a variety of force excitations, including periodic and non-periodic responses. This method can treat discontinuous functions with no difficulty, and it automatically takes into account the initial conditions [5]. The Laplace transform is an integral transform named after its French inventor Pierre-Simon Laplace (1749–1827).
• Lagrange’s equations are differential equations in which one considers the energies
of the system and the work done instantaneously in time, and obtains the equation
of motion in generalized coordinates approaching the system from the analytical
dynamics point of view [5].
• Application of D’Alembert’s principle, which is an extension of the principle of
virtual work and stated as “the virtual work performed by the effective forces
through infinitesimal virtual displacements compatible with the system constraints
is zero”. The principle of virtual work is essentially a statement of the static or
dynamic equilibrium of a mechanical system [5].
• Hamilton's principle is the most important and powerful variational principle in dynamics, and it derives from the generalized D'Alembert's principle. The variational principle views the motion as a whole, from the beginning to the end.
• Fourier series and transforms are tools that break a waveform into an alternate representation, characterized by sines and cosines, and show that any waveform can be re-written as a sum of sinusoidal functions [5].
• Duhamel's integral, also called the convolution integral or sometimes the superposition integral, is used to find the total response x(t) to an arbitrary input [5].
• Lamé's equations are typically used for finding the radial and hoop stresses in a thick-walled cylinder.
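As a minimal worked example of the last item, the sketch below evaluates Lamé's solution for the hoop stress in a thick-walled cylinder loaded by internal pressure only, the classical model of a hub in an interference fit. The hub dimensions and contact pressure are illustrative assumptions.

```python
def lame_hoop_stress(p_i: float, a: float, b: float, r: float) -> float:
    """Hoop stress in a thick-walled cylinder under internal pressure p_i.

    Lame's solution for internal pressure only:
        sigma_theta(r) = p_i * a^2 / (b^2 - a^2) * (1 + b^2 / r^2)
    with inner radius a, outer radius b, and a <= r <= b.
    """
    return p_i * a**2 / (b**2 - a**2) * (1.0 + b**2 / r**2)

# Illustrative hub: inner radius 25 mm, outer radius 50 mm,
# interference-fit contact pressure 100 MPa (assumed values).
a, b, p = 0.025, 0.050, 100e6
inner = lame_hoop_stress(p, a, b, a)   # maximum hoop stress, at the bore
outer = lame_hoop_stress(p, a, b, b)
print(f"hoop stress: {inner / 1e6:.1f} MPa at bore, "
      f"{outer / 1e6:.1f} MPa at OD")  # 166.7 MPa at bore, 66.7 MPa at OD
```

The stress is highest at the bore and decays toward the outer surface, which is why fretting and fatigue cracks in interference-fitted hubs typically initiate near the contact interface.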
Analytical models and methods are widely applied in studies and investigations of problems related to mechanical joints, and are often combined with other methods, such as numerical methods and experimental tests or real case studies. An analytical model can be formulated as deterministic when the input values are worst-case values.
Analytical modelling techniques can be considered as the basis of the development
of computer simulation of mechanical joints such as in finite element methods (FEM).
Examples of analytical models and methods in previous investigations include:
a. Analysis of a shrink-fit failure on a gear hub/shaft assembly was conducted by an analytical approach in combination with a finite element method. In the analytical approach, the well-known Lamé's equations were applied, and the FEM was implemented using ABAQUS [6].
b. An analytical and experimental study has been completed to assess the effectiveness of a direct cold expansion technique on the fatigue strength of fastener holes, where ANSYS 3-D finite elements were used to carry out the analysis. The fatigue specimens were made from 6.32 mm thick aluminum alloy (grade 7075-T6). An oversized pin was pushed through the hole to create the necessary cold expansion [7, 8].

Numerical/FEA Models and Methods. Though analytical approaches based on mathematical formulations are widely used to model mechanical joints, there are many problems for which it is not possible to find an analytical solution. These are models that have nonlinear equations. For higher order or nonlinear differential equations with complex coefficients, it becomes very difficult to find an exact solution; therefore, numerical methods for solving the equations are needed.
Numerical approaches lead to approximate solutions, which are usually obtained with the help of a computer. A large volume of the use of numerical methods falls under finite element analysis (FEA), which is widely used to find approximate solutions to mathematical problems in any field of engineering; hence, any result from a numerical method will be an approximation, not an exact answer.
There are several FEA techniques in use depending on the problem to be solved.
Some typical techniques can be [4]:
– 2D and 3D Solid model
– Beam Element using multi-point constraint (MPC) method
– Axis-symmetric model
– Parametric, i.e. Design of Experiment (DOE) based modelling.
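As a toy illustration of the finite element approximation itself, the sketch below assembles and solves a one-dimensional bar model with linear two-node elements; for a uniform fixed-free bar under an axial end load, the nodal solution coincides with the exact u = PL/(EA). This minimal example is not drawn from any of the cited studies.

```python
def bar_tip_displacement(n_elems, length, area, e_mod, tip_load):
    """Tip displacement of a fixed-free uniform bar under an axial end
    load, using linear two-node elements (stiffness k = E*A/L_e each)."""
    le = length / n_elems
    k = e_mod * area / le
    n = n_elems + 1
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elems):                  # assemble global stiffness
        K[e][e] += k;         K[e][e + 1] -= k
        K[e + 1][e] -= k;     K[e + 1][e + 1] += k
    F = [0.0] * n
    F[-1] = tip_load
    # Fix node 0, then solve the reduced system by Gaussian elimination.
    A = [row[1:] for row in K[1:]]
    b = F[1:]
    m = len(b)
    for i in range(m):                        # forward elimination
        for j in range(i + 1, m):
            f = A[j][i] / A[i][i]
            for c in range(i, m):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    u = [0.0] * m
    for i in reversed(range(m)):              # back substitution
        u[i] = (b[i] - sum(A[i][c] * u[c] for c in range(i + 1, m))) / A[i][i]
    return u[-1]

# Assumed steel bar: L = 1 m, A = 1e-4 m^2, E = 200 GPa, end load P = 10 kN.
u_tip = bar_tip_displacement(4, 1.0, 1e-4, 200e9, 10e3)
print(f"tip displacement = {u_tip * 1e3:.3f} mm")  # exact P*L/(E*A) = 0.500 mm
```

Real joint models replace this scalar stiffness with 2D/3D element matrices and contact constraints, but the assemble-constrain-solve structure is the same.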
In addition, there exist other extensions of the FEM that are dedicated to particular applications. For instance, the stochastic FEM (SFEM) is an extension of the classic, deterministic finite element approach to solve static and dynamic problems with stochastic mechanical behaviour and loading conditions [9]. Furthermore, the extended FEM (XFEM) [10] is an additional extension of the classical FEM to model cracks and other discontinuities by enriching the degrees of freedom in the model with additional displacement functions that account for the jump in displacements across the discontinuity.
In modelling of contacts in mechanical joints, numerical methods like FEM are often combined with analytical methods for comparison of their results and/or verified through experimental tests. Due to the advances in the computational power of computers and in analysis algorithms, however, FEM-based models are today independently used to predict the behavior of mechanical systems at lower cost. As a result, a significant portion of research on complex mechanical systems has focused on the use of numerical simulation approaches. Table 1 summarizes some of the previous research, with focus on the type of numerical study and the objectives of the studies.

Table 1. Numerical models and methods.

– Numerical FE (ABAQUS): To simulate the evolution of the contact profile and predict fretting fatigue crack initiation by employing an energy-based wear model. In addition, a mean stress correction multiaxial fatigue damage model has been used, and the effects of the stress relief groove shape on fretting fatigue were investigated [11, 12].
– Numerical FE (FRANC2D®): A FE model was developed and applied to study the influence of elevated temperature on crack propagation lifetime, in combination with fretting fatigue experiments under cyclic contact loading [13].
– Numerical FE (ABAQUS): To verify simulations by comparing the results of several analyses. For instance, crack growth in a shrink-fit assembly subjected to rotating bending was studied using two FE methods, and the results were compared with each other and with results from the literature [14].
– Numerical 2D FE axisymmetric model (ANSYS): The stress and deformation in a shrink-fitted hub-shaft joint were analysed using the finite element method (two-dimensional axisymmetric model, ANSYS) and compared with the results from Lamé's equations; they were found to be in good agreement [15].
– Numerical FE (ADINA): An interference assembly and fretting wear analysis of a hollow shaft were studied using FE, in combination with an analytical approach. The modelling and simulation work was intended to compute the equivalent contact stresses due to interference assembly of the shaft and identify the fretting damage under different loading cases, levels of interference and friction [16].
– Numerical FE (ABAQUS): Predicting fretting fatigue is a big challenge both for experimental tests and numerical simulation. In an attempt reported by Sabelkin and Mall, however, a FE model was developed using a four-node plane strain quadrilateral element to compute the local relative slip on the contact surface from measured global relative slip away from the contact surface. The results of the FE simulation were compared with test results measured using extensometers [17].
– Continuum Damage Mechanics (CDM in ABAQUS): CDM-based numerical simulations are used to investigate the fatigue damage evolution at a fastener hole, which is treated by cold expansion or with an interference pin, in combination with fatigue experiments. The test specimen was made of Al alloy 7075-T6, and the cold expansion level ranged from 2% to 6% [18].
– Numerical 3D elastic FEM (ANSYS): In a similar problem area, the effect of interference fit on fretting fatigue in a single pinned plate in Al 7075-T6 was studied using FEM, and experimental tests with 1%, 1.5%, 2% and 4% degree of interference fit [19].
– Numerical 2D FE (ABAQUS): Analysis of the fretting conditions in pinned connections of AA7075-T6 aluminum alloy sheet and aluminum and steel pins is also reported, using a FE model to evaluate the local mechanical parameters that control fretting wear damage [20].
Experimental Models and Methods (Lab/Full-Scale). Experimental tests can give important information on how a system of interest acts, reacts and interacts. Experimental tests can range from physically small laboratory tests up to major full-scale tests, often under real environmental conditions.
Experimental models are often applied together with analytical or numerical models to analyze a problem related to mechanical joints. The experimental problem is often modelled by the use of pads and specimens rigged in a hydraulic test machine to re-create a friction or fretting fatigue problem, or specimens exposed to bending moments to investigate relief groove problems, or a universal servo-hydraulic fatigue machine used to expose multi- or single-rivet-row lap joints to a fatigue problem, among others. Several investigations on contact fatigue of mechanical joints using experimental models and methods, in combination with numerical methods, are reported in the literature. Some of the relevant previous investigations are summarized in Table 2.

Table 2. Experimental models and methods.

– Actual-size press-fitted axles: Analysis of fatigue design methods and fatigue testing, combined with ultrasonic testing and magnetic particle inspection [21].
– Small-scale specimen of 25CrMo4 with contact pads from a railway wheel hub: Fretting fatigue testing with varying load parameters, and FEA (ABAQUS) to characterize the resulting stress-strain field [22].
– Shrink-fit pin (soft normalized steel) subjected to rotating bending by electric motor: Fretting fatigue initiation was studied by experiments and numerical simulations with a 3D FE model (MARC) [23].
– Small-scale specimen with contact pads (all material SCM440H, low alloy): The effect of relief grooves was investigated by experimental work and FEM (ABAQUS), by systematically changing the groove shape [24].
– Small-scale press-fitted axle specimen (material LZ50) fitted into wheel specimen (material CL60), subjected to rotating bending: To study damage conditions and characterize the rotary bending fretting fatigue behavior of railway axles; results were compared with those of damage analysis on the real axle, by comparing scanning electron microscope images [25].
– Interference fitted pin (material AISI-D2) on holed plate (material Al alloy 7075-T6): Applied 3D FE simulations plus experimental tests of a 4.5 mm thick holed single plate, with varying interference fit levels (1%, 1.5%, 2% and 4%), to improve fatigue life in joints [26].
– Small-scale cold expansion and interference-fit test, with plate specimen Al 2024-T3 and pin material class 12.9 quality steel: 3D FE models (ANSYS 11.0) were employed to simulate the laboratory tests and compare the results, at cold expansion and interference-fit levels of 0%, 1.5% and 4.7% [27].
– Experimental test of double lap bolted joint, with plate material Al 2024-T3 and steel bolt M8-8.8 quality: The aim was to monitor total lifetime, and to perform FEA (ABAQUS) and Continuum Damage Mechanics to develop a predictor tool to estimate fretting fatigue crack initiation lifetime [28].
– Experimental laboratory testing on a double shear lap joint (3.2 mm thick Al 2024-T3 plate material): Investigation of the combined effect of cold expansion and clamping torque to gain more knowledge for efficient design of bolted joints; FE simulations (ANSYS) were conducted. Testing and simulations at 0%, 1.5% and 4.7% cold expansion levels [29].
3 Discussions and Outlooks

Fretting problems and other challenges are well known and remain continuous issues in many industries when it comes to mechanical joints, although numerous investigations have been conducted for many years. In general, newer investigations build on results from earlier investigations, combined with new and often better techniques, test equipment and improved FE tools and methods, in addition to improved materials and palliatives/surface protection techniques.
Numerical simulations with finite element models, in combination with laboratory tests and/or analytical approaches, seem to be well-known, much-applied and efficient methods for investigating contact problems in mechanical joints. ABAQUS and ANSYS seem to be the most applied codes in the papers reviewed. In certain cases, simulations are also compared to the results of damage analysis of real equipment.
The mechanical radial expansion, or expanding pin technology [30] mentioned earlier, e.g. by companies like Bondura Technology and Expander, is not mentioned in any of the investigations found in this literature study. The mechanical radial expansion creates the interference fit slightly differently from both the "normal" press fit and shrink fit techniques, and it is possible to control and adjust the fit level over time. Further research and investigations will be necessary to clarify how efficient an expanding pin solution can be compared to existing solutions, such as the traditional press fit and shrink fit.

4 Conclusion

Fretting problems and other challenges are sources of concern in most mechanical industries, but also in many other industries that experience vibrations and relative micro-movements between adjacent material surfaces. In addition to continuing the investigations, it is imperative to improve material qualities, palliative methods, test techniques and the way the parts of a joint are connected. New methods, such as the mechanical radial expansion technique, have been on the market for many years, but are possibly not well enough known among researchers. The mechanical radial expansion technique should be applied more in future investigations on how to reduce fretting fatigue and other related issues in interference fitted joints. Its advantages when it comes to installation, retrieval, elimination of the need for heating/cooling, and the possibility to quickly adjust the interference fit level make this solution worth investigating further.
The finite element analysis software tools are improving continuously, and in combination with increasing processing capacities, new developments in laboratory equipment, and correct comparison of numerical simulations with analytical results and laboratory specimen/real-life equipment damage, it should be possible to give continuous feedback to the FE analysis and achieve a continuous improvement of the results from the FE tools.
References
1. De Pauw, J.: Experimental research on the influence of palliatives in fretting fatigue. Ghent
University, Faculty of Engineering and Architecture, Ghent, Belgium (2016)
2. Jalali, H., Khodaparast, H., Madinei, H., Friswell, M.I.: Stochastic modelling and updating
of a joint contact interface. Mech. Syst. Signal Process. 129, 645–658 (2019)
3. Mignolet, M.P., Song, P., Wang, X.Q.: A stochastic Iwan-type model for joint behavior
variability modeling. J. Sound Vib. 349, 289–298 (2015)
4. Amir, Y., Govindarajan, S., Iyyanar, S.: Bolted joints modeling techniques, analytical,
stochastic and FEA comparison. In: ASME Proceedings of International Mechanical
Engineering Congress and Exposition, (IMECE), Houston, USA (2012)
5. Dukkipati, R.V.: Solving Vibration Analysis Problems Using MATLAB. New Age
International Ltd. (2007)
6. Truman, C.E., Booker, J.D.: Analysis of a shrink-fit failure on a gear hub/shaft assembly.
Eng. Fail. Anal. 14(4), 557–572 (2007)
7. Chakherlou, T.N., Vogwell, J.: The effect of cold expansion on improving the fatigue life of
fastener holes. Eng. Fail. Anal. 10(1), 13–24 (2003)
8. Chakherlou, T.N., Vogwell, J.: A novel method of cold expansion which creates near-uniform compressive tangential residual stress around a fastener hole. Fatigue Fract. Eng. Mater. Struct. 27(5), 343–351 (2004)
9. Stefanou, G.: The stochastic finite element method: past, present and future. Comput.
Methods Appl. Mech. Eng. 198(9–12), 1031–1051 (2009)
10. Nikfam, M.R., Zeinoddini, M., Aghebati, F., Arghaei, A.A.: Experimental and XFEM
modelling of high cycle fatigue crack growth in steel welded T-joints. Int. J. Mech. Sci. 153–
154, 178–193 (2019)
11. Zeng, D., Zhang, Y., Lu, L., Lang, Z., Zhu, S.: Fretting wear and fatigue in press-fitted
railway axle: a simulation study of the influence of stress relief groove. Int. J. Fatigue 118,
225–236 (2019)
12. Makino, T., Kato, T., Hirakawa, K.: Review of the fatigue damage tolerance of high-speed
railway axles in Japan. Eng. Fract. Mech. 78(5), 810–825 (2011)
13. Abbas, F., Majzoobi, G.H.: An investigation into the effect of elevated temperatures on
fretting fatigue response under cyclic normal contact loading. Theoret. Appl. Fract. Mech.
93, 144–154 (2018)
14. Gutkin, R., Alfredsson, B.: Growth of fretting fatigue cracks in a shrink-fitted joint subjected
to rotating bending. Eng. Fail. Anal. 15(5), 582–596 (2008)
15. Özel, A., Temiz, Ş., Aydin, M.D., Şen, S.: Stress analysis of shrink-fitted joints for various
fit forms via finite element method. Mater. Des. 26(4), 281–289 (2005)
16. Han, B., Zhang, J.: Interference assembly and fretting wear analysis of hollow shaft. Sci.
World J. 2014, 919518 (2014)
17. Sabelkin, V., Mall, S.: Investigation into relative slip during fretting fatigue under partial slip
contact condition. Fatigue Fract. Eng. Mater. Struct. 28(9), 809–824 (2005)
18. Sun, Y., Hu, W., Shen, F., Meng, Q., Xu, Y.: Numerical simulations of the fatigue damage
evolution at a fastener hole treated by cold expansion or with interference fit pin. Int.
J. Mech. Sci. 107, 188–200 (2016)
19. Mirzajanzadeh, M., Chakherlou, T.N., Vogwell, J.: The effect of interference-fit on fretting
fatigue crack initiation and DK of a single pinned plate in 7075 Al-alloy. Eng. Fract. Mech.
78(6), 1233–1246 (2011)
20. Iyer, K., Hahn, G.T., Bastias, P.C., Rubin, C.A.: Analysis of fretting conditions in pinned connections. Wear 181–183(Part 2), 524–530 (1995)
21. Hirakawa, K., Kubota, M.: On the fatigue design method for high-speed railway axles. In:
Proceedings of the Institution of Mechanical Engineers, Part F (2001)
22. Luke, M., Burdack, M., Moroz, S., Varfolomeev, I.: Experimental and numerical study on
crack initiation under fretting fatigue loading. Int. J. Fatigue 86, 24–33 (2016)
23. Alfredsson, B.: Fretting fatigue of a shrink-fit pin subjected to rotating bending: experiments
and simulations. Int. J. Fatigue 31(10), 1559–1570 (2009)
24. Kubota, M., Kataoka, S., Kondo, Y.: Effect of stress relief groove on fretting fatigue strength
and index for the selection of optimal groove shape. Int. J. Fatigue 31(3), 439–446 (2009)
25. Song, C., Shen, M.X., Lin, X.F., Liu, D.W., Zhu, M.H.: An investigation on rotatory
bending fretting fatigue damage of railway axles. Fatigue Fract. Eng. Mater. Struct. 37(1),
72–84 (2014)
26. Chakherlou, T.N., Mirzajanzadeh, M., Abazadeh, B., Saeedi, K.: An investigation about
interference fit effect on improving fatigue life of a holed single plate in joints. Eur. J. Mech.
A/Solids 29(4), 675–682 (2010)
27. Chakherlou, T.N., Taghizadeh, H., Aghdam, A.B.: Experimental and numerical comparison
of cold expansion and interference fit methods in improving fatigue life of holed plate in
double shear lap joints. Aerosp. Sci. Technol. 29(1), 351–362 (2013)
28. Ferjaoui, A., Yue, T., Wahab, M.A., Hojjati-Talemi, R.: Prediction of fretting fatigue crack
initiation in double lap bolted joint using continuum damage mechanics. Int. J. Fatigue 73,
66–76 (2015)
29. Chakherlou, T.N., Shakouri, M., Akbari, A., Aghdam, A.B.: Effect of cold expansion and
bolt clamping on fretting fatigue behavior of Al 2024-T3 in double shear lap joints. Eng.
Fail. Anal. 25, 29–41 (2012)
30. Bondura Technology AS homepage. www.bondura.no. Accessed 14 Sept 2019
Research on Magnetic Nanoparticle Transport and Capture in Impermeable Microvessel

Jiejie Cao1 and Jian Wu1,2

1 School of Mechanical Engineering, Changshu Institute of Technology, Changshu, China
{caojj,wujian}@cslg.edu.cn
2 Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, Trondheim, Norway

Abstract. A mathematical model for magnetic targeting is developed, composed of Navier-Stokes equations, magnetic field equations and Fokker-Planck equations. The process of therapeutic magnetic nanoparticle transport in an impermeable microvessel is studied. The magnetic field, magnetic force and magnetic nanoparticle distribution are simulated in Matlab to reveal their variation under certain conditions and their effects on magnetic nanoparticle transport and capture. Results show that the magnetic force is larger and the capture efficiency is higher when the distance between the magnet and the blood vessel is smaller.

Keywords: Magnetic drug target · Modified magnetic field · Fokker-Planck equations · Capture efficiency · Cylindrical magnet

1 Introduction

Traditional drug delivery is achieved by intravenous injection. The disadvantage of this treatment is that the drug is used in large quantities and acts on healthy tissues and cells at the same time. If the anticancer drug can be targeted to the malignant tissue through transport particles, allowing it to stay in the lesion area and release the drug, the therapeutic efficiency can be improved while greatly reducing the side effects of the drug. This is the magnetic drug targeted therapy technology that has received wide attention from researchers at home and abroad in recent years [1–4]. However, in existing magnetic drug targeted therapy models, many important influencing factors in the particle transport process are neglected in order to simplify the solution process [5]. In addition, most models only consider the attraction of the magnetic field generated by the external magnet on the transport particles, and do not take into account the magnetic field effect caused by the interaction between the magnetic particles [6].
This paper mainly studies the transport mechanism and capture efficiency of magnetic drug nanoparticles in non-permeable wall microvessels. Matlab software was used to simulate the fluid velocity, magnetic field and magnetic force distribution, magnetic drug nanoparticle distribution and magnetic particle capture rate. The model considers the size and magnetization effect of the magnetic particles, the external magnetic field, the size of the magnets and microvessels, and the velocity of the fluid,
especially considering the correcting effect of the magnetized magnetic particles on the magnetic field, i.e. the additional magnetic field effect.

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 126–133, 2020.
https://doi.org/10.1007/978-981-15-2341-0_16

The analysis results show that as
the distance from the magnet to the blood vessel is getting closer, the magnetic field,
magnetic force, and capture rate will become larger, and the magnetic particles can be
trapped near the vessel wall to be captured. Therefore, in the microvessels a few
centimeters from the magnetic field, it is feasible to use non-invasive magnetic drug
nanoparticles as transport particles to target the diseased tissue.

2 Theoretical Model

2.1 Fluid Control Equation


Figure 1 shows a schematic of the model in which a magnetized rare earth cylindrical
magnet is placed on the surface of the body near the malignant tissue to create a non-
uniform magnetic field that attracts the transported particles in the vicinity of the
malignant tissue. It is assumed that the magnet is infinitely long and its axis is
orthogonal to the microvessel axis. The transport particles, that is, magnetic drug
nanoparticles, consist of a magnetic nanoparticle core (biodegradable Fe3O4) coated
with a layer of drug molecules.

Fig. 1. The schema of magnetic drug target model

When the magnetic drug nanoparticles move in the blood, the viscous resistance of
the blood fluid can be obtained according to the Stokes formula. At the same time, the
reaction force of the magnetic particles on the fluid is equal and opposite [7]:

\[ f = 6\pi \eta_f R_p (v - v_f) \tag{1} \]

Where $\eta_f$ represents the fluid viscosity; $R_p$ represents the magnetic particle radius; $v$
represents the magnetic particle velocity; and $v_f$ represents the fluid velocity.
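A quick numerical illustration of Eq. (1) (a sketch only; the viscosity and slip-velocity values below are illustrative assumptions, not values from the paper):

```python
import math

def stokes_drag(eta_f, R_p, v, v_f):
    """Viscous drag on a spherical particle, Eq. (1): f = 6*pi*eta_f*R_p*(v - v_f)."""
    return 6.0 * math.pi * eta_f * R_p * (v - v_f)

# Illustrative values (assumptions): blood viscosity ~3.5e-3 Pa*s,
# particle radius 100 nm (Sect. 3), particle lagging the fluid by 1 mm/s.
f = stokes_drag(eta_f=3.5e-3, R_p=100e-9, v=0.0, v_f=1e-3)
print(f"drag force = {f:.3e} N")  # magnitude ~6.6e-12 N, i.e. a few piconewtons
```

The piconewton scale of this force is what allows an external magnetic force of comparable magnitude (Sect. 3.1) to deflect the particles toward the vessel wall.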
It is assumed that the distribution of magnetic particles in the microvessels is
isotropic, and the distribution function $f(y, x, v, t)$ is used to describe the distribution
state of the magnetic particles. When the magnetic particle distribution is in equilibrium,
the influence of the time parameter $t$ on the distribution function can be
neglected, and $f(y, x, v, t)$ is denoted as $f(y, x, v)$.
128 J. Cao and J. Wu

Therefore, the model fluid control equation is:


\[ \eta_f \frac{\partial^2 v_f}{\partial y^2} = \frac{\partial p}{\partial x} + 24\pi^2 \eta_f R_p \int_0^{\infty} (v - v_f)\, v^2 f(y, x, v)\, dv \tag{2} \]

The first term in the formula represents the viscous force; the second term repre-
sents the pressure gradient; and the third term represents the viscous resistance of the
magnetic particles to the fluid.

2.2 Magnetic Field and Magnetic Force Equation


2.2.1 External Magnetic Field Acts on the Magnetic Force
of the Magnetic Particle
The magnetic particles are regarded as spherical particles, assuming that the magnetic
particle radius is $R_p$ and the volume $V_p = (4\pi/3)R_p^3$. The magnetic force of the external
magnetic field acting on the magnetic particles is:

\[ F_a = \mu_0 V_p f(H_{total}) (H_a \cdot \nabla) H_a \tag{3} \]


Where
\[ f(H_{total}) = \begin{cases} \dfrac{3(\chi_p - \chi_f)}{(\chi_p - \chi_f) + 3}, & H_{total} < \dfrac{(\chi_p - \chi_f) + 3}{3\chi_p} M_{sp} \\[1.5ex] M_{sp}/H_{total}, & H_{total} \ge \dfrac{(\chi_p - \chi_f) + 3}{3\chi_p} M_{sp} \end{cases} \]
(when $\chi_p \gg 1$, this reduces to $f(H_{total}) = 3$ for $H_{total} < M_{sp}/3$ and $f(H_{total}) = M_{sp}/H_{total}$ for $H_{total} \ge M_{sp}/3$); $H_{total} = H_a + \sum \delta H_f$; $H_a$ represents the strength of the
external magnetic field; $\delta H_f$ represents the additional magnetic field strength; $\mu_0 =
4\pi \times 10^{-7}\ \mathrm{T\cdot m/A}$ represents the vacuum magnetic permeability; $\chi_p$ and $\chi_f$ represent
the magnetic susceptibility of the magnetic particles and the fluid, respectively [8]; and
$M_{sp}$ represents the saturation magnetization of the magnetic particles.
Assuming that the magnet is infinitely long in the z direction (axial direction), there
is no magnetic component in the z direction. Then the components of the external
magnetic force in the y and x directions are:

\[ F_{ay} = \mu_0 V_p f(H_a) M_s^2 R_{mag}^4 \frac{y + d}{2\left[(y + d)^2 + x^2\right]^3} \tag{4} \]

\[ F_{ax} = \mu_0 V_p f(H_a) M_s^2 R_{mag}^4 \frac{x}{2\left[(y + d)^2 + x^2\right]^3} \tag{5} \]
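Equations (4)-(5) are straightforward to evaluate numerically. The sketch below uses the typical parameters of Sect. 3; the magnetization factor $f(H_a)$ is held constant at 3 (its unsaturated, $\chi_p \gg 1$ limit) and the overall sign convention is an assumption, so only the spatial shape of the force profile should be read from it:

```python
import math

mu0 = 4 * math.pi * 1e-7          # vacuum permeability, T*m/A
R_p = 100e-9                       # particle radius, m (Sect. 3)
V_p = (4 * math.pi / 3) * R_p**3   # particle volume, m^3
R_mag = 0.02                       # magnet radius, m
M_s = 1e6                          # magnet magnetization, A/m
d = 0.035                          # magnet-to-vessel distance, m

def external_force(x, y, f_H=3.0):
    """Force components of Eqs. (4)-(5); f_H = f(H_a) is held constant
    at its chi_p >> 1 unsaturated limit (an assumption for this sketch)."""
    common = mu0 * V_p * f_H * M_s**2 * R_mag**4
    denom = 2.0 * ((y + d)**2 + x**2)**3
    return common * x / denom, common * (y + d) / denom  # (F_ax, F_ay)

# On the vessel axis (y = 0): the radial component peaks under the magnet
# center (x = 0), while the axial component vanishes there by symmetry.
F_ax, F_ay = external_force(0.0, 0.0)
print(f"F_ax = {F_ax:.3e} N, F_ay = {F_ay:.3e} N")
```

Sweeping x reproduces the qualitative shape discussed in Sect. 3.1: a radial-force extreme at x/R_mag = 0 and axial-force extremes near the magnet edges.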

2.2.2 Additional Magnetic Field Acts on the Magnetic Force


of the Magnetic Particles
The model considers the additional magnetic fields and additional magnetic forces
generated by the interaction between the magnetic particles. Consider magnetic par-
ticles as equal-point dipoles and assume that there are N point dipoles in the model [9].

Then the additional magnetic field strength produced at position $\vec{r}$ by the magnetic
particle at position $\vec{r}_n$ is [6]:

\[ \delta \vec{H}_f(\vec{r}) = \frac{\chi a^3}{\chi + 3} \cdot \frac{1}{\left|\vec{r} - \vec{r}_n\right|^3} \left( \frac{3\left[\vec{H}_a(\vec{r}_n) \cdot (\vec{r} - \vec{r}_n)\right](\vec{r} - \vec{r}_n)}{\left|\vec{r} - \vec{r}_n\right|^2} - \vec{H}_a(\vec{r}_n) \right) \tag{6} \]

Since every two magnetic particles interact, the magnetic force at the
position $(y, x)$ is actually the sum of the additional magnetic forces generated by the
magnetic particles at the other $N - 1$ positions. The specific expressions are:

\[ F_{fy}(y, x) = \int F_{fy}(y, x; y_i, x_i)\, f(y_i, x_i, v)\, v^2\, dy_i\, dx_i\, dv \tag{7} \]

\[ F_{fx}(y, x) = \int F_{fx}(y, x; y_i, x_i)\, f(y_i, x_i, v)\, v^2\, dy_i\, dx_i\, dv \tag{8} \]

2.3 Magnetic Particle Distribution Control Equation – Fokker-Planck Equation

The transport of magnetic particles in blood vessels can be described by the Fokker-Planck
equation [10]. The equation considers the effects of the external magnetic force and the
additional magnetic force on the magnetic particles, ignoring the Brownian motion and
the collisions between the magnetic particles. The specific expression is:

\[ \frac{\partial f}{\partial t} + v\frac{\partial f}{\partial r} + v\frac{\partial f}{\partial x} + \frac{F_a}{m}\frac{\partial f}{\partial v} + \frac{F_f}{m}\frac{\partial f}{\partial v} = 0 \tag{9} \]

Where $F_a = \sqrt{F_{ay}^2 + F_{ax}^2}$; $F_f = \sqrt{F_{fy}^2 + F_{fx}^2}$.
Microvascular inlet and outlet boundary conditions:
\[ 4\pi \int_0^{\infty} f(y, x, v, t)\Big|_{x=0,\, x=l}\, v^2\, dv = n_0 \]

Where $n_0$ indicates the average density of magnetic particles.


Microvascular wall boundary conditions:

\[ \left. \frac{\partial f(y, x, v, t)}{\partial y} \right|_{y=R} = 0 \]

2.4 Magnetic Particle Capture Rate


In non-permeable microvasculature, the radial position at which the magnetic drug
nanoparticles are captured is defined between the cells in the vicinity of the vessel wall,

and the axial position is defined within the length of the applied magnet. The capture
rate expression is as follows:
\[ CE = \frac{\displaystyle \int_{-x_{ce}}^{x_{ce}} \int_{-R}^{y_{ce}} \int_0^{\infty} f(y, x, v)\, v^2\, dy\, dx\, dv}{\displaystyle \int_{-l}^{l} \int_{-R}^{R} \int_0^{\infty} f(y, x, v)\, v^2\, dy\, dx\, dv} \tag{10} \]

Where $y_{ce}$ and $x_{ce}$ indicate the radial and axial maximum values at which the
magnetic particles are captured, respectively.

3 Numerical Calculation Results and Analysis

Select typical conditions: vessel radius $R_v = 75\ \mu\mathrm{m}$, magnet radius $R_{mag} = 2.0\ \mathrm{cm}$,
magnetic particle radius $R_p = 100\ \mathrm{nm}$, distance from magnet to vessel $d = 3.5\ \mathrm{cm}$,
vessel length $l = 16\ \mathrm{cm}$ ($-4R_{mag} \le x \le 4R_{mag}$), magnet cross-section magnetization
$M_s = 1 \times 10^6\ \mathrm{A/m}$ ($B = 1.256\ \mathrm{T}$), saturation magnetization $M_{sp} = 4.78 \times 10^5\ \mathrm{A/m}$.

3.1 Magnetic Field Distribution


According to the above parameters, the magnetic field strength and magnetic field force
caused by the applied magnetic field are calculated by Matlab simulation. The results
are shown in Figs. 2 and 3:

Fig. 2. Magnetic field components along the axis of a microvessel

Fig. 3. Magnetic force components on a magnetic nanoparticle

As can be seen from Figs. 2 and 3, the extreme value of the radial component $B_{ay}$ of
the magnetic induction occurs at the center position ($x/R_{mag} = 0$) of the cylindrical
magnet, and the extreme values of the axial component $B_{ax}$ appear at the edge positions of
the magnet ($x/R_{mag} = \pm 1$). The axial component $F_{ax}$ of the external magnetic force is
similar to the axial component $B_{ax}$ of the magnetic induction, but the extreme values are
opposite in sign, which indicates that the external magnetic force is related not only to the
magnitude of the magnetic field but also to the magnetic field gradient. The radial
component $F_{ay}$ of the external magnetic force is the dominant force for capturing the
magnetic particles, and its extreme position appears at the center of the magnet.

Through the control variable method, the change of the external magnetic field
force when the distance between the magnet and the blood vessel is 20 mm, 30 mm
and 40 mm respectively is discussed. The result is shown in Fig. 4:

Fig. 4. External magnetic force when d = 20 mm, 30 mm, 40 mm. (a) $F_{ax}$; (b) $F_{ay}$

It can be seen from Fig. 4 that the closer the magnet is to the blood vessel, the
larger the external magnetic force, the greater the attraction to the magnetic particles,
and the better the capturing effect. Therefore, in order to obtain greater magnetic force
and better capture effect, the closer the magnet is to the blood vessel, the better.

3.2 Magnetic Particle Distribution


The spatial distribution of the magnetic particles is obtained by solving the Fokker-
Planck equation, as shown in Fig. 5(a) and (b). It can be seen from the trend of the
pattern change that the magnetic particle has the highest probability of appearing near
the vessel wall by the interaction of the external magnetic field and the additional
magnetic field [11]. Therefore, a large amount of magnetic particles will stay in the
vicinity of the tube wall and be trapped.

Fig. 5. Magnetic nanoparticles distribution along the microvessel. (a) axial direction; (b) radial direction

3.3 Magnetic Particle Capture Rate


The capture ratio of the magnetic particles when $y_{ce} = -\frac{4}{5}R_v$ and $-\frac{R_{mag}}{2} \le x_{ce} \le \frac{R_{mag}}{2}$ is
obtained from formula (10). The result is shown in Fig. 6. It can be seen from the
trend of the curve that the closer the magnet is to the blood vessel when other con-
ditions are constant, the higher the efficiency of capturing the magnetic particles.

Fig. 6. The influence of distance d between magnet and blood vessel on capture efficiency (curves: reference vs. model)

4 Conclusion

In this paper, a mathematical model for studying the transport mechanism and capture
efficiency of magnetic drug nanoparticles in non-permeable wall microvessels was
established. Through theoretical analysis and numerical simulation, the conclusions are
as follows:
(1) When other conditions are constant, the smaller the distance from the magnet to
the blood vessel, the greater the magnetic field and magnetic field force, the
stronger the attraction to the magnetic particles, and the higher the capture effi-
ciency of the magnetic particles.
(2) Most of the magnetic particles will be concentrated near the vessel wall after
being captured. Since this model discusses non-permeable wall microvessels,
particles arriving at the vessel wall will be bounced back into the blood, so it is
more reasonable to define the radial position to be captured within the interval
near the vessel wall when calculating the magnetic particle capture rate.
(3) When other conditions are constant, the capture efficiency of the magnetic par-
ticles increases as the distance from the magnet to the blood vessel decreases. The
closer the position of the magnet is to the diseased tissue, the higher the capture
rate of the magnetic particles, and the more drug molecules are released, the better
the therapeutic effect.

References
1. Mondal, A., Shit, G.C.: Transport of magneto-nanoparticles during electro-osmotic flow in a
micro-tube in the presence of magnetic field for drug delivery application. J. Magn. Magn.
Mater. 442, 319–328 (2017)
2. Jafarpur, K., Emdad, H., Roohi, R.: A comprehensive study and optimization of magnetic
nanoparticle drug delivery to cancerous tissues via external magnetic field. J. Test. Eval.
47(2), 681–703 (2019)
3. ChiBin, Z., XiaoHui, L., ZhaoMin, W., et al.: Implant-assisted magnetic drug targeting in
permeable microvessels: comparison of two-fluid statistical transport model with experi-
ment. J. Magn. Magn. Mater. 426, 510–517 (2017)
4. Shaw, S.: Mathematical model on magnetic drug targeting in microvessel. Magn. Magn.
Materials 83 (2018)
5. Wang, S., Zhou, Y., Tan, J., et al.: Computational modeling of magnetic nanoparticle
targeting to stent surface under high gradient field. Comput. Mech. 53(3), 403–412 (2014)
6. Mikkelsen, C., Fougt, H.M., Bruus, H.: Theoretical comparison of magnetic and
hydrodynamic interactions between magnetically tagged particles in microfluidic systems.
J. Magn. Magn. Mater. 293(1), 578–583 (2005)
7. Morrison, F.A.: An Introduction to Fluid Mechanics. Cambridge University Press,
Cambridge (2013)
8. Hehl, F.W., Obukhov, Y.N.: Foundations of Classical Electrodynamics: Charge, Flux, and
Metric. Springer, Heidelberg (2012)
9. Jones, T.B.: Electromechanics of Particles. Cambridge University Press, Cambridge (2005)
10. Kazem, S., Rad, J.A., Parand, K.: Radial basis functions methods for solving Fokker-Planck
equation. Eng. Anal. Bound. Elem. 36(2), 181–189 (2012)
11. Cao, Q., Han, X., Li, L.: Numerical analysis of magnetic nanoparticle transport in
microfluidic systems under the influence of permanent magnets. J. Phys. D Appl. Phys.
45(46), 465001 (2012)
Analysis of Drag of Bristle Based on 2-D
Staggered Tube Bank

Xiaolei Song1, Meihong Liu1(&), Xiangping Hu2(&), Yuchi Kang1,


and Baodi Zhang1
1
Faculty of Mechanical and Electrical Engineering,
Kunming University of Science and Technology,
Kunming, Yunnan 650500, China
[email protected]
2
Industrial Ecology Programme, Department of Energy and Process
Engineering, Norwegian University of Science and Technology,
Trondheim, Norway
[email protected]

Abstract. In this paper, a 2-D staggered tube bank of bristle pack is established
to examine the effect of flow on bristle pack, and Gambit is used to generate the
mesh. The resistance of the bristle in the transient condition is analyzed using
simulation with Fluent. The resistance of the brush seal includes friction drag,
pressure drag, and interference drag. Results show that the pressure drag plays
the leading role owing to the shape of the bristle, and pressure drag increases
slowly in the axial direction but increases significantly in the end row. The drag
grows gradually with increasing pressure. The results also show that when the
bristle gap decreases, the drag in the front rows increases only slightly, while the
drag in the end row decreases significantly.

Keywords: Brush seal · Staggered tube bank · Fluent · Pressure drag

1 Introduction

Brush seal is a contact seal, and it has been widely applied to aero-engine and gas
turbine. Due to the excellent sealing performance, the brush seal can reduce the leakage
effectively. Compared with a labyrinth seal, the leakage rate of a brush seal is only 5%–
10% of that of the labyrinth seal, and the leakage mainly occurs in the small gaps
between bristles. The inhomogeneity of the bristle pack leads to a self-sealing effect.
Research on the brush seal develops in two directions: experimental measurement
and numerical simulation. Numerical simulation has become an important approach for
studying performance of the brush seal. The staggered arrangement of tube bank model
can analyze the bristles’ pressure drop and flow media. Therefore, this model is an
important model for analyzing the performance of the brush seal.
Air passing through the bristle pack can be treated as flowing around a cylinder.
When air passes through the bristle pack, drag will be produced. Depending on how the
drag generates, the drags of the brush seal can be divided into friction drag, pressure

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 134–141, 2020.
https://doi.org/10.1007/978-981-15-2341-0_17

drag, and interference drag. Because the bristles are blunt body, the pressure drag is
much greater than the friction drag. Therefore, the main analysis is focused on the
pressure drag.
Dai et al. [1] analyze the pressure distribution and the velocity of the flow field in
the compact cross-sectional tube bank. Huang et al. [2] create a 3-D bristles model to
study both the flow and temperature fields. Kang et al. [3] analyze the entry number of
brush seals built on the 2-D staggered tube bank model. Liu et al. [4] analyze the flow
resistance of the brush seal based on a 2-D staggered arrangement of elliptical tube
bank model. Fuchs et al. [5] analyze both the 2-D and the 3-D brush seal models without
the front and back plates, and they found that when the bristle gap δ = 0.008 mm
(that is, half the bristle diameter in that study), the leakage rate of the 2-D model was
the most consistent with the experimental data. In this paper the 2-D staggered
arrangement of tube bank is used to analyze the pressure drop and the effect of the
bristle gap on the bristle drag.

2 Model

2.1 Simulation Model


In this paper, a 2-D staggered model of a tube bank is established by intercepting the
bristle pack at the gap of the back plate. The diameter of the bristles is 0.076 mm. The
model has 15 and 6 rows of bristles in the axial and radial directions, respectively. To
avoid back flow at the inlet and outlet, the length of the upstream region is chosen to be
15 times the bristle diameter, and the length of the downstream region is 20 times the bristle diameter,
as shown in Fig. 1. The values of the geometry parameters are given in Table 1, and
the design formulas are:

\[ S_D = S_T = d + \delta \tag{1} \]

\[ S_L = \frac{\sqrt{3}}{2}(d + \delta) \tag{2} \]

where $S_T$ and $S_L$ are the radial spacing and the axial spacing, respectively; $S_D$ is the
slant spacing; $d$ is the diameter of the bristle; and $\delta$ is the gap between bristles.

Fig. 1. The sketch of 2-D staggered arrangement of tube bank model

Table 1. Geometry parameters of the simulation model (unit: mm)

S_D/d    1.05     1.10     1.15
δ        0.0038   0.0076   0.0114
S_T      0.0798   0.0836   0.0874
S_L      0.0691   0.0724   0.0757
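As a quick consistency check (sketched in Python, since the paper provides no code), Eqs. (1)-(2) reproduce the spacings listed in Table 1:

```python
import math

d = 0.076  # bristle diameter, mm

def spacings(delta):
    """Tube-bank spacings per Eqs. (1)-(2): S_D = S_T = d + delta,
    S_L = (sqrt(3)/2) * (d + delta)."""
    s_t = d + delta
    s_l = math.sqrt(3) / 2.0 * (d + delta)
    return s_t, s_l

for delta in (0.0038, 0.0076, 0.0114):
    s_t, s_l = spacings(delta)
    print(f"delta = {delta:.4f} mm: S_T = {s_t:.4f}, "
          f"S_L = {s_l:.4f}, S_D/d = {s_t / d:.2f}")
# delta = 0.0038 mm: S_T = 0.0798, S_L = 0.0691, S_D/d = 1.05  (matches Table 1)
```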

2.2 Condition Assumptions


According to the structure and operating condition of the brush seal, the following
assumptions are implicitly used for the model:
1. the bristles are rigid bodies;
2. the medium is an ideal-gas;
3. the effects of the front plate, the back plate, and the cant angle of the bristles are
negligible;
4. the bristles are arranged in a hexagonal arrangement;
5. the surface of the bristles is smooth.

3 Mesh Generation

Since the size of the brush seal is at the micron level, the mesh has to be very dense. In
this paper the mesh is generated using Gambit. The zone of the bristle pack is the
important domain in the simulation, so the mesh of this zone requires very high quality.
According to Tan's grid-independence test, the number of nodes on the cylinder surface is 220
[6]. Four boundary-layer rows are used. The mesh around the bristles is unstructured, and it
must be ensured that more than 2 cells span each gap. However, since the
upstream and downstream zones are not the essential computational domain, a sparser
structured grid can be used there. The final grid structure used here is presented in Fig. 2.

Fig. 2. The structure of mesh

4 Computing Method

4.1 Governing Equations


This simulation model uses the equation of continuity, Navier-Stokes equations, and
energy equation as follows,

\[ \frac{\partial \rho}{\partial t} + \frac{\partial (\rho u)}{\partial x} + \frac{\partial (\rho v)}{\partial y} = 0 \tag{3} \]

\[ \frac{\partial (\rho u)}{\partial t} + \frac{\partial (\rho u u)}{\partial x} + \frac{\partial (\rho u v)}{\partial y} = -\frac{\partial p}{\partial x} + \frac{\partial}{\partial x}\left[ \mu \left( 2\frac{\partial u}{\partial x} - \frac{2}{3}\left( \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right) \right) \right] + \frac{\partial}{\partial y}\left[ \mu \left( \frac{\partial v}{\partial x} + \frac{\partial u}{\partial y} \right) \right] \tag{4} \]

\[ \frac{\partial (\rho v)}{\partial t} + \frac{\partial (\rho v u)}{\partial x} + \frac{\partial (\rho v v)}{\partial y} = -\frac{\partial p}{\partial y} + \frac{\partial}{\partial x}\left[ \mu \left( \frac{\partial v}{\partial x} + \frac{\partial u}{\partial y} \right) \right] + \frac{\partial}{\partial y}\left[ \mu \left( 2\frac{\partial v}{\partial y} - \frac{2}{3}\left( \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right) \right) \right] \tag{5} \]

\[ \frac{\partial (\rho T)}{\partial t} + \frac{\partial (\rho u T)}{\partial x} + \frac{\partial (\rho v T)}{\partial y} = \frac{\partial}{\partial x}\left( \frac{k}{C_P}\, \mathrm{grad}\, T \right) + \frac{\partial}{\partial y}\left( \frac{k}{C_P}\, \mathrm{grad}\, T \right) \tag{6} \]

where $p$ is the gas pressure; $\rho$ is the gas density; $u$ and $v$ are the axial velocity and the
radial velocity, respectively; $\mu$ is the viscosity of the fluid; $C_P$ is the specific heat; $k$ is the
heat transfer coefficient of the fluid; $T$ is the temperature; and grad stands for the
gradient.
The gas medium is an ideal gas and satisfies the ideal gas state equation,

\[ p = R\rho T \tag{7} \]

where $R$ is the gas constant.



4.2 Determination of Reynolds Number


Whether a flow is laminar or turbulent can be determined by the
Reynolds number. When $S_D/d = 1.10$ and $\Delta p = 0.5\ \mathrm{MPa}$, the velocity in the gap of the
last row is about 400 m/s, the air density is $\rho = 1.177\ \mathrm{kg\cdot m^{-3}}$, and the viscosity is
$\mu = 1.716 \times 10^{-5}\ \mathrm{kg\cdot m^{-1}\cdot s^{-1}}$. With these settings, we can obtain the Reynolds
number,

\[ Re = \frac{\rho u d}{\mu} = 2085 \tag{8} \]

where $Re$ is the Reynolds number. The flow through the bristle pack is treated as unsteady
flow around a circular cylinder. This state belongs to the subcritical regime, where $300 < Re < 3 \times 10^5$.
The boundary layer is laminar, but flow separation can occur
around the cylinder and a Kármán vortex street can form; these phenomena
influence the downstream flow [7]. Therefore, the standard k-ε model is selected in this
paper.
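The estimate in Eq. (8) can be reproduced directly (a sketch using the values stated above):

```python
rho = 1.177      # air density, kg/m^3
u = 400.0        # gap velocity at the last row, m/s
d = 0.076e-3     # bristle diameter, m
mu = 1.716e-5    # dynamic viscosity, kg/(m*s)

Re = rho * u * d / mu          # Eq. (8)
print(f"Re = {Re:.0f}")        # Re = 2085
assert 300 < Re < 3e5          # subcritical regime, per the text
```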

4.3 Solver
In this paper, the standard software Fluent is used for conducting the simulation, and
the two-dimensional pressure-based solver is chosen. As discussed above, the working
medium is an ideal gas. Furthermore, the standard k-ε model is selected, with viscosity
following the Sutherland formula. Since the Kármán vortex street influences the flow
condition of the downstream, the analysis uses the transient state.
Fluent is based on the finite volume method. The staggered mesh calculates and
stores the pressure and velocity components on the nodes in different mesh systems.
For the 2-D simulation model, the p, u, and v are stored in three different mesh systems.
The calculation first uses the SIMPLE scheme to obtain a steady solution, which
serves as the initial value for the transient simulation. Since PISO is superior to both
SIMPLE and SIMPLEC in the transient state, it is adopted for the transient simulation,
and the time step is $10^{-4}$ s. The reference area is the frontal area, based on
the diameter of the bristle.
The boundary layer of the upstream is the pressure inlet, and the values can be
0.201325 MPa, 0.301325 MPa, 0.401325 MPa, 0.501325 MPa or 0.601325 MPa. The
boundary layer of the downstream is the pressure outlet and the value is
0.101325 MPa. The top and bottom margins are the symmetric boundary conditions.
The surface of the bristle is the non-slip wall.

5 Theoretical Analysis of Drag

When there is a relative velocity between the body and fluid, the force from the fluid
will be acted on the body. The drag is the component of the force that is parallel to the
direction of the relative speed.

Pressure drag depends on the pressure difference between the front and back of the
body. Usually, unless the value of $Re$ is very low, the separation phenomenon is
inevitable in the flow around a bluff body. Therefore, the pressure drag plays the leading role
in the total drag. The pressure drag can be calculated by the following equation,
\[ F_P = \int_A p \cos\theta\, dA \tag{9} \]

where $p$ is the pressure acting on the area element $dA$; $F_P$ is the pressure drag; and $\theta$ is
the angle between the normal of $dA$ and the flow direction, as shown in Fig. 3.

Fig. 3. The pressure drag acting on the body
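Equation (9) can be discretized with a midpoint rule over the cylinder surface. The sketch below uses a purely illustrative cosine surface-pressure distribution (an assumption, not Fluent output) to show that only the $\cos\theta$-weighted, front-to-back asymmetric part of the pressure contributes to the drag:

```python
import math

def pressure_drag(p_of_theta, radius, length=1.0, n=3600):
    """Midpoint-rule discretization of Eq. (9), F_P = integral of p*cos(theta) dA,
    over a cylinder surface; dA = radius * dtheta * length."""
    dtheta = 2.0 * math.pi / n
    return sum(p_of_theta((i + 0.5) * dtheta) * math.cos((i + 0.5) * dtheta)
               for i in range(n)) * radius * dtheta * length

r = 0.038e-3  # bristle radius, m (diameter 0.076 mm)

# A uniform surface pressure yields zero net pressure drag; an asymmetric
# (cosine) component survives the integral and produces a net force.
uniform = pressure_drag(lambda th: 101325.0, r)
asym = pressure_drag(lambda th: 101325.0 + 500.0 * math.cos(th), r)
print(f"uniform: {uniform:.2e} N, asymmetric: {asym:.4e} N")
```

For the cosine perturbation the integral has the closed form $\Delta p_{\max}\,\pi\,r\,L$, which the discretization matches to floating-point accuracy.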

6 Discussion

The effect of $\Delta p$ on the drag is shown in Fig. 4. The drag increases with $\Delta p$ and with
the number of rows. When $\Delta p$ is higher than 0.2 MPa, the drag in the end row
increases significantly. The reason might be that the bristles are blunt bodies, and the
pressure drag plays a leading role in the drag of the bristles [8]. The pressure drop at the
end row of the bristles is higher than at the other rows, as illustrated in Fig. 5, so the drag in
the end row increases dramatically; in the end row the drag reflects the pressure
drop. The decline of pressure leads to stress concentration and friction between
the back plate and the bristles. A multistage brush seal can be adopted to decrease the
pressure in the end row.

Fig. 4. The change curve of drag of bristles ($S_D/d = 1.10$); drag (N) versus row number. *$\Delta p$ is the pressure difference between upstream and downstream

Fig. 5. Pressure distribution along axial gap of bristle under pressure differentials; static pressure (Pa) versus position (mm)

Fig. 6. The drags in the front rows (A) and the end row (B)

When $\Delta p$ is lower than 0.3 MPa, the resistance reduces with decreasing $\delta$. However,
the rate of rise increases when $\delta$ decreases, as shown in Fig. 6(A). Figure 6(B)
shows that the drag in the end row can be cut down by reducing $\delta$, since a small $\delta$
leads to stronger turbulence intensity. The uniformity of the pressure distribution
becomes better with decreasing $\delta$ [10]. The pressure in the front rows may increase
only slightly, and the turbulent flow reduces the pressure drag at a low $\Delta p$ [9]. When
$\Delta p$ increases, the pressure in the anterior rows increases obviously and the effect of the
turbulent flow weakens. Therefore, the increase of the drag in the front rows is
more observable with reducing $\delta$.

7 Conclusion

In this paper, the drag of bristle based on 2-D staggered tube bank is analyzed using
numerical simulation with Fluent. From the results, the following conclusion can be
drawn,
(1) The drag increases when the pressure difference increases. Due to the shape of the
bristles, the pressure drop is directly reflected in the drag.
(2) Reducing the gap between bristles can decrease the drag in the end row,
while the drag increase in the front rows is not significant. With a reduced gap,
the rate of drag increase becomes faster as the pressure difference increases.

Acknowledgement. The current research has been supported by National Natural Science
Foundation of China (granted no. 51765024).

References
1. Dai, W., Liu, Y.: Numerical simulation of fluid flow across compact staggered tube array.
Manuf. Autom. 33(2), 107–110 (2011). (in Chinese)
2. Huang, S., Suo, S.F., Li, Y., et al.: Flows in brush seals based on a 2-D staggered tube
bundle model. J. Tsinghua Univ. (Sci. Technol.) 56(2), 160–166 (2016). (in Chinese)
3. Kang, Y., Liu, M., Kao-Walter, S., et al.: Predicting aerodynamic resistance of brush seals
using computational fluid dynamics and a 2-D tube banks model. Tribol. Int. 126, 9–15 (2018)
4. Liu, J., Liu, M., Kang, Y., et al.: Numerical simulation of the flow field characteristic in a
two-dimensional brush seal model (2018)
5. Fuchs, A., Oskar, J.: Numerical investigation on the leakage of brush seals. In: Proceedings
of Montreal 2018 Global Power and Propulsion Forum (2018)
6. Tan, Y., Liu, M., Kang, Y., et al.: Investigation of brush seal vortex separation point based
on 2-D staggered tube bundle mode. J. Drain. Irrigat. Mach. Eng. 35(7), 602–608 (2017). (in
Chinese)
7. Jiang, K., Zhang, D., Qi, Y., et al.: Study on the characteristics of flow around cylinder at
subcritical reynolds number. Ocean Eng. Equip. Technol. 4(1), 37–42 (2017). (in Chinese)
8. JSME, Zhu, B., et al.: Fluid Mechanics. Peking University Press (2013). (in Chinese)
9. Shigenao, M., Wang, S., et al.: Heat Transfer. Peking University Press (2011). (in Chinese)
10. Kang, Y., Liu, M., et al.: Numerical simulation of pressure distribution in bristle seal for
turbomachinery. J. Drain. Irrigat. Mach. Eng. 36(5), 58–63 (2018). (in Chinese)
Design of Large Tonnage Lift Guide Bracket

Lanzhong Guo1 and Xiaomei Jiang1,2(&)


1
Jiangsu Key Construction Laboratory of Elevator Intelligent Safety,
Changshu, China
[email protected], [email protected]
2
School of Mechanical Engineering, Changshu Institute of Technology,
Changshu, China

Abstract. Due to the great improvement of industrial automation level and the
rapid growth of economy, the demand for large tonnage lift is increasing day by
day. Large tonnage lifts usually have carrying capacity up to thousands of tons,
some even tens of thousands of tons. As one of the main components of fixing
guide rail, the car guide rail bracket transfers the force of the whole lift to the
shaft wall, which is very important for the deformation of the whole guide rail
system. The stiffness and strength of the car guide rail bracket have a certain
impact on the stability of the lift, and its size determines the cost of lift pro-
duction, so it is necessary to save materials to ensure safe and reliable operation.
The dimensions and structure of the guide bracket are designed. Then, the
structural dimensions of the product are changed through the study of the optimization
design method. Finite element analysis is used to check whether
the guide bracket meets the standard requirements. The feasibility of the design
scheme is verified by calculation and software simulation, with the comprehensive
performance improved and the material consumption reduced at the same time.

Keywords: Large tonnage lift · Guide bracket · Stress check · FEA

1 Introduction

In the rapid development of modern production environment, large tonnage lifts need
to be used in more and more occasions. In other words, it is the continuous devel-
opment of large tonnage lifts that gives the possibility of the development of modern
heavy industry. The rated load of large tonnage lift is very large, so the maximum
effective area of the corresponding car is large, which can meet the needs of most
factories with special cargo. The lift guide rail bracket is mainly composed of the main
parts of the bracket, the guide rail clamp plate and the fastening elements. A guide
bracket can only use the same fastening element, usually using fastening bolts as the
fastening element of the guide bracket. The guide rail bracket fixes the guide rail on the
wall of the shaft, thus transferring the force of the whole lift to the wall. In addition, the
guide rail bracket can also make the adjacent guide rail installed according to the set
distance, so as to provide sufficient strength support for the fixed guide rail [1]. In the
process of lift design, production, manufacture and installation, people often concen-
trate most of their attention on the guide rail, thus ignoring the guide rail bracket,

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 142–150, 2020.
https://doi.org/10.1007/978-981-15-2341-0_18

resulting in the final production of the guide rail bracket cannot meet the national
standards, or even lead to potential safety hazards. Whether or not to choose a suitable
guide bracket is related to the whole guide system, sometimes threatening the safety
and reliability of the lift, making the lift unable to run smoothly [2].
According to different uses, guide brackets can be divided into the following three
categories: car guide bracket, counterweight guide bracket, car and counterweight
sharing guide bracket. According to different structural forms, guide brackets can be
divided into two categories: integral structure and combined structure. According to
different shapes, guide brackets can be divided into the following three categories:
mountain guide bracket, angle guide bracket and frame guide bracket. The guide rail of
the lift is mainly fixed by the guide rail bracket, so the strength of the guide rail bracket
plays a decisive role in the safe and reliable operation of the lift. In the actual installation process, the guide bracket of the car may twist unexpectedly with the guide rail, the guide plate rotating synchronously relative to the rail; at the same time, the traditional guide plate can no longer accommodate the movements caused by adjustment and calibration of the guide. Therefore, it is necessary to design a different kind of rail bracket that can not only suit rail installations of different specifications and sizes, but also facilitate dynamic fixing of the rail during installation [3]. Too much attention is paid to the selection and design of the guide rail, and too little to its bracket. Once the guide bracket of the lift is unable to provide sufficient support, the
quality of the lift will not meet the national standards, resulting in complaints from
customers, loss of potential market, and even security risks. As for the design of guide
bracket, most manufacturers choose a steel frame structure because of the characteristics of the rigid frame itself. First, steel has high strength and good mechanical properties. Second, steel is a nearly ideal elastic material of uniform quality; with the quality of steel products strictly controlled in production, the stresses in service agree closely with the mechanical calculation results, so the running reliability is high [4]. The strength of the lift guide bracket directly affects the stability of operation, and the number of brackets is determined by the building height: the higher the building, the more brackets, so cost must also be considered. To balance the two, the structure needs to be optimized and allocated reasonably.

2 Primary Design of Guide Bracket

GB 7588-2003, 10.1.2.2 gives the maximum allowable deformation of "T" type guides: (a) for car (counterweight) guide rails with safety gear, 5 mm in both directions when the safety gear operates; (b) for car (counterweight) guide rails without safety gear, 10 mm in both directions. The allowable deformation of the guide bracket can adopt that of the guide, i.e. 5 mm in both directions when the safety gear acts. Each guide shall have no fewer than two guide brackets, with a spacing of no more than 2.5 m. Under special circumstances, measures shall be taken to ensure that
144 L. Guo and X. Jiang

the installation of the guide meets the bending strength requirements specified in GB
7588-2003 [5]. The car guide rail bracket is a statically indeterminate steel frame, which can be calculated by the force method. The force method replaces the original statically indeterminate frame with a statically determinate primary structure (base), on which the redundant constraint forces are applied as additional unknowns [6]. The displacements of the primary structure at the redundant constraints, under the loads and the unknown forces, are then written down [7]; the compatibility conditions of deformation require these displacements to match those of the original statically indeterminate frame. Through the physical equations, the compatibility conditions become complementary equations in the loads and unknown forces; solving them yields the redundant forces, and the remaining unknowns follow from the static equilibrium equations [3]. Using the formulas for simple statically indeterminate rigid frames in the "Mechanical Design Manual" [8], the bending moments of the guide bracket under Fy and Fx are calculated separately by superposition and then summed. Figure 1 shows the type P car guide bracket.

Fig. 1. (a) Structural drawing of car guide bracket (b) Fy calculation (c) Fx calculation

Calculating bending moment of guide bracket under Fy

With k = (I₂/I₁)·(h/l) = H/L, N₁ = k + 2 = H/L + 2 and N₂ = 6k + 1 = 6H/L + 1, the point-load formulas of the manual reduce, for the mid-span guidance load (a = b = L/2), to:

MA = MD = Fy L² / (8(H + 2L))

MB = MC = −Fy L² / (4(H + 2L))

Calculating bending moment of guide bracket under Fx

MA = MD = −(Fx H/2)·(3k + 1)/N₂ = −Fx H(3H + L) / (2(6H + L))

MB = MC = −(Fx H/2)·3k/N₂ = −3Fx H² / (2(6H + L))

Superposing the two load cases shows that the maximum bending moment of the car guide rail bracket occurs at corner C:

MC = −Fy L² / (4(H + 2L)) − 3Fx H² / (2(6H + L))
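As a numerical cross-check of the superposition, the corner-moment expressions can be evaluated directly; the sketch below (function and variable names are ours, not from the paper) uses the design values obtained later in the paper, Fy = 2959.6 N, Fx = 1414.88 N, L = 0.5 m, H = 1 m:

```python
def corner_moments(Fy, Fx, L, H):
    """Corner moments of the guide-bracket portal frame under a mid-span
    vertical guidance force Fy and a horizontal guidance force Fx."""
    mA_fy = Fy * L**2 / (8 * (H + 2 * L))          # = M_D under Fy
    mB_fy = -Fy * L**2 / (4 * (H + 2 * L))         # = M_C under Fy
    mA_fx = -Fx * H * (3 * H + L) / (2 * (6 * H + L))
    mB_fx = -3 * Fx * H**2 / (2 * (6 * H + L))     # = M_C under Fx
    mC = mB_fy + mB_fx                             # combined moment at C
    return mA_fy, mB_fy, mA_fx, mB_fx, mC

# With the paper's design values the combined moment at C is about -419 N.m:
mC = corner_moments(2959.6, 1414.88, 0.5, 1.0)[-1]
```

The two contributions at C have the same sign, so their magnitudes add, which is why C governs the strength check.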

The car guide bracket is a third-degree statically indeterminate rigid frame on which the guidance forces Fx and Fy of the guide rail act. From the rigid-frame formulas in the "Mechanical Design Manual" [8], the bending-moment formulas of the car guide bracket have been derived, and the load bending moment diagrams MFx and MFy of Fx and Fy can be drawn separately. To obtain the displacement of the guide bracket under Fx and Fy, a unit force is applied on the statically determinate base at the action point of Fx and Fy respectively, and the corresponding unit moment diagrams MPFx and MPFy are drawn. A statically indeterminate rigid frame offers many possible statically determinate bases to choose from; selecting one that makes the product of the unit moment diagram and the load moment diagram easy to evaluate simplifies the calculation.
Calculating the displacement of the guide bracket under Fy action
The load bending moment diagram and unit bending moment diagram of the car
guide bracket under the guidance force Fy are shown in Fig. 2. When calculating the
displacement in the direction of Fy action, the area of the unit bending moment diagram

Fig. 2. Calculation sketch of guidance force Fy


MPFy is multiplied by the corresponding centroid position coordinates in the load bending moment diagram MFy by the diagram multiplication method:

ΔFy = Fy L³ (2H + L) / (96 E I (H + 2L))

Because the area of the unit bending moment diagram and the corresponding
centroid position coordinate in the load bending moment diagram are located on the
same side of the baseline, the displacement is positive, that is, the direction of action is
the same as that of Fy .
Calculating the displacement of the guide bracket under Fx action
The load bending moment diagram and unit bending moment diagram of the car
guide bracket under the guidance force Fx are shown in Fig. 3. Similarly, when cal-
culating the displacement in the direction of Fx action, the area of the unit bending
moment diagram MPFx is multiplied by the corresponding centroid position coordinates
in the load bending moment diagram MFx by the diagram multiplication method [6].

Fig. 3. Calculation sketch of guidance force Fx

3 Calculation and Check of Car Guide Bracket

In this paper, a car guide rail bracket is designed for a large tonnage lift with a rated load of 9500 kg. The crossbeam and side beams of the bracket are made of 80 × 80 × 8 angle steel, with E = 2.06 × 10¹¹ Pa, I = 11.21 × 10⁻⁸ m⁴, W = 3.13 × 10⁻⁶ m³, σb = 370 MPa, Q = 9500 kg, P = 7000 kg.
Calculating the action force of the guidance force on the Y axis with the safety gear activated:

Fx = k₁ gn (Q xq + P xp) / (n h) = 2.0 × 9.8 × (9500 × 1.4/8 + 7000 × 1.4/10) / (2 × 20) = 1414.88 N    (1)

Calculating the action force of the guidance force on the X axis with the safety gear activated:

Fy = k₁ gn (Q yq + P yp) / (n h/2) = 2.0 × 9.8 × (9500 × 1.6/8 + 7000 × 1.6/10) / (2 × 20/2) = 2959.6 N    (2)

Calculating the maximum bending moment of the car guide bracket:

|MC| = Fy L² / (4(H + 2L)) + 3Fx H² / (2(6H + L)) = 2959.6 × 0.5² / (4 × (1 + 2 × 0.5)) + 3 × 1414.88 × 1² / (2 × (6 × 1 + 0.5)) = 419 N·m    (3)

Calculating the bending stress of the car guide bracket:

σ₁ = MC / W = 419 / (3.13 × 10⁻⁶) Pa = 133.87 MPa < 165 MPa    (4)

Calculating the displacement of the car guide bracket:

ΔFy = Fy L³ (2H + L) / (96 E I (H + 2L)) = 2959.6 × 0.5³ × (2 × 1 + 0.5) / (96 × 2.06 × 10¹¹ × 11.21 × 10⁻⁸ × (1 + 2 × 0.5)) = 0.209 mm < 5 mm    (5)

ΔFx = Fx H³ (3H + 2L) / (12 E I (6H + L)) = 1414.88 × 1³ × (3 × 1 + 2 × 0.5) / (12 × 2.06 × 10¹¹ × 11.21 × 10⁻⁸ × (6 × 1 + 0.5)) = 3.142 mm < 5 mm    (6)

In conclusion, the preliminary design of the P-type guide rail bracket, 500 mm long and 1000 mm high, meets the requirements of the 9500 kg large tonnage lift.
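The whole check of this section reduces to a few lines of arithmetic; the sketch below (our variable names, values from the text) reproduces Eqs. (3)–(6):

```python
# Section properties and loads of the 9500 kg bracket (from the text).
E, I, W = 2.06e11, 11.21e-8, 3.13e-6    # Pa, m^4, m^3
L, H = 0.5, 1.0                          # bracket span and height, m
Fx, Fy = 1414.88, 2959.6                 # guidance forces, N

# Maximum bending moment at corner C (magnitude), Eq. (3).
M_C = Fy * L**2 / (4 * (H + 2 * L)) + 3 * Fx * H**2 / (2 * (6 * H + L))
sigma = M_C / W                          # bending stress, Pa, Eq. (4)
# Displacements under Fy and Fx, Eqs. (5) and (6), in metres.
d_fy = Fy * L**3 * (2 * H + L) / (96 * E * I * (H + 2 * L))
d_fx = Fx * H**3 * (3 * H + 2 * L) / (12 * E * I * (6 * H + L))
# M_C ~ 419 N.m, sigma ~ 133.9 MPa < 165 MPa,
# d_fy ~ 0.209 mm and d_fx ~ 3.142 mm, both < 5 mm.
```

The numbers confirm that bending stress and both deflections stay well inside the GB 7588-2003 limits quoted in Sect. 2.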

4 Structural Optimization and Finite Element Analysis

To further optimize the designed car guide rail bracket while saving material and space, the original design was changed to 350 mm long and 1000 mm high (to ensure sufficient strength, the dimensions could not be reduced too much), and a 300 mm long reinforcing bar was added to each of the two bracket feet. The material is 80 × 80 × 8 angle steel, as shown in Fig. 4 below.

Fig. 4. Design of guide rail bracket for 9500 kg carrying capacity

According to the metal structure of the lift guide rail bracket, a wireframe model is first established in Pro/E. A beam idealization is created with an 80 × 80 × 8 angle-steel section, and steel is assigned as the bracket material. From the design calculation above, the load on the Y axis of the guide bracket is 1414.88 N and the load on the X axis is 2959.6 N. Following the actual structure, the lower support angle steel of the bracket is fixed (in practice it is fixed to the civil works) [9, 10], with rotation and translation constraints defined for the supporting angle steel in the X, Y and Z directions. All loads are then applied to the bracket according to its load diagram. Finally, the fully defined model of the guide bracket is established as shown in Fig. 5.

Fig. 5. Definition completion model of guide bracket



The deformation of the metal structure of the guide rail bracket after loading is shown in Fig. 6 below. The maximum displacement of the bracket is 0.19 mm and occurs at the upper bracket plate on the side.

Fig. 6. Displacement deformation diagram

The stress of the metal structure of the lift guide bracket after loading is shown in Fig. 7 below. The figure shows that the maximum stress is 31.84 MPa, which occurs at the contact point between the bracket foot and the bracket plate.

Fig. 7. Stress deformation diagram

According to the results of the finite element analysis, the maximum displacement of the structure is 0.19 mm. The relevant national standards for lift guides limit the static deformation to no more than 5 mm; since 0.19 mm < 5 mm, the guide bracket meets the stiffness requirement. The maximum stress of the guide bracket is 31.84 MPa, while the static strength of the material should not exceed 165 MPa; since 31.84 MPa < 165 MPa, the static strength of the guide bracket also meets the requirements.

5 Conclusions

The basic dimensions of the guide rail bracket were first determined from the design calculations. Finite element analysis of the newly designed bracket was then carried out in Pro/E, and the displacement, stress, static stiffness and static strength were checked. The optimized bracket is not only more stable but also in line with the current trend towards a green economy and environmental protection: by adding reinforcing ribs, the original length and height could be reduced, saving material and installation space while making the improved design safer, more stable and greener.

Acknowledgements. The work is supported by the Natural Science Research Major Project of higher education institutions of Jiangsu Province (grant no. 17KJA460001).

References
1. Koe, S., Imrak, C.E., Targit, S.: Design parameters and stress analysis of elevator guide rail
brackets. China Elevat. 12–15 (2012)
2. Shi, L., Le, S., Lu, Q.: Elevator guide selection, design and analysis of guide bracket
spacing. Technol. Innov. Appl. 14–15 (2015)
3. Chen, H.: A calculation example of box beam rail bracket of elevator. China Elevat. 35–41
(2013)
4. GB/T7025.2-2008 Lifts-Main specifications and the dimensions arrangements for its cars,
wells and machine rooms- Part 2: Lifts of class IV
5. GB 7588-2003 Lift manufacture and installation safety norm
6. Liu, H.: Material Strength, 5th edn. Higher Education Press, Beijing (2010)
7. Jiang, X.: Design of marine elevator car frame. In: International Workshop of Advanced
Manufacturing and Automation. Lecture Notes in Electrical Engineering, pp. 583–591.
Springer, Singapore (2018)
8. Wen, B.: Mechanical Design Manual. Machinery Industry Press, Beijing (2010)
9. http://www.chinaelevator.org. Accessed 21 May 2019
10. GB/T10060-2011 Lift installation and acceptance specification
Stress Analysis of Hoist Auto Lift Car Frame

Xiaomei Jiang1,2(✉), Michael Namokel3, Chaobin Hu1,2, Jinzhong Dong2, and Ran Tian2

1 Jiangsu Key Construction Laboratory of Elevator Intelligent Safety, Changshu, China
[email protected]
2 School of Mechanical Engineering, Changshu Institute of Technology, Changshu, China
3 Nordhessen University of Applied Sciences, Kassel, Germany

Abstract. Compared with the traditional auto ramp, the auto lift can save 80%
of the building area and more than double the car turnover efficiency. An auto lift is a non-commercial lift for carrying passenger autos, one of the cargo lifts in the catalogue of special equipment in GB 7024-2008. The auto lift shaft structure should be strong enough to bear not only the load of an ordinary lift but also the loads produced by loading and unloading the auto. The size and structural type of the car are specially designed to facilitate the access of autos, and the selected frame beam system is checked, mainly on whether the bending stress stays within the maximum allowable stress of the selected material. Finally, a three-dimensional model of part of the structure is established and analyzed by the finite element method, including analysis of the auto front wheel just entering the car, of the auto fully inside the car, of the car frame compressing the car buffer, and stress analysis of the main components.

Keywords: Auto lift · Car frame · Stress check · FEA

1 Introduction

As we all know, lifts can be divided by use into passenger lifts, cargo lifts, bed lifts, residential lifts, dumbwaiter lifts, marine lifts, sightseeing lifts and auto lifts; by running speed into low-, medium-, high- and ultra-high-speed lifts; by driving method into direct-current, alternating-current, hydraulic, rack-and-pinion, screw and linear-motor lifts; by machine-room arrangement into machine-room and machine-room-less lifts; and by traction structure into geared and gearless traction lifts [1]. The development of human beings is an infinite process, while the development of cities takes place in a limited space; the verticalization of urban architecture and the verticalization of urban traffic correspond to each other.
In order to solve the three-dimensional traffic problem in the city, the auto lift has come
to reality from the dream. Auto lift is a special lift to solve the problem of auto vertical
transportation. It has been widely used in 4S stores, repair workshops and large
shopping malls with a parking lot on the building roof. Its standard rated loads are 3000 kg and 5000 kg, and its speeds are 0.25 m/s and 0.5 m/s [2, 3].

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 151–159, 2020.
https://doi.org/10.1007/978-981-15-2341-0_19

2 Primary Design of Car Frame

The sheave traction ratio, also called the roping ratio, refers to the ratio of the circumferential speed of the traction sheave, i.e. the rope speed, to the vertical lifting speed of the car. Ignoring the weight of the movable sheave, the sliding friction between wire rope and sheave, and the friction inside the sheave structure, the input and output power are ideally the same [4]. Since on a large tonnage elevator the lifted weight is far greater than the weight of the movable sheave, the labor-saving ratio can be regarded as approximately inversely proportional to the traction ratio. In addition, the winding method of the steel wire rope is closely related to the machine-room type of the elevator. The rated load of the lift car frame is 3200 kg, the rated operating speed is 0.5 m/s, and the guide rail arrangement is two symmetrically placed guide rails.
Referring to GB/T 7025.2-2008 [5], it can be seen that the car width is the same as
the length of the car inlet and outlet, which is very helpful for the entry and exit of the
auto and the handling of the parts. As shown in Fig. 1, 1 represents the safety device,
b1 = b2 represents the same width and length of the inlet and outlet of the car, and d1
represents the depth of the car.

Fig. 1. Type of car door open and close

The dimensions of actual elevator cars are used as a reference for the size and type to be designed. The height of the car and the height of the entrance and exit are both 2500 mm. According to the design requirement of minimum shaft size, the schematic diagram of the auto lift car frame is obtained in Fig. 2.

Fig. 2. Schematic diagram of auto lift car frame



3 Material Selection and Check

The car is composed of a car frame, a car bottom tray, car walls and a car roof guardrail device. The car frame is the supporting structure of the whole elevator: it bears the deadweight P of the car system (including the travelling cable) and the design rated load Q, and is the direct guarantee of elevator safety performance. Because the car frame supports its own weight and the weight of various loads, materials with good rigidity and strength are chosen when designing it. In general, 1.2–1.5 mm thick steel plate is press-formed into the channel sections of the car top. The material of the car wall is roughly the same as that of the car top. The car bottom generally consists of a frame and a bottom plate; to make it stable, the frame is mostly a weldment of channel steel and angle steel, topped with a 3–6 mm thick steel plate (patterned steel floor). Generally,
the car frame is made of section steel or cold-roll-formed steel plate, bolted or welded into a statically indeterminate rigid beam frame. Channel steel 25a is the first choice for the upper beam; according to the "Non-standard Mechanical Design Manual" [6], its theoretical mass is 27.410 kg/m. L = car clear width + straight beam leg width × 2 + stiffening plate thickness × 2 = 2500 + 112 × 2 + 8 + 24 = 2756 mm, so

Mubeam = 2 × (2756 × 10⁻³ m) × 27.410 kg/m = 151.084 kg

The vertical beams use 22b I-steel, with a theoretical mass of 36.524 kg/m [7]. The length of the vertical beam of the car frame is L = car height + car bottom thickness + bottom beam height + upper beam height + return sheave radius + car top thickness + car floor thickness = 2500 + 120 + 200 + 250 + 520/2 + 8 + margin = 3394 mm, therefore Mvbeam = 2 × (3394 × 10⁻³ m × 36.524 kg/m) = 247.925 kg. The material generally selected is Q345 (16Mn), a low-alloy high-strength steel; the design of some lightly loaded car frames also uses low-carbon steel Q235. The material of the lower beam (channel steel 20a, 22.637 kg/m) is selected in the same way as that of the upper beam. Calculating the length of the lower beam of the car frame then gives

Mlbeam = 2 × (2508 × 10⁻³ m) × 22.637 kg/m = 113.547 kg

In this design the car frame is guided at the centers of two rails and the car depth is 3170 mm; to keep the car bottom balanced, the lower beam is designed as a welded assembly of 4 transverse beams and 2 longitudinal beams. The longitudinal beam length is 3170/2 = 1585 mm, so

Mlbeama = 2 × Mlbeam + 2 × 1.585 × 22.637 = 275.312 kg


The car bottom bracket includes 7 longitudinal beams, 2 transverse beams and 6 mm pattern steel (the car floor), therefore

Mbracket = 2 × 6 m × 12.059 kg/m + 7 × 2.5 m × 12.059 kg/m + 445.9302 kg = 892.113 kg

The rest mass of the attachments is 1072.4348 kg, so the car weight is P = 2953.567 kg.
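Each member mass above is simply length times the theoretical linear density of the section; as a quick sketch (the helper name is ours, not from the paper):

```python
def member_mass(count, length_mm, kg_per_m):
    """Total mass of `count` identical section-steel members."""
    return count * (length_mm / 1000.0) * kg_per_m

upper = member_mass(2, 2756, 27.410)   # channel 25a upper beams,   ~151.08 kg
vert  = member_mass(2, 3394, 36.524)   # 22b I-steel vertical beams, ~247.93 kg
lower = member_mass(2, 2508, 22.637)   # channel 20a lower beams,   ~113.55 kg
```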
The upper beam check: an acceleration is produced when the lift starts; acting through the wire ropes, it produces a traction force K:

K = (P + Q + MTrav)(gn + a)/r + MSRcav(gn + r·a)

where MTrav is the traveling cable weight and MSRcav the suspension wire rope weight on the car side.

K = (2953.567 + 3200) × (9.81 + 0.25)/2 + 8 × 0.575 × 2 × 18 × (9.81 + 2 × 0.25) = 32659.778 N

Then 2K = 65319.556 N. Referring to "Elevator principle, installation, maintenance" [8], the wire rope is smooth wire, fiber core, 8×19 construction, dN = 13 mm, weight 0.575 kg/m; with a Φ520 return sheave, 4 wire ropes and a lifting height of 18 m,

MSRcav = 4 × 4 × 18 × 0.575 × (9.81 + 2 × 0.25) = 1707.336 N
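The starting traction force can be reproduced directly from the numbers above (variable names are ours; the travelling-cable mass is omitted from the numeric substitution, as in the text):

```python
P, Q = 2953.567, 3200.0       # car deadweight and rated load, kg
gn, a, r = 9.81, 0.25, 2      # gravity, starting acceleration, roping ratio

# Rope-side term: 4 ropes x 4 falls x 18 m x 0.575 kg/m, as in the text.
K = (P + Q) * (gn + a) / r + 4 * 4 * 18 * 0.575 * (gn + r * a)
# K ~ 32659.8 N, so the two rope terminations carry 2K ~ 65319.6 N.
```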
The working condition of the safety action and the car hitting the buffer are checked
to ensure the safety and reliability of the lower beam.
where q is the uniform load:

q = (P + Q)g/L = (2953.567 + 3200) × 9.81/2.508 = 24069.5743 N/m

F₁ and F₂ are the support reactions, F₁ = F₂ = qL/2 = 30183.2408 N. From the section equation M + (q/2)x² − F₁x = 0, i.e. M = −(q/2)x² + F₁x with F₁ = qL/2, the maximum bending moment occurs at mid-span:

Mmax = qL²/8 = 24069.5743 × (2.508)²/8 = 18924.8943 N·m

M′max = K₁ Mmax = 2.0 × 18924.8943 = 37849.7906 N·m

K₁ is the impact factor for progressive safety gear action; by GB 7588-2003 [2], K₁ = 2.0. The lower beam selects channel steel 20a, with bending section modulus W = 178 × 10⁻⁶ m³, and the bending stress [7] is

σ₁ = M′max/(2W) = 37849.7906 N·m/(2 × 178 × 10⁻⁶ m³) = 106.32 MPa

n = σb/σ₁ = 470 MPa/106.32 MPa = 4.42 > 2.0, so the lower beam meets the safety requirements.
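The lower-beam check condenses to the following sketch (our names; any difference in the final digits against printed values is rounding):

```python
P, Q, g, L = 2953.567, 3200.0, 9.81, 2.508
q = (P + Q) * g / L              # uniform load on the lower beam, N/m
M_max = q * L**2 / 8             # mid-span bending moment, N.m
M_dyn = 2.0 * M_max              # progressive safety gear impact factor K1
W = 178e-6                       # channel 20a section modulus, m^3
sigma = M_dyn / (2 * W)          # two channels share the moment, Pa
n = 470e6 / sigma                # safety factor against sigma_b = 470 MPa
```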
To ensure sufficient section area and bending section modulus, 22b I-steel is chosen for the vertical beam, A = 46.528 × 10⁻⁴ m². The tensile force and stress of the vertical beam under normal conditions are

F = (P + Q)g/2 = 30183.2461 N;    σ₁ = F/A = 30183.2461/(46.528 × 10⁻⁴) = 6.487 MPa

Calculation of the eccentric force of the lift rated load:

Fy = Q g yQ/h = 3200 × 9.8 × (Dy/8)/3.394 = 2287.44844 N

Calculation of the eccentric force of the traveling cable:

MTrav = 0.5 H qt = 0.5 × 18 × 1.56 = 14.04 kg,    Fm = 0.5 g MTrav/h = 20.27 N

The bending moment of the vertical beam is

M = (Fy + Fm) × h/4 = (2287.44844 + 20.27) × 3.394/4 = 2467.1991 N·m

σ₂ = M/W = 2467.1991/(42.7 × 10⁻⁶) = 57.780 MPa;    σ₁ + σ₂ = 6.487 + 57.780 = 64.267 MPa

So n = 520 MPa/64.267 MPa = 8.091 > 7; with the allowable stress of the I-steel raised to 520 MPa, the safety requirement is attained.
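The vertical-beam check can likewise be condensed (the eccentric bending moment 2467.1991 N·m is taken from the text as given):

```python
P, Q, g = 2953.567, 3200.0, 9.81
A, W = 46.528e-4, 42.7e-6           # 22b I-steel area (m^2) and modulus (m^3)
F = (P + Q) * g / 2                 # tension under normal conditions, N
sigma_t = F / A                     # tensile stress, Pa
M = 2467.1991                       # eccentric bending moment from the text, N.m
sigma_b = M / W                     # bending stress, Pa
n = 520e6 / (sigma_t + sigma_b)     # against the raised allowable 520 MPa
```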
The return sheave shaft is a symmetrical structure; its maximum diameter corresponds to the return sheave hub, and two tapered roller bearings are positioned by the shaft shoulder. The bearing model is C2914, which can bear a certain axial load when the return sheave moves axially. Because the return sheave axle is a mandrel, it bears a bending moment when the return sheave rotates [9]:

F₁ = F₂ = F₃ = F₄ = (P + Q)g/4 = 15091.6231 N

W = πd³/32 ≈ 0.1d³ = 51.2 × 10⁻⁶ m³,    σca = 1260.15053 N·m/(51.2 × 10⁻⁶ m³) = 24.612 MPa

According to "Mechanical Design" [10], material 20Cr is selected, with [σ₋₁] = 60 MPa; σca ≤ [σ₋₁], so the return sheave axle meets the strength requirements.
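And the return-sheave axle check (the bending moment 1260.15 N·m is taken from the text; the axle diameter d = 80 mm is implied by W = 51.2 × 10⁻⁶ m³, an assumption on our part):

```python
P, Q, g = 2953.567, 3200.0, 9.81
F = (P + Q) * g / 4          # load per rope fall on the axle, N
d = 0.08                     # axle diameter at the hub, m (implied by W)
W = 0.1 * d**3               # pi*d^3/32 approximated as 0.1*d^3, m^3
sigma_ca = 1260.15053 / W    # bending stress, Pa (~24.6 MPa < 60 MPa)
```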

4 Finite Element Analysis

The auto car frame is a load-bearing component, and mechanical calculations must be carried out to ensure that it has sufficient strength and safety coefficient. The car of an auto lift is usually relatively large, which makes the load-carrying structure of its car frame more complex than that of an ordinary lift of equal rated load. A simplified beam model is difficult to apply to the force check and would give large calculation errors; therefore the car frame is usually checked and calculated by the finite element method.
The arrangement uses four guide rails for guiding and a 4:1 traction structure. The main structure is shown in Fig. 3. The main structural components are the upper beam, lower beam, upper crossbeam, lower crossbeam, door machine beam, straight beams, upper crossbeam oblique braces, upper beam oblique braces, pull rods and car bottom (a welded assembly of the car bottom beams and the car bottom plate). The finite element software ANSYS is used, with elements selected as follows: the pull rods as 3D link elements (LINK180), the car bottom plate as 3D shell elements (SHELL181), and the other beams as 3D beam elements (BEAM189, with the beam section shapes defined).

Fig. 3. Three-dimensional model of car frame

Among the external loads on the car frame, the weight of the car wall and car roof is 1100 kg, and the contact area between each auto wheel and the car bottom plate is 200 mm × 200 mm. When loading the car bottom plate, the distance between left and right auto wheels is 1600 mm and the distance between front and rear auto wheels is 2500 mm. The stresses of the main components of the car frame are as follows (Figs. 4, 5, 6, 7 and 8):

Fig. 4. Constraints and loads when auto front wheel just entering the car

Fig. 5. Stress of pull rod when auto front wheel just entering the car

Fig. 6. Stress of main components when auto front wheel just entering the car

Fig. 7. Stress of pull rod when auto entered into the car

Fig. 8. Stress of main components when auto entered into the car

Table 1 below lists the maximum stresses of the main components of the car frame when the auto front wheels are just entering the car and when the auto has fully entered the car. From the maximum stress and the material of each beam, the safety factor of the main components under these working conditions is obtained; the beams of the car frame meet the requirements of these working conditions.

Table 1. Stress on main components when the auto front wheel is just entering the car (1) and when the auto has entered the car (2)

Main components    Material  Maximal stress 1  Safety factor 1  Maximal stress 2  Safety factor 2
Pull rod           Q235      18 MPa            17.1             31.7 MPa          9.7
Upright beam       Q345      23.9 MPa          17.8             33.1 MPa          12.8
Front upper beam   Q345      25.3 MPa          16.8             57.7 MPa          7.4
Rear upper beam    Q345      4.41 MPa          96.4             6 MPa             70.8
Front lower beam   Q345      1.56 MPa          269              15.7 MPa          27
Rear lower beam    Q345      0.78 MPa          544              3.4 MPa           125
Upper crossbeam    Q345      44.8 MPa          9.5              71.9 MPa          5.9
Lower crossbeam    Q345      15.4 MPa          27.6             26.3 MPa          16.2
Crossbeam brace    Q235      30.4 MPa          10.1             85 MPa            7.4
Upper beam brace   Q235      8.28 MPa          37.2             26.2 MPa          10.9

Table 2 is the maximum stress table of the main components of the car frame when
the car stalls downward and the safety is activated. The safety factor of the main
components under this working condition can be obtained from the maximum stress
and material of the beams. The beams of the car frame can meet the requirements of
this working condition.

Table 2. Stress of main components with car stalling downward and activated safety

Main components    Material  Maximal stress  Safety factor  Load type  Impact coefficient
Pull rod           Q235      27.7 MPa        6.6            Dynamic    2
Upright beam       Q345      23.3 MPa        10.9           Dynamic    2
Front upper beam   Q345      0.5 MPa         510            Dynamic    2
Rear upper beam    Q345      1.6 MPa         159            Dynamic    2
Front lower beam   Q345      4.0 MPa         63             Dynamic    2
Rear lower beam    Q345      0.5 MPa         510            Dynamic    2
Upper crossbeam    Q345      11.9 MPa        21.4           Dynamic    2
Lower crossbeam    Q345      22.9 MPa        11.1           Dynamic    2
Crossbeam brace    Q235      21.8 MPa        8.5            Dynamic    2
Upper beam brace   Q235      2 MPa           92.5           Dynamic    2

5 Conclusions

From the checking calculations of the auto lift car frame under various working conditions, it can be concluded that the car frame in this paper meets the requirements for use. The finite element method has great advantages in checking complex car frame structures and gives higher accuracy than traditional simplified calculations. Comparing the safety factors across the working conditions, some structural beams still have room for optimization; targeted structural optimization based on the calculated results can reduce the self-weight of the car frame and thus the production cost.

Acknowledgements. The work is supported by the Natural Science Research Major Project of higher education institutions of Jiangsu Province (grant no. 17KJA460001).

References
1. National Elevator quality supervision and Inspection Center (Guangdong). Elevator standard
compilation. China Standard Press, Beijing (2012)
2. GB 7588-2003 Elevator manufacture and installation safety norm
3. GB 25856-2010 Freight elevator manufacture and installation safety norm
4. Jiang, X.: Vibration analysis and simulation of traction inclined elevator. In: International
Workshop of Advanced Manufacturing and Automation, pp. 221–224. Atlantis Press,
Manchester (2015)
5. GB/T7025.2-2008 Lifts-Main specifications and the dimensions arrangements for its cars,
wells and machine rooms- Part 2: Lifts of class IV
6. Cen, J.: Non Standard Mechanical Design Manual. National Defense Industry Press,
Changsha (2008)
7. Liu, H.: Material Strength, 5th edn. Higher Education Press, Beijing (2010)
8. GB/T10060-2011 Elevator installation and acceptance specification
9. Guo, L.: Design of large tonnage elevator sheave block. In: International Workshop of
Advanced Manufacturing and Automation, pp. 207–217. Springer, Changshu (2017)
10. Pu, L.: Mechanical Design, 8th edn. Higher Education Press, Beijing (2006)
Research on Lift Fault Prediction and Diagnosis Based on Multi-sensor Information Fusion

Xiaomei Jiang1,2(✉), Michael Namokel3, Chaobin Hu1,2, and Ran Tian1,2

1 Jiangsu Key Construction Laboratory of Elevator Intelligent Safety, Changshu, China
[email protected]
2 School of Mechanical Engineering, Changshu Institute of Technology, Changshu, China
3 Nordhessen University of Applied Sciences, Kassel, Germany

Abstract. Nowadays, most fault diagnosis methods are based on collected data and cannot predict faults in time, and measurements from a single sensor cannot fully and accurately reflect the working status of a lift, causing uncertainty and inaccuracy in fault diagnosis. A lift fault diagnosis algorithm based on DS data fusion is proposed. With multi-sensor fusion, the initial reliability distribution of each sensor is initialized according to its membership degree in each diagnostic category, the data collected by each sensor is taken as a body of evidence, and the final diagnosis is obtained by data fusion. Experiments on lift fault diagnosis show that the proposed method predicts faults correctly and in time, overcomes the uncertainty and inaccuracy of single-sensor diagnosis, and improves the accuracy of lift fault diagnosis and prediction.

Keywords: Multi-sensor synthesis · Fault diagnosis · Prediction · Data fusion · Lift

1 Introduction

With the rapid increase in the number of lifts and lift accidents, real-time and effective fault prediction and diagnosis is of great significance for ensuring the safe and reliable operation of lifts, and has therefore become one of the key topics in lift research in recent years [1, 2]. Due to the low intelligence of the sensors installed on lifts and the measurement errors of the sensors themselves, the detection information is uncertain and incomplete. In addition, current lift prediction and diagnosis systems are often based on already-collected sensor data, which introduces a certain delay, while lifts have a strong real-time demand [3, 4]. By the time the fault symptom data are collected, the lift has already failed, which greatly threatens the safety of lift passengers. Therefore, lift fault prediction and diagnosis must not be limited by the delay of sensor data samples.

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 160–168, 2020.
https://doi.org/10.1007/978-981-15-2341-0_20
Research on Lift Fault Prediction and Diagnosis 161

Based on the above factors, a fault diagnosis method based on neural network prediction and D-S data fusion is proposed. This method predicts the sensor data expected at the next moment with a neural network, fuses the data by D-S evidence theory, and obtains fault diagnosis results based on the predicted data of multiple sensors, overcoming the delay inherent in the traditional method. Based on the historical data of all sensors received by the system, lift fault diagnosis is carried out using the data fusion method.

2 Multi-sensor Synthesis Analysis

Fusing the data from the multiple sensors of an online intelligent detection system can eliminate the uncertainty of single-sensor detection, improve the reliability of the detection system, yield more accurate knowledge of the detected objects, help inspectors make effective judgments on lift status, and support a reasonable choice of verification time and verification plan. Therefore, this work establishes a distributed online multi-sensor system and applies information fusion technology to the dynamic intelligent detection of lifts, so as to provide a reasonable evaluation of their operation status. Traditional lift monitoring sensors mostly adopt, and are limited by, mechanical structures, so they have inherent defects such as low accuracy, small dynamic response range, difficulty in low-frequency detection, and difficulty in distinguishing background noise. With the development of lift intelligence, the detection requirements placed on network technology are constantly increasing.
When the load and position of a lift change, the natural frequency of the lift system changes as shown in Fig. 1. The X axis is the lift load, the Y axis is the lift car position, and the Z axis is the natural frequency. When the load increases, the natural frequency decreases, but with increasing load the rate of decrease slows. The influence of car position on natural frequency is more complex: the natural frequency decreases with increasing car height and changes symmetrically with height.

Fig. 1. Natural vibration frequency with the car load and position change
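The load dependence described above can be illustrated with a single-degree-of-freedom approximation, f = (1/2π)√(k/m): as the suspended mass grows, the natural frequency falls, and the rate of decrease itself slows. The effective stiffness value below is hypothetical, chosen only to make the trend visible.

```python
import math

def natural_frequency(k_eff, mass):
    """Single-DOF approximation: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k_eff / mass) / (2.0 * math.pi)

K_EFF = 2.0e5  # hypothetical effective rope/suspension stiffness, N/m
loads = [1000, 1500, 2000, 2500]  # car-plus-load mass, kg
freqs = [natural_frequency(K_EFF, m) for m in loads]
for m, f in zip(loads, freqs):
    print(f"m = {m:4d} kg -> f = {f:.3f} Hz")
```

Each additional 500 kg lowers the frequency by less than the previous 500 kg did, matching the flattening of the surface in Fig. 1 as the load grows.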
162 X. Jiang et al.

Vibration sensors include vibration displacement, vibration speed and vibration acceleration sensors. When the lift starts up or brakes, acceleration occurs; if it stays within a certain range, the human body feels no discomfort. The acceleration and deceleration of the lift can be detected by an acceleration sensor, and when acceleration or deceleration is too fast, the lift control system can adjust the speed and output power of the traction motor to smooth the process [5]. Vibration sensors thus play a warning and protection role. A laser ranging sensor is usually installed on the top of the lift to measure its verticality in real time. When the lift inclines too much, the relevant personnel learn of the situation in real time and deal with it promptly. The real-time angle signal output by the laser ranging sensor can also be displayed on related instruments and networked into a monitoring network, so as to better maintain the lift and ensure the safety of people entering and leaving. Nowadays, the most popular sensor used to prevent pinch injury in lifts is the through-beam infrared sensor. One side of the lift door carries the emitter and the other side the receiver; the emitter shines a beam onto the receiver installed on the opposite side of the door. When the beam is blocked, the receiver cannot receive it; this is reported to the controller and then to the lift motherboard [6], and the lift door opens. In addition, there is an encoder in the lift, which converts motion into an electrical signal. It has two main functions: one is to detect the real-time position of the lift in the well; the other is to detect the real-time speed of the lift.
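The encoder's two roles can be sketched as a simple decoding step: cumulative pulse counts map to car position through the travel per pulse, and successive position differences give speed. The resolution and travel-per-revolution values below are hypothetical.

```python
PULSES_PER_REV = 1024      # hypothetical encoder resolution
TRAVEL_PER_REV_M = 1.5     # hypothetical car travel per sheave revolution, m

def decode_encoder(pulse_counts, dt):
    """Turn cumulative pulse counts sampled every dt seconds into
    car position (m) and speed (m/s)."""
    m_per_pulse = TRAVEL_PER_REV_M / PULSES_PER_REV
    positions = [c * m_per_pulse for c in pulse_counts]
    speeds = [0.0] + [(positions[i] - positions[i - 1]) / dt
                      for i in range(1, len(positions))]
    return positions, speeds

# A car accelerating over three 0.5 s sampling intervals:
pos, vel = decode_encoder([0, 512, 1536, 3072], dt=0.5)
print(pos, vel)
```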
The magnetic properties of the traction wire rope are detected, that is, the relevant data of the rope under different operating conditions are collected by Hall elements and induction coils. The laser testing method uses laser ranging: the laser transmitter is fixed at one end of the lift and the receiver at the other end, and the receiver signal is recorded so as to detect the straightness and distortion of the guide rail.
At present, the newest and most far-reaching application of these sensors is lift remote monitoring with the Internet of Things. By installing a variety of sensors and measuring instruments on the lift body, digitalizing the lift operation status with multi-sensor information fusion technology, and making full use of the information resources transmitted by the sensors, the accuracy and reliability of the detection system are improved.
The real-time state of the lift's mechanical nodes is the most critical aspect of lift characteristics. Based on sensor and communication technology, the real-time operation status of each important mechanical node is perceived; the real-time features of lift mechanical parts collected by sensors are the most valuable characteristic values. In this paper, according to the criteria for discarding lift main parts [7], the lift characteristic values that can be collected by sensor technology are shown in Fig. 2. They can be collected simultaneously, totaling 28-dimensional features. Removing weakly influential, correlated features from this large set of complex original features and extracting the key features is essential to improving detection accuracy and efficiency.
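One simple way to thin out such a 28-dimensional feature set is to drop any feature that is almost perfectly correlated with one already kept. The sketch below applies a plain Pearson-correlation filter to a toy four-feature sample; the feature names and data are invented for illustration only.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def prune_correlated(features, names, threshold=0.95):
    """Keep a feature only if it is not nearly a duplicate of an earlier one."""
    kept = []
    for i, col in enumerate(features):
        if all(abs(pearson(col, features[j])) < threshold for j in kept):
            kept.append(i)
    return [names[i] for i in kept]

# Toy feature columns over five recorded rides:
f_rms   = [1.0, 1.2, 1.1, 1.3, 1.4]
f_peak  = [2.0, 2.4, 2.2, 2.6, 2.8]   # exactly 2x RMS -> redundant
f_kurt  = [3.1, 2.9, 3.5, 3.0, 3.2]
f_noise = [0.5, 0.7, 0.4, 0.9, 0.6]
kept = prune_correlated([f_rms, f_peak, f_kurt, f_noise],
                        ["rms", "peak", "kurtosis", "noise"])
print(kept)
```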

Fig. 2. Real-time characteristic values of lift parts

3 D-S Theory

Within D-S theory, Basir et al. proposed in 2007 a general method to directly calculate the basic trust allocation function from multi-sensor measurements. The Basir method embodies the idea of generating evidence from the difference between the measured and theoretical values of sensor data [8]. However, this method does not consider the general situation with more sensors, and it is difficult to reflect the relative evidence information between sensors through differences in sensor data alone. More importantly, many faults are not suited to calculating evidence directly from sensor measurements with D-S theory [9].
Let the q sensors' measurements be arranged as

\[
x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_q \end{bmatrix}
  = \begin{bmatrix}
      x_{11} & x_{12} & \cdots & x_{1n} \\
      x_{21} & x_{22} & \cdots & x_{2n} \\
      \vdots & \vdots & \ddots & \vdots \\
      x_{q1} & x_{q2} & \cdots & x_{qn}
    \end{bmatrix} \tag{1}
\]

The distance between the kth sensor's measurement and the template \(s_{kj}\) of the jth state is the Minkowski distance

\[
d_{kj} = \left[ \sum_{i=1}^{n} \left| s_{kj} - x_{ji} \right|^{p} \right]^{1/p},
\qquad j = 1, 2, \ldots, N; \; k = 1, 2, \ldots, M \tag{2}
\]

collected into the distance matrix

\[
D = \begin{bmatrix}
      d_{11} & d_{12} & \cdots & d_{1q} \\
      d_{21} & d_{22} & \cdots & d_{2q} \\
      \vdots & \vdots & \ddots & \vdots \\
      d_{p1} & d_{p2} & \cdots & d_{pq}
    \end{bmatrix} \tag{3}
\]

The smaller the distance \(d_{kj}\), the greater the possibility of judging the object to be in the class-j state according to the kth sensor's information. Hence the definition

\[
m_{kj} = \frac{1}{d_{kj}} \tag{4}
\]

with the normalization \(\sum_{j=1}^{Q} m_{kj} = 1\), giving

\[
M = \begin{bmatrix}
      m_{11} & m_{12} & \cdots & m_{1Q} \\
      m_{21} & m_{22} & \cdots & m_{2Q} \\
      \vdots & \vdots & \ddots & \vdots \\
      m_{P1} & m_{P2} & \cdots & m_{PQ}
    \end{bmatrix}
  = \begin{bmatrix} M_1 \\ M_2 \\ \vdots \\ M_P \end{bmatrix} \tag{5}
\]

\[
M_k = \left( m_{k1}, m_{k2}, \ldots, m_{kQ} \right) \tag{6}
\]

So it can be used as the reliability values of the kth sensor for state recognition. On this basis, the theory is optimized and improved. Extracting fault signals from multi-source signals is the first condition for lift fault prediction and diagnosis. Within the fault recognition framework, fault characteristic parameters are chosen and the basic trust allocation function is established; the combination rules are then applied, and finally the fault is predicted and judged according to the fusion results. Multi-source evidence information that is influenced by subjective factors can differ greatly in form, conflict strongly and respond poorly in real time, which harms both the fusion process and the rationality of the results. Using the above data characteristic values, which are highly objective, strongly real-time, and easy to implement, analyze and adjust, the evidence information can be generated automatically from the multi-source information data.
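Equations (1)-(6) condense into a few lines of code: measure the Minkowski distance from one sensor's reading to each fault-class template, invert it, and normalise the result into a basic belief row. The templates and measurement below are invented purely for illustration.

```python
def minkowski(a, b, p=2):
    """Minkowski distance of order p between two feature vectors (eq. 2)."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def basic_belief(measurement, templates, p=2):
    """Distance of one sensor's measurement to each fault-class template,
    inverted (eq. 4) and normalised so the row sums to 1."""
    d = [minkowski(measurement, t, p) for t in templates]
    raw = [1.0 / max(dk, 1e-9) for dk in d]   # guard against zero distance
    total = sum(raw)
    return [m / total for m in raw]

# Hypothetical 2-feature templates for {normal, x-vib, y-vib, wear}:
templates = [[0.1, 0.1], [0.9, 0.2], [0.2, 0.9], [0.6, 0.6]]
m_k = basic_belief([0.85, 0.25], templates)
print([round(v, 3) for v in m_k])
```

The measurement lies closest to the x-vibration template, so that class receives the largest belief mass.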

4 Fusion of Neural Network, DS and Sensor Data

The vibration signal of a lift changes with its running state, and therefore its time-domain indexes change as well. There are many time-domain indexes of a vibration signal, such as the kurtosis index, peak value, pulse index, square-root amplitude, waveform index, effective (RMS) value, mean value and margin index [10]. For different types of fault signals, different time-domain indicators show different advantages. For example, the kurtosis index, impulse index and margin index are more suitable for impulse-type faults: at the beginning of such a fault these indexes change significantly, but as the fault gradually worsens their sensitivity declines and their stability is poor, so they apply only to early faults. The peak value is the maximum of the signal over one time section; it is sensitive to instantaneous impact faults and has a good diagnostic effect. The mean value reflects the average signal amplitude and is relatively stable. The root mean square represents the average energy of the signal, and the peak amplitude works better for slowly time-varying signals.
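All of the indicators listed above are one-line statistics of a signal window. A compact sketch follows common definitions (exact formulas vary slightly between references):

```python
import math

def time_domain_features(signal):
    """Common time-domain condition-monitoring indicators of one window."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    peak = max(abs(x) for x in signal)
    abs_mean = sum(abs(x) for x in signal) / n
    sra = (sum(math.sqrt(abs(x)) for x in signal) / n) ** 2  # sq-root amplitude
    std = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    kurtosis = (sum((x - mean) ** 4 for x in signal) / n) / std ** 4
    return {
        "mean": mean,
        "rms": rms,
        "peak": peak,
        "crest": peak / rms,         # peak index
        "impulse": peak / abs_mean,  # pulse index
        "margin": peak / sra,        # margin index
        "shape": rms / abs_mean,     # waveform index
        "kurtosis": kurtosis,
    }

sine = [math.sin(2 * math.pi * i / 64) for i in range(64)]
feats = time_domain_features(sine)
```

For the pure sine used here the RMS is 1/√2, the crest factor √2, and the kurtosis 1.5; kurtosis-style indexes rise well above such baseline values only when impulsive spikes appear, which is why they suit early impulse-type faults.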
According to the four different states of the car trajectory, an identification framework Θ = {normal, X-direction vibration signal, Y-direction vibration signal, wear signal} is set up, and before evidence fusion it is determined whether the pieces of evidence conflict. If they do, the BP neural network method is used to improve the evidence combination for the conflicting evidence. In order to validate the effectiveness of the algorithm, vibration sensors and microphones are used to collect vibration and noise signals in the four different states of the car. In the experiment, multiple sets of data are collected in each state, and multiple points are sampled in each set. Three of the sets are randomly selected as the training set of the neural network and the remaining two serve as test sets. The specific sensor installation locations are determined by the selected lift. The specific route and scheme for extracting the fault features are as follows [11].
a. Fault feature extraction of vibration signals
According to the non-stationary nature of vibration signals, eight time-domain statistical parameters (RMS, peak, skewness, kurtosis, peak index, margin index, impulse index and waveform index) and the energy characteristics of the first eight IMF components decomposed by EEMD are selected as characteristic parameter space 1.
b. Fault feature extraction of noise signals
Wavelet packet analysis yields the low- and high-frequency characteristics of the decomposed noise signal. In this paper the wpdencmp function of the wavelet packet toolbox is used to de-noise the collected noise signal; a three-layer wavelet packet decomposition with coif5 as the basis function then gives eight frequency bands, and the energy of each band is extracted as characteristic parameter space 2.
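The end product of the wavelet-packet step is an 8-value band-energy vector. As a library-free stand-in, the sketch below slices a DFT spectrum into eight equal bands and sums the energy in each; this is a rougher split than a true coif5 wavelet packet and is shown only to make the feature vector concrete.

```python
import cmath
import math

def band_energies(signal, n_bands=8):
    """Split the 0..Nyquist spectrum into n_bands equal bands and return
    the energy in each (a rough stand-in for wavelet-packet band energies)."""
    n = len(signal)
    half = n // 2
    spectrum = [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n)))
                for k in range(half)]
    width = half // n_bands
    return [sum(s * s for s in spectrum[b * width:(b + 1) * width])
            for b in range(n_bands)]

# A low-frequency tone should concentrate its energy in the first band.
tone = [math.sin(2 * math.pi * 2 * t / 64) for t in range(64)]
e = band_energies(tone)
print([round(v, 2) for v in e])
```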
c. Construction of the neural network model
The structure of the BP neural network is determined by the characteristics of the input and output data. For BP sub-networks 1 and 2, the dimension of the feature parameter space is 16; for BP sub-network 3 it is 8. There are four classes of states to be identified. The BP algorithm is improved by using additional momentum and a variable learning rate.
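The two BP refinements mentioned, an additional momentum term and a variable learning rate, can be demonstrated on a toy one-parameter problem. The update below follows a common scheme (reject a step that raises the loss and cool the rate; otherwise accept and warm it); the constants are illustrative, not taken from the paper.

```python
def train_gdx(grad_loss, w0, lr=0.1, alpha=0.9, steps=300):
    """Gradient descent with momentum plus a variable learning rate:
    steps that raise the loss are rejected and the rate is reduced,
    accepted steps slightly increase it."""
    w, velocity = w0, 0.0
    _, prev_loss = grad_loss(w0)
    for _ in range(steps):
        g, _ = grad_loss(w)
        velocity = alpha * velocity - lr * g
        candidate = w + velocity
        _, loss = grad_loss(candidate)
        if loss > prev_loss:          # overshoot: undo the step, cool down
            lr *= 0.7
            velocity = 0.0
        else:                         # still descending: accept, speed up
            w, prev_loss = candidate, loss
            lr *= 1.05
    return w

# Toy quadratic loss (w - 3)^2 with gradient 2*(w - 3); the minimum is w = 3.
w_star = train_gdx(lambda w: (2.0 * (w - 3.0), (w - 3.0) ** 2), w0=0.0)
print(w_star)
```

In a full BP network the same rule is applied element-wise to every weight; momentum smooths the trajectory while the adaptive rate avoids hand-tuning a fixed step size.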

d. Fusion diagnosis based on D-S evidence theory
According to the normalized basic reliability distribution of the neural network's diagnosis results, the fusion diagnosis results of the network output are obtained using Dempster's combination rules and a decision method based on basic probability assignment. The information fusion fault diagnosis method based on the neural network and D-S evidence theory yields correct diagnosis results. When the two methods are combined, especially in multi-sensor information fusion diagnosis, the diagnostic accuracy is maximized and the uncertainty minimized; that is, multi-sensor information fusion can make full use of the redundant and complementary information of the sensors and effectively improve the reliability of fault diagnosis.
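Dempster's combination rule multiplies the two bodies of evidence, discards the conflicting mass K, and renormalises. A minimal sketch over the paper's four-state frame, with invented sensor masses:

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments over the same frame.
    Masses are dicts mapping frozenset hypotheses to belief mass."""
    conflict = 0.0
    combined = {}
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass lost to disagreement
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    k = 1.0 - conflict
    return {h: v / k for h, v in combined.items()}, conflict

N, X, Y, W = (frozenset([s]) for s in ("normal", "x-vib", "y-vib", "wear"))
m_vib = {X: 0.6, N: 0.3, W: 0.1}   # hypothetical vibration-sensor evidence
m_mic = {X: 0.5, N: 0.4, Y: 0.1}   # hypothetical microphone evidence
fused, K = dempster_combine(m_vib, m_mic)
print(fused, K)
```

Here K = 0.58; since both sensors favour x-direction vibration, fusion sharpens that hypothesis from 0.6 to about 0.71, which is exactly the redundancy-exploiting effect described above.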
A lift fault diagnosis method based on D-S data fusion and a neural network, shown in Fig. 3, is adopted. Multi-sensor fusion is used to extract fault signals from multi-source signals. The initial reliability distribution of the sensors is then initialized according to the membership degree of the data in each diagnostic category. The data collected by each sensor is taken as an evidence body, the combination rules are applied, and the final diagnosis results are obtained from the fusion results. Thus the uncertainty and inaccuracy of single-sensor fault diagnosis are overcome, and the accuracy of lift fault diagnosis and prediction is improved.

Fig. 3. Neural network based flow chart of evidence theory diagnosis and prediction

Before evidence fusion, it is judged whether the pieces of evidence conflict. If they do, the BP neural network method is used to improve the evidence combination of the conflicting evidence; otherwise they are fused according to Dempster's combination rule. The fusion process is shown in Fig. 4. According to the normalized basic reliability distribution of the neural network's diagnosis results, the fusion diagnosis results of the network output are obtained using Dempster's combination rules and a decision method based on basic probability assignment.

Fig. 4. Evidence-based fusion process

5 Conclusions

This research provides new theory and technology for early lift monitoring and has positive significance for ensuring lift safety. It has also been used for the analysis and fault diagnosis of simulated data and part of the actual data. Comprehensive research on, and engineering application of, monitorability design theory will help to advance condition monitoring and fault diagnosis technology for lift mechanical systems and improve the acquisition and identification of weak signals in their fault diagnosis. How to adapt the threshold according to the characteristics of the data flow is the next step of experimental research.

Acknowledgements. The work is supported by the Natural Science Research Major Project of higher education institutions of Jiangsu Province (grant no. 17KJA460001).

References
1. Jiang, T., Liu, G.: Lift safety monitoring and early warning information platform based on
internet of things. China J. Constr. Mach. 162–167 (2015)
2. Li, J., Li, L., Guo, X., Liu, L., Fang, J.: Design and application of service platform for
emergency rescue and disposal of lift. J. Saf. Sci. Technol. 133–138 (2016)

3. Lin, C., Wang, X., He, B., Li, Z.: Lift condition monitoring device based on current features.
Autom. Inform. Eng. 6–10 (2013)
4. Wang, J., Zhang, L.: Lift operation monitoring system based on wireless sensor network.
Sens. World, 31–34 (2012)
5. Jiang, X.: Research on vibration control of traction elevator. In: International Industrial
Informatics and Computer Engineering Conference, pp. 2144–2147. Atlantis Press (2015)
6. Jiang, X.: Research on intelligent elevator control system. Adv. Materials Res. 605–607,
1802–1805 (2012)
7. GB T 31821-2015 Specifications for discard of the main parts of elevators
8. Han, C., Zhu, H., Duan, Z.: Multi-source Information Fusion, 2nd edn. Tsinghua University
Press, Beijing (2010)
9. Han, D., Yang, Y., Han, C.: Advances in DS evidence theory and related discussions.
Control Decis. 1–11 (2014)
10. Guo, L., Jiang, X.: Research on horizontal vibration of traction elevator. In: International Workshop of Advanced Manufacturing and Automation, pp. 131–140. Springer, Singapore (2018)
11. Xu, J., Xu, L., Wang, H., Zheng, B., Tang, B.: Condition monitoring of elevators based on
vibration analysis. J. Mech. Electr. Eng. 279–283 (2019)
Design of Permanent Magnet Damper for Elevator

Xinyong Li1,2, Jian Wu1,2,3(&), Jianfeng Lu1,2, Peijun Jiao1,2, and Lanzhong Guo1,2

1 School of Mechanical Engineering, Changshu Institute of Technology, Changshu, China
[email protected]
2 Jiangsu Elevator Intelligent Safety Key Construction Laboratory, Changshu, China
3 Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, Trondheim, Norway

Abstract. With the continuous improvement of living standards, people place higher requirements on the reliability and safety of elevators. An elevator damper based on a permanent magnetic field is designed: the moving magnetic field forms an eddy current on the surface of a conductor (Cu), thereby decelerating the elevator car, while like-pole permanent magnets support the weight of the car. In this paper, the effects of falling speed, magnetic field strength and magnetic field distribution on deceleration time and distance are analyzed, and the magnet spring is also studied. The results show that the eddy current generated by the permanent magnetic field on the conductor surface can effectively decelerate the elevator. The combination of an alternating-pole magnetic field and a segmented conductor achieves the best deceleration time and distance. The deceleration time is proportional to the entry speed: the higher the speed, the stronger the generated eddy current field, the greater the impact, and the longer the deceleration time. After the speed is reduced to a minimum, the permanent magnet spring at the bottom can effectively support the weight of the entire car.

Keywords: Elevator · Permanent magnet damper · Eddy current · Design

1 Introduction

With the increase in the number of elevators and their years of use, people have become more and more demanding about the reliability of elevator safety systems. In general, an elevator safety system includes speed limiters, safety gears, rope grippers and dampers. Once the elevator fails, the safety devices (speed limiter, safety gear, rope gripper) ensure the safety of the elevator car. If all three safety devices fail, the car will eventually fall; the damper installed at the pit bottom is then called the last "safety line" of the elevator and is one of its necessary safety devices [1].
Dampers currently used in elevators come in two main forms: energy-storage and energy-consuming [2, 3]. An energy-storage damper refers to a spring damper, which is

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 169–175, 2020.
https://doi.org/10.1007/978-981-15-2341-0_21
170 X. Li et al.

cushioned by the elastic force of the spring; the spring returns to its original shape after the movement ends, and this is the most commonly used buffering method. Energy-consuming dampers generally refer to hydraulic dampers, which absorb an impact through the pressure of hydraulic oil [4]. The permanent magnet reducer uses a permanent magnet to form an eddy current field on the surface of a conductor; the two magnetic fields couple with each other to generate a force opposite to the motion, thereby achieving deceleration. This principle is widely used in brakes, buffers and related devices [5, 6].
In this paper, a magnetic eddy current damper is designed. The deceleration effect of different magnetic field combinations is analyzed, the time and distance of deceleration at different speeds are analyzed, and finally the magnet spring is simulated.

2 Structure and Principle

The structure of the elevator permanent magnet damper is shown in Fig. 1(a). The permanent magnet damper contains two parts: the eddy current permanent magnet reducers on both sides and the permanent magnet spring at the car bottom; the 3D model is shown in Fig. 1(b).

Fig. 1. The schematic of permanent magnet damper

As shown in Fig. 1, the permanent magnet damper consists of magnets and a conductor. The permanent magnets are installed on the inspection layer, at the bottom of the car and on the sidewall of the well; the conductors are mounted on the outer wall of the car. Under normal operation the car speed is low, the eddy current field in the conductor is weak and the resulting force is small, so the elevator works normally. When a disaster occurs and the car suddenly falls, its speed keeps increasing; the conductor, falling with the car, enters the magnetic field before the car reaches the bottom. The conductor cuts the magnetic field lines of the permanent magnets at high speed and induces an eddy current on the conductor surface. The two magnetic fields couple with each other to generate a force opposite to the direction of motion, forcing the car to slow down. The like-pole permanent magnets at the bottom provide support and eventually bring the car to a stop.

3 The Design and Analysis of Magnet

3.1 The Design of Eddy Current Reducer


The permanent magnet damper includes the permanent magnets and the conductor. The permanent magnets are placed in the elevator pit, which effectively reduces the impact on the surrounding environment; the conductor moves up and down with the elevator car. When the conductor experiences the eddy current braking force, the entire car is decelerated to a standstill. The eddy current relationship between the permanent magnets and the conductor determines the effect of the damper. The 3D model of the eddy current reducer is shown in Fig. 2. Each permanent magnet is 300 × 50 × 50 mm, made of NdFeB; there are 10 pieces of the same size. The conductor is made of Cu, 900 × 600 × 30 mm, one piece.

Fig. 2. The 3D model of eddy current reducer

3.2 Design of Permanent Magnet Spring

The permanent magnet spring is composed of a static ring, a moving ring and an auxiliary port, and adopts a symmetrical structure; a single-group permanent magnet spring model is shown in Fig. 3. The small ring magnet is fixed to a non-magnetic disc-shaped base made of Al, and the shaft is vertically fixed by the base; the permanent magnet is NdFeB with remanence Br = 1.17 T.

Fig. 3. The 3D model of permanent magnet spring



In order to verify the feasibility of the scheme, a permanent magnet buffer simulation device was designed and manufactured. Its main structural parameters are: weight 30 kg, car size 300 × 300 × 300 mm, car travel 1200 mm.

4 Results and Discusses

4.1 Effect of Magnetic Arrangement

The arrangement of the permanent magnets directly affects the stopping time and distance of the permanent magnet damper. The magnet arrangements are shown in Fig. 4: (a) like poles horizontally and alternating poles vertically; (b) alternating poles both horizontally and vertically; (c) like poles horizontally in pairs, then changing to alternating poles; (d) alternating poles horizontally and like poles vertically. The conductor arrangements are shown in Fig. 5, in 4 types: (a') one block; (b') 4 blocks; (c') several blocks; (d') 8 blocks. The different combinations are denoted a–a', a–b', and so on.

Fig. 4. Arrangement of the permanent magnets

Fig. 5. Arrangement of the conductors

Figure 6 shows the force curves of the 16 different combinations over the buffer travel. The braking forces of the different permanent magnet and conductor combinations increase to different degrees as the distance of action increases. The maximum value, 1500 N, occurs for a–b', but that combination is unstable. The overall picture shows that a–c' and b–c' rise relatively smoothly and meet the requirements for use. Considering processing and installation costs, the combination b–c' (like-pole magnets with multiple conductors uniformly distributed vertically) is used here. The magnetic field distribution during motion is shown in Fig. 7.

Fig. 6. Magnet force vs. position

Fig. 7. Magnetic field of magnet and conductor

4.2 Effect of Falling Speed

The permanent magnet damper has different stopping times for different drop speeds, as shown in Fig. 8. The deceleration trend is basically the same for all initial speeds: the speed drops quickly when deceleration starts, and the curve flattens as the speed continues to decrease. The larger the initial speed, the greater the initial deceleration and the longer the stop takes. Extracting from Fig. 8 the time at which the speed reaches 0 m/s and plotting it gives Fig. 9, which shows that the stopping time is basically linear in the initial speed.

Fig. 8. Different speeds versus time

Fig. 9. Stopping time versus initial speed
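A first-order idealisation of the eddy-current brake treats the retarding force as proportional to speed, m·dv/dt = −c·v. Integrating this numerically reproduces the qualitative shape of Fig. 8: a steep initial drop that flattens out, with higher entry speeds taking longer to stop. The mass and drag coefficient below are illustrative, not fitted to the experiment.

```python
def stopping_time(v0, mass=30.0, c=60.0, dt=1e-3, v_stop=0.05):
    """Euler integration of m*dv/dt = -c*v; returns the time needed to
    slow from v0 down to the threshold v_stop (m/s)."""
    v, t = v0, 0.0
    while v > v_stop:
        v += -(c / mass) * v * dt
        t += dt
    return t

times = {v0: stopping_time(v0) for v0 in (1.0, 2.0, 4.0)}
for v0, t in times.items():
    print(f"v0 = {v0} m/s -> stop in {t:.3f} s")
```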

4.3 Permanent Magnet Spring Support Effect

There are 9 pairs of like-pole permanent magnets at the bottom of the car to support the car so that it remains stationary and suspended. The 9 pairs were simulated with finite element software. The results show that the interaction between any like-pole pair is a repulsive force whose magnitude depends mainly on the properties of the magnets themselves and the distance between them. In order to select the most suitable magnetic field distribution and magnitude, the coupled magnetic field was simulated as a function of spacing; the results are shown in Fig. 10.

Fig. 10. Magnetic field of the magnet spring

Fig. 11. Spring force vs. distance

Figure 11 shows the repulsive force generated by five sets of magnets with different diameters as the spacing increases. For any area with the same magnetic field, the repulsive force decreases with increasing spacing in a roughly exponential decline: the initial drop is pronounced and the later stage tends to be stable. When the distance reaches a certain value, the repulsive force effectively disappears. Increasing the contact area is the best way to increase the repulsive force. The figure shows that when the coupling area reaches 50 mm, the maximum repulsive force at a spacing of 5 mm reaches 1800 N (about 180 kg), much larger than the 30 kg requirement. In actual elevator use a certain assembly error must be allowed, so the spacing is set to 20 mm; according to the figure, a contact area of 40 mm then still meets the requirements.
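The exponential decline described above can be captured with a simple model F(d) = F_ref·exp(−(d − d_ref)/λ). Anchoring it at the measured 1800 N at 5 mm and choosing a hypothetical decay length λ = 10 mm lets one check the support margin against the 30 kg test car; λ is an assumption for illustration, not a value fitted to the simulation.

```python
import math

G = 9.81  # m/s^2

def spring_force(gap_mm, f_ref=1800.0, gap_ref_mm=5.0, decay_mm=10.0):
    """Exponential-decay model of the magnet-spring repulsion:
    F(d) = F_ref * exp(-(d - d_ref) / decay). F_ref = 1800 N at 5 mm
    follows the measurement above; the decay length is hypothetical."""
    return f_ref * math.exp(-(gap_mm - gap_ref_mm) / decay_mm)

car_weight_n = 30.0 * G  # weight of the 30 kg test car, N
print(round(spring_force(20.0), 1), "N vs", round(car_weight_n, 1), "N")
```

With these numbers the force at the 20 mm working gap still exceeds the car's weight, which is the margin the assembly-error allowance is meant to preserve.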

5 Conclusion
(1) A permanent magnet elevator damper is designed, in which deceleration is achieved by coupling the side eddy current field with the permanent magnets, and suspension of the elevator car is realized by the permanent magnet spring installed at the bottom.
(2) The eddy current reducer is optimized: various combinations are designed, the eddy currents they generate are compared, and the best arrangement of permanent magnets and conductor array is selected.
(3) The stopping time of the eddy current buffer at different falling speeds is analyzed. The falling speed of the car decays non-linearly: the faster the speed, the more obvious the deceleration and the longer it lasts. The stopping time is substantially linear in the initial speed.
(4) The permanent magnet spring installed at the bottom of the car is analyzed, and permanent magnets of different diameters are simulated. The supporting force of the permanent magnet spring decreases nonlinearly with increasing spacing; the larger the magnetic field strength, the larger the supporting force.

Acknowledgments. The work is supported by the Jiangsu Government Scholarship for Overseas Studies, the Open Project of the Jiangsu Elevator Intelligent Safety Key Construction Laboratory (JSKLESS201703), and the Doctoral Science Foundation of Changshu Institute of Technology (No. KYZ2015054Z).

References
1. Husmann, J.: Elevator car frame vibration damping device. Google Patents (2005)
2. Uchida, T., Sato, S., Nakagawa, T.: A proposal of control method for an elevator emergency
stop device with a magnetic rheological fluid damper. IEEE Trans. Magn. 50(11), 1–4 (2014)
3. Qiong, Z.: Application of piston accumulator in oil buffer. Fluid Power Transm. Control 3,
25–28 (2006)
4. Yao, R., Zhu, C., Zhan, Y., et al.: Impact research on the oil buffer with air accumulator.
Zhendong yu Chongji/J. Vib. Shock 25(1), 153–155 (2006)
5. Gulec, M., Aydin, M., Lindh, P., et al.: Investigation of braking torque characteristic for a double-stator single-rotor axial-flux permanent-magnet eddy-current brake. In: 2018 XIII International Conference on Electrical Machines (ICEM), pp. 793–797. IEEE (2018)
6. Chen, Q., Tan, Y., Li, G., et al.: Design of double-sided linear permanent magnet eddy current
braking system. Prog. Electromagnet. Res. 61, 1–73 (2017)
Design and Simulation of Hydraulic Braking System for Loader Based on AMESim

Junjun Liu1,2(&), Lanzhong Guo1,2, Jiaxin Ma1,2, Yang Ge1,2, Jian Wu1,2, and Zhanrong Ma3

1 Changshu Institute of Technology, Changshu, China
[email protected]
2 Jiangsu Key Laboratory for Elevator Intelligent Safety, Changshu, China
3 China University of Geosciences, Beijing, China

Abstract. Loaders are developing toward heavy loads, high speeds and intelligent operation, which places higher demands on their braking systems. Power braking systems are widely used in engineering equipment because of their superior braking performance and reliability. Based on an analysis of the working principle of the hydraulic braking system, this paper establishes its mathematical model, builds a simulation model of the system on the AMESim simulation platform, and carries out a simulation analysis. The results show that the pressure rise time is 0.1 s once the friction disc contacts the brake disc, and the pressure drop time is 0.1 s when the brake is released; when the brake pedal is pressed, the brake pressure rise can be divided into three stages; the accumulator can support about four braking operations before the next charging; and the piston reaches its 0.8 mm limit position within 0.1 s.

Keywords: Loader · Hydraulic braking system · Mathematical modeling · Simulation

1 Introduction

Construction machinery is an important part of the equipment industry. It provides the mechanical equipment necessary for earthwork, pavement construction and maintenance, mobile crane loading and unloading, and the comprehensive mechanized construction required by various projects. The loader is a kind of earthmoving machinery widely used in highway, railway, building, hydropower, port, mine and other construction projects. With the development of China’s industry, loaders are moving toward heavy loads, high speeds and automation, which places ever higher requirements on the controllability, reliability, stability, response speed and control accuracy of their brake systems.
The function of the braking system is to exert resistance on the machine so that it can slow down or stop, and it therefore plays an important role in ensuring personal and mechanical safety. Wheeled machinery in particular sometimes operates on the highway, where it must frequently slow down, stop, or even brake in an emergency. The machinery is therefore required to have

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 176–184, 2020.
https://doi.org/10.1007/978-981-15-2341-0_22
Design and Simulation of hydraulic braking system for Loader Based on AMESim 177

good braking performance; otherwise, major accidents may occur. In addition, depending on the operation and the terrain, braking on long downhill grades, parking on slopes and long-term parking on site also require wheel-braking or parking-braking devices with good mechanical performance. Hydraulic braking systems and other power braking systems are widely used in construction machinery [2, 3] because of their superior braking performance and reliability.
Because of its high reliability and good stability, the full hydraulic brake system has been widely used in engineering equipment. It has the following characteristics: it does not use gas as a medium, so its structure is greatly simplified compared with a gas-hydraulic braking system and it is easier to maintain; the braking force and torque generated are larger and easier to place under electronic control; the system is fully closed, so oil does not leak easily and pollution of the environment is reduced; an accumulator stores energy, so emergency braking after a power cut can be realized; the volume and number of components are relatively small, which facilitates the spatial arrangement of the system’s parts; the braking force is linear with the force applied to the pedal, which helps the driver operate normally; and because the hydraulic oil used as the working medium is nearly incompressible, a large braking force can be transferred in a very short time, giving fast and reliable action. However, the full hydraulic brake system is more expensive than the gas-hydraulic brake system, so its sales in the domestic market are modest; nevertheless, the overall trend is that the brake systems of construction machinery are developing toward full hydraulic braking [4, 5].
To demonstrate the reliability and fast response of the hydraulic braking system, starting from the design of the hydraulic system itself, this paper establishes the mathematical model of the hydraulic braking system and builds its simulation model on the AMESim simulation platform to study the pressure rise and fall, the corresponding times, and the piston stroke of the system.

2 Hydraulic Braking System Simulation

The principle of the hydraulic braking system is shown in Fig. 1.


Its working principle is as follows. The system pressure is maintained by accumulators, and each braking circuit is equipped with its own accumulator. When the oil pressure in an accumulator falls below the set minimum working pressure of the system, the charging valve feeds hydraulic oil from the brake pump into the accumulator. When the oil pressure in the accumulator reaches the set maximum working pressure of the system, the charging valve stops supplying oil to the brake system and turns to supplying the next hydraulic system. When the brake pedal is depressed, high-pressure oil flows out of the accumulator through the brake valve to the brake, generating the braking force and braking torque. When the hydraulic oil in the parking brake flows back to the tank, the spring force presses the friction plate onto the brake disc and the piston in the parking brake brings the machine to a stop. To release the parking brake, the high-pressure oil stored in the accumulator of the parking brake circuit enters the parking brake through the parking brake valve and pushes the piston back to compress the spring in the parking brake, thus separating the friction disc from the brake disc. In a shoe-type brake system, the piston of the parking brake cylinder is reset under the spring tension, driving the pull rod of the parking brake so that the shoes open and press tightly against the brake disc for braking. When the parking brake is released, the high-pressure oil in the system flows from the accumulator into the parking brake cylinder and drives the piston back to compress the spring in the parking brake cylinder, relaxing the pull rod of the shoe-type parking brake so that the brake shoe and the brake disc separate [6].

Fig. 1. Principle of hydraulic wet brake system

3 Mathematical Model of Hydraulic Braking System

3.1 Simplification of Braking System


The front and rear axle circuits of the loader brake system have the same structure and function, so only one circuit needs to be considered when building the mathematical model. The accumulator is charged by a pump through a charging valve. Once the accumulator is full, the charging valve stops supplying oil, and it resumes only when the accumulator pressure falls to its set value. Therefore, when simplifying the system, the influence of the pump and the charging valve on the pipeline downstream of the accumulator can be neglected, and the accumulator can be regarded as acting on the system directly. Because the pipeline of the system is very complex, its influence on the system is also ignored. The simplified schematic diagram of the full hydraulic braking system is shown in Fig. 2.

Fig. 2. Simplified hydraulic braking system

3.2 Mathematical Model of Simplified System

(1) Outlet pressure of the accumulator

The accumulator is a bladder-type hydraulic accumulator. The following assumptions are made in the accumulator model: (1) the charging and discharging processes are fast, so the change of gas pressure and volume is approximately adiabatic; (2) compared with the gas, the compressibility of the oil is neglected; (3) the oil flow in the accumulator is laminar; (4) the pipeline connecting the accumulator to the system strongly influences the accumulator’s behavior, so it is regarded as part of the accumulator in the model [7].
The force balance equation of the accumulator is:

$$(P_s - P_a)A_a = \frac{1}{A_a}\left(m_a\frac{dQ_s}{dt} + B_a Q_s\right) \qquad (1)$$

Differentiating both sides with respect to time:

$$\frac{dP_s}{dt} - \frac{dP_a}{dt} = \frac{1}{A_a^2}\left(m_a\frac{d^2 Q_s}{dt^2} + B_a\frac{dQ_s}{dt}\right) \qquad (2)$$

The flow continuity equation of the accumulator is:

$$Q_s = -\frac{dV_a}{dt} \qquad (3)$$

From the adiabatic gas law, $P_a V_a^n = P_{a0} V_{a0}^n = \text{const}$. Differentiating both sides with respect to time:

$$n P_a \frac{dV_a}{dt} = -V_a \frac{dP_a}{dt} \quad\text{or}\quad \frac{dP_a}{dt} = -\frac{n P_a}{V_a}\frac{dV_a}{dt} \qquad (4)$$

Substituting (4) and (3) into (2):

$$\frac{dP_s}{dt} = \frac{1}{A_a^2}\left(m_a\frac{d^2 Q_s}{dt^2} + B_a\frac{dQ_s}{dt}\right) + \frac{n P_a}{V_a} Q_s \qquad (5)$$

Substituting $V_a = V_{a0}\left(\frac{P_{a0}}{P_a}\right)^{1/n}$ into (5):

$$\frac{dP_s}{dt} = \frac{1}{A_a^2}\left(m_a\frac{d^2 Q_s}{dt^2} + B_a\frac{dQ_s}{dt}\right) + \frac{n P_a}{V_{a0}}\left(\frac{P_a}{P_{a0}}\right)^{1/n} Q_s \qquad (6)$$

Formula (6) is the relationship between the outlet pressure of the accumulator and its outlet flow rate.
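The adiabatic relation of Eq. (4) can be illustrated numerically. The sketch below (all parameter values are illustrative assumptions, not taken from the paper) tracks the accumulator gas pressure as oil is discharged at a constant flow rate, the gas bladder expanding as the oil leaves (Eq. (3)):

```python
# Adiabatic discharge of a bladder accumulator: P * V^n = const (Eq. 4).
# All parameter values below are illustrative assumptions, not from the paper.

def gas_pressure(P0, V0, V, n=1.4):
    """Gas pressure after the gas volume changes from V0 to V (adiabatic)."""
    return P0 * (V0 / V) ** n

def discharge(P0, V0, Qs, dt, steps, n=1.4):
    """Track gas pressure while oil leaves at a constant flow rate Qs.

    Oil leaving the accumulator (Qs = -dVa/dt, Eq. 3) lets the gas
    bladder expand, so the gas volume grows by Qs*dt each step.
    """
    V, history = V0, []
    for _ in range(steps):
        V += Qs * dt          # gas expands as oil is expelled
        history.append(gas_pressure(P0, V0, V, n))
    return history

# Pre-charge 136 bar in a 1 L bladder; discharge 0.05 L/s for 2 s.
p = discharge(P0=136e5, V0=1e-3, Qs=0.05e-3, dt=0.1, steps=20)
print(round(p[-1] / 1e5, 1))  # pressure in bar after 0.1 L of oil released
```

The monotonic, nonlinear pressure drop is the mechanism behind the roughly four braking operations per charge reported in the results section.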
(2) Flow rate of fluid into the brake cylinder

Neglecting the influence of the pipelines from the accumulator to the brake valve and from the brake valve to the brake cylinder, it is assumed that the oil is incompressible and that the pipelines, components and their interfaces are fully sealed. Equation (7) represents the flow of oil from the accumulator through the brake valve into the brake cylinder:

$$Q_s = C_d A_{b1}\sqrt{\frac{2|P_s - P_L|}{\rho}}\,\operatorname{sign}(P_s - P_L) \qquad (7)$$

where sign is the symbolic function:

$$\operatorname{sign}(a) = \begin{cases} -1 & (a < 0) \\ 0 & (a = 0) \\ 1 & (a > 0) \end{cases}$$

(3) Flow from the brake cylinder to the oil tank

Similarly, the flow equation from the brake cylinder to the oil tank has the same form as that for the flow into the brake cylinder:

$$Q_s = C_d A_{b2}\sqrt{\frac{2|P_L|}{\rho}}\,\operatorname{sign}(P_L) \qquad (8)$$
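Equations (7) and (8) are standard signed orifice-flow relations and can be sketched in a few lines. The discharge coefficient, orifice area and oil density below are illustrative assumptions, not values from the paper:

```python
import math

RHO = 850.0  # hydraulic oil density in kg/m^3 (illustrative assumption)

def sign(a):
    """Symbolic function used in Eqs. (7) and (8)."""
    return (a > 0) - (a < 0)

def orifice_flow(Cd, A, dP, rho=RHO):
    """Signed turbulent orifice flow: Q = Cd * A * sqrt(2|dP|/rho) * sign(dP)."""
    return Cd * A * math.sqrt(2.0 * abs(dP) / rho) * sign(dP)

# Flow from the accumulator (Ps) into the brake cylinder (PL), Eq. (7):
Qs = orifice_flow(Cd=0.7, A=2e-5, dP=136e5 - 20e5)
print(round(Qs * 1000, 3))  # flow in L/s
```

The sign function makes the same expression cover both filling (Eq. 7) and emptying (Eq. 8) of the brake cylinder.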

The brake valve is a spool valve whose control orifice is a round hole, so the valve opening is not rectangular: the area gain of the orifice is nonlinear and the flow area is a circular-segment (arcuate) area. The formula for the segment area is [8]:

$$A_y = \frac{d^2}{4}\left[\cos^{-1}\left(1 - \frac{2y}{d}\right) - 2\left(1 - \frac{2y}{d}\right)\sqrt{\frac{y}{d} - \left(\frac{y}{d}\right)^2}\,\right] \qquad (9)$$

Therefore, the flow area from port P to port T is:

$$A_{b1} = \begin{cases} \dfrac{d^2}{4}\left[\cos^{-1}\left(1 - \dfrac{2(X_b - x_1)}{d}\right) - 2\left(1 - \dfrac{2(X_b - x_1)}{d}\right)\sqrt{\dfrac{X_b - x_1}{d} - \left(\dfrac{X_b - x_1}{d}\right)^2}\,\right] & (X_b - x_1 > 0) \\ 0 & (X_b - x_1 \le 0) \end{cases} \qquad (10)$$

Similarly, the flow area from port T to port A is:

$$A_{b2} = \begin{cases} \dfrac{d^2}{4}\left[\cos^{-1}\left(1 - \dfrac{2(x_2 - X_b)}{d}\right) - 2\left(1 - \dfrac{2(x_2 - X_b)}{d}\right)\sqrt{\dfrac{x_2 - X_b}{d} - \left(\dfrac{x_2 - X_b}{d}\right)^2}\,\right] & (x_2 - X_b > 0) \\ 0 & (x_2 - X_b \le 0) \end{cases} \qquad (11)$$
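A small helper for the circular-segment area of Eq. (9) might look like the sketch below; clamping the opening at zero reproduces the piecewise branches of Eqs. (10) and (11). The 4 mm orifice diameter is an illustrative assumption:

```python
import math

def segment_area(y, d):
    """Circular-segment flow area of a round orifice, Eq. (9).

    y is the uncovered height of the hole (0 <= y <= d), d its diameter.
    A non-positive opening gives zero area (lower branch of Eqs. 10/11).
    """
    if y <= 0.0:
        return 0.0
    y = min(y, d)  # fully open beyond y = d
    r = y / d
    return d**2 / 4.0 * (math.acos(1.0 - 2.0 * r)
                         - 2.0 * (1.0 - 2.0 * r) * math.sqrt(r - r * r))

d = 4e-3  # 4 mm orifice (illustrative)
print(segment_area(d, d) / (math.pi * d**2 / 4))      # fully open: ratio ~ 1.0
print(segment_area(d / 2, d) / (math.pi * d**2 / 4))  # half open: ratio ~ 0.5
```

The two sanity checks confirm Eq. (9): the segment area equals the full circle at y = d and half the circle at y = d/2.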

(4) Motion of the brake valve spool

The only moving part of the brake valve is its spool. Figure 3 shows the force diagram of the spool. Ignoring Coulomb friction, the spool is subjected to four forces: the pedal force transmitted to the spool through the upper (force-transfer) spring, $K_i(X_i - X_b)$; the spool reset spring force $K_f(X_{f0} + X_b)$; the feedback oil pressure force $P_L A_b$; and the viscous damping force $B_b v_b$. The dynamic characteristics of the spool can be described by two first-order differential equations.

Fig. 3. Force diagram of brake valve spool

$$\frac{dv_b}{dt} = \frac{1}{m_b}\left[K_i(X_i - X_b) - K_f(X_{f0} + X_b) - P_L A_b - B_b v_b\right] \qquad (12)$$

$$\frac{dX_b}{dt} = v_b \qquad (13)$$

(5) Braking pressure in the brake cylinder

Neglecting the pipeline between the brake valve and the brake cylinder, the pressure equation of the sealed chamber is:

$$\frac{dP_L}{dt} = \frac{\beta_e}{V_L}\left[Q_s - Q_t - A_L v_L - C_e P_L\right] \qquad (14)$$

(6) Motion of the piston in the brake cylinder

Ignoring Coulomb friction, the piston of the brake cylinder is subjected to three forces: the brake oil pressure force, the piston reset spring force and the viscous damping force of the piston motion. Its dynamic characteristics can be described by the following first-order differential equations:

$$\frac{dv_L}{dt} = \frac{1}{m_L}\left[P_L A_L - B_L v_L - K_L(X_{L0} + X_L)\right] \qquad (15)$$

$$\frac{dX_L}{dt} = v_L \qquad (16)$$
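Equations (15) and (16) form a linear second-order system that is easy to integrate numerically. The sketch below uses forward Euler with illustrative parameter values (not the paper's data); with a constant brake pressure, the piston should settle where the spring force balances the hydraulic force:

```python
# Forward-Euler integration of the brake-cylinder piston, Eqs. (15)-(16).
# All parameter values are illustrative assumptions, not from the paper.

def simulate_piston(PL, AL=1e-3, mL=0.5, BL=2e3, KL=2e5, XL0=5e-3,
                    dt=1e-5, t_end=0.2):
    """Integrate dvL/dt = (PL*AL - BL*vL - KL*(XL0 + XL))/mL, dXL/dt = vL."""
    vL, XL = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        aL = (PL * AL - BL * vL - KL * (XL0 + XL)) / mL
        vL += aL * dt
        XL += vL * dt
    return XL

# With constant pressure, the equilibrium follows from Eq. (15) with
# vL = 0:  XL_eq = PL*AL/KL - XL0  (= 5 mm for the values below).
PL = 20e5  # 20 bar brake pressure
print(round(simulate_piston(PL) * 1000, 2))  # settled displacement in mm
```

Comparing the integrated end position against the closed-form equilibrium is a quick consistency check of the model before running the full AMESim-style simulation.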

4 Results and Discussion

According to the working principle of the hydraulic braking system and the mathe-
matical model mentioned above, the simulation model of the hydraulic braking system
is established, as shown in Fig. 4.

Fig. 4. Simulation model of full hydraulic braking system



Fig. 5. Input signal (displacement of the ejector under the pedal)
Fig. 6. Brake pressure response curve in the brake cylinder

The brake pressure curve in the brake cylinder is shown in Fig. 6 (the input signal is shown in Fig. 5). It can be seen that the maximum braking force in the brake cylinder is 91242.4 N. From the moment the friction pad contacts the brake disc, the pressure rise time is 0.1 s, and the pressure drop time is 0.1 s when the brake pedal is released. It can also be seen that when the brake pedal is pressed, the brake pressure rises in three stages.
The pressure curve of the accumulator is shown in Fig. 7. After one braking operation, the accumulator pressure decreases from 136 bar to 102.72 bar. Thus, under ideal conditions, after the accumulator is fully charged it can support about four braking operations before the next charging.

Fig. 7. Pressure curve in the accumulator
Fig. 8. Displacement of the brake cylinder piston

The displacement curve of the piston in the brake cylinder is shown in Fig. 8. The limit displacement of the piston is 0.8 mm, and the piston reaches this limit position within 0.1 s, a very short time.

5 Conclusions

Based on the analysis of the working principle of the loader’s hydraulic braking system, this paper establishes a mathematical model, builds a simulation model with the aid of the AMESim simulation platform, and studies the reliability and fast response of the system. The results provide a useful reference for the design and research of the braking systems of loaders and other construction machinery.

References
1. Han, W., Lu, X., Yu, Z.: Braking pressure control in electro-hydraulic brake system based on
pressure estimation with nonlinearities and uncertainties. Mech. Syst. Sign. Process. 131(15),
703–727 (2019)
2. Li, H.C., Yu, Z.P., Xiong, L., Han, W.: Hydraulic control of integrated electronic hydraulic
brake system based on LuGre friction model. SAE Technical Paper No. 2017-01-2513 (2017)
3. Zhong, L.: Hydraulic Drive Principle. Fault Diagnosis and Elimination of Construction
Machinery. Machinery Industry Press, Beijing (2018)
4. Xinwei, Z., Hong, Z.: Dynamic characteristic simulation of valve-controlled hydraulic
system. Mech. Electr. Eng. 28(1), 47–50 (2011)
5. Yinding, L., Pengwei, Q., Lei, W.: Design and analysis of loader hydraulic system based on
FluidSIM. Equipment Manuf. Technol. 5, 16–20 (2018)
6. Todeschini, F., Corno, M., Panzani, G., Fiorenti, S., Savaresi, S.M.: Adaptive cascade control
of a brake-by-wire actuator for sport motorcycles. IEEE/ASME Trans. Mechatron. 20(3),
1310–1319 (2015)
7. Yu, Z.P., Han, W., Xiong, L., Xu, S.Y.: Hydraulic pressure control system of integrated
electro-hydraulic brake system based on Byrnes-Isidori normalized form. J. Mech. Eng.
52(22), 92–100 (2016)
8. Lv, C., Wang, H., Cao, D.P.: High-precision hydraulic pressure control based on linear
pressure-drop modulation in valve critical equilibrium state. IEEE Trans. Ind. Electron.
64(10), 7984–7993 (2017)
Dynamic Modeling and Tension Analysis
of Redundantly Restrained Cable-Driven
Parallel Robots Considering Dynamic
Pulley Bearing Friction

Zhenyu Hong, Xiaolei Ren(&), Zhiwei Xing, and Guichang Zhang

School of Aeronautic Engineering, Civil Aviation University of China,
Tianjin 300300, China
[email protected], [email protected]

Abstract. Cable-driven parallel robots (CDPRs) are widely used in many industrial fields, especially in specialized fields requiring high-precision control. This paper presents the dynamic modeling and cable tension analysis of redundantly restrained cable-driven parallel robots (RRPRs). The Coulomb and Dahl friction models are applied to predict the friction between the cable and the pulley, and a dynamic equation of the RRPR considering the dynamic pulley bearing friction is derived from them. For the two friction models, the influence of various parameters on the tension and friction is analyzed. It is demonstrated that the Dahl friction model has high accuracy when the moving platform is at a low speed or the speed direction changes rapidly, so that the friction and the cable tension transition smoothly. In particular, the Dahl friction model better describes the actual change of the friction.

Keywords: RRPR · Dynamic modeling · Coulomb friction model · Dahl friction model · Tension analysis

1 Introduction

Cable-driven parallel robots (CDPRs) are a special class of parallel manipulators in which the end-effector is connected to the base platform through a number of cables. CDPRs have several advantages compared to traditional serial or parallel robots: they have a high payload-to-weight ratio and a larger workspace [1], making them appropriate for high-speed applications [2] and aircraft spraying [3].
The end-effector is negatively affected by the elasticity, flexibility and friction,
which will affect the speed and control accuracy of the end-effector [4]. Pott et al. [5]
used a force sensor to measure the tension on the cable and verified that friction is the
main factor to influence the cable tension. Miyasaka et al. [6] did experiments on
different pulleys and proved that Coulomb friction significantly affects the operational
accuracy of RAVEN-II robots. Heo et al. [7] established the relationship between the
cable tension and friction by using Coulomb friction model and based on this, the
dynamic workspace of the CDPR was calculated. However, in the state of motion, the

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 185–191, 2020.
https://doi.org/10.1007/978-981-15-2341-0_23
186 Z. Hong et al.

friction between the cable and the pulley cannot be adequately described by the Coulomb friction model. When the end-effector moves slowly or the direction of motion changes rapidly, the pre-sliding range between the cable and the pulley is affected [8], so the Coulomb friction model is not suitable for high-precision control [9] and a dynamic friction model must be introduced to describe the friction.
In this paper, based on the Coulomb friction model and the Dahl friction model [10], mathematical models of the friction are established, and the dynamic equations of the RRPR considering the dynamic pulley bearing friction are derived. The influence of various parameters on the tension and friction is then analyzed.

2 Dynamic Modeling

Due to the cable actuation, CDPRs are not very similar to rigid-link parallel-actuated robots in dynamic modeling [11]. For simplicity, a 6-DOF RRPR model in which the end-effector is connected to the base through 8 cables is shown in Fig. 1. Cable i (i = 1, 2, …, 8) is routed from the motor over the fixed pulley Bi and connected to the end-effector. The coordinate systems and the parameters defining the system are also presented.

Fig. 1. 6-DOF cable robot with 8 parallel cables (base attachment points B1–B8, end-effector attachment points A1–A8, gravity meg, global frame OXYZ and end-effector frame OeXeYeZe)

The dynamic equation of the end-effector based on force screw theory can be written as follows:

$$M(X)\ddot{X} + N(X,\dot{X}) - W_g - W_e = -J^T T \qquad (1)$$

where $M(X)$ is the mass term, $N(X,\dot{X})$ is the velocity term, $W_g = m_e g$ is the generalized gravitational term, and $W_e$ is the external load term of the end-effector.

3 Friction Model Construction and Simulation


3.1 Coulomb Friction Model
Before using the Coulomb friction model to predict the friction between the cable and the pulley, it is necessary to assume that the rotation of the pulley is driven by this friction.

Fig. 2. Friction model of the pulley (cable tensions Tj and Tj+1, normal force FN,j, wrap angle αj = 90° + δ between the cable and pulley j)

As shown in Fig. 2, the tensions on the two sides of pulley j can be expressed as $T_j$ and $T_{j+1}$. Based on Coulomb friction, the cable tension $T_j$ can be expressed as:

$$T_j = T_{j+1} + \mu_j F_{N,j}\operatorname{sgn}(v) \qquad (2)$$

where $F_{N,j}$ is the normal force on the pulley, which can be derived from the cosine law:

$$F_{N,j} = \sqrt{T_{j+1}^2 + 2T_{j+1}T_j\cos(\alpha_j) + T_j^2} \qquad (3)$$

$$F_{c,j} = \mu_j F_{N,j} \qquad (4)$$

where $\alpha_j$ is the wrap angle of the cable on the pulley. According to Eq. (3), the friction depends on the wrap angle and the cable tension. Because the end-effector moves, the angle $\alpha_j$ must be calculated from the kinematics. Since the angles of the vector triangle sum to 180°, the wrap angle is $\alpha_j = 90° + \delta$, with

$$\delta = \arctan\left(\frac{L_{i,x}}{L_{i,y}}\right) \qquad (5)$$

where $L_{i,x}$ and $L_{i,y}$ are the components of the ith cable length vector along the X direction and Y direction, respectively.

Combining Eqs. (3) and (4) with the tension ratio $\eta_j = T_{j+1}/T_j$, the Coulomb friction model is:

$$F_{c,j} = \mu_j\sqrt{\eta_j^2 + 2\eta_j\cos(\alpha_j) + 1}\; T_j = \lambda_j T_j \qquad (6)$$

where $F_{c,j}$ is the Coulomb friction acting on the jth pulley. According to Eq. (6), the dynamic equation based on the Coulomb friction model is:

$$M(X)\ddot{X} + N(X,\dot{X}) = W_e + W_g + J^T\left[1 + \lambda_j\operatorname{sgn}(v_j)\right]T \qquad (7)$$
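Equations (2)–(6) can be combined in a few lines. The sketch below (all numeric values are illustrative, and η is taken as the tension ratio across the pulley) computes the wrap angle from Eq. (5) and the friction-corrected tension on the far side of a pulley:

```python
import math

def wrap_angle(Lx, Ly):
    """Wrap angle alpha_j = 90 deg + delta, with delta from Eq. (5), in radians."""
    return math.pi / 2.0 + math.atan(Lx / Ly)

def coulomb_gain(mu, eta, alpha):
    """lambda_j of Eq. (6): Coulomb friction per unit of tension T_j."""
    return mu * math.sqrt(eta**2 + 2.0 * eta * math.cos(alpha) + 1.0)

def far_side_tension(Tj, mu, eta, alpha, v):
    """T_{j+1} from Eq. (2) with F_{c,j} = lambda_j * T_j (Eq. 6)."""
    sgn = (v > 0) - (v < 0)
    return Tj - coulomb_gain(mu, eta, alpha) * Tj * sgn

alpha = wrap_angle(Lx=1.0, Ly=2.0)  # cable length components (assumed)
T_next = far_side_tension(Tj=100.0, mu=0.06, eta=0.9, alpha=alpha, v=1.0)
print(round(T_next, 2))  # tension on the far side of the pulley, in N
```

With positive cable speed the far-side tension drops below the motor-side tension, matching the behavior shown for Fig. 3; with v = 0 the Coulomb term vanishes and both tensions coincide.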

3.2 Simulation of the Coulomb Friction Model

According to Eq. (6), the Coulomb friction is affected by the friction coefficient μ, the cable tension and the wrap angle α. To analyze the influence of the friction coefficient and the cable tension on the Coulomb friction, μ was set to 0.04, 0.06 and 0.08. Figure 3 shows the variation of the tension with the failure factor fixed at η = 0.9: when the cable speed is positive, the actual cable tension is higher than that obtained without considering friction. Figure 4 shows the impact on the cable tension and the Coulomb friction with the friction coefficient μ = 0.6 and failure coefficients η of 0.4, 0.6 and 0.8, respectively. As the failure coefficient decreases, the cable tension at both ends of the pulley decreases, leading to a decrease of the Coulomb friction.
Fig. 3. Influence of the friction coefficient (cable tension/N, Coulomb friction/N and cable speed/(m/s) vs. time/s, for no friction and μ = 0.04, 0.06, 0.08)



Fig. 4. Influence of the failure coefficient (cable tension/N, Coulomb friction/N and cable speed/(m/s) vs. time/s, for no friction and η = 0.8, 0.6, 0.4)

The Coulomb friction changes abruptly when the speed direction of the cable changes, because the Coulomb friction model does not take into account the pre-sliding between the cable and the pulley. Since this causes a large error when the cable speed direction changes suddenly, the Dahl friction model is introduced to improve it.

3.3 Dahl Friction Model

When the velocity is close to zero, the Dahl friction can be expressed by the following non-linear ordinary differential equation:

$$\frac{dF_{d,j}(T_j)}{dx} = \sigma\left(1 - \frac{F_{d,j}(T_j)}{F_{c,j}}\right)^{\beta}\operatorname{sgn}(v) \qquad (8)$$

$$F_{d,j}(T_j) = F_{c,j}\left(1 - e^{-\sigma_j x_j / F_{c,j}}\right)\operatorname{sgn}(v) \qquad (9)$$

where $F_{d,j}$ is the Dahl friction between the cable and the pulley, $\sigma$ is the contact stiffness coefficient determining the size of the pre-sliding area, $x$ is the relative displacement, and $\beta$ determines the shape of the pre-sliding region (generally $\beta = 1$). The dynamic equation based on the Dahl friction model can be expressed as:

$$M(X)\ddot{X} + N(X,\dot{X}) = W_e + W_g + J^T\left[1 + \left(1 - e^{-\sigma_j x_j / (\lambda_j T_j)}\right)\lambda_j\operatorname{sgn}(v)\right]T \qquad (10)$$

Equation (10) shows that the dynamic equation based on the Dahl friction model is a non-linear system of equations, which is solved by the least-squares method.
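For β = 1, integrating Eq. (8) at a constant positive velocity should reproduce the closed form of Eq. (9). The sketch below checks this numerically with illustrative σ and F_c values (not parameters from the paper):

```python
import math

def dahl_step(F, dx, sigma, Fc, v, beta=1.0):
    """One forward-Euler step of Eq. (8): dF/dx = sigma*(1 - F/Fc)**beta * sgn(v)."""
    sgn = (v > 0) - (v < 0)
    return F + sigma * (1.0 - F / Fc) ** beta * sgn * dx

def dahl_closed_form(x, sigma, Fc, v):
    """Eq. (9): F = Fc*(1 - exp(-sigma*x/Fc))*sgn(v), valid for beta = 1."""
    sgn = (v > 0) - (v < 0)
    return Fc * (1.0 - math.exp(-sigma * x / Fc)) * sgn

sigma, Fc, dx = 200.0, 5.0, 1e-5  # illustrative values
F, x = 0.0, 0.0
for _ in range(5000):             # 50 mm of sliding at positive speed
    F = dahl_step(F, dx, sigma, Fc, v=1.0)
    x += dx
print(round(F, 3), round(dahl_closed_form(x, sigma, Fc, v=1.0), 3))
```

Because F grows continuously from zero over the pre-sliding displacement instead of jumping to ±F_c, the Dahl model gives the smooth friction transition at speed reversals that Sect. 3.4 reports.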

3.4 Simulation of the Dahl Friction Model

The Coulomb friction model has limitations: it does not take into account the pre-sliding between the cable and the pulley. Therefore, the Dahl friction model is used for further correction. Dahl [10] found that before the maximum static friction is reached, there is a small displacement between the contact surfaces.
Fig. 5. Influence of the contact stiffness coefficient at low speed (cable tension/N, friction/N and cable speed/(m/s) vs. time/s, for no friction, Coulomb friction and σ = 100, 200, 300)


Fig. 6. Influence of the contact stiffness coefficient at high speed (cable tension/N, friction/N and cable speed/(m/s) vs. time/s, for no friction, Coulomb friction and σ = 100, 200, 300)

Figure 5 shows the effect of the contact stiffness coefficient σ on the cable tension and the Dahl friction. The blue, green and purple lines represent contact stiffness coefficients of 100, 200 and 300, respectively, and the red circles mark the changes of the cable tension when the speed direction changes. Figure 6 shows the effect of σ on the tension and Dahl friction of cable 1 at a high end-effector speed. Comparing Figs. 5 and 6, the Dahl friction at high speed is closer to the Coulomb friction, and the effect of σ on the Dahl friction is not obvious. Whether the end-effector is at low or high speed, the Dahl friction model describes the change of friction at speed reversals better than the Coulomb friction model, so the cable tension transitions smoothly.

4 Conclusion

Increases in the friction coefficient μ and the failure coefficient η increase the cable tension and the Coulomb friction. The contact stiffness coefficient σ has an obvious influence on the Dahl friction and the cable tension when the end-effector moves at low speed: both increase with the contact stiffness coefficient. When the end-effector moves at high speed, σ influences the Dahl friction and the cable tension only slightly. In conclusion, the Dahl friction model better describes the change of friction when the speed direction changes, so that the cable tension transitions smoothly.

Acknowledgements. The work is supported by the National Natural Science Foundation of China Civil Aviation Joint Foundation Cultivation Project (U1833108).

References
1. Fortin-Côté, A., Faure, C., Bouyer, L., McFadyen, B.J., Mercier, C., Bonenfant, M., et al.: On
the design of a novel cable-driven parallel robot capable of large rotation about one axis
(2018)
2. Bak, J.H., Hwang, S.W., Yoon, J., Park, J.H., Park, J.O.: Collision-free path planning of
cable-driven parallel robots in cluttered environments. Intell. Serv. Robot. 12(3), 243–253
(2019)
3. Albus, J., Bostelman, R., Dagalakis, N.: The NIST RoboCrane. J. Robot. Syst. 10(5), 709–724
(1993)
4. Tjahjowidodo, T., Ke, Z., Dailey, W., Burdet, E., Campolo, D.: Multi-source micro-friction
identification for a class of cable-driven robots with passive backbone. Mech. Syst. Sign.
Process. 80, 152–165 (2016)
5. Pott, A.: Pulley friction compensation for winch-integrated cable force measurement and
verification on a cable-driven parallel robot. In: IEEE International Conference on Robotics
and Automation. IEEE (2015)
6. Miyasaka, M., Matheson, J., Lewis, A., Hannaford, B.: Measurement of the cable-pulley
Coulomb and viscous friction for a cable-driven surgical robotic system. In: IEEE/RSJ
International Conference on Intelligent Robots and Systems (2015)
7. Heo, J.M., Choi, S.H., Park, K.S.: Workspace analysis of a 6-DOF cable-driven parallel
robot considering pulley bearing friction under ultra-high acceleration. Microsyst. Technol.
23(7), 2615–2627 (2017)
8. Xin, D., Yoon, D., Okwudire, C.E.: A novel approach for mitigating the effects of pre-
rolling/pre-sliding friction on the settling time of rolling bearing nanopositioning stages
using high frequency vibration. Precis. Eng. 47, 375–388 (2017)
9. Chen, H., Pan, Y.C.: Dynamic behavior and modelling of the pre-sliding static friction. Wear
242(1), 1–17 (2000)
10. Dahl, P.R.: A solid friction model. Aerospace Corp El Segundo Ca (1968)
11. Merlet, J.P.: Parallel Robots. Kluwer, Dordrecht (2006)
Industry 4.0
Knowledge Discovery and Anomaly
Identification for Low Correlation
Industry Data

Zhe Li(&) and Jingyue Li

Department of Computer Science, Norwegian University of Science
and Technology, 7491 Trondheim, Norway
{zhel,jingyue.li}@ntnu.no

Abstract. With the development of information technology, increasing amounts of industry data are generated during the manufacturing process, and companies often want to utilize the data they have collected for more than the initial purposes. In this paper, we report a case study with an industrial equipment manufacturer in which we analyze the operation data and the failure records of the equipment. We first tried to map the working condition of the equipment from the daily recorded sensor data. However, we found that the collected sensor data are not strongly enough correlated with the failure data to capture the phenomena behind the recorded failure categories. We therefore propose a data-driven method for anomaly identification in such low-correlation data. The idea is to apply a deep neural network to learn the behavior of the collected records and to calculate a severity degree for each record, which indicates how different its performance is from that of all other records. Based on the severity degree, we identified a few anomalous records whose sensor data differ markedly from the others. By analyzing the sensor data of the anomalous records, we observed some unique combinations of sensor values that can potentially be used as indicators for failure prediction. From these observations, we derived hypotheses for future validation.

Keywords: Knowledge discovery · Anomaly identification · Data correlation

1 Introduction

Recent advances in information and communication technologies have accelerated the application of automated and systematic monitoring systems in the manufacturing industry [1]. In the past few years, the dimensionality of monitoring data sets in the manufacturing industries has increased dramatically [2, 3]. Hence, the question of how to leverage these monitoring data to enhance the reliability and availability of the equipment is becoming more and more significant [4].
In this work, we want to help an industrial equipment manufacturer to analyze the
data they collected, namely the operation data and the failure log of the equipment. Our
initial target is to predict potential failures and working conditions of the target
equipment from its daily collected monitoring data, which has a total number of 197
parameters. We first applied a classification model to identify the difference between

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 195–202, 2020.
https://doi.org/10.1007/978-981-15-2341-0_24
196 Z. Li and J. Li

failure and normal records. According to our analysis results, the applied data-driven
model could not separate the failure records from the nonfailure records. We assume
that the sensor data have no strong correlation to the failures. We generated a new
parameter to indicate failure conditions and further validated and confirmed our
assumption that available sensor data is not strong enough to capture the phenomenon
of failures. Actually, how to extract valuable information and discover useful knowl-
edge from low correlation data is a common dilemma in many practical applications
[5]. To fill the gap, we proposed a method to discover knowledge and identify anomaly
for low correlation data. The applied data-driven model is constructed through a fully
connected deep neural network since this structure has an excellent performance in
discovering information and knowledge about failures [6]. The idea is to make the
constructed neural network learn the behavior of collected samples and output the
severity degree of each sample. The severity degree can indicate the difference of
performance between individual record and all other records. From the severity degree,
we can identify anomalous records. Through analyzing the sensor data of the identified
anomalous records, we could acquire knowledge about which sensor data could be
possible indicators or predictors of failures. Although the study is based only on one
equipment data, knowledge acquired from the case study could help us derive
hypotheses for validation by using other similar equipment.
The rest of this paper is organized as follows: Sect. 2 explains the process of data
correlation analysis and validation. Section 3 details the applied method for knowledge
discovery and anomaly identification for low correlation data. Discussion and con-
clusions are summarized in the last section of this paper.

2 Data Correlation Analysis

The data we used during the research is collected from the equipment of an industrial
equipment manufacturer1. The primary datasets leveraged during analysis includes two
parts: fault records and sensor data from the monitoring system. However, the sensor
data is mainly collected to help the user understand the working condition of the
equipment, instead of to indicate fault information. The target of our study is to
discover the potential correlation between the two databases and knowledge about
impending failures.

2.1 Data Integration and Normalization


According to the measurement system, the sensor database includes 247269 records
within 2931 timestamps. The number of recorded parameters varies with years, from 57
parameters in 2009 to 197 parameters in 2017. Forty-two monitoring parameters, which
have been collected all the time during the sampling period, are selected as the

1
Due to Non-Discloser Agreement, we are not allowed to give detailed information of the equipment
and the company in the paper.
Knowledge Discovery and Anomaly Identification 197

observation units for further analysis. The leveraged monitoring data mainly include a
total number of starts, running time, load, voltage, and so on.
There are 538 records from 2009 to 2017 in the provided fault dataset. The col-
lected data includes 315 days and 367 timestamps. Several failures may happen in one
timestamp, and several timestamps may be recorded in one day. During the sampling
period, most of the recorded failures are about faults in multi-hoisting.
To integrate monitoring data and failure information, we used the recorded
timestamps in each database as connections. Since both monitoring data and failure
information are necessary for fault identification, we included only the records which
have been recorded in both datasets as observations in the analysis.
The number of valid records is 2931 after the merge, in which 307 records are
labeled as failures, and 2624 ones are not labeled as failures. As mentioned above, the
number of collected timestamps in monitoring dataset and failure dataset are 2931 and
367, respectively. However, there are 60 records in fault dataset which are not recorded
in monitoring dataset. Thus, we discarded these 60 records without monitoring infor-
mation in the analysis.
To improve the performance of data mining and avoid potential inconvenience, we
applied standard normalization to adjust values measured on different scales to a
0
notionally common scale. The parameter Pi after normalization is Pi , which is shown in
Eq. (1):

0 Pi  Pmean
i
Pi ¼ ð1Þ
Pstd
i

Here, Pmean
i and Pstd
i are the mean value and standard deviation of the parameter Pi ,
respectively.
During normalization, we found that the standard deviations of seven parameters
are zero or very close to zero, which means those parameters rarely changed during the
sampling period. We removed these parameters from the merged dataset since constant
values hold no meaning for condition monitoring.
As mentioned above, since the currently available monitoring data is used for
operation management, there is probably no direct connection between the available
monitoring data and failure information. Therefore, our first research step focused on
answering whether the collected monitoring data is sensitive or strong enough to
predicate impending failures.

2.2 Data Analytics for Impending Failure Prediction


In this section, we will introduce the process and test result of impending failure
prediction. Our target of this step of analysis is to leverage the collected monitoring
data and data-driven models to map the recorded fault conditions. If the collected
monitoring parameters are sensitive or relevant enough to identify the recorded failures,
we can use them to predict whether there would be an impending failure. Thus, the
problem is transformed into a classification issue with two groups, i.e., a non-fault
group and a fault group. As mentioned above, our dataset includes 307 records under
198 Z. Li and J. Li

impending failure condition and 2624 records which are not labeled as failures. To
balance the number of samples in both classes, we expanded the number of failure
samples by repeatedly used them during the training stage.
The applied data-driven model is established through the fully connected deep neural
network with seven layers, Leaky Relu is used as activation functions of hidden layers,
and SoftMax is used as activation functions of the output layer. Since the dimension of
inputs is 35 (42 selected parameters minus seven constant values), the number of nodes in
hidden layers of the constructed deep neural network is 64, 32, 32, 16, 16, 8, 2 (i.e., 64
nodes in the first layer, 32 nodes in the second and third layer, 16 nodes in the fourth and
fifth layer, 8 nodes in the sixth layer, and 2 nodes in the last layer) to learn and represent
the inputs data smoothly. We selected Adam as the optimizer and categorical cross-
entropy as the loss function due to their broad applicability. The maximum number of
training epochs and dropout rate are 2000 and 0.3, respectively, to avoid overfitting.
Batch size has been set as 32 to accelerate the training process. Figure 1 shows the
training and validation error with training epochs. Figure 2 illustrates the prediction
result, in which values above 0.5 in y-axis can be considered as identified failure.

Fig. 1. Training errors with epochs Fig. 2. Impending failure prediction

Since we used SoftMax as the final output layer, records with higher values in y-
axis are more likely to indicate a failure condition. According to the result, we can
notice that only some extreme normal and failure conditions can be identified. Most
records have the prediction result close to 0.5, which means those records are difficult
to be separated by the sensor data. We assume the main reason is that current sensor
data is not sensitive enough to capture the phenomena of failures, or not strong enough
to identify the impending failures.

2.3 Correlation Validation


To validate this assumption, we set up a new experiment with more parameters, gen-
erated from failure information with certain random fluctuations (0.7–1 for failure
records and 0–0.3 for nonfailure records). The generated parameters can be considered
as direct indicators, which are sensitive to recorded failures and can capture the phe-
nomena right before an impending failure. Figure 3 shows the result of failure prediction
with the generated parameters using the same deep neural network. According to the
Knowledge Discovery and Anomaly Identification 199

result in Fig. 3, the collected parameters are sensitive enough to the failures, and the
data-driven model can identify the failure conditions. Results of Fig. 3 confirms that the
original sensor data cannot capture the phenomenon of impending failures directly.

Fig. 3. Failure prediction with generated parameters

3 Knowledge Discovery and Anomaly Identification

As the available sensor data is not sensitive enough to capture the phenomena of
failures, we tried to leverage the data to discover possible useful knowledge hidden in
the data. Since most of the records are collected without failures, i.e., only 307 in 2931
records are labeled as a failure, our core idea is anomaly identification. The idea is to
make a data-driven model learn behaviors of the equipment first. The second step is to
give scores to each record to describe the degree of difference from other records. The
high-level analysis process is shown in Fig. 4.

Fig. 4. Knowledge discovery from low correlation data

During data calibration, we labeled all the failure records as “1” and the nonfailure
ones as “0”. Therefore, the records with higher scores are more likely to have
impending failures or anomaly condition since their behaviors are different from others.
The target is to identify records with abnormal behaviors [7], which are the records
having very different sensor data with other records. The anomalous records may or
may not have been labeled as failure ones in the original dataset.
200 Z. Li and J. Li

The applied data-driven model for anomaly identification is very similar to the
failure identification model, which is deep neural works with seven fully connected
layers. The difference from the failure identification model is that the final layer is
replaced with a regression model to evaluate the anomaly degree and output severity
degree. Figure 5 shows the evaluation results of the trained network with a hyperbolic
tangent (Tanh) as the activation function of the final output layer.

Fig. 5. Evaluation results of Tanh function

During the evaluation phase, the first 307 records are the ones with failures, and the
rest records are the nonfailure ones. In Fig. 5, the red line indicates the actual severity
degree, and the blue line represents the prediction result. According to the evaluation
result, the severity degree of most records is around 0.2. Thus, we selected 0.22 as the
threshold. Twenty-nine out of 2931 records are identified as the anomalous ones from
the analysis. Among the anomalous 29 records, 15 records are labeled as nonfailure
ones in the original dataset, and 14 are labeled as failure ones. According to the results,
all records can be divided into four categories:
Category 1. 2609 nonfailure records are also identified by our data model as normal.
Those samples represent normal behaviors without failures.
Category 2. 293 records, which are recorded as failure ones in the original dataset, are
not identified as anomalous by our data-driven model. The data-driven model cannot
identify any difference between these records labeled as failure and records labeled as
nonfailure ones. We analyzed the sensor data of many of those records in details and
compared their sensor data with the ones labeled as nonfailure and find that their sensor
data are very similar to the sensor data of the nonfailure ones. As these records cover
95% of all the records labeled as failures, their sensor data may hide some inter-
esting correlations between the sensor data and the failures. Such interesting cor-
relations probably exist in some data but are not statistically significant. This
probably explains why our initial analysis reported in Sect. 2 did not find strong
correlations between the sensor data and the failure.
Knowledge Discovery and Anomaly Identification 201

Category 3. 14 records, which are recorded as failures in the original dataset, are
identified by our data-driven model as anomalies. Those samples are recorded with
failure and have very different sensor data with other records. Thus, they are captured
by our data-driven model. The 14 records in Category 4 are the real interesting records
to be analyzed further, because such record may contain implicit knowledge that can
potentially explain the reasons for the failures and can also possibly help us identify
real indicators from the sensor data to predicate failures. Thus, we analyzed the sensor
data of these records in details. Four out of the 14 records have similar conditions that
one or several vital parameters such as “total running time,” “Remaining Safe Working
Period of the brake in percentage,” and “a total number of starts” are recorded with
extremely high or low values. Those unusual high or low values indicate sensor failures
of the equipment. Six of the records are identified as an anomaly because the “actual
loads” of the equipment are higher than the average value, but values of other
parameters are different with most samples with high actual loads. Thus, our data-
driven model identifies them as anomalies since their conditions are different from
most. Among these 6 records, 4 records have high values in “actual load,” while one or
several “line voltages” are lower or close to the average values. As these 4 records are
also labeled as failures in the original dataset, these records may make one out of many
possible reasons for the failures stand out. Thus, we can hypothesize that “the target
equipment under high load without enough line voltages is more likely to have an
impending failure.” For the rest 4 records, our visual inspection could not find obvious
abnormal of the value of their individual sensor parameter. As each record has 35
sensor parameters, it is possible that some complex combinations of sensor parameter
values make them very different from the other records. More domain knowledge and
more data are needed to understand these 4 records in depth.
Category 4. Fifteen records, which are not categorized as failed one in the original
data set, are identified as abnormal by our data model. By analyzing the sensor data of
the 15 records in depth, we found 4 out of the 15 records have sensor failures that were
not recorded or noticed by the users. These indicate sensor errors. However, due to
unknown reasons, the sensor errors do not lead to actual failures or the actual failures
are overlooked. There are 9 records which have high “actual loads” as the 4 records,
which also have high “actual loads,” in Category 3, Although these 9 records are not
labeled as failures in the original dataset, their sensor data are very similar to the 4
failure records with high “actual loads.” That is probably why these 9 records are also
identified as abnormal by our data-driven model. Again, there are 2 records we cannot
figure out how different their individual sensor parameter values are from other records.
We need more in-depth domain knowledge and more data to explain the reasons why
our data-driven model classifies these 4 four records as abnormal.

4 Discussion and Conclusion

According to the result of failure identification in Sect. 2, the currently available sensor
data are not strong enough to predicate failures. Thus, we change our research focuses
on anomaly identification and proposes a method to evaluate severity degree by
202 Z. Li and J. Li

comparing the behavior of each record with the records which are recorded as non-
failures. As shown in Sect. 3, we first established a data-driven model to evaluate the
severity degree of each record. The core idea is to train the model and make it learn the
behaviors of the majority. Thus, the evaluated severity can indicate the degree of
anomaly condition of each record compared with all other records. A record with high
severity is more likely to have anomaly behaviors.
The proposed anomaly identification method can identify anomaly behaviors of the
target equipment and obtain hypotheses about machine fault from low correlation data
environment. In this case study, our approach filtered out most of the records which are
labeled as failures in the original dataset but are not able to differentiate themselves
from the nonfailure records by inspecting the sensor data. Our approach managed to
find out 4 records which have extremely high or low sensor values and are labeled as
failures. These 4 records indicate that sensor error is probably one reason for failure.
Our approach also highlighted other 4 records which have high “actual load” but “line
voltages” and are labeled as failures. Such 4 records may indicate that the parameters
“actual load” and “line voltages” can possibly be used as indicators for predicting
some categories of failures.
The limitation of the study is that the proposed method is only applied and tested
with data from one equipment. We, therefore, need to validate the method proposed in
this study and the hypotheses identified from this study with data from several similar
types of equipment.

Acknowledgment. The work described in this article has been conducted as part of the research
project CIRCit (Circular Economy Integration in the Nordic Industry for Enhanced Sustainability
and Competitiveness), which is part of the Nordic Green Growth Research and Innovation
Programme (grant numbers: 83144), and funded by NordForsk, Nordic Energy Research, and
Nordic Innovation.

References
1. Lee, J., Bagheri, B., Kao, H.-A.: A cyber-physical systems architecture for industry 4.0-based
manufacturing systems. Manuf. Lett. 3, 18–23 (2015)
2. Lee, C.K.M., Zhang, S.Z., Ng, K.K.H.: Development of an industrial Internet of things suite
for smart factory towards re-industrialization. Adv. Manuf. 5(4), 335–343 (2017)
3. Bodrow, W.: Impact of Industry 4.0 in service oriented firm. Adv. Manuf. 5(4), 394–400
(2017)
4. Li, Z., Wang, Y., Wang, K.-S.: Intelligent predictive maintenance for fault diagnosis and
prognosis in machine centers: industry 4.0 scenario. Adv. Manuf. 5(4), 377–387 (2017).
https://doi.org/10.1007/s40436-017-0203-8
5. Khan, A., Turowski, K.: A survey of current challenges in manufacturing industry and
preparation for industry 4.0. Paper presented at the Intelligent Information Technologies for
Industry, Cham (2016)
6. Li, Z., Wang, Y., Wang, K.: A deep learning driven method for fault classification and
degradation assessment in mechanical equipment. Comput. Ind. 104, 1–10 (2019)
7. Chandola, V., Banerjee, A., Kumar, V.: Anomaly detection: a survey. ACM Comput. Surv.
41(3), 15 (2009)
A Method of Communication State Monitoring
for Multi-node CAN Bus Network

Lu Li-xin1, Gu Ye1, Li Gui-qin1(&), and Peter Mitrouchev2


1
Shanghai Key Laboratory of Intelligent Manufacturing and Robotics,
Shanghai University, Shanghai 200072, China
[email protected]
2
University Grenoble Alpes, G-SCOP, 38031 Grenoble, France

Abstract. Multi-node CAN bus network tended to have higher requirements


for communication capability. In the multi-node CAN bus network which does
not need application layer protocol of CAN bus to develop, with the increase of
nodes, the difficulty of system maintenance and optimization increases. If a node
breaks down, it is necessary to check all the nodes one by one to find the fault,
which is inconvenient to troubleshoot. A method of communication state
monitoring for multi-node CAN bus network is proposed, for multi-node CAN
bus network, firstly, the program structure of communication state monitoring is
designed, each node ID is set, then test data is sent, finally the communication
state of all nodes is achieved and the error node information is displayed through
the upper computer, rapid and reliable positioning of the wrong nodes is real-
ized. The result shows that this method is an effective method for communi-
cation state monitoring.

Keywords: Multi-node  CAN bus  Communication  Monitoring

1 Introduction

In the distributed control system, CAN is generally used to communication between the
master and slave controllers. However, due to the large number of CAN nodes, as the
usage time of CAN bus system increases, cable fatigue, aging and damage are
inevitable.
In order to improve the communication performance of CAN bus network and
monitor them effectively, a lot of research has been done at home and abroad. Mat-
sudaira [1] proposed an oversampling, self-adaption and pre-emphasis technique to
achieve complete recovery of distortion caused by long transversals in a multi-node
topology. In the car driving control system, Wagh [2] studied the inter-node commu-
nication based on CAN protocol to realize the monitoring and control of vehicle speed,
temperature, seat belt and battery in case of emergency. Barranco [3–5] intro-duced an
active and star topology named CANcentrate, all nodes communicate with the bus
through the Hub module, preventing one node from malfunctioning and affecting the
communication of other nodes, which can have higher fault sensitivity and fault tol-
erant. Long [6] designed a simple, credible and versatile multi-node CAN bus com-
munication protocol. Slave nodes report their state information to master nodes in real

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 203–210, 2020.
https://doi.org/10.1007/978-981-15-2341-0_25
204 L. Li-xin et al.

time, the master node uploads or operates the slave node accordingly after receiving the
information, which can well meet the requirements of the monitoring system. Zhang [7]
applied node monitoring to the greenhouse control system. The monitoring host
checked the state of each node regularly. If a node is found to have failed to respond
several times, the node is considered closed, any data sent to the node is stopped, and
an alarm is given to the operator.
If a node of multi-node CAN bus network breaks down, it is necessary for the staff
to check all the nodes one by one to find the fault, this is inconvenient for trou-
bleshooting. In order to facilitate maintenance and optimization, a method of com-
munication state monitoring for multi-node CAN bus network is proposed in this paper,
which can quickly find the faulty node and greatly facilitates the system maintenance
and optimization.

2 Scheme Design of Monitoring


2.1 CAN Bus Network
The CAN bus network in this paper consists of upper computer, master controller and
seven slave controllers. As shown in Fig. 1. The upper computer is connected to the
master controller through TCP, and is mainly responsible for data transacting and
analyzing, device management, and human-computer interaction. The lower computer
includes a master controller and seven slave controllers, and the controllers commu-
nicate through the CAN bus. The main control chip of the master controller is
STM32F407, and the other child nodes of the slave controller all use STM32F103 chip.

Fig. 1. System overall structure block diagram

2.2 Principle of Monitoring


The principle of monitoring is shown in Fig. 2. For communication state monitoring,
the traditional way is to set a timer in the main node, and the main node sends an
inquiry to the child node one by one within a certain time interval. There is a failure on
the child node if no reply is received from the child node within the specified time, and
A Method of Communication State Monitoring for Multi-node CAN Bus Network 205

the main node will be alerted through PC. However, the disadvantage of the method of
polling the state of child nodes one by one by the main node is that it takes a long time.

Start

The master node


sends queries to the
slave node

Receive request from Communication


Yes
slave node? condition is normal

No

Generate error
information and
send it to the upper
computer

The upper computer


judges the error
information and
obtains the error
node

End

Fig. 2. Principle of monitoring

In this paper, the main node sends an inquiry to the child node one by one, and the
child node replies immediately when receives. The main node takes down the reply
information of all child nodes and sends it to the PC to judge.

3 Software Design of Lower Computer

In order to respond to the query of the master controller immediately, the software
architecture of the slave controller is to judge the received instructions in the main
function in a loop and execute corresponding actions according to the instructions,
which are executed in the function of CAN receive interrupt and communicate with the
main function through global variables.
The block diagram of CAN receive interrupt function is shown in Fig. 3. First, the
structural variable CanRxMsg is defined, used to store the data ID, check code, eight
bytes of data, and then call the system function CAN_Receive, the data in the buffer
FIFO0 is copied into the structure, and the eight bytes of instruction data are assigned
to the global variable, and the flag variable start is set. The main function loops are to
determine whether the variable start is set. If it is, the controller receives a frame of
206 L. Li-xin et al.

CAN command, and if it is judged as a test command, it returns a frame of feedback


information.

Start

Define structural
variables:
CanRxMsg

Receive data from buffer FIFO0:


CAN_Receive

Assign 8 bytes of data to the


global array CAN_receive[]

The global variable start


is set

End

Fig. 3. Block diagram of CAN receiving interrupt function in the system

The main program diagram in the master controller is shown in Fig. 4. The first
step is to set the ID of each node in the CAN network. The settings are as follows: the
ID of the master node is set to 0x08, and the IDs of the seven sub-nodes are set to 0x01,
0x02, 0x03, 0x04, 0x05, 0x06, 0x07 respectively, next we create an eight-bit variable
named error indicating error status flag of CAN. The 0–6 bits of error represent the
communication state of the slave controller 1–7 respectively, and 0 indicates the
communication is normal. 1 means communication is abnormal. Then one frame
instruction is sent to the seven slave controllers respectively. If the feedback infor-
mation is received, the communication is normal, otherwise the communication is
abnormal, and the corresponding bit is set to 1, and all possible conditions can be
combined by the bitwise operation.
After sending the instructions to the seven slave controllers separately, a final value
of error is generated according to the feedback of each slave controller. This is the
value that is sent to the upper computer and contains all the error information.
A Method of Communication State Monitoring for Multi-node CAN Bus Network 207

Start

Send instructions to slave


No
controller i (1-7) in sequence

Receive feedback
from slave Yes i=7
controller?

No

Switch(i)control instruction

Case1 Case2 Case3 Case4 Case5 Case6 Case7


Yes

error = 0x01 error |= 0x02 error |= 0x04 error |= 0x08 error |= 0x10 error |= 0x20 error |= 0x40

Send the value of error to


PC

Communication condition is
End
normal

Fig. 4. Main program diagram of master controller

4 Software Design of Upper Computer

The upper computer is implemented by LabVIEW on the computer, and the state of
seven slave controllers is judged according to the value of error in the received data.
There are two schemes here.
(1) Judge one by one according to the value of error
The traditional way is to use the conditional structure to judge one by one, as shown
in Fig. 5, but there are 27 = 128 kinds of possible communication states in the system,
and the upper computer programming workload is huge, so it is obviously inappro-
priate to judge one by one according to the value of error.

Fig. 5. Partial situation of judging one by one


208 L. Li-xin et al.

(2) Extract the wrong bit information displays directly in the dialog box
Since the error message is displayed in the dialog box, and the format is ‘block X
communicate abnormally’. Therefore, the 0–6 bits of error are respectively corre-
sponding to the numbers 1–7, and the bit information of error is directly extracted. If a
bit of error is 1, the corresponding number is filled to X. The advantage of this is that it
takes only 8 judgments to find all the cases, which is 120 times less than the one-by-one
judgment.

Fig. 6. The block diagram of communication status monitoring

The upper computer block diagram is shown in Fig. 6. First, the value of error is
converted to a binary string form, next converted to a byte array and inverted, and then
each element of the array is judged. If it is 1, the corresponding number of this bit is
displayed in the message of the dialog box.

5 Experimental Results

Using the CAN bus network of this article, if the slave controller 6, 7 are disconnected,
the upper computer displays the result as shown in Fig. 7.
A Method of Communication State Monitoring for Multi-node CAN Bus Network 209

Fig. 7. Communication state test

The master controller detects that the communication of the 6th and 7th boards is
abnormal, 0x20|0x40 = 0x60 is assigned to error and sent to the upper computer. The
upper computer first converts 0x60 to binary and gets 1100000, then converts to
unsigned byte array and reverses, the array {48, 48, 48, 48, 48, 49, 49} is obtained. In
the ASCII code, 48 represents the character 0, and 49 represents the character 1,
indicating that the communication state of the 6th and 7th boards is abnormal, which is
displayed in the dialog box. Test results show that this method can effectively locate the
wrong node.

6 Conclusion

Aiming at the maintenance and optimization of distributed system based on CAN bus,
a method of communication state monitoring for multi-node CAN bus network is
proposed, which can quickly locate faulty nodes. The method can be applied to a
distributed system of a CAN bus, and this communication state monitoring can be
added to a distributed system of a CAN bus with a PC as an upper computer.

References
1. Matsudaira, N., Chen, C., Ohtsuka, S., et al.: An over-sampling adaptive pre-emphasis
technique for multi-drop bus communication system. In: IEEE International Symposium on
Circuits and Systems, pp. 2338–2341. IEEE (2016)
210 L. Li-xin et al.

2. Wagh, P.A., Pawar, R.R., Nalbalwar, S.L.: Vehicle speed control and safety prototype using
controller area network. In: 2017 International Conference on Computational Intelligence in
Data Science, pp. 1–5 (2017)
3. Barranco, M., Proenza, J., Navas, G.R., Almeida, L.: An active startopology for improving
fault confinement in CAN networks. IEEE Trans. Ind. Inf. 2, 78–85 (2006)
4. Barranco, M., Proenza, J., Almeida, L.: Quantitative comparison of the error-containment
capabilities of a bus and a star topology in CAN networks. IEEE Trans. Ind. Electron. 58(3),
802–813 (2011)
5. Barranco,M., Proenza, J.: Towards understanding the sensitivity of the reliability achievable
by simplex and replicated star topologies in CAN. In: 2011 IEEE 16th Conference on
Emerging Technologies & Factory Automation(ETFA), pp. 1–4. IEEE (2011)
6. Long, L., Xu, H.: Communication protocol design of CAN bus real-time monitoring system.
Chin. Instrum. 02, 105–2852 (2015)
7. Zhang, K., Fang, Z.: Development of CAN protocol for greenhouse control system. Ind.
Instrum. Autom. 01 (2007). 1000-0682
Communication Data Processing Method
for Massage Chair Detection System

Lixin Lu¹, Yujie Jin¹, Guiqin Li¹, and Peter Mitrouchev²

¹ Shanghai Key Laboratory of Intelligent Manufacturing and Robotics,
Shanghai University, Shanghai 200072, China
[email protected]
² University Grenoble Alpes, G-SCOP, 38031 Grenoble, France
[email protected]

Abstract. Based on the master-slave information interaction of the massage
chair detection system, a new data processing method is proposed in this paper.
First, the frame identifier is used to describe the source or destination node of the
information frame, and the message broadcast information is marked. Secondly,
the message broadcast information is filtered by determining whether the frame
identifier is the same as the expected filter identifier. Afterwards, in order to
transmit long messages smoothly, they are split into short messages and reassembled
after transmission. Finally, the optimal frame space at a CAN bus transmission rate
of 500 kbps is calculated, which enhances the real-time performance of
the massage chair dummy detection. Through this method, the speed, quality
and efficiency of information transmission can be effectively improved. The
method is easy to generalize and can be applied to all CAN buses.

Keywords: Detection system · CAN bus · Data processing · Message broadcast

1 Introduction

With the steady development of the world economy, the health industry is
developing rapidly. People pay increasing attention to their own health, and
health products are emerging one after another. Among them, the massage chair
industry, which combines traditional Chinese medicine methods with modern
technology, is becoming more and more popular, so the detection of massage
chairs is becoming increasingly important. Hiyamizu proposed
a massage chair detection technology based on human sensory sensors to determine the
massage effect of the massage chair by measuring the temperature changes of the
human skin during the massage process in real time [1]. Teramae proposed to acquire
the characteristics of the brain electrical signals of the subjects during the massage
process to evaluate the discomfort of the massage chair with acupressure [2]. Jaafar
proposed a detection technique based on an evaluation of the subject's body
characteristics to assess the effect of the massage chair's massage on the
subject's health [3]. Jin proposed a massage chair detection technology based on
EMG (electromyography signal) of the massage site [4]. This technique is quite

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 211–218, 2020.
https://doi.org/10.1007/978-981-15-2341-0_26

feasible, but the degree of response varies from person to person, resulting in unstable
test results. Different from the above massage chair detection technology, Sun proposed
a remote control and fault detection system based on neural network model to realize
automatic fault detection of multifunctional massage chair [5]. This detection technique
pioneered the use of a uniform sample to replace the subject’s role in the detection
process, facilitating the formation of uniform detection criteria. Zhang proposed the
design and implementation of a variety of communication methods based on the
massage chair detection system [6]. Finally, it was verified that the detection system
could operate normally according to the design flow and communication protocol.
This paper mainly analyzes the data flow of the detection system, studies the
implementation of its master-slave information interaction from the marking and
recognition of information frames, and proposes a method of disassembling and
reorganizing long messages for their remote transfer. The factors affecting
information collision during bus information transmission are analyzed, and an
optimal frame space which reduces the collision probability is proposed.

2 Detection System Master-Slave Information Interaction

The detection system places a robot covered with detection points on the massage
chair under detection, acquires data through the detection-point sensors on the
robot body, and submits the data to the upper processor for
processing. The system is mainly composed of 9 functional modules, including 7 data
acquisition modules, 1 peripheral control module and 1 flow control module. The data
acquisition module is based on the intelligent terminal and has an independent
microprocessor and sensor interface, which can handle the work independently. The
peripheral control module is responsible for most of the peripheral drives and
part of the signal acquisition of the system. The flow control module is based on
a non-preemptible real-time kernel and is responsible for controlling and
coordinating the detection system so that each module works according to the
specified process. It exchanges data with the data acquisition and peripheral
control modules, including control instruction flow, storage instruction flow,
control instruction feedback flow, storage instruction feedback flow, state
feedback flow and data packet flow. Figure 1 shows the data interaction between
the functional modules of the detection system; the flow control
module is the core of the entire detection system communication network.
The CAN bus provides non-destructive bus arbitration based on frame priority.
Node messages compete for bus usage rights without being destroyed. At the same
time, the bus uses short frame transmission to reduce the time of data on the bus,
improve the reliability of data transmission and stability without compromising the
real-time nature of communication. The detection system adopts the CAN bus com-
munication method to construct the internal communication network, and realizes one-
to-many interaction between the host module and the slave module and one-to-one
information exchange between the slave module and the host module in the form of

message broadcast, and does not allow information exchange between slave modules.
In a CAN fieldbus-based communication network, the master and slave modules act as
bus nodes, and can actively send broadcast messages to the bus at any time, and other
nodes on the bus can listen to the broadcast messages.

Fig. 1. Detecting the direction of data flow between system function modules

3 Message Broadcast Information Mark

The detection system consists of one host module and eight slave modules. Each slave
module is equipped with four mechanical DIP switches, and the key value of the DIP
code is read through GPIO as the module's MAC address. By changing the switch
positions, the detection system can assign up to 16 different MAC addresses to the
slave modules. The uniqueness of each slave module's MAC address is strictly
required, and arbitrary modification of a MAC address is not allowed.
The detection system CAN bus message adopts the standard data frame type, and
the arbitration field in the message structure provides a frame identifier of 11 bits
length, which is used to indicate the priority of the current information frame. The
smaller the value of the identifier, the higher the priority. Beyond providing the
bus contention priority, the frame identifier is also used to describe the source
or destination node of the information frame: for a bus message sent by the
host module, the frame identifier indicates the MAC address of the slave module
expected to receive the frame, while a bus message sent by a slave module marks
the sender's MAC address in its frame identifier. The slave module MAC address
acquisition process is shown in Fig. 2.

Fig. 2. Obtaining the slave module MAC address
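The DIP-switch MAC acquisition and identifier marking described above can be sketched as follows. This is a minimal model: the pin ordering, the 11-bit identifier layout and the priority bands are illustrative assumptions, not the system's actual bit assignment.

```python
# Sketch of slave-module MAC acquisition and frame-ID marking.
# The GPIO read is simulated by a 4-bit tuple (assumed MSB first).

def mac_from_dip(switches):
    """Combine four DIP-switch bits into a 4-bit MAC address (0..15)."""
    mac = 0
    for bit in switches:
        mac = (mac << 1) | (bit & 1)
    return mac  # up to 16 distinct addresses

def make_frame_id(mac, from_host):
    """11-bit frame identifier: host frames carry the destination slave MAC,
    slave frames carry the sender MAC. The base values (priority bands) and
    the low-4-bit MAC placement are assumptions for illustration."""
    base = 0x100 if from_host else 0x200
    return (base | (mac & 0xF)) & 0x7FF  # keep within 11 bits

print(mac_from_dip((1, 0, 1, 1)))   # 0b1011 -> 11
print(hex(make_frame_id(11, True)))
```

Because the host band (0x100) is numerically smaller than the slave band (0x200) in this sketch, host frames would win arbitration against slave frames, matching the rule that a smaller identifier means higher priority.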

4 Message Broadcast Information Filtering and Identification

When a CAN bus node receives data, it hears all the information on the bus,
including information it does not need, so the bus information frames must be
filtered. The CAN bus provides an identifier filter for this purpose, and messages
that pass the filter are saved in a FIFO with a three-level mailbox depth to await
interrupt processing. The detection system application configures the CAN bus
filter group to operate in identifier mask bit mode,
sets a 32-bit filter identifier, and selects the key bits of the expected message
frame identifier through the mask. The filter compares the identifier of each
detected bus message frame with the expected identifier: if they match, the
message is judged valid and stored in the FIFO; otherwise no response is made to
the message and the node continues listening to the bus.
The filtering and identification of the bus broadcast information is shown in
Fig. 3. The node first determines whether the frame identifier of the bus message
is the same as the expected filter identifier. If it is different, the message is
judged invalid for this node, discarded directly, and the node returns to
listening on the bus. Otherwise, the information is valid for the node; the
message is stored in the FIFO mailbox and is received and processed through the ISR.
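The identifier-mask acceptance test described above can be sketched as below — a message is accepted only if its identifier agrees with the expected filter identifier on every bit selected by the mask. The 11-bit identifier values and mask are illustrative assumptions, not the actual register-level filter configuration.

```python
# Minimal model of identifier-mask filtering into a mailbox FIFO.

def accept(frame_id, filter_id, mask):
    """True if frame_id matches filter_id on all bits selected by mask."""
    return (frame_id & mask) == (filter_id & mask)

fifo = []                             # stands in for the 3-level mailbox FIFO
for fid in (0x10B, 0x10C, 0x20B):     # identifiers heard on the bus (assumed)
    if accept(fid, 0x10B, 0x7FF):     # exact-match mask: all 11 bits checked
        fifo.append(fid)              # valid: store and let the ISR process it
    # otherwise: no response, keep listening on the bus

print([hex(f) for f in fifo])
```

Loosening the mask (e.g. `0x7F0`) would accept a whole group of identifiers that share the masked high bits, which is how one filter bank can serve several message types.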

Fig. 3. Broadcast information filtering and identification

5 Transmission of Long Message

The detection system master-slave information interaction uses the standard data frame
format provided by the CAN bus for message transmission; that is, the bus provides
short-frame transmission carrying data segments of at most 8 bytes. However, long
messages must also be transmitted. A long message is therefore disassembled into
short frames of 8 bytes each and transmitted consecutively, and any final fragment
shorter than one frame is automatically zero-filled.
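The disassembly rule above — 8-byte short frames, zero-filled when the remainder is shorter than one frame — can be sketched as:

```python
# Split a long message into 8-byte CAN data segments, zero-padding the last.

FRAME_LEN = 8

def split_message(payload: bytes):
    frames = []
    for i in range(0, len(payload), FRAME_LEN):
        chunk = payload[i:i + FRAME_LEN]
        frames.append(chunk.ljust(FRAME_LEN, b"\x00"))  # pad final fragment
    return frames

frames = split_message(b"massage chair data")   # 18 bytes -> 3 frames
print(len(frames), frames[-1])
```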
The reassembly process of a long message is shown in Fig. 4; the received
short-frame information is processed sequentially. First, the first frame of a
long message is identified according to the message storage protocol format and
stored in a container according to its category, and the flag of
that long-message type attribute is set to 1. The subsequent content of the long
message is stored in the corresponding container according to the message type.
When the end frame is detected, the corresponding type flag is set to 2 to notify
the application that the message has been reassembled.

Fig. 4. Long message reorganization
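The reassembly flow of Fig. 4 can be sketched as a small state machine: the type flag goes to 1 on the first frame, content accumulates per message type, and the flag becomes 2 on the end frame. The frame convention assumed here (byte 0 carries the message type, 0xFF marks the end frame) is illustrative, not the system's actual protocol format.

```python
# One container and one flag per message type:
# flag 0 = idle, 1 = receiving, 2 = complete (application may read it).

END = 0xFF        # assumed end-frame marker
containers = {}   # message type -> bytearray
flags = {}        # message type -> 0 / 1 / 2

def on_frame(frame: bytes):
    mtype = frame[0]
    if mtype == END:                    # end frame: in-progress types complete
        for t, f in flags.items():
            if f == 1:
                flags[t] = 2            # notify application: reassembly done
        return
    if flags.get(mtype, 0) != 1:        # first frame of this long message
        containers[mtype] = bytearray()
        flags[mtype] = 1
    containers[mtype] += frame[1:]      # store content by category

for fr in (b"\x01ABCDEFG", b"\x01HIJKLMN",
           b"\xff\x00\x00\x00\x00\x00\x00\x00"):
    on_frame(fr)
print(flags[1], bytes(containers[1]))
```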

6 Real-Time Transmission of Message Information

The priority arbitration of the CAN bus essentially provides a real-time guarantee
for high-priority messages. Under low network load, the probability of information
collision on the CAN bus is small and the real-time performance of the network is
very good. As the network load increases, the probability of collision on the bus
increases. If only the underlying CAN functions are used, the real-time performance
of low-priority message transmission will be affected, and the nodes sending these
low-priority messages may even exit the bus after repeated transmission errors.
The detection system implements master-slave information interaction based on the
CAN bus. The longest transmission period Ti from the transmission of the bus message
information frame i to the processing completion mainly includes several time periods
as shown in Fig. 5.

Fig. 5. CAN message frame maximum transmission period

The CAN bus can transfer at rates up to 1 Mb/s, at which the communication
distance can reach 40 m. The communication distance of the lower-computer CAN bus
system does not exceed 15 m, which satisfies the distance requirement even at the
highest transmission rate. The faster the communication rate, the shorter the time
a message spends on the bus and the less likely it is to be interfered with; at
the same time, however, the instability of the CAN bus system increases, and the
bus is easily brought down when a node crashes or drops off while accessing it.
Taking bus transmission rates B of 250 kbps, 400 kbps, 500 kbps and 1 Mbps
respectively, the relevant transmission time parameters of information frame i at
the different rates are obtained, as shown in Table 1.

Table 1. Relationship between bus transmission rate and information frame transmission time
Bus transfer rate B 250 kbps 400 kbps 500 kbps 1 Mbps
Information frame longest period Ti 1132 µs 762 µs 590 µs 320 µs
Longest transmission time Ri 541 µs 338 µs 270 µs 135 µs
Longest blocking time Bi 541 µs 338 µs 270 µs 135 µs
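The times in Table 1 are mutually consistent with a maximal frame length of about 135 bits (implied by the 135 µs entry at 1 Mbps, i.e. a full 8-byte standard data frame including overhead and stuff bits); dividing that length by each bit rate reproduces the longest transmission time Ri to within a microsecond of rounding:

```python
# Cross-check of Table 1: Ri ≈ frame_bits / bit_rate.
# The 135-bit frame length is inferred from the table itself.

FRAME_BITS = 135

table_ri = {250e3: 541, 400e3: 338, 500e3: 270, 1e6: 135}  # µs, Table 1
computed = {rate: FRAME_BITS / rate * 1e6 for rate in table_ri}

for rate, t_us in computed.items():
    print(f"{rate / 1e3:.0f} kbps: computed {t_us:.1f} µs, "
          f"table {table_ri[rate]} µs")
```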

The detection system adopts a CAN bus transmission rate of 500 kbps. At this rate
the bus does not collapse on a sudden change such as a suspended node, while the
transmission time of a message on the physical link is kept as short as possible,
reducing the probability of data being interfered with. In the worst case, two
consecutive message frames are received and processed by the same node module.
The frame space Fi is then the longest blocking time Bi plus the time Ii + Ci
reserved by the system for data processing, namely:

Fi = S × (Bi + Ii + Ci)    (1)

where S is the safety factor, whose value lies in the interval [1, 2]. The
detection system CAN bus adopts a safety factor of 1.5, giving a frame space of
500 µs, which effectively avoids bus information collisions and ensures the
real-time performance of all information.
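Equation (1) can be checked numerically. Bi = 270 µs comes from Table 1 at 500 kbps, while the reserved processing times Ii and Ci are illustrative assumptions chosen only to show how the 500 µs design value arises; the paper does not give their individual values.

```python
# Frame-space calculation per Eq. (1): Fi = S * (Bi + Ii + Ci), in µs.

def frame_space(S, Bi_us, Ii_us, Ci_us):
    return S * (Bi_us + Ii_us + Ci_us)

# S = 1.5 safety factor, Bi = 270 µs (Table 1 at 500 kbps);
# Ii = 40 µs, Ci = 23 µs are assumed processing reserves.
Fi = frame_space(1.5, 270, 40, 23)
print(f"{Fi:.1f} µs")   # close to the 500 µs design frame space
```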

7 Conclusion

This paper studies the implementation of the master-slave information interaction of the
detection system from the perspective of information frame marking and recognition.
A new data processing method is proposed which effectively improves the
efficiency, accuracy, real-time performance, speed and quality of information
transmission. The method changes the way data is transmitted and received, and
can also be extended to other control applications using the CAN bus.

References
1. Hiyamizu, K., Fujiwara, Y., Genno, H., et al.: Development of human sensory sensor and
application to massaging chairs. In: Proceedings 2003 IEEE International Symposium on
Computational Intelligence in Robotics and Automation. Computational Intelligence in
Robotics and Automation for the New Millennium (Cat. No. 03EX694), Kobe, Japan, vol. 1,
pp. 140–144 (2003)
2. Teramae, T., Kushida, D., Takemori, F., et al.: Estimation of feeling based on EEG by using
NN and k-means algorithm for massage system. In: Proceedings of SICE Annual Conference
2010, Taipei, pp. 1542–1547 (2010)
3. Jaafar, H., Fariz, A., Ahmad, S.A., Yunus, N.A.Md.: Intelligent massage chair based on blood
pressure and heart rate. In: 2012 IEEE-EMBS Conference on Biomedical Engineering and
Sciences, Langkawi, pp. 514–518 (2012)
4. Jin, H., Shen, L., Song, J.: Massage effect evaluation of massage chairs based on EMG
signals. Packag. Eng. 35(02), 28–31 (2014)
5. Sun, L., Wang, D.: The development of fault detection system based on LabVIEW. In: 2018
5th International Conference on Electrical and Electronic Engineering (ICEEE), Istanbul,
pp. 157–161 (2018)
6. Zhang, H., Deng, T., Li, G.: Design and implementation of multiple communication methods
in massage chair detection system. Inf. Syst. Eng. (01), 35–37 (2019)
Development of Data Visualization Interface
in Smart Ship System

Guiqin Li¹, Zhipeng Du¹, Maoheng Zhou¹, Qiuyu Zhu¹, Jian Lan¹,
Yang Lu², and Peter Mitrouchev³

¹ Shanghai Key Laboratory of Intelligent Manufacturing and Robotics,
Shanghai University, Shanghai 200072, China
[email protected]
² No.2 Qingdao Road New Hi-Tech Industry Park, Changshu, Jiangsu, China
³ University of Grenoble Alpes, G-SCOP, 38031 Grenoble, France
[email protected]

Abstract. With the rapid development of shipping technology, the intellectualization
of ships has become an inevitable trend in the field of shipbuilding and
shipping. In order to collect data and monitor the various systems of the ship in
real time, the scheme of data visualization interface for smart ship system is
proposed with the design of the ship condition monitoring platform architecture
which adopts MVC (Model–view–controller) design pattern. The scheme
combines ship management service network into integrated platform, thus
constructing the data visualization interface of the intelligent ship system by
using web development techniques, front-end frameworks and JavaScript plugins
such as AJAX (Asynchronous JavaScript and XML), Bootstrap, layui and Highcharts.
Highcharts. This overcomes the shortcomings of the traditional ship data
monitoring system and lays a solid foundation for the subsequent functional
development of the intelligent ship.

Keywords: Intelligent ship · Data visualization · Dynamic web design

1 Introduction

The shipbuilding industry, as a modern comprehensive and strategic industry that
provides technical equipment for water transportation, marine resource development
and national defense construction, is an important part of the national traditional
industry. With the development of shipping technology, the degree of ship
informatization has gradually increased, and ship systems have become more and
more integrated. In the past, the traditional models of “modeling through mechanism,
monitoring through thresholds, and management through experience” have been
gradually marginalized. Non-intelligent ships will not be able to meet the needs of the
future ship market, and ship intelligence has become an inevitable trend in the
development of shipbuilding and shipping.
In 2012, the MUNIN (Maritime Unmanned Navigation through Intelligence in
Networks) project, jointly organized by eight research institutes including
Fraunhofer CML, MARINTEK and Chalmers, was the first to study large unmanned
vessels [1]. The Norwegian

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 219–226, 2020.
https://doi.org/10.1007/978-981-15-2341-0_27

Classification Society DNV proposed “Shipping 2020” in 2012 and “2020 Shipping” in
2015, adding two major technological development trends: “Mixed Ship Propulsion”
and “Connectivity” [2]. In September 2015, Lloyd’s Register (LR), QinetiQ Group
and the University of Southampton jointly released the Global Marine Technology
Trends 2030 (GMTT 2030) report, which listed smart ships as one of 18 major
marine technologies [3]. The rapid development of intelligent ship technology has
made the Chinese shipbuilding industry also pay attention to the application of Internet
of Things, big data and intelligent services in the shipbuilding field. In 2015, China
Classification Society (CCS) formulated the “Smart Ship Code” based on the
experience of intelligent ship applications at home and abroad and the future
development direction of ships; it clearly stipulates specific requirements for
intelligent ships in terms of hull, engine room, navigation, energy efficiency
control, cargo management, data integration platform, etc. [4]. On November 28,
2017, the “Dazhihao”, hailed by the domestic media as the world’s first smart
ship, was officially delivered: an “iDolphin” 38,000-ton intelligent bulk carrier
financed and built by China State Shipbuilding Corporation and jointly developed
by many intelligent ship research and development units at home and abroad [5].
Data visualization technology is a means of transforming data, information and
knowledge into visual representations by combining collected data, computer
graphics, human-computer interaction and computer-aided tools, so that information
is conveyed more efficiently and clearly [6]. The data visualization process can
be regarded as the process of transforming and displaying a data stream; the basic
process model is shown in Fig. 1. The mission of data visualization is to present
the value of data clearly and intuitively so that it can really play a role in
decision support. Data visualization focuses on top-down processing of data,
information and knowledge; the goal is to create charts that clearly and
effectively communicate key features in combination with aesthetics, enabling
deeper insight into otherwise inconspicuous data sets [7].

Fig. 1. Process model of data visualization



At present, the development of ship intelligence and informatization in China
lags behind. A typical domestic hull neither acquires and processes data from its
own control and power units nor has a unified monitoring platform, leading to
relatively independent, single-function information maintenance and management.
At the same time, the independent design of each system increases the complexity
of data modeling and protocol transmission and makes integrated interaction
difficult. Based on the requirements of the intelligent ship integration platform,
this paper develops the intelligent ship human-computer interaction management
interface. Starting from the effective processing of the ship's various system
parameters and main data, the data is visualized for real-time monitoring
according to actual requirements.

2 System Architecture and Functional Structure Analysis

2.1 Design of the System Architecture


The architecture of the intelligent ship system, based on the MVC design pattern,
is shown in Fig. 2. It consists of four parts: a resolution server, a database
server, a business server and a WEB server. The resolution server and the
business server handle the business logic; the database server stores and
retrieves the data; the WEB server serves the platform front-end pages, with
user requests processed by the controller.

Fig. 2. Smart ship system architecture diagram

This article focuses on the design of the front-end view-layer pages of the
platform on the WEB server. The terminal user is provided with view pages for
accessing the WEB server side; these pages display relevant ship driving state
information, and from the real-time monitoring information they provide, the user
can learn the state of the relevant equipment of each ship system and thus
achieve intelligent control of the ship. The platform front-end pages are
composed of specific functional modules responsible for data display, command
uploading, command feedback, etc. Users can select the corresponding operations
according to their own needs.

2.2 System Function Requirements Analysis and Communication Protocol
The system functional structure diagram shown in Fig. 3 is designed through the
analysis of the functional requirements of the system. The user accesses the system
front-end page through the browser, registers the relevant login information at the
corresponding position of the homepage, obtains the login name and password
belonging to the unique identifier of the user, and enters the user-related interface for
operating the main functions of the system. Different operating ranges are set
according to user privilege levels to control the operating range of users with
different privileges.
The main functions of this system are composed of four parts, namely user information
management, ship driving state management, terminal management and version
information description. The system is mainly used for ship driving state management,
and other functional interfaces are reserved, which can be expanded according to new
application requirements.

Fig. 3. Smart ship system function structure diagram

(1) User information management is divided into LEVEL 1, LEVEL 2, and LEVEL 3.
According to the different permission levels, the user can operate differently. The
higher the permission, the wider the range of operations. The main function of the
user whose privilege level is LEVEL 3 is to operate the historical data of each
system and equipment of the ship, such as the specific functions of querying,
deleting and modifying historical data. Users with the privilege level of LEVEL 2
can view the real-time data display of each device on the basis of LEVEL 3, and set
the alarm value according to the actual situation. Users with the highest user-level
permission level of LEVEL 1 can implement any operation within the page, such
as adding, deleting, checking, and changing the information of any user.
(2) Ship driving state management is divided into ship condition management, fault
handling, and configuration management. Ship condition management consists of
real-time ship status information, including basic status, cabin status, equipment
operating status, and safety status. The basic state mainly queries the ship speed,

engine speed, engine torque, total mileage, fuel consumption, water temperature,
lubricating oil and other information; the cabin state is used to monitor the air
pressure, temperature and flammable gas concentration in each cabin of the ship;
equipment operating status monitoring includes information such as equipment air
pressure, temperature, hydraulic pressure and whether operation is normal; safety
status monitoring covers alarms and other conditions exceeding the set thresholds.
Fault handling mainly records, when the ship encounters a fault, the fault number,
fault classification, fault occurrence time and fault content description; such
information makes it easy for users to analyze and locate the ship's faults.
Configuration management is used to maintain the ship status management data
items and fault query items, and to record the operation behavior of the
equipment, including information such as the operator, operation time and
operation content.
(3) Terminal management is divided into user terminal information display and user
terminal information management. The user terminal information is mainly used
to display the current terminal device number, name, model, specifications, basic
parameters, related configuration equipment, production date, manufacturer,
maintenance record, and service life. User terminal information management is
mainly used to add, delete, check, and change the operational information of the
terminal device.
(4) OPC UA (OPC Unified Architecture) is the new generation of OPC. OPC UA does
not depend on a particular programming language or operating system; it can
run regardless of manufacturer and platform. The system unifies different
protocols into the OPC UA standard protocol and performs unified information
modeling and secure transmission. OPC UA uses a semantics-based,
service-oriented architecture. With a unified architecture and mode, it can
realize horizontal information integration, such as data acquisition and
device interoperability at the device level, as well as vertical information
integration from device to SCADA to MES and from device to cloud. As long as
the upper-layer software is designed according to the OPC UA specification,
device data can be obtained without knowing the communication protocol of the
underlying device. Data modeling and transmission use the OPC UA standard:
acting as an OPC UA server, the terminal converts the different protocols
into a unified information model description and transfers the data from the
collection terminal to the data interface server through the OPC UA
transmission standard; finally, the software stores the data uniformly in the
database.
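The nested permission ranges described in (1) above — each level inheriting the operations of the level below it — can be modelled as cumulative operation sets. The operation names are illustrative assumptions.

```python
# Cumulative privilege model: LEVEL 1 > LEVEL 2 > LEVEL 3.

LEVEL3 = {"query_history", "delete_history", "modify_history"}
LEVEL2 = LEVEL3 | {"view_realtime", "set_alarm_threshold"}
LEVEL1 = LEVEL2 | {"add_user", "delete_user", "query_user", "modify_user"}

PERMISSIONS = {1: LEVEL1, 2: LEVEL2, 3: LEVEL3}

def allowed(level: int, operation: str) -> bool:
    """True if a user at the given privilege level may perform operation."""
    return operation in PERMISSIONS[level]

print(allowed(3, "view_realtime"),   # LEVEL 3 cannot view real-time data
      allowed(2, "view_realtime"),   # LEVEL 2 can
      allowed(1, "delete_user"))     # LEVEL 1 manages users
```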

3 Design and Implementation of System Dynamic Webpage

Based on the MVC-pattern platform architecture design and the functional
requirements analysis, mainstream technologies are used to complete the optimized
design and implementation of the platform front-end function pages. The back-end
webpage logic of the system mainly adopts the Django framework. The main data of
the front-end pages is displayed through dynamic loading, divided into partial
and overall dynamic loading according to the user's specific access operation.

Compared with early front-end pages, the foreground data is loaded more flexibly
and front-page code rewriting is reduced. In the specific implementation, we
mainly adopt the Django back-end framework, the Bootstrap front-end framework,
JS plug-in libraries and AJAX technology, and use an ORM (Object Relational
Mapping) framework to perform the various database operations with simpler
statements.
Figure 4 shows the data visualization interface of the intelligent ship system.

Fig. 4. Smart ship system data visualization interface

(1) DIV blocking technology to achieve page regionalization


As shown in Fig. 4, the platform page is divided by div blocks into four main
areas: the header area at the top, the main function area in the middle, the
navigation area on the left, and the version information description area at the
bottom. The size and position of each area are controlled by CSS code to realize
the overall layout of the page. Within the four main area div blocks, further div
blocks are embedded to refine the layout of each area. For example, the function
navigation div block is embedded in the navigation bar div area, and four div
blocks are embedded in the function navigation block for displaying user
information, ship driving state management, terminal management and version
information; inside each specific function management module, div blocks are
embedded again to implement specific function descriptions.
(2) JS plugin library to achieve page specific functions
As shown in Fig. 4, in the Header area, the user’s registration, login and logout
functions are designed. In addition, the Search box is implemented with the layui
plugin, which enables search and query of page elements. In the navigation area,
the folding menu plug-in that comes with the Bootstrap framework is used: by
clicking on the relevant area, a sub-div block can be expanded or collapsed. For
example, the user clicks on the host propulsion system on the left to display or
hide the ship condition management, fault query and terminal management
sub-block information. The related JS file also adds a click event to these
specific sub-function modules: the user clicks on a module and the data of the
related function is displayed in the functional area through dynamic loading.
The function display area realizes the display of the various status modules of
ship state management and further refines the specific managed states of the
ship. The specific status information is displayed in the main function area;
different variable names and IDs are selected through the drop-down box, and the
corresponding status information is displayed by switching. Figure 4 shows the
real-time display of the current parameter information, which mainly converts
the various types of information data in the database into dashboard form through
the gauge view plug-in of Highcharts; when the data in a dashboard exceeds the
threshold set for the variable, an alarm box pops up and the data is stored in
the history information table.
(3) Ajax asynchronous dynamic submission
In the form submission process, the system partially adopts the Ajax asynchronous
submission mode. Ajax asynchronous submission interacts with the server in the background, loading data asynchronously so that only part of the page is refreshed. This significantly improves the performance of the business server and speeds up the loading of information into the database. The Ajax code implements the submission of the user registration form: the url attribute overrides or specifies the 'action' attribute of the form, the type attribute specifies the form submission method, and the data attribute specifies the parameter values. After the submission, the callback function is called to feed the result back to the user.

4 Conclusions

The data visualization interface and information architecture based on requirement analysis of data integration for smart ship are proposed and designed. Through a large
number of tests, the basic functions of the front page of the system are realized. The test
results show that the data visualization through MVC design mode and dynamic
webpage technology can achieve the expected results, such as real-time delivery of
information to users, diversified page display effects, modular page writing, fast page
loading speed, and improved user experience. In the future, the data visualization
interface in the intelligent ship system will be further improved from the aspects of
function expansion and interface beautification.

References
1. Burmeister, H.-C., Bruhn, W., Rødseth, Ø.J., Porathe, T.: Autonomous unmanned merchant
vessel and its contribution towards the e-navigation implementation: the MUNIN perspective.
Int. J. e-Navig. Marit. Econ. 1, 1–13 (2014)
2. Rolls-Royce Testing Drone Technology for Unmanned Cargo Ships (2014). www.rolls-royce.com

3. Global Marine Technology Trends 2030. Lloyd’s Register, QinetiQ and University of
Southampton (2015)
4. Smart ship specification. China Classification Society (2015)
5. Liu, C., Chu, X., Xie, S., Yan, X.: Review and prospect of ship intelligence. Ship Eng. 38(3),
77–84 (2016)
6. Zhang, L.: The visualization of topologies and the fast metamorphosis of polyhedral models
in 3D. Master’s thesis, Jiangnan University (2007)
7. Sun, B.: Research and implementation of data visualization based on microsoft newest graph
system WPF and silverlight. Master’s thesis, Northeast Normal University (2009)
Feature Detection Technology
of Communication Backplane

Guiqin Li1(&), Hanlin Wang1, Shengyi Lin2, Tao Yu2, and Peter Mitrouchev3
1
Shanghai Key Laboratory of Intelligent Manufacturing and Robotics,
Shanghai University, Shanghai 200072, China
[email protected]
2
Shanghai Zhaohong Aviation Technology Co., Ltd., Building 5,
No. 500, Huapu Road, Qingpu District, Shanghai 201700, China
3
University Grenoble Alpes, G-SCOP, 38031 Grenoble, France

Abstract. The infrastructure of the telecommunication service is the communication base station, which is the basis for the operation of the entire communication network. In this paper, a detection system is designed to solve problems caused by manual product testing, such as excessive uncertainty and the lack of a uniform quality inspection standard. The system is developed based on image enhancement processing and image recognition. Through deep learning and data processing, the feature detection accuracy reaches 100%. The system meets the needs of large-scale manufacturing.

Keywords: Communication base station · Feature detection · Image enhancement · Image recognition · Deep learning

1 Introduction

Since 2010, the total volume of China’s telecom services has generally increased. With
the development of mobile communications and 5G, it is expected that the proportion
of mobile communications revenue will continue to rise in the future. The communi-
cation base station is the basis for the operation of the entire communication network.
According to the requirements of the automation industry, it is of great significance to
use the image recognition method to automatically detect the backplane of the com-
munication base station [1, 2].
Wang studied the sharpening method of digital image, using one-direction first-
order differential and non-directional first-order differential algorithm to sharpen the
image [3]. Cheng and Wei studied the use of K-Means clustering algorithm to merge
similar colors of images, in order to reduce color types and improve processing effi-
ciency [4]. Yang studied digital image enhancement processing method [5]. In terms of
deep learning, Xing and Wang, Zhou, Yang studied the method of using convolutional
neural network algorithm to extract image features and enrich the image separately
[6, 7], which greatly improved the accuracy of image recognition. Ma studied the deep learning algorithm based on convolutional neural network for handwriting recognition, which provides a reference for image recognition in this paper [8].

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 227–233, 2020.
https://doi.org/10.1007/978-981-15-2341-0_28
In order to complete the detection of key features and details of the communication
base station backplane, an image recognition detection system is designed which can
meet the needs of large-scale manufacturing.

2 System Design

The whole system is designed to realize the feature detection of the communication
base station backplane, including image acquisition, image analysis processing and
feedback of processing results. It mainly consists of industrial computer (control sys-
tem), image acquisition system and database system.
The image acquisition system can detect multiple products and must meet the following automation indicators: the single-product detection cycle is less than 4 min, and the image processing time is less than 45 s. The system can detect and identify features on the surface of the product: whether the black Pad is pasted, whether the positioning pin is installed, whether the label is present, and whether it is reversed. The
detection accuracy of the communication backplane can reach 100%.
The image acquisition system is constructed as shown in Fig. 1. A detection ter-
minal and a central control PLC are included. The detection terminal can be divided
into three parts: the frame, the light source system and the industrial camera. A three-
axis coordinate system is attached to the frame which can control the movement of the
CCD camera and light. The light source selected with the best illumination condition
for the actual situation can assist the CCD camera to capture the three-dimensional
structure precisely.

Fig. 1. The structure of the image acquisition system.

The workflow of the image acquisition system is shown in Fig. 2. When the
product is loaded manually and the start button is pressed, the product will be clamped
and transported to the detection position. After its position and boundary are detected by the
position sensor, a signal is sent to the industrial computer to control the light to be
turned on to assist the CCD camera to obtain the color image of the product. To enhance the image, the system is designed to use several methods to process it, which
includes reducing the color of the image, then converting it into a gray image and
sharpening it. Feature extraction is performed on the enhanced image, and the image is
classified by machine learning method. The result is marked on the original image; the
result is output to the display terminal and saved in the database for later analysis. The
above steps are repeated several times until the detection of the entire product to be
tested is completed.

Fig. 2. The workflow of the image acquisition system.

3 Image Enhancement Processing

For the original image captured by the camera, the system first performs pre-processing
to remove image interference caused by factors such as illumination, camera parameters and positional orientation during image acquisition, and to enhance certain features in the image.

3.1 Image Color Merging Using K-Means Algorithm


Color quantization can merge the less important colors of the image into relatively important ones, reducing the number of color types. The K-Means algorithm is an unsupervised
clustering algorithm. It is simple and easy to implement, and can be automatically
clustered according to the similarity of data points in the image.
The K-Means algorithm steps are as follows:
(1) Select k data points from the data set containing n data points x(n) as the initial cluster centers Ci, where the data points are the pixel points of the image.
230 G. Li et al.

(2) Calculate the distance between each data point and the cluster center separately:
d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2}

The data point with the smallest distance from the cluster center is classified into
the same category as the cluster center.
(3) Calculate the mean according to the existing data points of each category, and re-
select the cluster centers of each category according to the following formula.

C_i = \frac{1}{n_i} \sum_{x_j \in C_i} x_j

(4) Repeat the above steps until the cluster center Ci does not change.
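The steps above can be sketched in a few lines of Python (a minimal illustration, not the paper's implementation; `kmeans` and its arguments are our names, and the 3-tuples stand in for RGB pixel values):

```python
import math
import random

def kmeans(points, k, max_iter=100):
    """Cluster 3-D points (e.g. RGB pixels) following the four steps above."""
    centers = random.sample(points, k)           # step (1): initial cluster centers
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for p in points:                         # step (2): assign each point to nearest center
            d = [math.dist(p, c) for c in centers]
            clusters[d.index(min(d))].append(p)
        new_centers = [
            tuple(sum(x) / len(c) for x in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)      # step (3): recompute centers as cluster means
        ]
        if new_centers == centers:               # step (4): stop when centers no longer change
            break
        centers = new_centers
    return centers, clusters
```

On an image, `points` would be the list of pixel colors, and each pixel is then repainted with its cluster center to perform the color merging.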

3.2 Spatial Domain Image Enhancement


Let the color of a certain pixel be RGB (R, G, B), and the gray image obtained by the
simplified color image can be obtained by the following algorithm.

Gray = R \times 0.3 + G \times 0.59 + B \times 0.11

A grayscale image of the original image can be obtained by changing the color of each pixel to RGB (Gray, Gray, Gray).
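As a sketch, the conversion can be written as follows (assuming 8-bit channel values and the common 0.3/0.59/0.11 luminance weights; the function name is ours):

```python
def to_gray(r, g, b):
    """Weighted grayscale value: 0.3*R + 0.59*G + 0.11*B, rounded to an integer."""
    return round(r * 0.3 + g * 0.59 + b * 0.11)
```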
For the obtained grayscale image, the image can be processed in a two-dimensional
space using spatial domain image enhancement. The steps for spatial domain image
enhancement are as follows:
(1) Image enhancement
Use the histogram equalization method to convert a heavily concentrated grayscale interval into a uniform distribution over the entire grayscale range.
(2) Image smoothing
The median filtering method is used to smooth the image in order to eliminate or reduce
noise and improve image quality. Median filtering does not blur the edges when eliminating random noise.
(3) Image sharpening
Image sharpening is an important step in image preprocessing to highlight the edges
and details of the image as well as its outline. The principle is to enlarge the grayscale difference between the pixels of the edge portions of the image and the surrounding pixels, so that the edges stand out and the sharpness of the image is improved as a whole.
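Step (1) above, histogram equalization, can be illustrated with a minimal pure-Python sketch (a textbook cumulative-distribution mapping, not necessarily the paper's exact variant; names are ours):

```python
def equalize(img):
    """Histogram equalization: remap 0-255 gray levels through the cumulative distribution."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * 256
    for p in flat:                       # count how often each gray level occurs
        hist[p] += 1
    lut, total = [], 0
    for h in hist:                       # build the cumulative distribution as a lookup table
        total += h
        lut.append(round(255 * total / n))
    return [[lut[p] for p in row] for row in img]
```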

The image is sharpened using non-directional first-order differential sharpening according to the characteristics of the detected image. The principle and implementation process of using the Sobel algorithm for non-directional first-order differentiation is as follows.

Fig. 3. Non-directional first-order differential sharpening template and grayscale changes.

(1) Sobel template selection


A Sobel operator is selected and its center is determined. The template consists of two parts, a horizontal template Sx and a vertical template Sy, which are transposes of each other:

S(x, y) = \left( S_x^2 + S_y^2 \right)^{1/2}

(2) Sum calculation


The template is first aligned with the upper left corner of the image; the sums of the products of the template coefficients and the gray values of the corresponding pixels are then computed for the two templates, giving sum1 and sum2, and the gray value of the template center point is calculated as follows:
sum = \left( sum_1^2 + sum_2^2 \right)^{1/2}

(3) Result processing


Replace the gray value of the center-point pixel with the sum obtained above, as shown in Fig. 3, then move the template horizontally by one pixel. The process is repeated until every pixel that the center point covers in the pixel value matrix has been replaced, at which point the non-directional first-order differential sharpening algorithm ends.
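The three steps can be sketched as follows (pure Python on a matrix of gray values; leaving the border pixels unchanged is our assumption, since the text does not specify border handling):

```python
SX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal Sobel template
SY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical template (transpose of SX)

def sobel_sharpen(img):
    """Slide both templates over the image and replace each interior pixel with
    sum = sqrt(sum1^2 + sum2^2), clipped to the 0-255 gray range."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]            # border pixels are copied unchanged
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            s1 = sum(SX[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            s2 = sum(SY[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            out[i][j] = min(255, int((s1 ** 2 + s2 ** 2) ** 0.5))
    return out
```

On a flat region both sums are zero, so uniform areas stay dark while edges are driven toward 255.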

4 Deep Learning and Data Processing

The obtained image features are extracted and classified by a convolutional neural
network.

4.1 Image Recognition Based on CNN


The convolution kernel obtained by neural network learning can be used to enrich the
image. In the convolutional neural network, the two-dimensional convolution can be
expressed by the following formula.

s(i, j) = (X * W)(i, j) + b = \sum_{k=1}^{n_{in}} (X_k * W_k)(i, j) + b

where X is the input, W is the convolution kernel, n_in is the number of input matrices, X_k represents the kth input matrix, and W_k refers to the kth convolution kernel matrix.
The convolution kernel is represented by a two-dimensional matrix, and each number in the matrix is called a weight. In the convolution calculation, the kernel matrix is aligned with the upper left corner of the image matrix; each weight is multiplied by the corresponding image pixel, and these products are added together to give one convolution result, after which the kernel is shifted to the right by one pixel and the calculation is repeated.
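The sliding-window computation described here can be sketched for a single input matrix (stride 1, no padding; `conv2d` and its arguments are our names, not the paper's code):

```python
def conv2d(X, W, b=0.0):
    """s(i, j): element-wise products of W and the window of X at (i, j), summed, plus bias b."""
    n, m = len(X), len(X[0])
    kh, kw = len(W), len(W[0])
    return [
        [
            sum(W[a][c] * X[i + a][j + c] for a in range(kh) for c in range(kw)) + b
            for j in range(m - kw + 1)          # slide right one pixel at a time
        ]
        for i in range(n - kh + 1)              # then move down one row
    ]
```

With n_in input matrices, the per-channel results of this function would simply be summed, matching the formula above.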

4.2 Label the Results on the Original Image


The convolutional neural network is used to process the digital image, and the result is
output. The result is shown in Figs. 4, 5, 6, 7, 8 and 9.

Fig. 4. Image with a dowel pin
Fig. 5. Image without a dowel pin

Fig. 6. Image with PAD
Fig. 7. Image without PAD

Fig. 8. Image with correct orientation of the barcode
Fig. 9. Image with wrong orientation of the barcode

It can be seen that the designed system can accurately identify the items to be
detected on the backplane and output the results on the display interface.

5 Conclusions

A communication base station backplane feature detection system is designed with image enhancement technology to improve the clarity of the images acquired from the backplane. Using deep learning, the features of the acquired images can be identified and detected with an accuracy of 100%. This method eliminates the uncertain factors introduced by manual inspection and improves the efficiency of automated production and processing.

References
1. Wang, T.: Application and details of computer image recognition technology. Electron.
Technol. Softw. Eng. 19, 128 (2017)
2. Zou, X.: Research on computer image recognition technology and application problems.
Electron. Test 23, 46–47 (2017)
3. Wang, Y.: Sharpening algorithm for digital image processing. Communications 26(02), 275–
276 (2019)
4. Cheng, G., Wei, Y.: Application of color quantization algorithm based on k-means in rock
image preprocessing. J. Xi’an Shiyou Univ. Nat. Sci. Edn. 34(03), 114–119 (2019)
5. Yang, C.: Research and implementation of digital image enhancement technology. Comput.
Program. Skills Maintenance 09, 138–139 (2018)
6. Xing, Z.: Application of convolutional neural network in image processing. Softw. Eng. 22
(06), 5–7 (2019)
7. Wang, M., Zhou, S., Yang, Z., et al.: A brief introduction to deep learning techniques. Autom.
Technol. Appl. 38(05), 51–57 (2019)
8. Ma, L.: Research on deep learning algorithm about handwriting based on the convolutional
neural network. Comput. Sci. Appl. 08(11), 1773–1781 (2018)
Research on Data Monitoring System
for Intelligent Ship

Guiqin Li1(&), Xuechao Deng1, Maoheng Zhou1, Qiuyu Zhu1, Jian Lan1, Hong Xia2, and Peter Mitrouchev3
1
Shanghai Key Laboratory of Intelligent Manufacturing and Robotics,
Shanghai University, Shanghai 200072, China
[email protected]
2
No. 2 Qingdao Road New Hi-Tech Industry Park, Changshu, Jiangsu, China
3
University of Grenoble Alpes, G-SCOP, 38031 Grenoble, France

Abstract. Ship data monitoring systems are critical to the safety of ship
operations. They provide important information for the maintenance and man-
agement of equipment in the cabin. The network system of intelligent ship
monitoring system, as well as hardware systems such as computing, storage and
terminal, are designed to provide the underlying hardware support for the
intelligent ship monitoring system. The cloud computing platform and database
cluster are the basic software platforms to provide the operational basis for the
intelligent ship monitoring system. The intelligent ship data monitoring system
includes unified user authentication, data monitoring, data acquisition and
transmission, and warehousing software. The format of unified data interaction
provides intelligent support for ship operational efficiency and navigation safety
through the comprehensive sharing of various data, which makes up for the
deficiencies of traditional ship data monitoring systems and provides a basis for
the deployment and application of the follow-up functions of intelligent ships.

Keywords: Intelligent ship · Data monitoring system · Underlying hardware · Cloud computing platform

1 Introduction

At present, the ship navigation system, engine monitoring and remote control system, and maintenance management system are relatively independent, each with a single function and closed information; the systems are designed independently, their transmission protocols are complex, and integrated interaction is difficult [1]. With the rapid development of
shipping technology, the degree of ship informationization has gradually increased. In
the navigation of ships, it is necessary to reduce the navigation and maintenance costs
of ships, improve energy consumption utilization, and optimize navigation routes [2].
The degree of integration of ship systems is becoming more and more demanding, and
ship intelligence has become an inevitable trend in the development of shipbuilding
and shipping.
© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 234–241, 2020.
https://doi.org/10.1007/978-981-15-2341-0_29

The intelligent ship data monitoring system collects a large amount of data through the underlying data nodes. Based on this information, the operation and management personnel can properly control the equipment to ensure the normal operation of the ship
[3]. The ship has a wide variety of professional systems and a large amount of data
processing tasks. The traditional C/S-structure data monitoring system is responsible for both the presentation layer and the business logic layer; it requires a large amount of software to be installed, and the program is difficult to deploy, maintain and upgrade, so it is difficult to meet the demand. Using the Django framework, this paper builds a smart ship data monitoring system based on the B/S architecture [4], which standardizes data acquisition, transmission and storage. It not only significantly improves the real-time performance, security and maintainability of the system, but also makes centralized monitoring and unified management easy to achieve [5]. At the same time, a cloud computing platform is established in which a large number of otherwise limited computer resources are pooled [6]. The remote cloud computing host runs the intelligent ship data monitoring system and can rapidly expand computing power and storage space and provide various software tools as needed.

2 Hardware Platform Architecture

The network system of the intelligent ship monitoring system, as well as the hardware
systems of computing, storage and terminal, provide the underlying hardware support
for the intelligent ship monitoring system.
The overall design of the hardware system is a three-layer network architecture: information layer, network layer and application layer. The structure of the system
network communication system is shown in Fig. 2. The information layer collects the
physical information of various devices of the ship through various ship intelligent
sensing devices, and converts various signals into digital information that can be
processed. The network layer realizes the connection between devices and the com-
munication of the system, and the network connection form of the ship is Ethernet. The
application layer includes a workstation, a database server, a storage server, and the
like, and privatizes computing resources and storage resources in a private cloud
manner.
The network topology adopts a dual redundant star-type loop-free Ethernet topol-
ogy, and the network protocol and the virtual private network adopt the international
standard network protocol TCP/IP. Virtualizing the server reduces the interdepen-
dencies between the operating system and the hardware, enabling dynamic adjustment
of server resources on demand across resource pools. The storage network is built using
IPSAN [7], with at least two storage nodes to improve storage reliability and virtualize
with cloud computing technology. Compute nodes use multiple high-performance
computers and are virtualized in a private cloud to meet the dynamic allocation of
resources (Fig. 1).
Fig. 1. Schematic overview of key monitoring components of hydropower plant

3 Basic Software Platform Architecture

The basic software includes the cloud platform and database cluster, which is the basis
for the operation of the entire system.
The cloud computing platform pools the server clusters, performs unified man-
agement and scheduling, and implements flexible deployment of virtual machines
based on resource requirements and service priorities to improve resource utilization.
Build a private ship cloud computing platform, provide IaaS type cloud host or PaaS
type container computing service to the user layer, and all upper layer application
software is built on the platform. The cloud computing platform chose Xen, which is
more mature, as the main virtualization solution [8] for large-scale infrastructure
deployment. IPSAN provides a unified storage solution, and the network design and core equipment connections are dual redundant. For network isolation and IP allocation, devices accessing the network are isolated by dividing VLANs based on ports combined with subnets. The IP addresses of terminal devices, servers, storage and network devices are allocated from the private address space of the internal network, and devices in different subnets and VLANs are interconnected through internal routes and gateways. A schematic diagram of the system architecture of the Citrix XenDesktop
desktop virtualization solution is shown in Fig. 3.

Fig. 2. XenDesktop system architecture



The overall solution of the database cluster is implemented with the MySQL database management system and the MyCAT distributed database middleware [9]. Owing to the adaptability of MyCAT, different kinds of databases can be accessed smoothly through a unified interface; only the table structure and storage scale of the database need to be specified, and the rest of the work is handled by the cluster. Multiple physical databases can run on different physical machines or on virtual machines of the cloud platform, and the system scale can be expanded or reduced according to actual needs.

4 Application Software Platform

The application software platform includes unified user authentication, data monitoring
system, data collection and transmission, and warehousing software. It is the top-level
application that directly interacts with users. It is developed by B/S architecture and it
supports applications on the mobile side. The application software platform architec-
ture is shown in Fig. 4.

Fig. 3. Application software architecture

(1) Unified user authentication


Unified User Authentication uses the Role-Based Access Control Model (RBAC) [10,
11] to achieve a logical separation of users and access rights, reducing the complexity
of authorization management. It provides unified user and organization information, unified authentication and authorization services, and uses the RSA encryption algorithm to protect sensitive information.
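As a toy illustration of the RBAC separation (permissions attach to roles, and users only to roles; all names and data here are hypothetical, not from the paper's system):

```python
# Permissions are granted to roles, never directly to users.
ROLE_PERMS = {"operator": {"view_data"}, "admin": {"view_data", "set_threshold"}}
# Users are mapped to roles; changing a user's rights means changing its role.
USER_ROLES = {"alice": "admin", "bob": "operator"}

def has_permission(user, perm):
    """RBAC check: resolve the user's role, then look the permission up on the role."""
    return perm in ROLE_PERMS.get(USER_ROLES.get(user, ""), set())
```

The logical separation described above shows up here directly: authorization management only edits the two small tables, never per-user permission lists.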
(2) Data collection transmission and storage software
The OPC UA Client is used to connect to the OPC UA Server side [12], and data
communication can be performed in real time through reliable encryption. The OPC UA Client can perform real-time data collection and storage, historical data access, instant alarm information reception, device control and other functions.
(3) Data monitoring system
The intelligent ship data monitoring system reads various state parameters of the
equipment from the database, and is responsible for providing an interactive interface
to the user, displaying various data of the ship in real time on the monitoring interface
in various presentation forms, and realizing the monitoring function of the ship running
condition. When an instant message such as an alarm is stored in the database, the
information of the HTTP communication program is invoked by the trigger of the
database to push the information to the monitoring software in real time to realize
timely display of the information.

5 Development of Data Monitoring System

The data monitoring system is based on the B/S architecture and is developed using the
Django framework. For data with low volume and low update frequency, back-end data is requested with Ajax polling queries [13], which update part of the page content without reloading it; for large-scale data with high real-time requirements, the back-end transmission adopts the WebSocket protocol [14], which keeps a single long-lived connection and thus saves server resources.
(1) Analog signal interface design
Taking the speed and temperature simulation of the ship as an example, the speed is
displayed in the form of an instrument panel, and the temperature is displayed in the
form of a histogram. When the analog value exceeds the rated threshold, an alarm will
be given to alert the user. At the same time, the user can select the analog quantity to be monitored by clicking the drop-down box or searching in the input box, and then monitor the corresponding equipment. The interface design is shown in Fig. 6.

Fig. 4. Analog signal interface
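The alarm behavior described for the analog interface (trigger when a value leaves its rated range) can be sketched as a simple range check (a toy illustration; the function name and message format are ours):

```python
def check_alarm(value, low, high):
    """Return an alarm message when an analog value leaves its rated range, else None."""
    if value < low:
        return f"LOW: {value} below {low}"
    if value > high:
        return f"HIGH: {value} above {high}"
    return None
```

In the real system the thresholds would come from the threshold setting interface, and a non-None result would trigger the pop-up and a write to the history table.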



(2) Digital signal interface design


The digital signal interface is displayed in the form of a signal light. When the signal is
normal, the signal light is always green. When an abnormal situation occurs, the signal
light turns red to remind the user (Fig. 5). At the same time, the user can select the digital quantity to be monitored by clicking the drop-down box or searching in the input box, and then monitor the corresponding equipment. The interface design is shown in Fig. 6.

Fig. 5. Digital signal interface

(3) Threshold setting interface design


The threshold setting interface allows the user to set the unit, range and alarm value of
the analog value, and the set value will be updated synchronously to the analog signal
interface. At the same time, you can add new analog information or delete analog
information. The interface design is shown in Fig. 7.

Fig. 6. Threshold setting interface



(4) Historical data interface design


The historical data interface displays analog quantities as line charts, which effectively show the trend of an analog value over a period of time while allowing detailed numerical information to be viewed for each point on the chart, as shown in Fig. 8. Digital quantities are displayed in table form, giving a detailed record of the digital equipment information that is convenient for users to query, as shown in Fig. 7.

Fig. 7. Digital historical data interface

Fig. 8. Analog historical data interface

6 Conclusions

Certain improvements have been made to address the inadequacies of existing ship data monitoring systems. The overall cloud platform architecture, based on the ship's private cloud, provides a series of services such as virtual machines, virtual desktops, virtual storage and virtual networks, which can greatly improve the reliability, security, flexibility and economy of the system and provide solid data and computing resource support for building and growing the system's data intelligence analysis. At the same time, complex and diverse equipment terminals are protocol-converted into a standardized, unified data format, and the various ship subsystems with different levels of intelligence are gradually accessed and integrated to achieve comprehensive monitoring and intelligent management of ships. Convenient and quick access to the data and management of the ship's operation is very practical and valuable for shipping personnel.

References
1. Liu, S., et al.: Ship information system: overview and research trends. Int. J. Naval Architect.
Ocean Eng. 6(3), 670–684 (2014)
2. Kaminaris, S.D., et al.: An intelligent data acquisition and transmission platform for the
development of voyage and maintenance plans for ships. In: The Fifth International
Conference on Information, Intelligence, Systems and Applications (IISA 2014). IEEE
(2014)
3. Yang, C.S., Jeong, J.: Integrated ship monitoring system for realtime maritime surveillance.
In: IGARSS 2016 – 2016 IEEE International Geoscience and Remote Sensing Symposium.
IEEE (2016)
4. Li, X., Zhu, M., Zhan, X.: Exploration of the building model about theme_based learning
and shared community based Python add Django. In: Advances in Computer Science and
Education, pp. 129–134. Springer, Berlin (2012)
5. Li, S., Si, Z.: Information publishing system based on the framework of Django. In: China
Academic Conference on Printing & Packaging and Media Technology. Springer, Singapore
(2016)
6. Shen, Z., et al.: Cloud computing system based on trusted computing platform. In: 2010
International Conference on Intelligent Computation Technology and Automation, vol. 1.
IEEE (2010)
7. Lee, H.-J., Lee, K., Won, D.: Protection profile of personal information security system:
designing a secure personal information security system. In: 2011 IEEE 10th International
Conference on Trust, Security and Privacy in Computing and Communications. IEEE (2011)
8. Singhal, R., Bokare, S., Pawar, P.: Enterprise storage architecture for optimal business
continuity. In: 2010 International Conference on Data Storage and Data Engineering. IEEE
(2010)
9. Tsuchiya, M., Mariani, M.P.: Performance modeling of distributed database. In: 1984 IEEE
First International Conference on Data Engineering. IEEE (1984)
10. Shen, H., Fan, H.: A context-aware role-based access control model for web services. In:
IEEE International Conference on e-Business Engineering (ICEBE 2005). IEEE (2005)
11. Liu, S.: Task-role-based access control model and its implementation. In: 2010 2nd
International Conference on Education Technology and Computer, vol. 3. IEEE (2010)
12. Bruckner, D., et al.: An introduction to OPC UA TSN for industrial communication systems.
In: Proceedings of the IEEE (2019)
13. Mahemoff, M.: AJAX design patterns: creating web 2.0 sites with programming and
usability patterns. O’Reilly Media, Inc. (2006)
14. Imre, G., Mezei, G.: Introduction to a WebSocket benchmarking infrastructure. In: 2016
Zooming Innovation in Consumer Electronics International Conference (ZINC). IEEE
(2016)
Research on Fault Diagnosis Algorithm Based
on Multiscale Convolutional Neural Network

Xiaolong Li1, Lilan Liu1, Xiang Wan1, Lingyan Gao1, and Qi Huang2

1 Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, China
[email protected], [email protected], [email protected], [email protected]
2 Shanghai Baosight Software Corporation, Shanghai, China
[email protected]

Abstract. The traditional approach to fault diagnosis essentially searches for the optimal combination of feature extractor and classifier. This process requires manually extracted features and expert knowledge of the related fields, which greatly limits the versatility and generalization of the algorithm. The convolutional neural network is "end-to-end": it performs the whole pipeline of feature extraction, feature dimension reduction and classification directly on the original signal within a single network. In a traditional convolutional neural network, however, every convolutional layer uses convolution kernels of the same shape, so its feature extraction ability is relatively limited, while the fault signals of equipment or components are often complex and variable and their features are difficult to mine. To address these problems, this paper proposes a fault diagnosis method based on a multiscale convolutional neural network. Building on the traditional convolutional neural network, the diversity of convolution kernels in the convolutional layer is increased, and the feature data extracted by the kernels at each scale are finally fused. Experiments on a public bearing-fault data set show that the proposed method achieves a high fault recognition rate.

Keywords: Deep learning · Convolutional neural network · Fault diagnosis · Multiscale convolution · Feature fusion

1 Introduction

With the advancement of intelligent manufacturing and the rapid development of computers, sensors and related communication technologies, the condition monitoring of electromechanical equipment has entered the era of "big data", which brings new challenges to mechanical fault diagnosis. In this era, using intelligent diagnosis algorithms to automatically mine information from raw equipment data, instead of relying on diagnostic experts for feature extraction, and to diagnose the status of critical positions and components in real time while ensuring the accuracy and efficiency of fault diagnosis, has become a hot topic of current research.

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 242–249, 2020.
https://doi.org/10.1007/978-981-15-2341-0_30
The fault diagnosis of mechanical equipment usually involves installing a sensor at a key position of the equipment to collect its vibration signal, and judging the fault category of the equipment or component by performing feature mining on the original vibration signal [1]. Traditional fault diagnosis algorithms include principal component analysis (PCA) [2], manifold learning [3], the BP neural network (BP-NN) [4] and the support vector machine (SVM) [5]. These algorithms require manually extracted features and expert knowledge of the related fields. In recent years, deep learning has developed rapidly in academia and industry and has achieved remarkable results in many traditional recognition tasks. A deep learning model relies on its multiple hidden layers to represent complex high-dimensional functions, such as highly variable functions. The convolutional neural network has powerful feature extraction and expression capabilities: it can mine deep features in the original signal and filter out unwanted noise. Moreover, through local weight sharing it greatly reduces the number of network parameters and can, to a certain extent, keep the network from over-fitting when the number of samples is insufficient. Zhao and others from Xi'an Jiaotong University compared the effects of convolutional neural networks with different structures in bearing fault diagnosis [6]. Chen proposed a gearbox fault diagnosis algorithm based on CNN, which uses time-frequency domain features as the input of the CNN to realize fault identification [7]. Li and others from Shanghai University proposed a fault diagnosis algorithm based on structural optimization of the convolutional neural network, which uses a wide convolution kernel in the first layer and adds Batch Normalization layers [8].
Although the fault diagnosis algorithms based on convolutional neural networks mentioned above achieve high accuracy, there is still room for improvement in fault recognition rate and convergence speed: the convolution kernels in each of their convolutional layers are too uniform, which limits feature extraction ability. Therefore, this paper proposes a fault diagnosis algorithm based on a multiscale convolutional neural network. Features are extracted from the original vibration signal with convolution kernels of multiple sizes to increase the diversity of the feature data. Finally, the multiscale, multi-channel high-dimensional features are fused to construct an end-to-end fault diagnosis model that takes the original vibration signal as input and the fault type as output.

2 Multiscale Convolutional Neural Network Model

2.1 Multiscale Convolutional Neural Network Model Structure


A convolutional neural network is a multi-stage neural network consisting of a filtering stage and a classification stage. The filtering stage extracts the characteristics of the input signal, and the classification stage classifies the learned features. The filtering stage consists of two basic units, the convolutional layer and the pooling layer; the classification stage is generally composed of fully connected layers. In this paper, the traditional convolutional neural network structure is improved: in the first convolutional layer, multiscale convolution kernels are used to convolve the original data, and feature fusion is performed along the output-channel dimension to mine the rich and varied features of the original signal. Three kernel scales are used. The main function of the large convolution kernel is to enlarge the receptive field and extract global features, while the small kernels mainly extract local features. The network structure is shown in Fig. 1 below.

Fig. 1. Multi-scale convolutional neural network structure
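As an illustration of this first layer, a minimal sketch of multi-scale convolution with channel-wise feature fusion; the random, untrained kernels and NumPy's 'same'-mode convolution stand in for the trained network and are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(1024)          # one raw vibration sample, length 1024

def conv1d_same(x, kernel):
    # 'same' convolution keeps the output length equal to the input length
    return np.convolve(x, kernel, mode='same')

# three kernel scales (widths 64, 32, 16), 32 kernels per scale, as in Table 1
scales = [64, 32, 16]
feature_maps = []
for width in scales:
    kernels = rng.standard_normal((32, width)) * 0.01
    maps = np.stack([conv1d_same(signal, k) for k in kernels])   # (32, 1024)
    feature_maps.append(maps)

# feature fusion: concatenate along the channel dimension -> (96, 1024)
fused = np.concatenate(feature_maps, axis=0)
print(fused.shape)   # (96, 1024)
```

The wide kernel sees a long stretch of the signal at once (global features), while the narrow kernels respond to short local patterns; stacking all 96 channels gives the next layer both views.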

In the convolutional layer, the convolution kernel convolves the input signal or the feature vector output by the previous layer to generate a corresponding feature map. The most important property of the convolutional layer is weight sharing, which means that the same convolution kernel traverses the input with a fixed stride. Weight sharing effectively reduces the network parameters of the convolutional layer and avoids over-fitting; a nonlinear transformation is then applied through the activation function. The mathematical model is expressed as:

x_j^l = \sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l        (1)

where x_j^l is the j-th output of the l-th layer, x_i^{l-1} is the i-th output of the (l-1)-th layer, k_{ij}^l is the convolution kernel, M_j is the set of input feature maps, and b_j^l is the network bias. When the neural network performs the first-layer convolution, the outputs of the three convolution kernels are feature-fused along the output-channel dimension. This operation increases the thickness of the feature map and mines deeper features in the original data.
In this paper, a batch normalization (BN) layer is added after each convolutional layer and fully connected layer, i.e., between the convolutional layer and the activation layer and between the fully connected layer and the activation layer. Its role is to reduce internal covariate shift, improve the training efficiency of the network, and enhance its generalization ability. The main operation is to subtract the mini-batch mean from the input of the convolutional or fully connected layer and then divide by the standard deviation, similar to a standardization operation, so training can be accelerated.
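A minimal sketch of this normalization step (ignoring the moving averages and the training of the scale/shift parameters that a real BN layer maintains):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # subtract the mini-batch mean, divide by the standard deviation,
    # then apply the learnable scale (gamma) and shift (beta)
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    return gamma * x_hat + beta

# a toy mini-batch of 128 samples with 500 features, offset and scaled on purpose
batch = np.random.default_rng(1).standard_normal((128, 500)) * 3.0 + 2.0
out = batch_norm(batch)
# after normalization each feature has approximately zero mean and unit variance
```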
The pooling layer performs a downsampling operation whose main purpose is to reduce the number of network parameters. There are two pooling methods, maximum pooling and average pooling; maximum pooling is used in this paper. The mathematical model of maximum pooling is:

P_i^{l+1}(j) = \max_{(j-1)W + 1 \le t \le jW} \{ q_i^l(t) \}        (2)

where q_i^l(t) denotes the value of the t-th neuron in the i-th feature vector of the l-th layer, t \in [(j-1)W + 1, jW], W is the width of the pooling region, and P_i^{l+1}(j) denotes the value of the corresponding neuron in the (l+1)-th layer.
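The non-overlapping maximum pooling of Eq. (2) can be sketched as:

```python
import numpy as np

def max_pool1d(x, width):
    # the j-th output is the maximum of x over the window [(j-1)*W, j*W),
    # matching Eq. (2); any trailing remainder shorter than W is dropped
    n = len(x) // width
    return x[: n * width].reshape(n, width).max(axis=1)

x = np.array([1.0, 5.0, 2.0, 4.0, 9.0, 3.0, 0.0, 7.0])
pooled = max_pool1d(x, 4)
print(pooled)   # [5. 9.]
```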
The fully connected layer classifies the extracted features. The mathematical expression is:

z^{l+1}(j) = \sum_{i=1}^{n} W_{ij}^l a^l(i) + b_j^l        (3)

where W_{ij}^l is the weight between the i-th neuron in the l-th layer and the j-th neuron in the (l+1)-th layer, z^{l+1}(j) is the logit of the j-th output neuron of the (l+1)-th layer, and b_j^l is the bias from the neurons in the l-th layer to the j-th neuron in the (l+1)-th layer.
Since the output of the neural network model in this paper has multiple fault categories, softmax is selected as the final classifier.
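Softmax maps the logits of Eq. (3) to class probabilities; a minimal sketch:

```python
import numpy as np

def softmax(z):
    # shift by the maximum logit for numerical stability, then normalize
    e = np.exp(z - z.max())
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
# probs is a valid probability distribution over the fault classes;
# the largest logit yields the largest probability
print(probs)
```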

3 Rolling Bearing Fault Diagnosis Algorithm Based on Multiscale Convolutional Neural Network

3.1 Selection of Data Sources and Network Parameters


The experimental data in this paper come from the Case Western Reserve University (CWRU) Rolling Bearing Data Center. The fault vibration signals of the drive-end bearing are selected; the bearing to be diagnosed is the deep-groove ball bearing SKF6205.
Nine kinds of fault vibration signals and the normal vibration signal of the drive-end bearing under a 1 hp load are selected, with a 50% overlap ratio between samples. There are 235 samples in each category, split into 75% for the training set and 25% for the test set. The sample length is 1024, and the labels are one-hot encoded.
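The sample preparation just described (fixed-length segments with 50% overlap, one-hot labels) might be sketched as follows; the synthetic record is only a stand-in for a real CWRU signal:

```python
import numpy as np

def segment(signal, length=1024, overlap=0.5):
    # slice a long vibration record into fixed-length samples; with 50% overlap
    # the window advances by half its length each time
    step = int(length * (1 - overlap))
    starts = range(0, len(signal) - length + 1, step)
    return np.stack([signal[s:s + length] for s in starts])

def one_hot(labels, num_classes=10):
    # encode integer class labels as one-hot rows
    return np.eye(num_classes)[labels]

record = np.arange(4096, dtype=float)       # stand-in for a raw vibration record
samples = segment(record)
print(samples.shape)                        # (7, 1024): windows start at 0, 512, ..., 3072
print(one_hot([0, 3], 10).shape)            # (2, 10)
```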
The Adam optimization algorithm is chosen as the optimizer of the neural network, with the learning rate set to 0.001. In addition, dropout and L2 regularization of the weights are used to prevent over-fitting. The batch size is set to 128, and the loss function is cross entropy. The main parameters of the network structure are shown in Table 1.
Table 1. Network structure parameters

Network layer          Kernel size or neurons / stride   Number of kernels   Output size
Convolution layer 1    64 × 1 / 1 × 1                    32                  1024 × (32 × 3)
                       32 × 1 / 1 × 1                    32
                       16 × 1 / 1 × 1                    32
Pooling layer 1        4 × 1 / 8 × 1                     –                   128 × 96
Convolution layer 2    5 × 1 / 1 × 1                     128                 128 × 128
Pooling layer 2        2 × 1 / 8 × 1                     –                   16 × 128
Fully connected layer  500 neurons                       –                   500 × 1
Softmax                10 neurons                        –                   10
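The loss described above, cross entropy with an L2 weight penalty, can be sketched numerically; the toy probabilities, labels and penalty coefficient below are illustrative only:

```python
import numpy as np

def cross_entropy(probs, one_hot_labels, weights=None, l2=0.0):
    # mean cross-entropy over the batch, plus an optional L2 weight penalty
    ce = -np.mean(np.sum(one_hot_labels * np.log(probs + 1e-12), axis=1))
    if weights is not None and l2 > 0:
        ce += l2 * np.sum(weights ** 2)
    return ce

probs = np.array([[0.9, 0.05, 0.05],    # predicted class distributions
                  [0.2, 0.7, 0.1]])
labels = np.array([[1, 0, 0],           # true classes, one-hot
                   [0, 1, 0]])
print(round(cross_entropy(probs, labels), 4))   # ≈ 0.231
```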

3.2 Visualization of the Network Structure

The model structure and training curves are shown in Figs. 2 and 3. In Fig. 3, the red curve represents the training set and the blue curve the test set; both reach an accuracy of 100%.

Fig. 2. Multiscale convolutional neural network model structure



Fig. 3. Loss function and accuracy of training sets and test sets

3.3 Model Comparison


The convergence speed of the proposed model is compared with CNN (without BN layer), WCNN (with BN layer) and WCNN (without BN layer) (Fig. 4).

Fig. 4. Convergence curve comparison

The number of iterations in the figure is the number of mini-batches processed during network training. It can be seen from the figure that multiscale convolution and the BN layer effectively accelerate the convergence of the neural network.

Besides fast convergence, the neural network must also ensure the accuracy of the model. The proposed algorithm is compared with several current fault diagnosis algorithms, such as the BP neural network (BP-NN), support vector machine (SVM) and stacked denoising autoencoder (SDA) (Table 2).
Table 2. Comparison of algorithm accuracy


Algorithm Accuracy
BP-NN 0.83
SVM 0.90
SDA 0.95
CNN 0.98
WCNN 0.998
Algorithm of this paper 1.0

It can be seen from the above table that the algorithm proposed in this paper has the
highest fault recognition accuracy while ensuring convergence speed.

4 Conclusions

The fault diagnosis algorithm based on a multiscale convolutional neural network proposed in this paper requires neither manually extracted features nor expert knowledge, achieving an "end-to-end" fault diagnosis mode. Moreover, building on the classical convolutional neural network model, first-layer multiscale convolution and feature fusion are used to mine deeper features of the original signal, and a BN layer is added to further improve the convergence speed of the network. The experiments show that the proposed algorithm has higher accuracy and faster convergence than the other network models.

Acknowledgements. The authors would like to express their appreciation to mentors at Shanghai University and Shanghai Baosight Software Corporation for their valuable comments and other help. Thanks for the support from the Ministry of Industry and Information Technology for the key project "The construction of professional CPS test and verification bed for the application of steel rolling process" (No. TC17085JH), and for the pillar program supported by the Shanghai Economic and Information Committee of China (No. 2018-GYHLW-02020).

References
1. Miao, Z.H., Zhou, G.X., Liu, H.N., et al.: Tests and feature extraction algorithm of vibration
signals based on sparse coding. J. Vibr. Shock 33(15), 76–81 and 118 (2014)
2. Hu, B., Li, B.: A new multiscale noise tuning stochastic resonance for enhanced fault
diagnosis in wind turbine drivetrains. Measur. Technol. 27(2), 025017 (2016)
3. Su, Z.Q., Tang, B.P., Yao, J.B., et al.: Fault diagnosis method based on sensitive feature
selection and manifold learning dimension reduction. J. Vibr. Shock 33(3), 70–75 (2014)
4. Wang, H., Chen, P.: Intelligent diagnosis method for rolling element bearing faults using
possibility theory and neural network. Comput. Ind. Eng. 60(4), 511–518 (2011)
5. Jiao, W.D., Lin, S.H.S.: Overall-improved fault diagnosis approach based on support vector
machine. Chin. J. Sci. Instrum. 36(8), 1861–1870 (2015)
6. Jing, L., Zhao, M., Li, P., et al.: A convolutional neural network based feature learning and
fault diagnosis method for the condition monitoring of gearbox. Measur.: J. Int. Measur.
Confederation 111, 1–10 (2017)
7. Gan, M., Wang, C., Zhu, C.: Construction of hierarchical diagnosis network based on deep
learning and its application in the fault pattern recognition of rolling element bearings. Mech.
Syst. Sign. Process. 72–73(2), 92–104 (2016)
8. Li, X.L.: Research on fault diagnosis algorithm based on structure optimization for convolutional neural network. In: Information Technology & Artificial Intelligence Conference. IEEE (2019)
Proactive Learning for Intelligent Maintenance
in Industry 4.0

Rami Noureddine, Wei Deng Solvang, Espen Johannessen, and Hao Yu

Department of Industrial Engineering, Faculty of Engineering Science and Technology, UiT The Arctic University of Norway, Narvik, Norway
{rno034,wei.d.solvang,espen.johannessen,hao.yu}@uit.no

Abstract. Manufacturing companies require efficient maintenance practices in order to improve business performance, ensure equipment availability and reduce process downtime. With the advent of new technology, manufacturing processes are evolving from traditional ways into digitalized manufacturing. This transformation enables systems and machines to be connected in complex networks as a collaborative community through the industrial internet of things (IIoT) and cyber-physical systems (CPS). Hence, advanced maintenance strategies should be developed to ensure the successful implementation of Industry 4.0, which aims to transform traditional product-oriented systems into product-service systems (PSS). Today, machines and systems are expected to gain self-awareness and self-predictiveness in order to give management more insight into the status of the factory. In this regard, real-time monitoring, together with advanced machine learning algorithms applied to historical data, will enable systems to understand current operating conditions, predict the remaining useful life and detect anomalies in the process. This paper discusses the necessity of predictive maintenance for achieving a sustainable and service-oriented manufacturing system and provides a methodology for implementing proactive maintenance in the context of Industry 4.0.

Keywords: Anomaly detection · Predictive maintenance · Industry 4.0 · Adaptive learning · Data analytics

1 Introduction

Maintenance plans and policies are among the most important decisions for all production and manufacturing processes. Companies have been implementing different maintenance activities and strategies in order to improve their overall performance in terms of production costs, waste, flexibility, time and reliability.
The traditional ways of maintenance have evolved over time with the introduction of new technologies. The earliest maintenance activities are known as reactive maintenance, where management or workers deal with problems only when they occur. As they develop to a higher maturity level, companies switch to

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 250–257, 2020.
https://doi.org/10.1007/978-981-15-2341-0_31
preventive maintenance, where frequent checks by team members are scheduled and routine inspections of the system are essential to prevent equipment failure.
With the introduction of electronics and the widespread use of sensors and processors, many companies are able to adopt a higher maturity level in maintenance by using rule-based predictive maintenance strategies. In rule-based maintenance, sensors are installed in certain areas to measure specific parameters. A condition or rule is coded for each sensor so that the system can alert management of the present status if the monitored parameter reaches a predefined point.
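Such a rule amounts to a simple threshold check; the parameter values and limit below are hypothetical:

```python
def check_rule(reading, limit=80.0):
    # alert when the monitored parameter reaches the predefined point
    return "ALERT" if reading >= limit else "OK"

# e.g. two temperature readings against a hypothetical 80-degree limit
print(check_rule(75.2), check_rule(83.1))   # OK ALERT
```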
However, with today's new technologies and the concept of the smart factory in Industry 4.0, maintenance is expected to reach a much higher level of both maturity and efficiency. The main concept behind maintenance in Industry 4.0 is the ability to make use of historical and live data and to predict future states of processes. In addition, data visualization, digital twins and augmented reality are new technologies and concepts that provide companies with highly advanced and efficient systems for maintenance and other activities in a production process.
The rest of the paper is organized as follows. Section 2 defines the concept and enabling technologies of Industry 4.0. Section 3 discusses the benefits of predictive maintenance and the structure needed to support it. Section 4 presents an approach for adaptive learning and predictive maintenance. Section 5 concludes the paper.

2 Industry 4.0: Enabling Technologies

Industry 4.0 refers to the fourth industrial revolution, introduced with big data, the internet of things (IoT) and cyber-physical systems (CPS). Today, many companies are competing to apply Industry 4.0 methodologies in order to enhance their business
performance. Companies and manufacturers are investing significantly in building up
global networks to connect their machinery, factories and facilities in order to enable
efficient communication and application of CPS.
The industrial internet of things (IIoT) in Industry 4.0 has become attractive to many businesses due to the reduction in the costs of computation, storage and network
systems by the use of the cloud-computing model. The recent developments of an IoT
framework and the advancements in the sensor technologies have established an
integrated network that tightly connects systems and humans together [1]. IIoT systems
can be effectively used to create operational smart factories in which a higher level of
efficiency can be reached. IIoT objects, i.e., sensors, can be widely used to gather data
in real-time from the field for improving the productivity through advanced automatic
processes [2], improving the safety through a deeper knowledge of workers’ position
[3] and reducing the equipment failure through fast event detection capabilities [4].
Civerchia et al. [5] describe an architecture for an IIoT system, as shown in Fig. 1. Sensors measure the desired data from the environment. The gateway is a node in the network that is able to interact with the monitoring sensors and the mind of the system. Information is then passed wirelessly to the RCSR in the main control room, which stores the data for analysis with advanced
252 R. Noureddine et al.

algorithms, pattern recognition, and data visualization. Finally, the OPC server stores
all the data for authorized client access.

Fig. 1. IIoT NGS-PlantOne architecture [5].

In IIoT systems, big data can be analyzed online via the cloud with advanced analytics at very high speed, and process engineers can then use the results to obtain valuable information. As a result, future industry will be able to reach a high intelligence level by sharing this useful information across different nodes in the network and reacting to different conditions and events in an optimal way through CPS.
The concept behind CPS in Industry 4.0 is that they are intelligent systems containing embedded circuits that are connected to their environment. They do not only respond to predefined stimuli, but are also able to communicate and interact with the surrounding environment. CPSs are networked and are thus able to send and receive data from different locations, which allows the construction of applications that autonomously interact with the environment and execute actions accordingly. Figure 2 shows the CPS framework for self-maintenance machines.

Fig. 2. CPS framework for self-aware and self-maintenance machines [1].

Finally, it is important to note that the cloud in Industry 4.0 provides everything as a service. The three main categories are as follows.
• Infrastructure as a service (IaaS): the hardware and server rooms are provided as a service rather than purchased.
• Platform as a service (PaaS): gives access to development languages, libraries, APIs, etc.
• Software as a service (SaaS): provides users with a web-based shared application instead of a copy of the application hosted on a local private server.

3 Predictive Maintenance Supportive Structure

Predictive maintenance in Industry 4.0 is a method that enables just-in-time (JIT) maintenance. It can prevent failures in processes or machines by analyzing operational data and identifying patterns in order to predict issues before they occur. With appropriate sensor installations, various signals, e.g., vibration, pressure, etc., can be extracted; in addition, historical data can be harvested for further data mining [1]. The value fetched from efficient data mining can then be used to drastically reduce the cost of maintenance and to improve the product quality, productivity and profitability of production plants [6].
Information gathered in real time can not only be used to detect anomalies and predict future behavior by discovering trends and patterns, but also to improve maintenance through the enhancement of design, installation, scheduling and work procedures [7].
Some benefits obtained with the efficient use of predictive maintenance include:
(1) Reduced maintenance time: maintenance is only performed when necessary.
(2) Increased efficiency: unnecessary maintenance is reduced and root cause analysis
becomes easier and automatic.
(3) Improved customer satisfaction: alerts are sent to customers to inform them of the product status and to give suggestions regarding product health.
(4) Competitiveness: companies will gain a competitive advantage in the market by
differentiating the products and brand.
In order to transform a traditional production plant into a smart factory, companies have to prepare an appropriate structure that can achieve and sustain this goal. As a result, several basic components and tools should be invested in to make the manufacturing systems more intelligent.
These components and tools include:
• Sensors: should be installed in the system to monitor its behavior and record its performance, efficiency and status.
• Data-analysis tools: are needed to enable root cause analysis.
• Analytic algorithms: should be used to enable predictive maintenance and smart diagnostics.
• A communication system: is needed for safe data storage and data transfer among different machines and teams.
• A central place for data storage: is needed, which can be either on-premises or cloud-based.
The structure allows data to flow from the production process to the central data storage area, where data from different systems and devices are gathered. Afterwards, the data are fed into machine learning algorithms that extract knowledge, features, patterns, classes and relations. The results are then sent to dashboards for visualizing the system status and predicting future behavior. In addition, messages or alarms are sent to the respective people at the right time to notify them of an event that has happened or is about to happen in the production process. Data also flow in the reverse direction, where the output of the machine learning algorithms can be used as input for autonomous decision-making. Figure 3 shows the predictive maintenance structure in Industry 4.0. It is an updated version of existing models, which consider only a one-way forward flow of information between different levels within a company.

Fig. 3. Industry 4.0 predictive analytics structure.

Machine learning algorithms used for predictive maintenance in Industry 4.0 vary in their characteristics and functions. There is no single algorithm type that is applicable to all conditions; each situation requires a specific algorithm depending on its characteristics, e.g., a known or unknown functional form of the system, labeled or unlabeled data, deterministic or stochastic data, etc. For instance, a good algorithm for detecting sequential or time-related behavior may use recurrent neural networks, or even a combination of several algorithms, as done by Yuan et al. [8] to predict anomalies in the rotating-bearing behavior of a hydropower plant.

4 Adaptive Learning Approach and Applications

The ability of continuous learning in real-time monitoring allows for reliable appli-
cation of intelligent maintenance systems in manufacturing companies. This requires
CPS to use adaptive learning techniques to continuously update the knowledge base for
data mining.
When considering predictive maintenance and health assessment of machines or systems in real-life applications, the systems need to be able to adapt to newly emerging conditions, rather than relying on the traditional way of feeding information to the algorithms prior to operation. An approach enabling such adaptive learning, discussed by Lee et al. [1], is shown in Fig. 4.
The proposed approach uses unsupervised machine learning algorithms, e.g., SOM or GMM, to read the input data in a certain state of the system and measure its difference from the existing states in the knowledge base. If the state is close to an existing state or cluster, the knowledge base is updated with the new data; if no similar cluster is found, a new cluster is added to the knowledge base to account for the new behavior.
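The update logic can be sketched with distances to cluster centres; the distance threshold and the running-update factor below are illustrative assumptions, not the SOM/GMM models named in [1]:

```python
import numpy as np

def update_knowledge_base(clusters, state, threshold=2.0):
    # compare a new state vector with the existing cluster centres; if one is
    # close enough, refresh that centre, otherwise register a new cluster
    if clusters:
        dists = [np.linalg.norm(state - c) for c in clusters]
        i = int(np.argmin(dists))
        if dists[i] < threshold:
            clusters[i] = 0.9 * clusters[i] + 0.1 * state   # running update
            return clusters, "updated"
    clusters.append(state.copy())
    return clusters, "new cluster"

kb = [np.array([0.0, 0.0])]                      # one known operating state
kb, status = update_knowledge_base(kb, np.array([0.5, 0.5]))
print(status, len(kb))                           # updated 1
kb, status = update_knowledge_base(kb, np.array([10.0, 10.0]))
print(status, len(kb))                           # new cluster 2
```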

Fig. 4. Adaptive learning methodology [1].

With the increase in the complexity of machines and systems, root cause analysis for undesirable events or failures becomes very difficult. Several factors, e.g., personnel-related tasks and the operating conditions of equipment, may all influence certain outcomes. In this regard, one highlighted benefit of applying predictive maintenance in Industry 4.0 is the ability to discover the underlying patterns in the gathered data and to model the systems in a data-driven way, providing solutions for effective root cause analysis and automated response.
Our solution suggests that the manufacturing process can be continuously improved by using an adaptive-learning predictive maintenance approach, with constant updates of the system's knowledge base based on detected undesirable or abnormal states, maintenance personnel information, and troubleshooting data.
The first step is to gather the relevant data that are expected to have an effect on the system; expert opinions are needed to decide which data should be included and how they are measured. The second step is data cleansing and outlier removal for consistent analysis. In the third step, missing data are estimated by different techniques, e.g., a moving average. In the fourth step, detected faults are gathered and added to the historical data, based on which machine learning algorithms, e.g., the naïve Bayes filter, neural networks, etc., can be used to detect how the outcome is affected by changes in the data. These algorithms can provide helpful solutions due to their ability to model complex nonlinear processes with a high level of confidence. Finally, the algorithms' performance should be validated before they are applied in real operations.
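The missing-data step could, for example, use a trailing moving average; this sketch assumes the series starts with an observed value and uses `None` to mark gaps:

```python
def moving_average_impute(series, window=3):
    # replace each missing entry (None) with the mean of up to `window`
    # preceding values, which by then are all filled in
    filled = []
    for v in series:
        if v is None:
            recent = filled[-window:]
            v = sum(recent) / len(recent)
        filled.append(v)
    return filled

print(moving_average_impute([1.0, 2.0, 3.0, None, 5.0]))   # [1.0, 2.0, 3.0, 2.0, 5.0]
```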
Since the reactions of different personnel to different problems are also gathered and stored for analysis, the CPS is able to assign each problem to the right person by mobile or email notification based on smart decisions. These decisions may consider the expertise, speed and distance of each individual, as well as other factors.

Fig. 5. Updates of the control limits based on the real-time data (left) and the data measured by
an ultrasonic sensor using a new approach (right).

The ability to include adaptive learning in manufacturing processes using Industry 4.0 technologies can also lead to better design and operational procedures, and hence to continuous improvement of the process. Figure 5 shows how control limits are updated in real time and how the process variation has been reduced by different practices. The system can then send alerts to the relevant person or take autonomous decisions to ensure the process operates in an optimal way.
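A sketch of how such control limits might be recomputed from a window of real-time readings; the Shewhart-style mean ± 3σ limits are an assumption, since the paper does not specify the chart type:

```python
import numpy as np

def control_limits(window):
    # recompute lower/upper control limits (mean ± 3 sigma) over a sliding
    # window of recent readings, so the limits adapt as the process improves
    mu, sigma = np.mean(window), np.std(window)
    return mu - 3 * sigma, mu + 3 * sigma

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]   # hypothetical sensor values
lcl, ucl = control_limits(readings)
print(lcl < 10.0 < ucl)   # True: the process is in control
```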

5 Conclusion

New technologies in Industry 4.0 are paving the way for new practices and techniques
to be used in the maintenance operations. The proposed concept of Industry 4.0
maintenance is characterized by the ability to derive knowledge from historical data
and by the ability to learn from new live data in real-time.
To successfully realize the goals of Industry 4.0, equipment in a smart factory
should possess self-awareness for current-condition evaluation and future-condition
prediction. Thus, the idea of proactive and predictive maintenance becomes extremely
important for reducing maintenance time and unnecessary disruptive checkups,
improving efficiency by increasing asset life, enhancing the working environment, and
creating new revenue streams by supporting product-service systems [9].
One successful example of product-based servitization is the “Total Care” scheme
established by Rolls-Royce [10], where a proactive maintenance strategy has enabled
both parties to lower costs while increasing effectiveness. Technicians are able to
monitor the key performance indicators of aircraft engines and use proactive
maintenance techniques to support a continuous and sustainable value creation process.
This research provides a framework for the implementation of predictive mainte-
nance in Industry 4.0 and describes the structure needed to support it. Furthermore, the
methodology of adaptive learning is introduced in order to enable the identification of
root causes and the optimization of alerting notifications. Last but not least, the paper
also shows how proactive learning can help manufacturers optimize their performance
by monitoring process variations and updating control limits in order to improve
standardization in the system.

Acknowledgement. This research is supported by the OptiLog 4.0 project financed by the
Research Council of Norway (Grant no. 283084).

References
1. Lee, J., Kao, H.A., Yang, S.: Service innovation and smart analytics for industry 4.0 and big
data environment. Procedia CIRP 16, 3–8 (2014)
2. Breivold, H.P., Sandström, K.: Internet of things for industrial automation–challenges and
technical solutions. In: 2015 IEEE International Conference on Data Science and Data
Intensive Systems, 11 December 2015, pp. 532–539. IEEE (2015)
3. Petracca, M., Bocchino, S., Azzarà, A., Pelliccia, R., Ghibaudi, M., Pagano, P.: WSN and
RFID integration in the IoT scenario: an advanced safety system for industrial plants (2013)
4. Wang, J., Zhang, L., Duan, L., Gao, R.X.: A new paradigm of cloud-based predictive
maintenance for intelligent manufacturing. J. Intell. Manuf. 28(5), 1125–1137 (2017)
5. Civerchia, F., Bocchino, S., Salvadori, C., Rossi, E., Maggiani, L., Petracca, M.: Industrial
Internet of Things monitoring solution for advanced predictive maintenance applications.
J. Ind. Inf. Integr. 7, 4–12 (2017)
6. Mobley, R.K.: An Introduction to Predictive Maintenance. Elsevier, Amsterdam (2002)
7. Dhillon, B.S.: Engineering Maintenance: A Modern Approach. CRC Press, Boca Raton
(2002)
8. Yuan, J., Wang, Y., Wang, K.: LSTM based prediction and time-temperature varying rate
fusion for hydropower plant anomaly detection: a case study. In: International Workshop of
Advanced Manufacturing and Automation, 20 September 2018, pp. 86–94. Springer,
Singapore (2018)
9. Ferreiro, S., Konde, E., Fernández, S., Prado, A.: Industry 4.0: predictive intelligent
maintenance for production equipment. In: European Conference of the Prognostics and
Health Management Society, no. June 2016, pp. 1–8 (2016)
10. Yu, H., Solvang, W.D.: Enhancing the competitiveness of manufacturers through Small-scale
Intelligent Manufacturing System (SIMS): a supply chain perspective. In: 2017 6th
International Conference on Industrial Technology and Management (ICITM), 7 March
2017, pp. 101–107. IEEE (2017)
An Introduction of the Role of Virtual
Technologies and Digital Twin in Industry 4.0

Mohammad Azarian(&), Hao Yu, Wei Deng Solvang, and Beibei Shu

Department of Industrial Technology, Faculty of Engineering Science
and Technology, UiT The Arctic University of Norway, Narvik, Norway
[email protected],
{hao.yu,wei.d.solvang,beibei.shu}@uit.no

Abstract. It is inevitable that technological improvements have affected every
aspect of human life to a large extent. Automation, artificial intelligence,
robotics, etc., are some of the advances that contributed to the fourth industrial
revolution: Industry 4.0. Although many arguments remain, the Internet of Things
(IoT) and Cyber Physical Systems (CPS) have been widely acknowledged as the
main foundations of Industry 4.0. This paper introduces the concept of CPS by
providing an explicit framework that unifies the existing theories in this regard.
Nine key technologies attributed to Industry 4.0 are investigated, among
which virtual technology (VT) and digital twin (DT) are considered two of
the core criteria and are thus the focus of this paper. However, the noticeable
gap between the virtual world and the real factory remains a significant challenge
for providing an acceptable level of integration and intelligence. Holistic
approaches addressing this issue suggest that VTs and DT have the potential to
form the foundation for further improvements in terms of both interoperability
and consciousness. Furthermore, they may pave the way for achieving the highest
level of CPS and Industry 4.0.

Keywords: Industry 4.0 · Cyber Physical System · Virtual Technology · Digital twin

1 Introduction

The history of the manufacturing industry has shown that the emergence and application of
new technologies is the most important driving force in determining turning points
and structural alterations. The first industrial revolution occurred thanks to the inven-
tion of the steam engine; the use of electricity drove the second industrial revolution by
enabling mass and standardized production. The combination of IT technologies and
electronic devices contributed to the automation of the third industrial revolution [1, 2].
The increasing rate of customization and demand diversity has led factories and
manufacturing companies to become more specialized and smaller in many regions [3].
In order to survive in today’s competitive global market, manufacturers have to focus
on the advances of both technologies and manufacturing theories, which further leads
to the shift to a novel manufacturing paradigm: Industry 4.0.

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 258–266, 2020.
https://doi.org/10.1007/978-981-15-2341-0_32
Since the introduction of Industry 4.0 (I4.0 in this paper) at the Hannover Fair in 2013,
different manufacturing ideas and technologies have been developed, among which
Intelligent Analytics and Cyber Physical Systems (CPS) are unified as the two major
technical drivers of I4.0 [4]. An evaluation of the CPS dimensions indicates that it
internally encompasses most of the existing technologies, which are regarded as the
nine fundamental elements of I4.0 [5]. Furthermore, research works related to those
nine elements suggest that the Internet of Things (IoT) not only provides
widespread connectivity but also forms the basis for boosting the overall intelligence
and integration level [6–8]. As a result, CPS and IoT can be considered the two major
foundations of the technological structure of I4.0.
The elements of the CPS mainly contribute to increasing the interoperability and
consciousness of a manufacturing system. More precisely, they focus on incorporating
all machines and components in a cyber environment and then adding consciousness to the
system in order to increase the intelligence level within the unified factory [9]. In this
regard, this paper investigates the interoperability aspect and evaluates the confronting
challenges. Section 2 presents a structural framework that clearly demonstrates the
steps to be taken in order to achieve I4.0. Through a case study, Sect. 3
discusses the role of virtual technologies and simulation. In order to tackle the most
significant challenge, Sect. 4 puts forward the concept of the digital twin so that a bi-
directional connection between the virtual world and the physical factory can be
established. Section 5 concludes the paper.

2 Cyber Physical System

CPS consists of interconnected physical components and machines. The
associated data is collected in an integrated cyber environment, which brings the
opportunity for remote control and data processing [10, 11]. Many believe that the
implementation of CPS is the key element to achieving I4.0. In a comparison between the
current status of manufacturing and the future production system in I4.0, the component,
machine and production system are the three key factors at the factory level. The
advancement of their attributes and technologies provides opportunities for the real-
ization of I4.0 [4]. Based on this comparison, which focuses mainly on the intelligence
and consciousness aspects, a structural model is given in Fig. 1. It considers five levels
of the implementation of CPS: connection, conversion, cyber, cognition and configu-
ration [12].
• Configuration: supervision of the decisions concluded at the cognition level and making the best decision.
• Cognition: the system integration brings the possibility of analysis and optimization.
• Cyber: connecting equipment in a cyber environment brings the possibility of self-comparison.
• Conversion: receiving data from the physical assets and converting them to the intended type.
• Connection: connecting components and machines.

Fig. 1. 5C architecture of CPS implementation [12].

According to the 5C model, a comparison between I4.0 and current manufacturing
systems, in which reconfigurable manufacturing is regarded as the most advanced method,
reveals that not only does the maturity level of interoperability need to be improved, but
the intelligence aspect is also still a missing element. With the aim of addressing this
shortcoming, a categorical framework in matrix form consisting of nine generic steps is
provided, which considers three levels of intelligence (control, integration and intel-
ligence) as well as three dimensions of automation (machine, process and factory) in
order to achieve I4.0. This idea suggests that, within each dimension of automation, the
improvement of the intelligence level should be done in a systematic way following their
sequence [9].

Fig. 2. A new approach of the categorical framework for CPS.

Intelligence is the common intellectual factor considered by both of the aforementioned
ideas. In this paper, another approach is derived, which categorizes the five
levels of CPS within the three stages of intelligence and elaborates this aspect further.
This idea converts the categorical framework of CPS into a 15-step matrix
with almost the same principles, as shown in Fig. 2. The main purpose of this model is
to provide a more intuitive roadmap of CPS and a convergent framework for
the realization of I4.0.

3 The Role of Virtual Technology

The study of the nine technological components of I4.0 highlights simulation and
Augmented Reality (AR) as two of the most important elements. Simulation provides a
virtual, and mostly 3D, environment that establishes more opportunities for comparing
different setups and optimizing the overall process. The general purpose of AR is to
create an interface between the factory and humans or software [5]. A combination of both
technologies leads to a more generic category, Virtual Technology (VT), which may
potentially contribute to both aspects of CPS. From this perspective, VT provides a
functional production system in a virtual environment where all the machines and
components work together in an integrated and intelligent manner. Moreover, VT
focuses on the interoperability aspect and opens up the possibility to test and imple-
ment new ideas, designs and algorithms.
The general attitude towards VT has been discussed in many research works and has
emerged in a variety of forms and terms within the scope of I4.0. One study regarding
the intelligence aspect in the manufacturing sector, mainly SMEs [8], demonstrates the
significance of artificial intelligence for meeting the main features of an intelligent
manufacturing system, i.e., flexibility and reconfigurability. This idea emphasizes the
role of VTs, e.g., Virtual Reality (VR) and Virtual Manufacturing (VM), as tools to
improve the intelligence level and to achieve I4.0 [8]. Another novel paradigm is the
Virtual Factory (VF), which suggests situating not only a part but the entire
factory in a software environment equipped with a simulation module in order to
incorporate all active elements in a factory. This idea emphasizes features such as agility
and scalability, and considers four main elements for the implementation: a reference model,
a virtual factory manager, functional modules and the integration of knowledge [13]. Thanks
to the possibility of providing a tangible virtual 3D environment, the widespread
domain of VTs, e.g., VR, not only supports the simulation attribute but also uncovers
critical issues of new ideas before the implementation phase. In this regard, one
research work divides the dimensions of VR into three functional categories or phases:
design, operation management and manufacturing processes [14]. This idea is one of
the most important driving forces for manufacturers to give suggestions to software
developers so that the focus of each simulation software package can be specified.
Given the essence of VT in the field of I4.0, a simulation project was carried out in order to
demonstrate a unified factory in a virtual environment using the Visual Components 4.1
simulation software (link: https://youtu.be/11Ax0OZUrEU). According to the afore-
mentioned classification, this example focuses on the manufacturing process, where all
the equipment, including machines, sensors, robots and other technical and generic
facilities are integrated in a 3D graphical environment in order to form a fully automated
factory. The main task of this factory is to produce four cylindrical parts, taking into
account quality control and a reprocessing unit. After the packaging process, the
products are delivered to the customers at the intended batch size. Figure 3 represents
the sequential flowchart of the factory’s logic.

Fig. 3. The logical flowchart of the devised factory.
The very first matter to be considered in a manufacturing process is the plant layout,
which strikingly affects the internal logistics, i.e., the flow of material. In this example, a
U-shaped layout is chosen, as shown in Fig. 4(A). The integration and cooperation
among modules and equipment is accomplished by Python programming, which is used
not only to control and unify facilities but also to provide the foundation for
increasing flexibility and agility. From a logistics point of view, this example provides
some notable approaches. One example is the transportation of reprocessed parts,
which is performed by an AGV equipped with a universal robot, as shown in
Fig. 4(B). This approach assists a company in decreasing some of the cost drivers, e.g.,
the purchasing cost of conveyors, maintenance expenses, etc. Moreover, it
provides companies with opportunities to test different layout configurations.

(A) Factory layout and flow of material (B) AGV with a universal robot

Fig. 4. Simulation environment.


Among the many advantages of using simulation software, this example highlights
the following:
• Modifying the plant layout and testing different designs in a quick and flexible
manner.
• Altering the batch size and benchmarking different manufacturing strategies.
• Making decisions on whether to produce a specific product and evaluating the
corresponding consequences.

4 The Role of Digital Twin

An ideal CPS connects all components and machines in order to provide a unified
system in a cyber environment. This condition integrates the real factory with its virtual
model in a simulation environment and provides a bi-directional connection and data
flow between them in order to control the physical system while, at the same time,
reflecting real changes in the virtual environment [15]. The improvements in VTs are
capable of providing such a unified system and enable engineers to test and imple-
ment new ideas and intelligent algorithms. However, these advances are provided in a
virtual environment, and the connection between the real world and the virtual envi-
ronment remains the most significant challenge. A new concept, the Digital Twin (DT),
aims to address this issue. It provides a digital representation of the physical
components/machines of a real factory in a virtual environment, where a real-time and
instant synchronization is established between them [16].
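The bi-directional data flow described above can be sketched in a few lines; the asset, sensor and actuator names here are purely illustrative stand-ins, not part of any cited system:

```python
class PhysicalAsset:
    """Stand-in for a real machine exposing a sensor value and an actuator."""
    def __init__(self):
        self.temperature = 20.0
        self.fan_on = False

class DigitalTwin:
    """Mirrors the asset state and can push commands back (bi-directional link)."""
    def __init__(self, asset):
        self.asset = asset
        self.temperature = None

    def sync_from_physical(self):
        # physical -> virtual: reflect the real change in the virtual model
        self.temperature = self.asset.temperature

    def control(self, max_temp=25.0):
        # virtual -> physical: decide in the twin, act on the asset
        self.asset.fan_on = self.temperature is not None and self.temperature > max_temp
```

In a real deployment the two directions would run over a network protocol and be synchronized continuously rather than on explicit method calls.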
Inspired by the concept of DT, one research work suggests that improving simulation
software is an essential step in this regard [17]. It proposes a five-tier logic to establish
a multi-layer simulation software based on the Model-View-ViewModel (MVVM)
paradigm and follows a sequence to simulate the DT of a simple process. Although it is
only adaptable to a limited number of processes, the optimization feature can be
considered a noticeable achievement of this research [17]. Another approach proposes
the concept of a Versatile Simulation Database (VSD), which incorporates DT and the
Virtual Testbed in order to achieve improved flexibility and functionality through
integrating different types of simulation. Driver assistance systems, e.g., automatic
parking systems, can be referred to as a close instance in this regard [18]. Another
industrial research work presents a DT of a robotized press-brake processing line of a
factory in South Korea using the ROS Gazebo simulation, as shown in Fig. 5. In this
process, the information related to the conveyors, parts, machines and robots is received
by the sensors and robot controller, and is then reflected in the simulation environment.
In order to establish the connection to control the physical elements, FlexGui 4.0 offers
an online page to control the robots, machines and conveyors [19].
(A) Real Factory (B) Virtual environment

Fig. 5. DT of a robotized press-brake processing line.

Based on the discussions above, DT is a promising technology for the realization of
the concept of I4.0. Some advantages of DT are given as follows:
• Realizing a real-time synchronization between the physical elements and the cor-
responding virtual model.
• Bringing the possibility of controlling the physical factory through the simulation
environment and monitoring the alterations of the physical system in the virtual
model.
• Tracing the modifications of the product throughout the process by establishing the
DT of the product as well as of all equipment.
Despite all the achievements of the concept of VTs, I4.0 is still in its infancy, and much
more development must be accomplished in order to realize this concept. In this
regard, DT is essential to pave the way for this ambition, as it provides the possibility of
bringing all novel ideas into the real world by satisfying the interoperability phase of
CPS.

5 Conclusion

In this research, the role of VTs and DT in realizing the concept of I4.0 is thoroughly
discussed. First, the concept of I4.0 is studied, with CPS and IoT considered as its
fundamental bases. Inspired by recent developments in the architecture of
CPS, a categorical approach is proposed and elaborated to a higher degree in order to
incorporate existing ideas and provide a unified and convergent framework for the
realization of I4.0.
The study of the fundamental elements of I4.0 has revealed that VTs, e.g., simu-
lation, AR, etc., are among the most crucial technologies for realizing the I4.0 concept,
as they provide an easy, flexible, visualized and cost-effective way to evaluate and
optimize different manufacturing scenarios, decisions, and process integration. How-
ever, establishing a bi-directional connection between the virtual environment and
physical systems is still a significant challenge, which is currently a focus of
both academics and practitioners.
This issue represents a huge gap between the current status of manufacturing
systems and the ideal condition of I4.0. In this regard, the concept of DT is introduced
as the core module to address this challenge. DT provides a real-time and bi-directional
connection, visualizes the product and equipment, and offers an integrated interface to
control the physical system through the virtual environment. Consequently, DT not
only enables the interoperability phase but also provides opportunities for improving
consciousness and intelligence.

Acknowledgment. This research is supported by the TRINITY Project, which is financed under
the EU Horizon 2020 Programme.

References
1. Thangaraj, J., Lakshmi Narayanan, R.: Industry 1.0 to 4.0: The Evolution of Smart Factories
(2018)
2. Schwab, K.: The Fourth Industrial Revolution. Currency (2017)
3. Krämer, W.: Mittelstandsökonomik: Grundzüge einer umfassenden Analyse kleiner und
mittlerer Unternehmen [SME Economics: Principles of a comprehensive analysis of SMEs],
München (2003)
4. Lee, J.: Industry 4.0 in big data environment. Ger. Harting Mag. 1, 8–10 (2013)
5. Rüßmann, M., Lorenz, M., Gerbert, P., Waldner, M., Justus, J., Engel, P., Harnisch, M.:
Industry 4.0: the future of productivity and growth in manufacturing industries, vol. 9,
pp. 54–89. Boston Consulting Group (2015)
6. Erboz, G.: How to define industry 4.0: main pillars of industry 4.0. In: 7th International
Conference on Management (ICoM 2017), At Nitra, Slovakia (2017)
7. Rahman, H., Rahmani, R.: Enabling distributed intelligence assisted Future Internet of
Things Controller (FITC). Appl. Comput. Inform. 14(1), 73–87 (2018)
8. Huang, T., Solvang, W.D., Yu, H.: An introduction of small-scale intelligent manufacturing
system. In: 2016 International Symposium on Small-scale Intelligent Manufacturing
Systems (SIMS), Narvik, Norway (2016)
9. Qin, J., Liu, Y., Grosvenor, R.: A categorical framework of manufacturing for industry 4.0
and beyond. Procedia CIRP 52, 173–178 (2016)
10. Baheti, R., Gill, H.: Cyber-physical systems. Impact Control Technol. 12(1), 161–166
(2011)
11. Molina, E., Jacob, E.: Software-defined networking in cyber-physical systems: a survey.
Comput. Electr. Eng. 66, 407–419 (2018)
12. Lee, J., Bagheri, B., Kao, H.-A.: A cyber-physical systems architecture for industry 4.0-
based manufacturing systems. Manuf. Lett. 3, 18–23 (2015)
13. Sacco, M., Pedrazzoli, P., Terkaj, W.: VFF: virtual factory framework. In: 2010 IEEE
International Technology Management Conference (ICE) (2010)
14. Mujber, T.S., Szecsi, T., Hashmi, M.S.: Virtual reality applications in manufacturing process
simulation. J. Mater. Process. Technol. 155, 1834–1838 (2004)
15. Lee, J.: Smart factory systems. Informatik-Spektrum 38(3), 230–235 (2015)
16. Negri, E., Fumagalli, L., Macchi, M.: A review of the roles of digital twin in CPS-based
production systems. Procedia Manuf. 11, 939–948 (2017)
17. Tavares, P., Silva, J.A., Costa, P., Veiga, G., Moreira, A.P.: Flexible work cell simulator
using digital twin methodology for highly complex systems in industry 4.0. In: Iberian
Robotics Conference (2017)
18. Schluse, M., Rossmann, J.: From simulation to experimentable digital twins: simulation-
based development and operation of complex technical systems. In: 2016 IEEE International
Symposium on Systems Engineering (ISSE) (2016)
19. Shu, B., Sziebig, G., Solvang, B.: Introduction of cyber-physical system in robotized press-
brake line for metal industry. In: International Workshop of Advanced Manufacturing and
Automation (2017)
Model Optimization Method Based on Rhino

Mengyao Dong, Zenggui Gao(&), and Lilan Liu

Shanghai Key Laboratory of Intelligent Manufacturing and Robotics,
Shanghai University, Shanghai, China
[email protected]

Abstract. At present, the digital twin is an important form of modeling and
simulation application in the digital age. Based on the equipment of an
intelligent factory, combined with the key technologies of the digital twin, a
hyper-realistic virtual real-time digital simulation of the workshop production
line is realized, and Unity 3D is used to provide a virtual roaming display of
the intelligent production line in operation. However, the complexity of the
production line makes the virtual model too large, so that scene roaming is
sluggish and the rendering rate is low. This paper studies model optimization
based on the three-dimensional design software Rhino. Through a comparative
analysis of two major methods, the optimal optimization method is obtained
and improved according to actual operation, which makes the optimization
process more automated and efficient.

Keywords: Workshop line · Unity 3D · Rhino · Model optimization

1 Introduction

With the acceleration of the new digital era, characterized by networking, informatization
and intelligence, the development of modeling and simulation technology has entered
the digital age. The digital twin is an important form of modeling and simulation appli-
cation in the digital age. It has been widely used in smart manufacturing, smart fac-
tories, intelligent buildings, smart cities and many other fields, showing strong vitality.
Based on the equipment of a smart factory, combined with the key technologies
of the digital twin, synchronous virtual-real data communication and virtual-real
mapping technology are applied to the workshop production line in order to realize a
virtual digital simulation of the physical entities of the production line.
According to the actual application development, a virtual model that completely
maps the smart factory can be built, and Unity 3D can be used to realize a
virtual roaming display of the intelligent production line in operation.
However, due to the complexity of the workshop production line, the virtual
model can become too large; too many texture materials cause the entire system file to
be too large, so that the computer is overloaded during virtual roaming, loading is too
slow, and the display may even freeze. It is therefore necessary to optimize
the roaming scene and eliminate non-essential models or materials to simplify the
system model.

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 267–274, 2020.
https://doi.org/10.1007/978-981-15-2341-0_33
2 Software Basic Introduction


2.1 3D Modeling Software—Rhino
Unity 3D supports third-party modeling software and can import models from outside.
Commonly supported formats include FBX, DAE, 3DS, DXF, OBJ and other file
formats. A typical virtual roaming display uses 3ds Max as the model building tool,
assigning materials and lights, exporting the 3D model of the completed scene to the
FBX file format, and importing it into the Unity 3D project file. Rhino can support
DWG, DXF, 3DS, LWO, STL, OBJ, AI, RIB and many other formats, and is compatible
with almost all 3D software. Therefore, this study uses Rhino, which is well compatible
with Unity 3D, as the model building tool, in order to study methods of optimizing the
model in Rhino before importing it into Unity 3D.
The main difference between Rhino and 3ds Max is that Rhino uses NURBS
modeling whereas 3ds Max uses polygon modeling. NURBS stands for non-uniform
rational B-spline: each point on the constructed surface model is calculated by
equations, so the constructed surface is absolutely smooth. Polygon modeling, in
contrast, uses a combination of small quadrilateral patches to approximate a real smooth
surface. 3ds Max decomposes geometry topologically, which means that objects are
broken down into points, lines, faces, and bodies, and every part of an object can be
edited. Compared to 3ds Max, Rhino applies a mathematical and rational way of
thinking in modeling: lines are formed from points, faces are generated from lines, and
finally a body is formed from faces. It can be used to create and edit surface entities,
as well as to analyze and convert NURBS curve models, without any limitations in
terms of complexity, angle and size.
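The claim that each NURBS point "is calculated by equations" can be made concrete with the rational Bézier curve, the basic building block of NURBS. This is an illustrative sketch, not Rhino code; with the weights below, a quadratic rational Bézier traces an exact quarter circle, something polygon patches can only approximate:

```python
import math

def rational_bezier(ctrl, weights, t):
    """Evaluate one point of a 2D rational Bezier curve at parameter t in [0, 1]."""
    n = len(ctrl) - 1
    num_x = num_y = den = 0.0
    for i, ((x, y), w) in enumerate(zip(ctrl, weights)):
        b = math.comb(n, i) * t**i * (1 - t)**(n - i)  # Bernstein basis function
        num_x += b * w * x
        num_y += b * w * y
        den += b * w
    return num_x / den, num_y / den

# These control points and weights give an exact quarter of the unit circle.
ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
weights = [1.0, math.sqrt(2) / 2, 1.0]
```

Every evaluated point satisfies x² + y² = 1 (up to floating-point error), which is why NURBS can represent conics exactly while a polygon mesh can only tessellate them.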

2.2 3D Engine—Unity 3D
Unity 3D is a game engine developed by Unity Technologies. It is an easy-to-use,
interactive, graphical and comprehensive game development tool that enables the
creation of 3D computer games, visual roaming of buildings, and real-time interactive 3D
animation. The game engine supports a variety of system platforms; developed works
run not only on PC, Mac and other computer platforms, but also on mobile
devices such as Android and iOS, and can be published directly to web pages.
As a good 3D engine, Unity 3D has the following main features:
(1) Excellent visual effects and rendering power. Unity 3D supports the Windows
DirectX 11 graphics API. At runtime, using Shader Model 5.0, high-precision
model restoration reveals the details of the model without sacrificing per-
formance. Unity 3D uses gamma-corrected illumination and HDR rendering to
create good lighting effects.
(2) Powerful and concise editor. A powerful internal working set speeds develop-
ment and allows developers to work on a variety of different applications with as
little repetitive work as possible.
(3) Visual operation functions. Encoding and testing can be performed at
the same time, and the Game window and the Scene window can be played
synchronously, which is convenient for developers making timely modifications.
(4) Cross-platform features. Unity 3D has powerful cross-platform capabilities: a
single code base can be published to Windows, Mac, iPhone, Android and other
platforms with only a small amount of modification.
Because of these characteristics, Unity 3D has a wide range of applications. It has
considerable advantages in realizing the virtual roaming display of the intelligent
production line in operation.

3 Model Optimization Method Based on Rhino

In the process of creating a virtual roaming scene in Unity 3D, there may be too many or
too large models, or too many texture materials, causing the entire system file to be too
large, the computer to be overloaded, loading to be too slow, the rendering rate to decrease
(reduced frame rate, FPS), and even the display to freeze. The roaming scene therefore
needs to be optimized, with some models and materials eliminated, to simplify the
system model. Since a model affects the scene rendering rate in Unity 3D mainly
through its mesh count (the number of triangle faces), the main goal of the model
optimization studied in this paper is to reduce the number of mesh faces of the
scene model.
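To make the face-reduction goal concrete, here is a crude, self-contained sketch of one classic decimation technique, vertex clustering: snap vertices to a coarse grid, merge those that coincide, and drop the triangles that collapse. It is illustrative only and is not the Rhino-based method studied in this paper:

```python
def decimate_by_clustering(vertices, faces, cell=1.0):
    """Reduce triangle count by snapping vertices to a grid and merging cells.

    vertices: list of (x, y, z); faces: list of (i, j, k) vertex indices.
    """
    remap, new_vertices = {}, []
    cluster_of = []
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in remap:
            remap[key] = len(new_vertices)
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        cluster_of.append(remap[key])
    new_faces = []
    for i, j, k in faces:
        a, b, c = cluster_of[i], cluster_of[j], cluster_of[k]
        if a != b and b != c and a != c:  # drop triangles collapsed by merging
            new_faces.append((a, b, c))
    return new_vertices, new_faces
```

Larger `cell` values merge more vertices and remove more faces, at the cost of geometric fidelity; quality-preserving methods such as quadric error decimation make the same trade-off more carefully.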

3.1 Optimization Method Based on Plug-in


Meshing surfaces is a challenging problem, and many methods have been proposed
over the past three decades to address the increasing quality and scalability requirements
of modern applications. Semi-regular meshes with uniform element shapes, similar to
triangular or quadrilateral meshes, are of particular interest due to their structure,
which is ideal for solving PDEs and for control meshes that define high-order surfaces.
Two existing tools, InstantMeshes and QuadriFlow, are widely recognized in mesh
topology.
InstantMeshes was proposed by Wenzel Jakob et al. It uses a unified local
smoothing operator to remesh the surface into an isotropic triangle or quad-
dominant mesh, optimizing the edge and vertex positions in the output mesh. The
algorithm can generate meshes with high isotropy while naturally aligning edges and
capturing sharp features. The method is easy to implement and parallelize, and can
handle various input surface representations such as point clouds, range scans, and
triangular meshes. The entire process runs in real time (less than one second) on
meshes with hundreds of thousands of faces, enabling a new type of interactive
workflow. Because the algorithm avoids any global optimization and its key steps scale
linearly with the input size, it can handle extremely large meshes and point
clouds with over hundreds of millions of elements.
QuadriFlow is an extensible algorithm that generates a quadrilateral mesh with
fewer singularities based on the InstantMeshes algorithm described by Jakob et al.
270 M. Dong et al.

Because singularities in quadrilateral meshes can cause problems in many applications,


including parameterization and rendering using Catmull-Clark subdivision surfaces.
Singularities are difficult to completely eliminate, but they can be kept small. And
partial optimization algorithms usually produce meshes with many singularities, and
the best algorithms often require non- partial optimization, but this can be slow.
Jingwei Huang et al. proposed an efficient method to minimize singularities by com-
bining InstantMeshes targets with linear and quadratic constraint systems, and imple-
menting these constraints by solving global minimum cost network flow problems and
local Boolean satisfiability problems. Their evaluation shows that the quality of the
quadrilateral generated by this method is as good as other methods, and if not better,
the singularity achieved is four times less than InstantMeshes. In general, other algo-
rithms that produce very few singularities are much slower, and this method takes less
than ten seconds to process each model.
The plug-in CreateQuadMesh integrates InstantMeshes and QuadriFlow tools to
make them callable in Rhino, automatically creating triangle or quadrilateral meshes,
and performing new structural topologies on complex models to simplify the model.
The flow of the plug-in is shown in Fig. 1.

Fig. 1. Flowchart for optimizing the model using plug-ins

The process of optimizing the model using the plug-in CreateQuadMesh is as


follows:
(1) Import the model into Rhino. In the early stage, you need to ensure that the model
file is in the format supported by Rhino, such as: .stp, .igs, .3dm, etc.;
(2) Execute the CreateQuadMesh plug-in command on the imported model;
(3) Select the InstantMeshes tool, set the mesh parameters, such as the number of
mesh faces, select the output scheme, such as the triangle mesh, click Create Mesh
to output the selected type of mesh; or select QuadriFlow, set the mesh param-
eters, click Create Mesh Output a quadrilateral mesh;
(4) The reduced model is judged against the threshold value of the point-line-surface
to determine whether there is an extra point-line-surface. If it exists, it needs to
reconstruct the model, and then convert the reconstructed NURBS surface model
into mesh surface by using the Mesh command. If it does not exist, the model is
optimized;
(5) Finally, export the optimized model and ensure that its file format is supported by
Unity 3D.
Model Optimization Method Based on Rhino 271

3.2 Optimization Method Based on Rhino’s Own Tools


Rhino 6 software is the latest version of Rhino, which adds and optimizes many
features. For mesh tools, the mesh input, output, creation and editing tools are
enhanced at all stages of the design, including: transforming NURBS surfaces into
meshes (Mesh command): Select the NURBS surface model and execute the command
to convert the model to a mesh, where you can roughly adjust the number of mesh faces
of the converted model; reduce the mesh (ReduceMesh command): There are now
more control options to reduce the input mesh; fill the mesh hole (FillMeshHole
command): Fill the mesh hole by selecting the edge of the mesh hole, and so on.
From the above, the Rhino 6’s tools are very powerful and efficient. It can be used
to optimize the model by reconstructing the model and its own reduced mesh tool. The
operation process is shown in Fig. 2.

Fig. 2. Flowchart for optimizing the model using Rhino’s own tools

The process of optimizing the model using Rhino’s own tools is as follows:
(1) Import the model into Rhino. In the early stage, you need to ensure that the model
file is in the format supported by Rhino, such as: .stp, .igs, .3dm, etc.;
(2) Determine whether the imported model is a mesh. If yes, directly use the Redu-
ceMesh command to reduce the mesh. If not, need to execute the Mesh command
first, convert the NURBS surface into a mesh surface, and then reduce the mesh;
(3) The reduced model is judged against the threshold value of the point-line-surface
to determine whether there is an extra point-line-surface. If it exists, it needs to
reconstruct the model, and then convert the reconstructed NURBS surface model
into mesh surface by using the Mesh command. If it does not exist, the model is
optimized;
(4) Finally, export the optimized model and ensure that its file format is supported by
Unity 3D.

3.3 Rhino’s Own Tools Improvements


Through comparative analysis, the same model, with the same degree of reduction,
Rhino’s own tool has less change in appearance and less surface damage than the
CreateQuadMesh plug-in. Therefore, choose Rhino’s own tools for model optimiza-
tion. However, when using Rhino’s own reduced mesh tools to optimize the model, the
model is composed of multiple meshes. As the number of reductions is superimposed,
some of the meshes will reach their maximum reduction. If these meshes is reduced,
they will be broken. However, there are still meshes that can be reduced in the model.
272 M. Dong et al.

At this point, if you continue to optimize the model, you can only select these re-
reduced meshes individually and perform the reduce mesh command. This is a time-
consuming and labor-intensive work to a certain extent.
Rhino is more convenient for designers, including many auxiliary design plug-ins
such as vray TSpline, sketchup, hypershot, Keyshot, Grasshopper and more. Among
them, Grasshopper is a plug-in based on the Rhino platform to generate a model by
building a program algorithm. It can be built into a whole with a logical algorithm by a
simple command combination, and a desired model can be generated by a single
element group. And Grasshopper also opened script editing based on Visual Basic, C#,
Python, and also has a number of auxiliary plug-ins to improve work efficiency.
According to Rhino’s features and tools, using the Rhino to optimize model
requires a metric to measure the extent to which the given reduction will change the
shape and topology of the original mesh. Here are a few possible solutions:
(1) Reduce the number of any mesh, but not exceed the constraints of a certain
measure that changes the number of meshes.
(2) Identify the mesh that is at or near the “maximum reduction” concept, in case any
subsequent reduction may result in mesh breakage.
(3) Some “adaptive reduction” process in which each mesh of the model is reduced as
much as possible, but does not change the shape too much.
According to the above several solutions, using Grasshopper, edit the script as
follows to achieve the purpose of filtering the mesh when selecting.
(1) Start with no choice;
(2) Traverse all visible meshes in the scene;
(3) For each mesh, calculate its surface area and the number of mesh faces it contains;
(4) Set the parameter “density” to the ratio of the number of mesh faces to the surface
area;
(5) If the “density” is higher than a given parameter, it is the mesh to be reduced,
otherwise it is a mesh that does not need to be reduced;
(6) Perform the ReduceMesh command on the meshes that need to be reduced, and
then bake to Rhino’s “reduction surface” layer; the meshes do not need to be
reduced to bake to Rhino’s “no reduction” layer.

Fig. 3. Script diagram of Grasshopper improved model optimization


Model Optimization Method Based on Rhino 273

After editing the script according to Fig. 3, you can achieve the purpose of filtering
the meshes before reducing the meshes. The meshes that need to be reduced are used in
the “reduction surface” layer after the ReduceMesh command, and the meshes without
the reduction surface exists in the “no reduction” layer. Therefore, using Grasshopper
not only retains the original model, but also makes the whole optimization model
process more automated and efficient.

4 Comparative Analysis

A common method of constructing a virtual reality roaming scenario is to combine


Unity 3D with 3Dmax and use 3Dmax to create a 3D model. For a scenario consisting
of multiple single models, it is necessary to make each model lighter and minimize the
amount of resources of each model. In 3Dmax, the countermeasure for reducing the
number of triangular faces of the model is to use the reducer plug-in Polygoncruncher.
The main function of the Polygoncruncher face-down plug-in is to reduce the number
of faces of the model and minimize the number of polygons in the model while
ensuring normal structure. In the case of high optimization ratio, the texture infor-
mation, node color and polygon symmetry of the original model can be preserved
without losing details.
For model files with a large number of mesh faces, the Polygoncruncher plug-in is
difficult to reduce the face, takes a long time, is inefficient, and requires high perfor-
mance on the computer configuration. Therefore, a partial line body model is selected
as an example, which initially has 45 meshes and the number of mesh faces is 438764.
The model is reduced by using the Polygoncruncher plug-in in 3Dmax and the method
described in this paper. Figure 4(a) and (b) show the appearance of the model after the
same degree of reduction, Fig. 4(c) shows the effect of using the Polygoncruncher
plug-in to optimize the model on the premise of ensuring the appearance of the model.
As can be seen from these figures, the Polygoncruncher plugin also does not have
the ability to determine the nature of the mesh and cannot filter the meshes that have
reached the maximum extent, resulting in a continuous face reduction that causes the
model to break. Compared with the commonly used 3Dmax reducer plug-in Poly-
goncruncher, the method described in this paper has a small degree of change on the
appearance of the model, and has the ability to filter the meshes. It can operate on files
with a large number of mesh faces to achieve maximum. The file with a large number

(a) (b) (c)

Fig. 4. Comparative analysis


274 M. Dong et al.

of mesh faces can be operated to maximize the optimization of the large scene model,
so that the loading time after loading Unity 3D is reduced, the delay is low, and the
roaming speed is also improved.

5 Conclusion

This paper studies the Rhino-based model optimization method, which mainly
describes the process of reducing the mesh surface by the two tools of InstantMeshes
and QuadriFlow in Rhino plug-in CreateQuadMesh and Rhino’s own tools, and ana-
lyze the pros and cons of the two types of tools by comparing the same model with the
same degree of mesh reduction, it is concluded that Rhino’s own tools have better
effect on the model optimization with too many mesh faces, and have little effect on the
appearance of the model. In addition, in the actual operation, it was found that the
complexity of Rhino’s own tools to reduce the mesh, and proposed improvement
measures. The Grasshopper plug-in was used to edit the script to achieve the purpose of
filtering the mesh. Based on this, the ReduceMesh command was used to improve the
efficiency of the actual work.

Acknowledgements. The work is supported by Shanghai Municipal Commission of Economy


and Informatization of China (No. 2018-GYHLW-02009).

References
1. Changpo, F., Yufei, H.: Discussion on modeling method of 3dmax software by taking library
digital model as an example. Bol. Tec./Tech. Bull. 55(6), 478–487 (2017)
2. Yang, C.-W., Lee, T.-H., Huang, C.-L., Hsu, K.-S.: Unity 3D production and environmental
perception vehicle simulation platform. In: Proceedings of the IEEE International Conference
on Advanced Materials for Science and Engineering: Innovation, Science and Engineering,
IEEE-ICAMSE 2016, pp. 452–455 (2016)
3. Lu, G., Xue, G., Chen, Z.: Design and implementation of virtual interactive scene based on
unity 3D. In: Advanced Materials Research, vol. 317–319, pp. 2162–2167 (2011). Equipment
Manufacturing Technology and Automation
4. Tian, F., Luo, L.: Roaming of large urban scenes based on Unity 3D. In: 2018 International
Conference on Electronics Technology, ICET 2018, pp. 438–441 (2018)
5. Jakob, W., Tarini, M., Panozzo, D., et al.: Instant field-aligned meshes. ACM Trans. Graph.
34(6), 1–15 (2015)
6. Huang, J., Zhou, Y., Nießner, M., et al.: QuadriFlow: a scalable and robust method for
quadrangulation. In: The Eurographics Symposium on Geometry Processing (SGP) 2018
(2018)
Construction of Equipment Maintenance
Guiding System and Research on Key
Technologies Based on Augmented Reality

Lingyan Gao1(&), Fang Wu2, Lilan Liu1, and Xiang Wan1


1
Shanghai Key Laboratory of Intelligent Manufacturing and Robotics,
Shanghai University, Shanghai, China
[email protected], [email protected],
[email protected]
2
Huayu-Intelligence Technology Co., Ltd., Shanghai, China

Abstract. In order to solve the problems such as the complicated principle


structure existing in the process of equipment maintenance, the inability of
visual guidance with the maintenance technical manual, and the difficulty of
ensuring the quality and safety of maintenance, a kind of maintenance method
for the virtual-real fusion of equipment in the intelligent factory is proposed.
Firstly, on the basis of analyzing the advantages of augmented reality technol-
ogy applied to equipment maintenance, a system of equipment maintenance
guiding based on augmented reality is constructed, and secondly, the key
technologies involved in the maintenance guiding system are introduced,
focusing on the key technologies such as 3D tracking registration technology,
virtual-real masking technology and human-computer interaction technology.
Finally, the research content of equipment maintenance guiding is introduced,
and the operation process of the maintenance guiding system based on aug-
mented reality is given.

Keywords: Augmented reality  Virtual-real fusion  Maintenance guiding 


Smart factory

1 Preface

In the era of Industry 4.0, new hardware equipment as an infrastructure is essential,


with the increasing kinds and numbers of equipment, it will produce a large number of
engineering drawings, manuals, maintenance manuals and other technical materials, for
maintenance-workers, the use and search of these materials to troubleshoot equipment
is very difficult, At present, the maintenance of fault equipment mostly depends on the
paper manual and the staff experience, which seriously affects the maintenance effi-
ciency of the equipment.
Augmented reality technology [1] is a kind of technology that combines the virtual
information generated by computer with the real environment, so as to realize the
extension and enhancement of the information of the real environment. Applying
augmented reality technology to equipment-assisted maintenance, the real maintenance

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 275–282, 2020.
https://doi.org/10.1007/978-981-15-2341-0_34
276 L. Gao et al.

scene and virtual maintenance scene can be integrated, when the real environment of
the staff changes, the virtual information presented in front of the user will also change,
and maintain the consistency of virtual information, compared with other maintenance
systems, more visual, intuitive and real-time, so as to support worker to obtain rich
information and assist them to complete the work.
Boeing first applied augmented reality technology to the connection of power
cables and the assembly of wiring wires in aircraft manufacturing. It is reported that the
DART 510 aviation engine can reduce service time by 56% when it is used with the
augmented reality service system [2]. Samit et al. of the Algerian Centre for High
Technology and Development used AR for pump maintenance and developed the
ARPPSM system to solve the maintenance problems of complex photovoltaic pump
systems [3].
Domestic Cui et al. [4] built an equipment guiding maintenance system based on
augmented reality, adopted a modular design method, and induced the engine piston
damage and clutch plate replacement appearance maintenance task, Zhao et al. [5]
aimed at the key technical problems of augmented reality auxiliary maintenance system
realization, starting from the practical application needs of the maintenance, the aug-
mented reality auxiliary service prototype system is constructed, optimized and
improved.
According to the above background, this paper carried out the research on the
maintenance guiding technology based on the digital equipment of smart factory,
which is of great significance to improve the quality of equipment maintenance and
improve the production efficiency of the factory. And the research content and key
technology involved in it are expounded. Based on the analysis of the advantages of
augmented reality technology applied to equipment maintenance, this paper puts for-
ward the equipment maintenance guiding system based on augmented reality, and
expounds the research content and key technologies involved.

2 Equipment Maintenance Guiding System Based


on Augmented Reality

The framework of equipment maintenance guiding system based on augmented reality


technology is shown in Fig. 1.
In order to realize the virtual-real fusion guiding of the equipment, it is necessary to
combine the virtual object with the real object precisely, the breakthrough of its key
technology needs to solve the following technical problems: ① 3D tracking registra-
tion technology. ② Virtual and real masking technology. ③ Human-computer inter-
action technology.
Construction of Equipment Maintenance Guiding System 277

Fig. 1. The framework of equipment maintenance guiding system based on augmented reality

3 Key Technologies of Equipment Maintenance Guiding

Augmented reality technology superimposes virtual objects into the physical world,
relying on computer technology to build maintenance guiding text, maintenance
interpretation voice, 3D virtual model, maintenance guiding animation sequence, and
make them combine with physical world, so that worker can achieve human-computer
interaction by gestures and voice, and finally complete the virtual-real fusion mainte-
nance operation.

3.1 Tracking Registration Technology


Tracking registration technology is to accurately overlay virtual repair objects into real-
world scenes. The technology maintains the correct matching relationship by accurately
measuring the position and posture of the camera relative to the real world, and dis-
plays the virtual object to the correct position of the video streaming on AR glasses in
real time based on those data. This process involves the conversion between multiple
spatial coordinate systems. The conversion relationship between each coordinate sys-
tem is shown in Fig. 2.
278 L. Gao et al.

Fig. 2. Conversion relationship of spatial coordinate system

(1) The conversion from the virtual space coordinate system to the real space coor-
dinate system is used to determine the position and posture of the virtual object in
the real world.
(2) The conversion from the real space coordinate system to the camera space coor-
dinate system is used to determine the relative position and posture between the
real world and the camera, and the feature points of the video stream are acquired
by the camera in real time and matched to obtain a conversion relationship.
(3) The conversion of the camera space coordinate system to the imaging plane
coordinate system is to correctly display the generated virtual object on the
projection surface to achieve virtual-real fusion.

3.2 Virtual-Real Masking Consistency Technology


Masking consistency requires that virtual objects can occlude the background, and can
also be occluded by the foreground object, which makes it be with the correct masking
relationship. In the maintenance guiding system, the virtual objects include the virtual
parts to be repaired, the virtual repair tools, the virtual replacement parts, etc., these
virtual objects are superimposed on the real scene before the virtual objects need to
accurately calculate the front and rear position and hierarchical relationship of virtual
objects, to ensure the correct visual senses of the worker, to achieve the immersion and
true feeling. If you don’t think about masking relationships and overlay virtual objects
directly into a real scene, the virtual objects will always occlude objects in the real
scene, which will give the worker the illusion, make the work environment be
ambiguous, and reduce productivity. The current method to calculate the object’s
masking relationship is based on depth calculation, the specific implementation process
is shown in Fig. 3.

Fig. 3. Occlusion processing based on depth calculation


Construction of Equipment Maintenance Guiding System 279

3.3 Human-Computer Interaction Technology


Human-computer interaction technology is a necessary technology for mature aug-
mented reality systems, and it is also the key to realize AR-assisted maintenance intel-
ligence. The main purpose of human-computer interaction is that the user interacts
naturally with the virtual information in the real environment through the input and output
devices. At present, the research hotspot is gesture-based human-computer interaction.
In the aspect of gesture recognition technology, the traditional method is to wear
special markers or data gloves [7], the system realizes direct measurement and
acquisition of finger gestures and gesture data; the other way is to rely on computer
vision technology to shoot video sequence image containing natural gesture activity
information, separating the gesture portion from the image sequence, extracting the
hand feature value and completing the recovery of the gesture data. The vision-based
gesture recognition process is shown in Fig. 4.

Fig. 4. Vision-based gesture recognition process

4 Research Content and System Operation Process


of Virtual-Real Fusion Maintenance Guiding
4.1 Research Content

(1) Construction of virtual-real fusion maintenance guiding scene


In order to realize the maintenance support of augmented reality, it is necessary to
establish a priori maintenance model data. As shown in Fig. 5, the information sources
of the modeling mainly include the design information of the equipment components,
the maintenance operation rules, the failure mechanism, the operation instructions, and
the safety precautions. Forms include graphics, text, symbols, markers, animations, 3D
models, and more.
The virtual-real fusion maintenance guiding scene of the equipment is composed of
a virtual maintenance guiding scene and a real maintenance environment, wherein the
virtual maintenance guiding scene includes a virtual maintenance object, a virtual
maintenance tool, a virtual maintenance step guide, a virtual text video picture infor-
mation, and the like. For the faults that may occur in the intelligent manufacturing
process, it is necessary to establish a fault database in advance, and match the corre-
sponding fault ID according to the actual situation of running state of the equipment,
thereby calling out the corresponding virtual maintenance guiding scene.
(2) Establishment and superposition of maintenance guiding information
The process of maintenance operation is inseparable from the information, and the
maintenance guidance information plays a very important role in the equipment
280 L. Gao et al.

maintenance guiding process. The information mainly includes: text recurrence of each
step of maintenance operation, that is, the text prompt of each step in the traditional
maintenance manual; the prompt of whether the maintenance operation is completed, that
is, whether the staff member correctly completed the maintenance step of the previous
step according to the operation instruction; The prompt of whether the action is normal,
that is, whether the worker has used the correct repair tool for maintenance operations.
Only by combining virtual maintenance animations with comprehensive maintenance
information can the worker be instructed to complete the correct repair process.

Fig. 5. Prior maintenance model data

(3) Real-time display of virtual-real fusion maintenance guiding scene


After the establishment of the virtual maintenance guiding scene and the maintenance
guidance information model, it is necessary to accurately render it in real time to the
real scene, which is shown in Fig. 6.

Fig. 6. Real-time display of virtual-real fusion maintenance guiding scene


Construction of Equipment Maintenance Guiding System 281

As the position of the worker’s viewpoint changes, different occlusion relationships


will occur between the virtual objects and between the virtual objects and the real
devices. Therefore, it is necessary to calculate the actual spatial position of each virtual
object in the maintenance guiding scene and the depth information perception of the
real scene, and obtain the hierarchical relationship between the objects through the
comparison of the depth information, thereby rendering a maintenance scene with a
correct virtual-real occlusion relationship.

4.2 System Operation Process


The operation process of the virtual-real fusion maintenance guiding system based on
augmented reality is shown in Fig. 7. It mainly includes information collection module,
virtual scene generation module, real-time tracking registration of camera module,
virtual-real fusion display module and human-computer interaction module.

Fig. 7. Operational process of virtual-real fusion maintenance guiding system

The information acquisition module mainly collects information through the real
camera to the maintenance equipment, including image information of various angles,
related attribute information, equipment running status information, fault database
information of the equipment, etc., which are used by offline model training, and then
used by the virtual scene generation module. The virtual scene generation module
mainly captures and matches the corresponding virtual maintenance scene through real-
time camera acquisition. The real-time tracking registration module mainly calculates
the alignment of the virtual and real coordinate system and the registration of the virtual
and real objects by estimating the real-time pose of the camera. The virtual-real fusion
display module mainly displays the virtual and real objects in the image plane
282 L. Gao et al.

according to the correct occlusion relationship. The human-computer interaction


module is mainly to realize the interaction between the worker and the maintenance
guiding scene, and improves the efficiency of the worker to complete the maintenance
operation.

5 Summary

With the advent of the information age, paper manuals are slow to update and difficult
to store. A large amount of complex professional information is difficult to understand
and cannot improve the efficiency of maintenance personnel. Based on the analysis of
the drawbacks of traditional maintenance, this paper proposes an auxiliary maintenance
method based on augmented reality, and gives the system framework and system
operation flow of equipment virtual-real fusion maintenance guiding, and elaborates the
research content and key technologies involved in the method. According to this
method, the maintenance personnel can receive the virtual information guidance in real
time during the actual work, and can not only observe the actual maintenance scene,
but also observe the superimposed virtual information, which completes the unification
in the visual field, and can better complete the maintenance task, improve the work
efficiency.

Acknowledgements. The authors would like to express appreciation to mentors in Shanghai


University and Huayu-intelligence Technology Co. Ltd for their valuable comments and other
helps. Thanks for the pillar program supported by Shanghai Economic and Information Com-
mittee of China (No. 2018-GYHLW-02009).

References
1. Industry 4.0: How augmented reality is transforming manufacturing. Smart Fact. (09), 28–30
(2018)
2. Liu, F., Liu, P., Xu, B.: Key technology research on enhanced reality-enhanced weapon and
equipment maintenance systems. Fly. Missiles (09), 74–80 (2017)
3. Schwald, B., Laval, B.D., Sa, T.O., et al.: An augmented reality system for training and
assistance to maintenance in the industrial context. In: International Conference in Central
Europe on Computer Graphics, pp. 425–432 (2013)
4. Cui, B., Wang, W., Qu, J., Li, Z.: Design and realization of equipment induction maintenance
system based on augmented reality. Firepower Command Control 41(11), 176–181 (2016)
5. Zhao, S.: Augmented reality assisted maintenance key technology research. Hebei University
of Technology (2016)
6. Zeng, Y.: Review and Outlook on augmented reality virtual masking methods. J. Syst. Simul.
(1), 1–10 (2014)
7. Sun, C., Zhang, M., Li, Y., et al.: Human-natured interaction in augmented reality
environment. J. Comput.-Aided Des. Graph. 23(4), 697–704 (2011)
A New Fault Identification Method Based
on Combined Reconstruction Contribution
Plot and Structured Residual

Bo Chen1, Kesheng Wang2, Xiue Gao1(&), Yi Wang3, Shifeng Chen1,


Tianshu Zhang1, Kristian Martinsen4, and Tamal Ghosh4
1
College of Information Engineering, Lingnan Normal University,
Zhanjiang, China
[email protected], [email protected],
[email protected], [email protected]
2
Department of Mechanical and Industrial Engineering, NTNU,
Trondheim, Norway
[email protected]
3
The School of Business, Plymouth University, Plymouth, UK
[email protected]
4
Department of Manufacturing and Civil Engineering, NTNU, Gjøvik, Norway
{kristian.martinsen,tamal.ghosh}@ntnu.no

Abstract. The existing contribution plot-based reconstruction fault identifica-


tion methods suffer from low identification accuracy and serious trailing effect.
In this paper, a new fault identification method is proposed based on a com-
bination of reconstruction contribution plot and structured residual method. The
fault direction vector is calculated by utilizing the structured residual method.
The reconstruction contribution plot utilizes the obtained fault direction vector
to accurately localize the fault variable, where the fault source can be accurately
localized subsequently. The experimental results show that, compared with the
traditional PCA and PPCA (PCA based on probability) reconstruction contri-
bution method, this algorithm can accurately identify the fault variables, and
reduce the influence of the fault variables on the non-fault variables.

Keywords: Fault identification  Reconstruction contribution plot  Structured


residual  Fault direction vector

1 Introduction

Various fault identification algorithms have evolved based on fuzzy logic theory, multivariate statistical analysis, artificial intelligence and other approaches. Among these, algorithms based on statistical analysis and its improvements have been widely used. Miller [1] first introduced the contribution plot method to reflect the contribution of each variable to the statistics. Based on this work, various PCA model-based contribution plot methods have been proposed subsequently, including complete decomposition contribution, partial decomposition contribution, diagonal contribution, angle-based contribution and reconstruction-based

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 283–291, 2020.
https://doi.org/10.1007/978-981-15-2341-0_35
284 B. Chen et al.

contribution (RBC). The traditional reconstruction contribution plot method increases the contribution of non-fault variables when localizing the fault variables. It is therefore necessary to combine multivariate statistical methods with analytical methods and to decouple the fault vectors by constructing appropriate residuals, so that the accuracy of fault separation can be improved [3, 4]. When the fault magnitude is set to a large value [5], RBC can identify a single-variable fault with 100% accuracy; the accuracy of multivariable fault identification [6], however, cannot reach 100%. It is therefore urgent to study new methods that improve on the traditional reconstruction contribution method.
A weighted RBC-based fault identification method [7] was proposed, which can reduce the impact of fault variables on non-fault variables. By leveraging missing values, an improved reconstruction contribution plot method was proposed to significantly reduce the "pollution effect" on non-fault variables [8]. A kernel principal component analysis (KPCA) based fault identification method built on data reconstruction was proposed in [9]; it avoids the single-variable fault limitation of the traditional KPCA algorithm and reduces the computational complexity of the data indices. To overcome inconsistent monitoring results between two different monitoring statistics, probabilistic principal component analysis-based reconstruction contribution plots (PPCA-RCP) were proposed using the monitoring statistic of a unified metric [10]. The problem of inaccurate fault variable localization by the traditional contribution plot analysis method has been investigated further, and a new fault localization method was proposed by combining nearest neighborhood imputation with the traditional contribution plot [11, 12].
The fault variable acquisition process is relatively complicated in complex systems, where identification accuracy and real-time performance are difficult to guarantee. Therefore, a reconstruction contribution plot combined with structured residual (RCP-SR) method is proposed, based on the reconstruction contribution plot and the structured residual. The structured residual method is used to calculate the fault variable direction. Subsequently, this direction data is fed to the reconstruction contribution plot for accurate fault localization, which can effectively avoid the trailing effect and improve the fault identification rate.

2 Reconstruction Contribution Method

The contribution plot can be utilized to visualize the results of fault variables with a bar chart, which reflects the severity of the fault variables and visualizes the impact of the fault variables on the non-fault variables. The contribution rates of all principal variables are reflected in the contribution plot, which can be used to intuitively observe the statistical contributions and identify abnormal data.
For a single fault variable, the fault sample x can be decomposed into normal and faulty components,

x = x^* + \xi_i f_i \quad (1)

where $x^*$ is the normal component, $\xi_i$ is the direction vector of the fault and $f_i$ is the fault amplitude of the fault variable.
A New Fault Identification Method 285

The reconstructed value $z_i$ can be expressed as follows,

z_i = x - \xi_i f_i \quad (2)

The statistic $D$ of an observed sample can be expressed as,

D(x) = x C^{-1} x^T \quad (3)

For each variable $x_i$ $(i = 1, 2, \dots, m)$, the statistical contribution of the monitoring statistic $D$ from $x_i$ to $x$ can be expressed as,

c_i^D = (x C^{-0.5} \xi_i)^2 \quad (4)

where $\xi_i$ is the direction vector of the $i$-th fault variable.
According to (3), the monitoring measure of the reconstructed sample $z_i$ can be expressed as follows,

D(z_i) = z_i C^{-1} z_i^T \quad (5)

To minimize the monitoring measure of the reconstructed sample $D(z_i)$, the partial derivative with respect to $f_i$ is set to zero,

\frac{d(D(z_i))}{d f_i} = 0 \quad (6)

The amplitude $f_i$ can then be expressed as,

f_i = (\xi_i^T C^{-1} \xi_i)^{-1} \xi_i^T C^{-1} x^T \quad (7)

When a fault occurs, the reconstruction method is utilized to localize it. If the fault direction $\xi_i$ is correctly localized, the fault variable identification is correct when the monitoring measure of the reconstructed sample $D(z_i)$ falls below the control threshold. Otherwise, the fault variable identification is incorrect.
For a single variable, the reconstruction contribution [2] can be calculated as,

c_i^{RBC} = \text{index}(x) - \text{index}(z_i) = x C^{-1} \xi_i (\xi_i^T C^{-1} \xi_i)^{-1} \xi_i^T C^{-1} x^T \quad (8)

For multiple variables, the reconstruction contribution [13] can be calculated as,

c^{RBC} = x C^{-1} N (N^T C^{-1} N)^{-1} N^T C^{-1} x^T \quad (9)

where $N$ is the matrix constructed from the fault direction vectors.
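As an illustration of Eqs. (7)–(9), the fault amplitude and the reconstruction-based contribution reduce to a few lines of linear algebra. The following is a hypothetical NumPy sketch, not the authors' code; `C_inv` denotes a precomputed inverse of the matrix C:

```python
import numpy as np

def rbc(x, C_inv, Xi):
    """Reconstruction-based contribution of the candidate fault
    direction(s) Xi for sample x, following Eqs. (7)-(9).
    x: (m,) sample, C_inv: (m, m), Xi: (m, q) direction matrix."""
    # fault magnitude, Eq. (7): f = (Xi^T C^-1 Xi)^-1 Xi^T C^-1 x^T
    f = np.linalg.solve(Xi.T @ C_inv @ Xi, Xi.T @ C_inv @ x)
    # contribution, Eqs. (8)-(9): x C^-1 Xi (Xi^T C^-1 Xi)^-1 Xi^T C^-1 x^T
    return float(x @ C_inv @ Xi @ f)
```

By construction this value equals the drop $D(x) - D(z_i)$ in the monitoring statistic achieved by the optimal reconstruction, which gives a convenient sanity check.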



3 Structured Residual
3.1 The Principle of Structured Residual
From the analysis in the previous section, it can be seen that the fault direction vector $\xi_i$ directly determines the fault localization accuracy. To improve the accuracy, the correct fault direction vector $\xi_i$ must be obtained. In this paper, a fault direction vector $\xi_i$ based on the PCA structured residual is proposed, which is further illustrated as follows.
Since the original residual $t_e = P_e^T x$ represents the deviation of the monitored variables from the principal component subspace (PCS) at each sampling time, it can be used to construct the structured residual. Since the PCS and the residual subspace (RS) are orthogonal complements, the following holds for a system with a fault,

P_e^T x_0 = 0 \quad (10)

t_e = P_e^T x = P_e^T \Xi f = \Phi f \quad (11)

where $x_0$ represents the true variable value without the impact of faults and measurement noise, $\Xi$ is the fault direction matrix and $f$ is the fault magnitude vector.
Based on (10) and (11), it can be obtained,

\Phi = P_e^T \Xi \quad (12)

In particular, $\Phi$ represents the mapping matrix from the individual faults to the original residual $t_e$, where each column represents the fault mapping coefficient vector from one corresponding fault to the original residual $t_e$.
By introducing the transformation matrix $G$, the structured residual $c$ is constructed from the original residual,

c = G t_e = G \Phi f = H f \quad (13)

where the number of structured residuals equals the number of rows in $G$, and $H = G\Phi$ represents the mapping matrix from each fault to the structured residuals. The $i$-th row of the incidence matrix is denoted $c_i$. When an element in the $i$-th row is 0, the corresponding residual has no response to that fault, i.e., the $i$-th row $g_i^T$ of $G$ is orthogonal to the corresponding columns of $\Phi$,

g_i^T \, \Phi|_{c_i = 0} = 0 \quad (14)

where $\Phi|_{c_i = 0}$ denotes the matrix consisting of the columns of $\Phi$ corresponding to the zero elements in the $i$-th row of the incidence matrix. Since the number of rows in $\Phi$ is $m$, a solution
exists for (14), and the following relationship holds between its rank and the number of rows in $\Phi$,

\text{rank}[\Phi|_{c_i = 0}] < m \quad (15)

To guarantee that the structured residual $c_i$ responds to the faults marked 1 in the $i$-th row of the incidence matrix, the $i$-th row should satisfy the following necessary condition,

\text{rank}[\Phi|_{c_i = 0} \;\; \phi_j] = \text{rank}[\Phi|_{c_i = 0}] + 1 \quad (16)

where $\phi_j$ is any column of $\Phi$ not belonging to $\Phi|_{c_i = 0}$.


The mapping vector matrix and the incidence matrix directly determine the trans-
formation matrix, and it can directly obtain PTe with PCA statistical model.
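Equation (14) can be solved numerically column-group by column-group: each row $g_i$ of $G$ is taken from the left null space of the columns of $\Phi$ that the $i$-th structured residual must ignore. The sketch below is a hypothetical NumPy illustration and assumes the rank condition (15) holds:

```python
import numpy as np

def transformation_matrix(Phi, incidence):
    """Build G row by row so that c = G t_e matches the incidence
    matrix: g_i^T Phi|_{c_i=0} = 0 (Eq. 14).
    Phi: (r, n) fault-to-residual mapping, incidence: (k, n) of 0/1."""
    rows = []
    for code in incidence:
        Phi0 = Phi[:, code == 0]       # columns the residual must ignore
        U, s, _ = np.linalg.svd(Phi0)  # left null space via full SVD
        rank = int(np.sum(s > 1e-10))  # rank < r by condition (15)
        rows.append(U[:, rank])        # a unit vector orthogonal to Phi0
    return np.vstack(rows)
```

With this `G`, the product `H = G @ Phi` reproduces the incidence pattern: entries where the incidence matrix is zero vanish, while the remaining entries stay nonzero whenever condition (16) is met.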

4 The Fault Identification Algorithm Flow Based on Reconstruction Contribution Plot and Structured Residual

Based on the reconstruction contribution plot and structured residual methods described in the previous two sections, a fault identification algorithm is designed by combining the two, with the algorithm flow summarized in Fig. 1. The basic idea is briefly described as follows. Firstly, the fault direction vector $\xi_i$ is calculated by utilizing the structured residual algorithm. Secondly, the obtained fault direction vector is fed into the reconstruction contribution plot for accurate fault variable localization. The steps of the algorithm are summarized as follows.
(1) The fault direction vector $\xi_i$ is obtained with the structured residual algorithm;
(2) The fault sample is expressed as normal and faulty components, i.e., $x = x^* + \xi_i f_i$;
(3) The PCA-based method is utilized to reconstruct the fault variable via $z_i = x - \xi_i f_i$;
(4) From the statistic of the observed sample $D(x) = x C^{-1} x^T$, the reconstruction monitoring measure is expressed as $D(z_i) = z_i C^{-1} z_i^T$;
(5) By setting the partial derivative of the reconstruction monitoring measure to zero, i.e., $\frac{d(D(z_i))}{d f_i} = 0$, the fault amplitude is obtained as $f_i = (\xi_i^T C^{-1} \xi_i)^{-1} \xi_i^T C^{-1} x^T$;
(6) The contributions for a single fault variable and for multiple fault variables are calculated with $c_i^{RBC} = x C^{-1} \xi_i (\xi_i^T C^{-1} \xi_i)^{-1} \xi_i^T C^{-1} x^T$ and $c^{RBC} = x C^{-1} N (N^T C^{-1} N)^{-1} N^T C^{-1} x^T$, respectively;
(7) The fault source is accurately localized according to the identified fault variables.
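The steps above can be sketched as one routine that tests each candidate direction from step (1) and keeps the direction whose reconstruction brings the statistic below the control threshold. This is a hypothetical illustration; the candidate list, the threshold value and the tie-breaking rule (largest RBC wins) are assumptions not fixed by the text:

```python
import numpy as np

def identify_fault(x, C_inv, directions, threshold):
    """Return (index, RBC value) of the candidate fault direction whose
    reconstruction drops D(z_i) below the control threshold, or None."""
    best = None
    for i, xi in enumerate(directions):
        xi = np.asarray(xi, float).reshape(-1, 1)
        # step (5): fault amplitude f_i
        f = np.linalg.solve(xi.T @ C_inv @ xi, xi.T @ C_inv @ x)
        z = x - (xi @ f).ravel()            # step (3): reconstruction
        D_z = float(z @ C_inv @ z)          # step (4): D(z_i)
        c_rbc = float(x @ C_inv @ xi @ f)   # step (6): contribution
        if D_z < threshold and (best is None or c_rbc > best[1]):
            best = (i, c_rbc)
    return best
```

For a two-variable example with a large step fault on the first variable, only the first direction reconstructs the sample below the threshold, so that direction is returned as the identified fault.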

5 Simulation Validation and Analysis

5.1 Simulation Environment Setting
The Tennessee Eastman (TE) process is a realistic simulation of an actual production plant. The entire process consists of five operating units: a reactor, a product condenser, a gas-liquid separator, a recycle compressor and a stripper. Four types of gaseous materials mainly participate in the reaction, which generates two products, G and H. Moreover, a small amount of inert gas B and gaseous by-products are removed by venting during the product feed, and 22 measurement variables are used in the continuous process. There are 21 types of faults in the TE process, as shown in Table 1. In this section, fault 1 and fault 14 are used as examples to validate the practicability of the algorithm; the variables associated with fault 1 are $x_1$, $x_4$ and $x_{18}$, and the fault variables associated with fault 14 are $x_9$ and $x_{21}$ (Table 2).

Table 1. A summary of the fault types in the TE process


Number Process variables Types
Fault 1 Feed ratio of Material A/C changes Step
Fault 2 Composition ratio changes Step
Fault 3 Temperature change of Material D Step
Fault 4 Temperature change of the reactant cooling water inlet Step
Fault 5 Temperature change of the reactant cooling water inlet Step
Fault 6 Loss of Material A Step
Fault 7 Head loss of Material C Step
Fault 8 Composition changes of Materials A, B, C Random
Fault 9 Temperature change of Material D Random
Fault 10 Temperature change of Material C Random
Fault 11 Temperature change of reactor cooling water inlet Random
Fault 12 Temperature change of condenser cooling water inlet Random
Fault 13 Reaction kinetic constant change Slow shifting
Fault 14 Reactor cooling water valve Sticky
Fault 15 Condenser cooling water valve Sticky
Fault 16-21 Unknown Unknown

Table 2. Fault description


Fault number Fault description Fault variables Fault type
Fault 1 Feed ratio of Material A/C $x_1$, $x_4$, $x_{18}$ Step
Fault 14 Reactor cooling water valve $x_9$, $x_{21}$ Sticky

5.2 The Analysis and Comparison for Simulation
By comparing the RBC, PCA-MRCP, PPCA-RCP and proposed RCP-SR methods, the localization analysis of fault 1 is shown in Fig. 1(a), (b), (c) and (d). In these figures, the darkness of the shading represents the reconstruction contribution rate: a darker color indicates a larger contribution rate, and the variable with a larger contribution rate is identified as a fault variable. From these figures, it can be seen that there is a trailing effect from time 160 to 400, because the variables are not independent of each other in the initial stage of the fault. Although certain relationships between the variables remain after time 400, the system reaches a new stable state due to the self-regulation of the control system. The reconstruction contribution plot based on the traditional RBC method is relatively chaotic, so some non-faulty variables are misidentified as fault variables, which affects the identification and leads to trailing effects. The PCA-MRCP method suffers from inconsistent fault variable identification when different PCA metrics are used, and the trailing effect is also generated. The PPCA-RCP method is better than the traditional method: consistent with the previous process analysis, $x_1$, $x_4$ and $x_{18}$ are correctly identified as the three most obvious fault variables. The RCP-SR method overcomes the inconsistent localization results produced by the PCA-MRCP method when monitoring statistics of different metrics, and localizes the fault variables more accurately and effectively.


Fig. 1. (a) Basic RBC method, (b) PCA-MRCP method, (c) PPCA-RCP method, (d) RCP-SR
method


Fig. 2. (a) Basic RBC method, (b) PCA-MRCP method, (c) PPCA-RCP method, (d) RCP-SR
method

The fault localization analysis for fault 14 is shown in Fig. 2(a), (b), (c) and (d). The traditional RBC method identifies the fault variables as $x_2$, $x_9$ and $x_{21}$; the localized fault variables are thus not consistent with the actual fault variables, and the trailing effects are rather significant. Although the PCA-MRCP method can localize the fault variables, the localization results are inconsistent and affect the determination of the fault variables. The PPCA-RCP method localizes the fault variables accurately, but the trailing problem remains. The proposed RCP-SR method resolves the drawbacks of the above methods with significant superiority.

6 Conclusion

A new fault direction vector calculation algorithm is developed based on structured residuals, and a fault identification algorithm is proposed based on the reconstruction contribution plot and the structured residual. A simulation example is also designed based on the fault identification algorithm workflow. The simulation results show that the proposed algorithm achieves superior performance compared to the conventional RBC, PCA-MRCP and PPCA-RCP methods, with reduced impact on non-fault variables, suppression of trailing effects and improved fault localization accuracy.

Acknowledgements. This work is supported by the Special Funds for Science and Technology
Innovation Strategy in Guangdong Province of China (No. 2018A06001).

References
1. Miller, P., Swanson, R.E., Heckler, C.E.: Contribution plots: a missing link in multivariate
quality control. Appl. Math. Comput. Sci. 8(4), 775–792 (1998)
2. Alcala, C.F., Qin, S.J.: Analysis and generalization of fault diagnosis methods for process
monitoring. J. Process Control 21(3), 322–330 (2011)
3. Gertler, J., Cao, J.: Design of optimal structured residuals from partial principal component
models for fault diagnosis in linear systems. J. Process Control 15(5), 585–603 (2005)
4. Hu, Z., Chen, Z., Gui, W.: Adaptive PCA based fault diagnosis scheme in imperial smelting
process. ISA Trans. 53(5), 1446–1455 (2014)
5. Lin, S., Jia, L., Qin, Y.: Research on urban rail train passenger door system fault diagnosis
using PCA and rough set. Open Mech. Eng. J. 8(1), 340–348 (2014)
6. Kerkhof, P.V., Vanlaer, J.: Analysis of smearing-out in contribution plot based fault isolation
for Statistical Process Control. Chem. Eng. Sci. 104(50), 285–293 (2013)
7. Mnassri, B., El Adel, E.M., Ouladsine, M.: Reconstruction-based contribution approaches
for improved fault diagnosis using principal component analysis. J. Process Control 33, 60–
76 (2015)
8. Liu, J., Chen, D.-S.: Fault isolation using modified contribution plots. Comput. Chem. Eng.
61, 9–19 (2014)
9. Wang, Z., Feng, S., Chang, Y.: Improved KPCA fault identification method based on data
reconstruction. J. Northeast. Univ. (Nat. Sci.) 33(4), 500–503 (2012)
10. Guo, X., Liu, S., Li, Y.: Fault detection of multi-mode processes employing sparse residual
distance. Acta Automatica Sinica 45(3), 617–625 (2019)
11. Li, Y., Wu, J., Wang, G.: k-nearest neighbor imputation method and its application in fault
diagnosis of industrial process. J. Shanghai Jiaotong Univ. 49(6), 830–836 (2015)
12. Zhang, C., Gao, X., Xu, T.: Fault detection strategy of independent component–based k
nearest neighbor rule. Control Theory Appl. 35(6), 806–812 (2018)
13. Li, G., Alcala, C.F., Qin, S.J.: Generalized reconstruction-based contributions for output-
relevant fault diagnosis with application to the Tennessee Eastman process. IEEE Trans.
Control Syst. Technol. 19(5), 1114–1127 (2011)
Prediction of Blast Furnace Temperature
Based on Improved Extreme
Learning Machine

Xin Guan(&)

School of Information Engineering, Lingnan Normal University,


Zhanjiang, China
[email protected]

Abstract. In the iron-making process of a blast furnace, temperature is an important indicator that relates closely to the working condition. The silicon content in hot metal is one of the main parameters reflecting the temperature inside the blast furnace. By predicting the silicon content in hot metal, a theoretical basis for subsequent parameter adjustment is provided. Aiming at the non-linear character of the silicon content, a prediction method based on an improved extreme learning machine is proposed. The improved extreme learning machine uses the flower pollination algorithm to optimize its parameters, and the prediction model of silicon content is constructed with the optimized extreme learning machine. Verified with production data, the simulation results show that, compared with the basic extreme learning machine, the improved algorithm improves the prediction accuracy and generalization ability, and it also has good stability.

Keywords: Extreme learning machine · Flower pollination algorithm · Blast furnace temperature · Data-driven

1 Introduction

The iron and steel industry is a pillar industry in China, but at the same time it has gradually become a major source of air pollution and energy consumption. The blast furnace (BF) is the core iron-making container [1]. A number of studies have shown that a reasonable BF temperature is a significant key to steady BF operation. Because of the positive correlation between BF temperature and the silicon content in hot metal, the silicon content is commonly used to reflect the change of BF temperature. Taking into account the large lag of the system and the harsh environment inside the BF, mechanism modeling becomes impractical. In this case, using data-driven techniques to predict the silicon content in hot metal is a good choice.
In the existing literature, there are many prediction models of silicon content, including linear models and non-linear models. For the linear models, a VARMAX model was developed for the prediction of silicon content in hot metal [2]. In addition, partial least squares (PLS) [3] and other algorithms [4, 5] were used in silicon content prediction models. For the non-linear models, the major algorithms for prediction modeling can be divided into support vector machine (SVM), neural network and
© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 292–298, 2020.
https://doi.org/10.1007/978-981-15-2341-0_36

extreme learning machine (ELM). LSTM was used to improve RNN for establishing a prediction model [6]. Moreover, an algorithm combining the RBF neural network with a particle swarm optimization algorithm was also used for silicon content prediction [7]. Besides SVM, an improved ELM algorithm [8] and an improved multi-layer ELM [9] have also been applied to the prediction of hot metal silicon content. In recent years, ELM and its variants have been successfully applied in many real fields [10–12].
To overcome the issues of the conventional gradient-based learning algorithms for SLFNs [13], the ELM algorithm was proposed in 2006. Compared with other neural network algorithms, ELM has a faster training speed and universal approximation capability.
The rest of this paper is organized as follows. Section 2 presents overviews of ELM and the flower pollination algorithm and introduces the proposed improved ELM algorithm. The prediction model of blast furnace temperature is presented in Sect. 3. Simulation results are shown in Sect. 4, and the conclusions are given in Sect. 5.

2 Improved Extreme Learning Machine

2.1 Flower Pollination Algorithm
The Flower Pollination Algorithm (FPA), proposed by Yang in 2012, is a new method for seeking global optima [14]. It simulates the process of flower pollination in nature. FPA has been widely applied in many fields, such as feature selection, solar maximum power point tracking and maximizing area coverage in wireless sensor networks. It has the advantages of simple implementation, few parameter settings, fast convergence speed and high convergence accuracy. In nature, flower pollination behavior is divided into self-pollination and cross-pollination. Cross-pollination, in which pollen travels to other plants and sticks to the stigma of a different plant [15, 16], is regarded as the global optimization step of the algorithm, whereas self-pollination, in which pollen moves from a plant's stamen to the stigma of the same plant, is regarded as the local optimization step. Global optimization is performed using Lévy flights as defined in Eq. 1, and local optimization is performed as defined in Eq. 3. The transformation between global and local optimization is controlled by the switch probability $p \in [0, 1]$.
X_i^{t+1} = X_i^t + L \left( X_i^t - g \right) \quad (1)

where $X_i^{t+1}$ and $X_i^t$ are the positions of pollen $X_i$ at iterations $t+1$ and $t$, respectively, and $g$ is the best solution found among the current solutions. $L$ is a control parameter representing the random step strength, which obeys a Lévy distribution and is defined in Eq. 2.

L \sim \frac{\lambda \Gamma(\lambda) \sin(\pi \lambda / 2)}{\pi} \cdot \frac{1}{s^{1+\lambda}}, \quad (s \gg s_0 > 0) \quad (2)

where $\Gamma(\lambda)$ is the gamma function.

The local optimization process is represented as follows.

X_i^{t+1} = X_i^t + \epsilon \left( X_j^t - X_k^t \right) \quad (3)

where $X_j^t$ and $X_k^t$ are selected randomly from the same population, and $\epsilon$ is a random number drawn uniformly from $[0, 1]$.
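A minimal sketch of FPA following Eqs. 1–3 is shown below. It is hypothetical code: the Mantegna recipe for generating Lévy-distributed steps and the greedy acceptance of improving candidates are common implementation choices that the text does not spell out:

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(rng, lam, dim):
    # Mantegna's algorithm for Levy-distributed step lengths (cf. Eq. 2)
    sigma = (gamma(1 + lam) * sin(pi * lam / 2)
             / (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / lam)

def fpa_minimize(objective, dim, n=20, p=0.8, iters=200, lam=1.5, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, (n, dim))      # initial population of flowers
    fit = np.array([objective(x) for x in X])
    best_i = int(fit.argmin())
    g, best_f = X[best_i].copy(), float(fit[best_i])
    for _ in range(iters):
        for i in range(n):
            if rng.random() < p:              # global pollination, Eq. 1
                cand = X[i] + levy_step(rng, lam, dim) * (X[i] - g)
            else:                             # local pollination, Eq. 3
                j, k = rng.integers(0, n, size=2)
                cand = X[i] + rng.random() * (X[j] - X[k])
            fc = float(objective(cand))
            if fc < fit[i]:                   # greedy acceptance (assumed)
                X[i], fit[i] = cand, fc
                if fc < best_f:
                    g, best_f = cand.copy(), fc
    return g, best_f
```

On a simple convex objective such as the sphere function, this sketch steadily drives the best fitness toward the optimum, illustrating the interplay of the two pollination steps.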

2.2 Extreme Learning Machine


Extreme learning machine (ELM) was first proposed by Huang and is a single-hidden-layer feedforward neural network. ELM has good generalization performance and a fast learning speed.
Given a training dataset containing $N$ samples $(x_i, t_i)$, $1 \le i \le N$, where $x_i = [x_{i1}, x_{i2}, \dots, x_{in}]^T \in \mathbb{R}^n$ denotes the $i$-th input sample and $t_i = [t_{i1}, t_{i2}, \dots, t_{im}]^T \in \mathbb{R}^m$ the corresponding output value, the mathematical model of ELM is expressed as:
\sum_{i=1}^{L} \beta_i \, g(\omega_i \cdot x_j + b_i) = o_j, \quad j = 1, 2, \dots, N \quad (4)

where the model contains $L$ hidden nodes, $\omega_i = [\omega_{i1}, \omega_{i2}, \dots, \omega_{in}]^T$ and $b_i$ are the learning parameters generated randomly, $o_j$ is the output value of the $j$-th sample, and $g(x)$ is the activation function. Equation 4 is expressed in matrix form as follows:

H \beta = T \quad (5)

where

H(\omega_1, \dots, \omega_L, b_1, \dots, b_L, x_1, \dots, x_N) =
\begin{bmatrix}
g(\omega_1 \cdot x_1 + b_1) & \cdots & g(\omega_L \cdot x_1 + b_L) \\
\vdots & \ddots & \vdots \\
g(\omega_1 \cdot x_N + b_1) & \cdots & g(\omega_L \cdot x_N + b_L)
\end{bmatrix}_{N \times L} \quad (6)

is called the hidden layer output matrix. According to least squares theory, $\beta$ can be calculated from Eq. 7:

\hat{\beta} = H^{+} T \quad (7)

where $H^{+}$ is the Moore-Penrose generalized inverse of $H$.
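Eqs. (4)–(7) translate almost directly into code. Below is a hypothetical NumPy sketch; the sigmoid activation and the uniform weight initialization range are assumptions:

```python
import numpy as np

def elm_train(X, T, L=30, seed=0):
    """Basic ELM: random input weights/biases (Eq. 4), output weights
    by least squares with the pseudo-inverse (Eq. 7)."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, (L, X.shape[1]))  # omega_i, drawn randomly
    b = rng.uniform(-1.0, 1.0, L)                # hidden biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))     # hidden output matrix, Eq. 6
    beta = np.linalg.pinv(H) @ T                 # beta = H^+ T, Eq. 7
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta
```

The only trained quantity is `beta`; this single linear solve is what gives ELM its fast training speed relative to gradient-based learning.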

2.3 Improved ELM Based on FPA (FPA-ELM)


During ELM training, a large number of hidden nodes must be set to achieve high learning accuracy, which affects the learning speed of the neural network to some extent. To reduce the number of hidden nodes and improve the generalization ability, the FPA-ELM algorithm, which combines FPA and ELM, is proposed. The new algorithm uses FPA to optimally select the weight values of the input layer and the biases of the hidden nodes. The learning process of the new algorithm is as follows:
① Initialize all N flowers in the population randomly. Each flower can be represented as a vector:

F^i = [\omega_{11}^{(i)}, \omega_{12}^{(i)}, \dots, \omega_{1k}^{(i)}, \omega_{21}^{(i)}, \omega_{22}^{(i)}, \dots, \omega_{2k}^{(i)}, \dots, \omega_{m1}^{(i)}, \omega_{m2}^{(i)}, \dots, \omega_{mk}^{(i)}, b_1^{(i)}, \dots, b_k^{(i)}] \quad (8)

where $F^i$ represents the $i$-th flower in the swarm, and $\omega_{mk}^{(i)}$ and $b_k^{(i)}$ are an input-layer weight and a hidden-layer bias, respectively.
② The error between the real values of the samples and the values predicted by the ELM model is taken as the fitness function. Find the best solution $g$ in the initial swarm.
③ Define a switch probability $p \in [0, 1]$.
④ while (t < MaxIteration)
   for i = 1:N
⑤     if rand < p, do global optimization via Eq. 1
⑥     else, do local optimization via Eq. 3
⑦     end if
   end for
⑧   Find the current best solution $g$
⑨ end while
⑩ The ELM network structure is established with $g$
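The encoding of step ① and the fitness of step ② can be sketched as follows (hypothetical code; `m` inputs and `k` hidden nodes, with the flower laid out as in Eq. 8 and a sigmoid activation assumed):

```python
import numpy as np

def decode(flower, m, k):
    """Split a flower vector laid out as in Eq. 8 into input weights
    W (k x m) and hidden biases b (k,)."""
    # Eq. 8 groups the weights by input index: omega_11..omega_1k, ...
    W = flower[: m * k].reshape(m, k).T
    b = flower[m * k:]
    return W, b

def fitness(flower, X, T, m, k):
    """Step 2: training RMSE of the ELM built from this flower; the
    output weights beta follow from Eq. 7 once W and b are fixed."""
    W, b = decode(flower, m, k)
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    beta = np.linalg.pinv(H) @ T
    return float(np.sqrt(np.mean((H @ beta - T) ** 2)))
```

FPA then searches over the flower vectors with this fitness, so only the input weights and biases are optimized while the output weights are always obtained in closed form.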

3 Prediction Model for Blast Furnace Temperature

To maintain stable operation of the BF iron-making process, temperature control is the most important factor. The silicon content in hot metal can well reflect the temperature condition of the BF. In this paper, we propose a prediction model for the silicon content in hot metal based on the improved extreme learning machine algorithm.
Because of the complex characteristics of the internal BF reactions, many parameters affect the temperature of the BF. These parameters are divided into two categories [17, 18]: state parameters, including feed speed, gas permeability, etc., and control parameters, such as wind quantity, wind temperature, etc. Using too many parameters inevitably affects the learning speed and accuracy of the model. According to relevant operating experience and the correlation analysis between the parameters and the silicon content [19], we chose eight parameters as input values of the prediction model: air volume, blast temperature, blast pressure, gas permeability, oxygen enrichment, pulverized coal injection, feed speed and the latest silicon content. The output value is the next silicon content.
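The resulting input/output layout can be sketched as follows (hypothetical code; the array layout, with the seven process variables stored column-wise next to the silicon series, is an assumption):

```python
import numpy as np

def build_dataset(process_vars, silicon):
    """Inputs at step t: seven process variables plus the latest
    silicon content; target: the silicon content at step t + 1.
    process_vars: (T, 7) array, silicon: (T,) array."""
    X = np.column_stack([process_vars[:-1], silicon[:-1]])
    y = silicon[1:]
    return X, y
```

Each training sample therefore has eight features, matching the eight chosen input parameters, and the one-step-ahead silicon content as its target.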

4 Simulation Results

This section describes the simulation results of the proposed prediction model for silicon content. We collected 500 production data points; 400 were used as training data and the remaining 100 as test data. All simulations were conducted in Matlab R2012. To verify the performance of the proposed prediction model, we compared it with the basic ELM algorithm. Figures 1 and 2 show the simulation results of the basic ELM and the improved ELM, respectively.

Fig. 1. Prediction results of basic ELM

Fig. 2. Prediction results of the improved ELM



It can be seen from the simulation results that, for the same input data, the prediction accuracy of the proposed improved ELM is higher than that of the basic ELM algorithm. Using RMSE as the prediction index, the results are shown in Table 1.

Table 1. The simulation results of silicon content


Algorithm ELM FPA-ELM
RMSE 0.0381 0.0307

Table 1 shows that the RMSE of the proposed improved ELM is 0.0307, while that of the basic ELM is 0.0381; the RMSE is reduced by 19.4%. The proposed improved ELM thus has better prediction accuracy and stability.
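The RMSE index and the reported relative reduction can be reproduced directly; the 19.4% figure follows from the two table entries:

```python
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# relative reduction reported in Table 1
improvement = (0.0381 - 0.0307) / 0.0381  # approximately 0.194, i.e. 19.4 %
```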

5 Conclusions

Aiming at the randomness of the ELM parameters, the flower pollination algorithm is used to optimize them. The simulation results show that the proposed improved ELM algorithm improves the prediction accuracy of silicon content. Because the flower pollination algorithm was proposed only recently and is seldom used in BF applications, both the algorithm and its applications can be studied further in the future.

Acknowledgements. The work is supported by Lingnan Normal University (no. ZL1816).

References
1. Liu, L.M., Wang, A.N., Sha, M., Sun, X.Y., Li, Y.L.: Optimal SVM for fault diagnosis of
blast furnace with imbalanced data. ISIJ Int. 51(9), 1474–1479 (2011)
2. Ostermark, R., Henrik, S.: VARMAX-modelling of blast furnace process variables. Eur.
J. Oper. Res. 90(1), 85–101 (1996)
3. Bhattacharaya, T.: Prediction of silicon content in blast furnace hot metal using partial least
squares. ISIJ Int. 45(12), 1943–1945 (2005)
4. Lin, S., Zhi, L.I., Tao, Y.U., et al.: Model of hot metal silicon content in blast furnace based
on principal component analysis application and partial least square. J. Iron Steel Res. 18
(10), 13–16 (2011)
5. Saxén, H., Ostermark, R.: State realization with exogenous variables-A test on blast furnace
data. Eur. Oper. Res. 89(1), 34–52 (1996)
6. Li, Z.L., Yang, C.J., et al.: Research on hot metal Si-content prediction based on LSTM-
RNN. J. Chem. Ind. Eng. (China) 69(3), 992–997 (2018)
7. Liu, J.Y., Zhang, W.: Blast furnace temperature prediction based on RBF neural network and
genetic algorithm. Electron. Meas. Technol. 41(3), 3505–3508 (2018)
8. Zhang, H.G., Yin, Y.X., Zhang, S.: An improved ELM algorithm for the measurement of hot
metal temperature in blast furnace. Neurocomputing 174, 232–237 (2016)

9. Su, X., Zhang, S., Yin, Y., et al.: Prediction model of hot metal temperature for blast furnace
based on improved multi-layer extreme learning machine. Int. J. Mach. Learn. Cybern., 1–14
(2018)
10. Liao, K., Wu, Y.P., Li, L.W., et al.: Displacement prediction model of landslide based on
time series and GWO-ELM. J. Cent. South Univ. 50(3), 129–136 (2019)
11. Jing, H.X., Qian, W., Che, K.: Short-term traffic flow prediction based on grey ELM neural
network. J. Henan Polytech. Univ. 38(2), 102–107 (2019)
12. Dong, Z., Ma, N., Meng, L.: Model improvement for boiler NOx emission based on
DEQPSO algorithm, no. 3, pp. 191–197 (2019)
13. Scardapane, S., Comminiello, D., Scarpiniti, M., Uncini, A.: Online sequential extreme
learning machine with Kernels. IEEE Trans. Neural Netw. Learn. Syst. 26(9), 2212–2220
(2015)
14. Yang, X.S.: Flower pollination algorithm for global optimization. In: Proceedings of the 11th
International Conference on Unconventional Computation and Natural Computation.
Lecture Notes in Computer Science, pp. 240–249. Springer, Orléan (2012)
15. Kaur, A., Pal, S.K., Singh, A.P.: New chaotic flower pollination algorithm for unconstrained
non-linear optimization functions. Int. J. Syst. Assur. Eng. Manag. (2017)
16. Yang, X.S., Karamanoglu, M., He, X.: Multi-objective flower algorithm for optimization.
Procedia Comput. Sci. 18, 861–868 (2013)
17. Xu, X.: Modeling of blast furnace temperature based on improved particle swarm optimizer
and support vector machine, Yanshan University (2015)
18. Wu, J.H.: The analysis on blast furnace smelting process and research on hot metal silicon
content prediction model, Yanshan University (2016)
19. Yan, C.: Hot Metal Temperature Forecast Research based on Quantum Genetic Neural
Network. Northeastern University (2014)
The Economic Dimension of Implementing
Industry 4.0 in Maintenance
and Asset Management

Tom I. Pedersen(&) and Per Schjølberg

Norwegian University of Science and Technology, Trondheim, Norway


{tom.i.pedersen,per.schjolberg}@ntnu.no

Abstract. The fourth industrial revolution is expected to bring substantial changes to how maintenance will be conducted in the upcoming years. In this paper three main areas of technology innovation related to how Industry 4.0 can change maintenance and asset management have been identified: smart maintenance, smart working and smart products. However, implementing new technology is only part of the picture, and several recent papers have pointed to the importance of organizational factors. It is also important to include economic factors when developing strategies for implementing Industry 4.0 in maintenance and asset management. In this paper a model that does just that is proposed, by utilizing a concept called Economic Value Added (EVA) and a more recent offspring in the domain of maintenance and asset management labeled Value Driven Maintenance (VDM) sensitivity analysis. An example of how recent technology developments in combination with economic preconditions can be used as input to a strategy for implementing Industry 4.0 in maintenance and asset management is presented in this article. This can be a valuable input to help practicing managers develop strategies that generate economic value for the company.

Keywords: Value Driven Maintenance (VDM) · Industry 4.0 · Asset management · Smart maintenance

1 Introduction

The fourth industrial revolution is expected to bring substantial changes across industry
sectors and business functions in the upcoming years, and the maintenance function is
no exception [1–5]. Predictive maintenance is often highlighted [5], but change is also
expected related to how maintenance tasks are carried out [6], and to the relationship
between manufacturers and operators of equipment [7].
According to surveys [5] and [8] some companies have already gained good results
from implementing maintenance techniques related to Industry 4.0, while others are
lagging. According to [9] unclear economic benefits are the greatest challenge to
successful implementation of Industry 4.0.
Given the examples of how, for instance smart connected products can change the
competitive environment [7, 10], it can be argued that companies that do not pursue the

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 299–306, 2020.
https://doi.org/10.1007/978-981-15-2341-0_37
potential benefits of Industry 4.0 will be in danger of losing their competitive position
in the future.
According to [11] the focus in the literature has mainly been on the technological aspects of Industry 4.0, and according to [12] many organizations lack the understanding that organizational factors are an important part of Industry 4.0. The importance of
organizational factors is also pointed to in [11] and [13], which both present models for
how this can be considered in order to succeed with a digital transformation.
But one factor that has been covered to only a limited degree in the academic
literature is the economic dimension [2]. This is however an important factor because
digital transformation is not an end, but only a means to generate economic value.
In this paper it is proposed that a concept called sensitivity analysis from a
framework called Value Driven Maintenance (VDM) can be used to get a better
understanding of how the implementation of Industry 4.0 in maintenance and asset
management can help in generating economic value.
In the next section of this paper the VDM sensitivity analysis is presented. In
Sect. 3 a presentation of Industry 4.0 and three main areas of technology innovation
relevant for maintenance and asset management is given together with an overview of
the connection between the technology and economic dimensions. In Sect. 4 a dis-
cussion of how this can form an input to strategies for implementing Industry 4.0 to
maintenance and asset management is given together with an example of how this can
be done. The paper ends with a conclusion in Sect. 5.

2 Value Driven Maintenance and Sensitivity Analysis

According to several authors, the traditional view has been to regard maintenance as
only a cost item and a necessary evil [14–16], but there is a growing understanding that
maintenance should be viewed as a value-adding activity that has an important role in
securing the competitive position of a company [15].
One tool developed with the aim of demonstrating how maintenance and asset
management can deliver economic value is the sensitivity analysis that is part of the
Value Driven Maintenance (VDM) framework developed by Haarman and Delahay
[17, 18].
The VDM sensitivity analysis builds on a framework called Economic Value
Added (EVA) developed by the Stern Stewart Corporation in the 1990s [19]. The basic
premise in this framework is that the paramount goal of any company is to maximize
shareholder value [20]. In the EVA framework a company generates value if operating profit is higher than the opportunity cost of the capital employed [21]. This is calculated by the following equation [20]:

EVA = operating income − operating cost − capital charge  (1)

EVA as a metric is based on accounting data (usually for the past year) and is
therefore backwards looking [21]. To counter this, the concept of value drivers is
defined in the EVA framework as factors that can help create economic value in the
future [20].
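As a minimal sketch of Eq. (1), the EVA calculation can be written as follows; all figures in the example are hypothetical illustrations, not data from the paper:

```python
def economic_value_added(operating_income, operating_cost,
                         capital_employed, cost_of_capital):
    """EVA = operating income - operating cost - capital charge (Eq. 1).

    The capital charge is the capital employed multiplied by the
    opportunity cost of capital (e.g. the company's WACC).
    """
    capital_charge = capital_employed * cost_of_capital
    return operating_income - operating_cost - capital_charge

# Hypothetical figures: income 100, cost 60, capital employed 200
# at a 10% cost of capital, i.e. a capital charge of 20.
eva = economic_value_added(100.0, 60.0, 200.0, 0.10)
print(eva)  # 20.0
```

A positive EVA means the company earned more than the opportunity cost of the capital it tied up; a negative EVA means it destroyed shareholder value even if accounting profit was positive.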
Haarman and Delahay have taken the EVA framework and specified a set of value
drivers related to maintenance and asset management. These are: asset utilization; cost
control; safety, health, environment and quality (SHEQ) control and capital allocation.
Capital allocation is again divided into the value drivers: investments, spare parts inventory and lifetime extension [17].
The VDM sensitivity analysis is done by first calculating the change in cash flow
from one percentage point improvement in year t (ΔCFt) for each of the value drivers.
Then the Incremental Present Value (IPV) from the one percentage point improvement
for each of the value drivers are calculated over the remaining expected lifetime of the
asset by using the discount rate r [17]:
IPV = Σ_t ΔCF_t / (1 + r)^t  (2)

The purpose of this is to better understand the relative importance of the different value drivers, which is important input when developing a maintenance strategy to help maximize the economic potential of an asset [17].
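Equation (2) can be sketched in a few lines; the cash-flow deltas, driver names and discount rate below are hypothetical illustrations, not data from the paper:

```python
def incremental_present_value(delta_cf, r):
    """IPV of a one percentage point improvement (Eq. 2).

    delta_cf: change in cash flow for each remaining year of asset
              life; delta_cf[0] is year t = 1.
    r:        discount rate.
    """
    return sum(cf / (1.0 + r) ** t for t, cf in enumerate(delta_cf, start=1))

# Hypothetical drivers over a 3-year remaining lifetime, r = 8%.
# The availability deltas fall over time, mimicking the end of a
# production plateau.
drivers = {
    "availability": [5.0, 4.0, 2.0],
    "cost control": [1.0, 1.0, 1.0],
}
ipv = {name: incremental_present_value(cf, 0.08) for name, cf in drivers.items()}
ranked = sorted(ipv, key=ipv.get, reverse=True)  # relative importance
```

Ranking the IPV values per driver, as `ranked` does here, is exactly the comparison the VDM sensitivity analysis visualizes in Fig. 1.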
Figure 1 below shows an example of what the result of a VDM sensitivity analysis may look like. In this example publicly available data from an oil platform on the Norwegian continental shelf (NCS) has been used. This is a good case in this context because it demonstrates how the sensitivity analysis can be useful in a context where changing conditions are expected. The first year of the analysis is year 2, while the expected asset lifetime is set to 20 years. According to the P2 estimate (the base estimate for future production [22]), the oil platform will reach the end of the production plateau at the end of year four.
The value on the y-axis for each of the value drivers represents the present value of a one percentage point improvement from the base year throughout the expected lifetime (IPV).

Fig. 1. Illustration of how the relative importance of the IPV for the different value drivers
change as the base year of the VDM sensitivity analysis increases. Inspired by [17].

3 Industry 4.0

Industry 4.0 is about the adoption of recent advancements in information and communication technology in industrial processes and products [23] and is in [12] defined as “real-time, high data volume, multilateral communication and interconnectedness between cyber-physical systems and people”.
Inspired by [6] and [11] a set of front-end and base technologies related to Industry
4.0 in the context of maintenance and asset management have been defined. Front-end
technologies are in this context understood as the technologies that have a direct impact
on maintenance and asset management, while the base technologies are the foundation
that make the front-end technologies possible. The identified front-end technologies
are: smart maintenance, smart working and smart products.
Smart maintenance is about utilizing the potential of deploying more sensors, collecting more data, and analyzing the data with better degradation models, and thereby developing better CBM and RBM so that both unplanned corrective maintenance and unnecessary preventive maintenance are reduced [11]. If successfully implemented this will result in higher availability [8] and possibly lower maintenance costs and longer equipment life [24].
The downside is the investment in sensors and IT infrastructure that is needed in
order to achieve this [25].
Smart working is about using mobile devices, 3D and augmented reality (AR) to
make the active maintenance more effective and efficient [6].
Benefits associated with smart working are reduced maintenance cost and improved
SHEQ [26]. This technology may also have a positive effect on availability by
shortening response time in the case of corrective maintenance [3] and reducing
downtime due to maintenance induced errors [26].
But this is still an immature technology in the manufacturing sector, and solutions are not readily available, so successful implementation will depend on investments in equipment and software systems [11].
Smart products are physical products that are integrated with components that make them smart (sensors, microprocessors and software), and that are also connected so that data can be exchanged with the user, the manufacturer and other products [10].
The focus in this paper will be on how smart products open up new possibilities for aligning manufacturers, operators and service providers in ways that can unlock synergies that deliver value to all three parties [27]. One example of this is servitization, where manufacturers sell the outcome from a product on a pay-as-you-go basis and not the product itself [10].
Servitization of physical products is not something new; for example, Xerox introduced it in the photocopying industry around 1960 [28], and Rolls Royce developed its “power-by-the-hour” model for jet engines in the 1980s [29]. But the recent development in prognostic health management (PHM) gives manufacturers more control over the risk associated with selling availability instead of equipment, and will probably make servitization more widespread [30].
For the operator, changing to a pay-as-you-go model instead of investing in assets will reduce the amount of capital employed while increasing costs. In a traditional product-based model, manufacturers often get a large part of their revenue from selling spare parts and after-market services and therefore have little incentive to make a more reliable product [10]. This changes with a products-as-a-service model, which may lead to higher availability [8, 31].
In the table below the technology and economic dimensions have been combined
based on the references cited above.

Table 1. The link between technology and economic value potential. + and − indicate whether the technology is expected to have a positive or negative impact on the corresponding value driver. Weaker effects are marked with parentheses. Inspired by [1, 6, 11].

Technology dimension
  Front-end tech.      Smart maintenance         Smart working               Smart products
  Base technologies    Sensors, big data, IIoT   Mobile solutions, 3D & AR   Same as smart maintenance

Economic dimension
  Income
    Availability          +      (+)    (+)
  Cost
    Cost control          (+)    +      −
    SHEQ control          +      +      0
  Capital charge
    Investments           (−)    (−)    +
    Spare parts           0      0      +
    Lifetime extension    (+)    0      +

4 Discussion

Combining the VDM sensitivity analysis presented in Fig. 1 with the overview in Table 1 can give important input to a strategy for implementing Industry 4.0 in maintenance and asset management. This is illustrated in Fig. 2 below.
Fig. 2. Example of how the focus of maintenance strategy should change with the relative
importance of the different value drivers, inspired by [17].

As one can see from Fig. 2 the value driver related to availability has the largest
IPV (present value from one percent point improvement) in the first three years of the
analysis. As stressed in [17] this does not say anything about the feasibility of this
improvement, or the level of investments needed to achieve this. Consequently, the
input from Sects. 2 and 3 does not say how big the economic value of implementing, for instance, smart maintenance will be. This will depend on the current performance level of the company [17], the potential of the technical solutions for smart maintenance that are chosen [8] and the organization's ability to implement and make use of the new technology [13].
But Fig. 2 offers a starting point for where the company in question should begin to explore the potential economic value of implementing Industry 4.0 in maintenance and asset management. In addition, it illustrates how the relative importance of the value drivers is expected to change in the future. In the case of smart maintenance, it shows that the expected fall in production level for the example company indicates that emphasis should be on mature solutions that can be implemented quickly.

5 Conclusion

The upcoming fourth industrial revolution is, as stated in the introduction, expected to
bring substantial changes to maintenance and asset management and companies that do
not adapt will be in danger of losing their competitive position.
Companies do, however, struggle to identify the economic benefits of implementing Industry 4.0 in maintenance and asset management [8]. Some of this is related to companies regarding maintenance only as a cost center and therefore failing to properly account for how the new technological innovations have the potential to generate economic value for the company as a whole [25].
In this paper an example is given of how the VDM sensitivity analysis, combined with an understanding of three areas of technology innovation related to Industry 4.0, can give valuable input on where companies should focus their search for economic value when implementing Industry 4.0 in maintenance and asset management.
This can, combined with an understanding of the relevant organizational factors, be
important input for practicing managers in the maintenance and asset management
domain when making strategies to prepare for the fourth industrial revolution.

References
1. Kagermann, H., Helbig, J., Hellinger, A., Wahlster, W.: Recommendations for implementing
the strategic initiative INDUSTRIE 4.0: securing the future of German manufacturing
industry; final report of the Industrie 4.0 Working Group. Forschungsunion (2013)
2. Roda, I., Macchi, M., Fumagalli, L.: The future of maintenance within industry 4.0: an
empirical research in manufacturing. In: Moon, I.L., Gyu, M., Park, J., Kiritsis, D., von
Cieminski, G. (eds.), pp. 39–46. Springer International Publishing, Cham (2018)
3. Fordal, J.M., Rødseth, H., Schjølberg, P.: Initiating industrie 4.0 by implementing sensor
management–improving operational availability. In: Wang, K., Wang, Y., Strandhagen, J.O.,
Yu, T. (eds.) International Workshop of Advanced Manufacturing and Automation, pp. 200–
207. Springer, Changzhou (2019)
4. Rødseth, H., Schjølberg, P., Marhaug, A.: Deep digital maintenance. Adv. Manuf. 5, 299–
310 (2017)
5. Staufen, A.G.: German Industry 4.0 Index 2018. Staufen.AG (2018)
6. Frank, A.G., Dalenogare, L.S., Ayala, N.F.: Industry 4.0 technologies: implementation
patterns in manufacturing companies. Int. J. Prod. Econ. 210, 15–26 (2019)
7. Porter, M., Heppelmann, J.: How smart, connected products are transforming companies.
Harvard Bus. Rev. 93, 97–114 (2015)
8. Haarman, M., de Klerk, P., Decaigny, P., Mulders, M., Vassiliadis, C., Sijtsema, H., Gallo,
I.: Predictive maintenance 4.0 - beyond the hype: PdM 4.0 delivers results. PricewaterhouseCoopers and Mainnovation (2018)
9. Geissbauer, R., Schrauf, S., Koch, V., Kuge, S.: Industry 4.0 – Opportunities and Challenges
of the Industrial Internet. PricewaterhouseCoopers (2014)
10. Porter, M., Heppelmann, J.: How smart, connected products are transforming competition.
Harvard Bus. Rev. 92, 64–88 (2014)
11. Akkermans, H., Besselink, L., van Dongen, L., Schouten, R.: Smart moves for smart
maintenance (2016)
12. Schuh, G., Anderl, R., Gausemeier, J., Hompel, M.T., Wahlster, W. (eds.): Industrie 4.0
Maturity Index Managing the Digital Transformation of Companies (Acatech Study).
Herbert Utz Verlag, Munich (2017)
13. Kane, G.C., Palmer, D., Phillips, A.N., Kiron, D., Buckley, N.: Aligning the organization for
its digital future. MIT Sloan Manag. Rev. (2016). Deloitte University Press
14. Kumar, U., Galar, D., Parida, A., Stenström, C., Berges, L.: Maintenance performance
metrics: a state-of-the-art review. J. Qual. Maint. Eng. 19, 233–277 (2013)
15. Simões, J.M., Gomes, C.F., Yasin, M.M.: Changing role of maintenance in business
organisations: measurement versus strategic orientation. Int. J. Prod. Res. 54, 3329–3346
(2016)
16. Smit, K.: Maintenance Engineering and Management. Delft Academic Press, Delft (2014)
17. Haarman, M., Delahay, G.: VDM(xl) Value Driven Maintenance & Asset Management.
Mainnovation (2016)
18. Haarman, M., Delahay, G.: Value Driven Maintenance & Asset Management. Managing
Aging Plants (2018)
19. Otley, D.: Performance management: a framework for management control systems
research. Manag. Account. Res. 10, 363–382 (1999)
20. Young, S.D., O’Byrne, S.F.: EVA and Value-Based Management: A Practical Guide to
Implementation. McGraw-Hill, New York (2001)
21. Zimmerman, J.L.: Accounting for Decision Making and Control. McGraw-Hill/Irwin,
Boston (2011)
22. Etherington, J., Pollen, T., Zuccolo, L.: Comparison of Selected Reserves and Resource
Classifications and Associated Definitions. Society of Petroleum Engineers (2005)
23. Diez-Olivan, A., Del Ser, J., Galar, D., Sierra, B.: Data fusion and machine learning for
industrial prognosis: trends and perspectives towards Industry 4.0. Inf. Fusion 50, 92–111
(2019)
24. Tao, F., Qi, Q., Liu, A., Kusiak, A.: Data-driven smart manufacturing. J. Manuf. Syst. 48,
157–169 (2018)
25. Vogl, G.W., Weiss, B.A., Helu, M.: A review of diagnostic and prognostic capabilities and
best practices for manufacturing. J. Intell. Manuf. 30, 79–95 (2019)
26. Elia, V., Gnoni, M.G., Lanzilotto, A.: Evaluating the application of augmented reality
devices in manufacturing from a process point of view: an AHP based model. Expert Syst.
Appl. 63, 187–197 (2016)
27. DIN/DKE: GERMAN STANDARDIZATION ROADMAP Industrie 4.0 Version 3. (2018)
28. Visintin, F.: Photocopier industry: at the forefront of servitization. In: Lay, G. (ed.)
Servitization in Industry, pp. 23–43. Springer, Cham (2014)
29. Grubic, T., Jennions, I.: Do outcome-based contracts exist? The investigation of power-by-
the-hour and similar result-oriented cases. Int. J. Prod. Econ. 206, 209–219 (2018)
30. Grubic, T., Jennions, I., Baines, T.: The interaction of PSS and PHM - a mutual benefit case
(2009)
31. Visnjic, I., Jovanovic, M., Neely, A., Engwall, M.: What brings the value to outcome-based
contract providers? Value drivers in outcome business models. Int. J. Prod. Econ. 192, 169–
181 (2017)
Manufacturing System
Common Faults Analysis and Detection System
Design of Elevator Tractor

Yan Dou1,2, Wenmeng Li3, Yang Ge1,2, and Lanzhong Guo1,2(&)


1
School of Mechanical Engineering, Changshu Institute of Technology,
Changshu, People’s Republic of China
{jixiedouyan,geyang,guolz}@cslg.edu.cn
2
Jiangsu Elevator Intelligent Safety Key Construction Laboratory,
Changshu, People’s Republic of China
3
Zhejiang Academy of Special Equipment Science,
Hangzhou, People’s Republic of China
[email protected]

Abstract. As the number of elevators increases, the maintenance of some parts of the elevator tractor is not in place, which leads to elevator accidents, so ensuring the safety of the elevator tractor is an urgent task. During elevator operation, frequent use intensifies friction and impact between mechanical components, producing unnecessary loads that lead to vibration. This reduces the stability and safety of the elevator tractor, shortens its service life and creates safety risks. Faced with serious elevator safety problems, it is necessary to design a detection system for the common mechanical faults of the
elevator tractor. This system can display the waveform of the tractor vibration signal in real time to monitor elevator operating status data. Staff can judge the current safety situation according to the elevator parameters stipulated by the relevant regulations, and the system can provide inspection and maintenance personnel with useful information support.

Keywords: Tractor · FFT · Detection system · Fault analysis

1 Elevator Tractor

The elevator tractor, also known as the main engine of the elevator, provides the power that moves the elevator car relative to the counterweight. Elevator tractors can be divided into AC-motor and DC-motor types according to the driving motor; at present, AC motors are the most common. They can also be divided into gearless and geared tractors according to whether a reducer is present. A geared tractor transmits the motor power to the traction wheel through a reducer, while a gearless tractor drives the traction wheel directly.

2 Analysis of Common Faults of Elevator Tractor

According to the nature of the fault, the common fault phenomena and causes of the elevator tractor can be divided into three categories:

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 309–316, 2020.
https://doi.org/10.1007/978-981-15-2341-0_38
Some parts are over-worn or aged. Incorrect use, poor management or inadequate routine maintenance means there is no pre-detection, so unreliable, defective or abnormal parts are not replaced or repaired, and the defects ultimately expand further or even cause damage [1].
During the use of the elevator, fasteners relax or even loosen naturally due to normal extension or vibration. If, during daily maintenance, the equipment is not checked and maintained according to the relevant standards and plans, the natural extension of some parts is not adjusted and controlled in time, and the tightness and correct position of the moving parts are not guaranteed. This results in an abnormal meshing state between components and ultimately leads to damage to the elevator.
Poor or insufficient lubrication of mechanical parts causes abnormal operation of rotating parts, resulting in heat generation and wear until the parts seize, leading to abnormal working failure of the moving parts.

3 Vibration Analysis of Elevator Tractor

There are two main reasons for abnormal vibration of the tractor:

3.1 Electromagnetic Vibration


When electromagnetic vibration is caused by eccentricity of the gap between the stator and rotor of a synchronous motor, the problem can be solved by adjusting the clearance between the stator and the rotor. The electromagnetic vibration of a synchronous motor can therefore be resolved by adjusting the motor clearance to the correct position with a feeler gauge. In asynchronous motors, wear of the rotor support bearings results in an uneven electromagnetic gap between stator and rotor, which can cause bore sweeping and stop the tractor from working.

3.2 Mechanical Vibration


There are mainly three kinds of mechanical vibration in the tractor: axial vibration of the end cover caused by violent vibration of the bearing; mechanical vibration caused by the movement of the bearing itself; and mechanical vibration caused by unstable mechanical movement of the rotor.
Rolling bearings are the parts of the tractor most prone to vibration. If the bearings are not properly installed, the tractor will overheat and vibrate. The clearance between the shaft and the bearing needs to be made as large as possible, and the assembly kept firm, to reduce the possibility of vibration.

4 Selection of Vibration Detection Points for Elevator Tractor

Collecting data is the primary task of the traction machine detection system. The most accurate vibration signal can be collected by selecting a suitable vibration detection point, which is generally located at a rigid support point. In this design, based on the vibration characteristics of the traction machine, its base is selected as the detection point (Fig. 1).

Fig. 1. Detection point of elevator tractor detection system.

5 Design of Detection System for Elevator Tractor

5.1 Overall Design of Elevator Detection System


Owing to the structural characteristics of a traction-type elevator, acquiring the vibration signal of the traction machine with a piezoelectric acceleration sensor does not interfere with the elevator control circuit or with normal operation. The detection device acquires, stores and displays the vibration signals of the elevator, and after logging in to the detection system the operator can see the vibration waveform, the characteristic values of the traction machine and its safety status [2].
The elevator detection and measurement system is composed of an acceleration sensor and an acquisition card. The vibration state of the traction machine is detected in the main machine room. The detected analog vibration signal is converted into a digital signal by the A/D converter of the data acquisition card and then transmitted to the computer through the USB interface. After logging in to the detection system software, the functions of signal display, filtering, analysis, storage and fault alarm can be accessed from the main detection interface.

5.2 Design of Detection Software


The software of the elevator traction machine detection system mainly consists of a login interface and a main signal detection interface. The main interface comprises real-time acquisition and display, filtering information, frequency domain analysis, time domain analysis and fault display.
The detection software shall realize the following functions: a login interface, a main traction machine detection interface, parameter setting, signal saving and a fault testing module. The detected data and waveforms can be saved, displayed and analyzed. The software gives an automatic alarm on a traction machine fault, enabling the operator to quickly determine whether the traction machine is faulty.

5.3 Main Interface for Traction Machine Detection


The main interface for vibration signal detection of the elevator traction machine is composed of five sub-interfaces: the real-time acquisition interface, the filter information interface, the frequency domain analysis interface, the time domain analysis interface and the fault display interface (Fig. 2).

Fig. 2. Program block diagram of main interface of elevator traction machine detection system.

The real-time acquisition interface receives the original vibration signal of the elevator traction machine detected by the system hardware. It mainly displays the original vibration signal waveform and parameters such as differential/single-ended mode, number of samples, magnification, channel entrance and sampling frequency; clicking “start storage” saves the data to a file on the computer. The channel entrance setting must be consistent with the channel entrance of the acquisition card (Fig. 3).

Fig. 3. The real-time acquisition interface.

Noise in the elevator operating environment is severe; it contaminates the vibration signal of the elevator traction machine and leads to signal distortion. FIR filtering is selected to eliminate the noise: it lets the desired frequency components of the vibration signal pass through while attenuating the unwanted frequency components.
The vibration signal of the elevator traction machine is a low-frequency signal, so an FIR low-pass filter is selected. The amplitude-frequency characteristic curve of the FIR low-pass filter is shown in Fig. 4 below. The frequency f2 is the upper cut-off frequency of the FIR low-pass filter: signal components below f2 pass, while those above f2 are blocked [3].

Fig. 4. Amplitude-frequency characteristic curve of FIR low-pass filter


Table 1. Name plate of the elevator tractor in the elevator laboratory.

No.  Parameter                Value
1    Brake voltage            DC 110 V
2    Rated power              5 kW
3    Rated speed              95.5 r/min
4    Rated loading capacity   800 kg
5    Rated torque             502 N·m
6    Rated voltage            1340 V
7    Rated current            11 A
8    Rated frequency          25.5 Hz

As shown in Table 1, the rated frequency of the elevator tractor is 25.5 Hz, so the pass band of the FIR filter is set between 0 and 25 Hz. As shown in the filter information interface in Fig. 5 below, the pass band is 0 to 25 Hz, the filter type is low-pass, and the collected waveform is denoised.

Fig. 5. The filter information interface.
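As a rough sketch of the 0–25 Hz low-pass filtering step, a windowed-sinc FIR filter can be designed and applied in pure Python; the tap count, Hamming window and sampling rate below are assumptions for illustration (a production system would use a DSP library):

```python
import math

def lowpass_fir_taps(cutoff_hz, fs_hz, num_taps=101):
    """Windowed-sinc FIR low-pass filter coefficients (Hamming window)."""
    fc = cutoff_hz / fs_hz          # normalized cutoff, cycles per sample
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        k = n - m / 2.0
        # Ideal low-pass impulse response; handle the center sample k == 0.
        h = 2.0 * fc if k == 0 else math.sin(2.0 * math.pi * fc * k) / (math.pi * k)
        w = 0.54 - 0.46 * math.cos(2.0 * math.pi * n / m)  # Hamming window
        taps.append(h * w)
    gain = sum(taps)                # normalize for unity gain at DC
    return [t / gain for t in taps]

def fir_filter(x, taps):
    """Direct-form FIR filtering; output has the same length as the input."""
    return [sum(taps[j] * x[i - j] for j in range(len(taps)) if i - j >= 0)
            for i in range(len(x))]

# Example (assumed values): 1 kHz sampling, 0-25 Hz pass band as in Fig. 5.
# taps = lowpass_fir_taps(25.0, 1000.0)
# clean = fir_filter(raw_vibration_samples, taps)
```

With these assumed values, a 10 Hz vibration component passes essentially unchanged while a 200 Hz noise component is strongly attenuated, which is the denoising behavior described for the filter information interface.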

In the design of the elevator traction detection system, the filtered vibration signal of the elevator traction machine is an irregular waveform, which can be represented as the superposition (or difference) of sinusoidal waveforms of different frequencies; the characteristic values of the vibration signal can then be obtained by combining the eigenvalues of these sinusoidal components.
The frequency domain function (the image function of the Fourier transform) is as follows (Fig. 6):

F(ω) = F[f(t)] = ∫_{−∞}^{+∞} f(t) e^{−iωt} dt  (1)

Fig. 6. The frequency domain analysis interface.

The time domain function (the original function of the Fourier transform) is as follows:

f(t) = F⁻¹[F(ω)] = (1/2π) ∫_{−∞}^{+∞} F(ω) e^{iωt} dω  (2)

The frequency domain analysis interface can display the phase spectrum as well as the maximum and minimum values and their positions. After the signal is filtered, the size and position of the maximum and minimum values are obtained by processing the image function with the fast Fourier transform (Figs. 7 and 8).
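The frequency-domain step can be sketched as follows; a naive DFT is used here only for self-containment (a real system would use an FFT routine), and the sampling rate and test tone are assumed for illustration:

```python
import cmath
import math

def dft_magnitudes(x):
    """Single-sided magnitude spectrum via a naive DFT (O(n^2))."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) * 2.0 / n
            for k in range(n // 2)]

def spectrum_extremes(x, fs):
    """Return (dominant frequency in Hz, its magnitude, its bin position)."""
    mags = dft_magnitudes(x)
    k = max(range(1, len(mags)), key=lambda i: mags[i])  # skip the DC bin
    return k * fs / len(x), mags[k], k
```

For example, a unit-amplitude 25 Hz sinusoid sampled at an assumed 256 Hz for 256 samples produces a dominant spectral line at bin 25, i.e. 25 Hz with magnitude 1, which is the kind of maximum value and position the interface displays.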

Fig. 7. The time domain analysis interface.


Fig. 8. The fault display interface.

Vibration fault or safety can be judged by comparing the maximum value, minimum value and peak-to-peak value from the frequency domain analysis, and the peak-to-peak value from the time domain analysis, with the corresponding alarm values. The alarm values are set on the basis of the three characteristic values measured many times on safe elevators. As shown in the figure, the maximum value of 3.07578 is less than the alarm maximum of 3.5, the minimum value of −3.14143 is greater than the alarm minimum of −3.5, and the peak-to-peak value of 0.488289 is less than the alarm peak-to-peak value of 0.5, so the elevator traction machine is judged safe, consistent with the safety signal lamp in the figure.
If instead the maximum value is 4 (greater than the alarm maximum of 3.5), the minimum value is −4.14146 (less than the alarm minimum of −3.5) and the peak-to-peak value is 0.51 (greater than 0.5), the vibration fault lamp flashes and a traction machine fault is judged.
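The alarm comparison described above can be sketched directly; the alarm limits (3.5, −3.5 and 0.5) and the measured values are the ones quoted in the text:

```python
def judge_tractor(maximum, minimum, peak_to_peak,
                  alarm_max=3.5, alarm_min=-3.5, alarm_pp=0.5):
    """Return 'safe' if all characteristic values lie within the alarm
    limits derived from repeated measurements on safe elevators,
    otherwise 'fault'."""
    ok = (maximum < alarm_max
          and minimum > alarm_min
          and peak_to_peak < alarm_pp)
    return "safe" if ok else "fault"

# The two cases from the text:
print(judge_tractor(3.07578, -3.14143, 0.488289))  # safe
print(judge_tractor(4.0, -4.14146, 0.51))          # fault
```

In the detection software the "fault" outcome would drive the flashing vibration fault lamp, while "safe" lights the safety signal lamp.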
By storing the signal waveforms and data, the system provides experience and reference material for subsequent elevator traction machine detection operators.

Balanced Maintenance Program
with a Value Chain Perspective

Jon Martin Fordal¹(✉), Thor Inge Bernhardsen², Harald Rødseth¹, and Per Schjølberg¹

¹ Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology (NTNU), 7491 Trondheim, Norway
{jon.m.fordal,harald.rodseth,per.schjolberg}@ntnu.no
² Elkem ASA, 7030 Trondheim, Norway
[email protected]

Abstract. Maintenance has traditionally been seen as a costly and unwanted evil. Nowadays, the view of maintenance has changed, as new maintenance technologies and Smart Maintenance have proven to give competitive advantages for industrialists. Additionally, increased global competition and the need to improve manufacturing performance have brought more focus on a company's maintenance function. Another field of interest is how modern value chains can support the maintenance function in a company. Research on value chains and on maintenance has mainly been carried out independently, and research on how the two fields can be combined to strengthen each other is still in its early stages. This is especially true for improving maintenance programs, which have traditionally been static, inflexible, and inefficient. The aim of this paper is therefore to investigate how maintenance programs can benefit from including a value chain perspective and, additionally, to present a concept for balanced maintenance programs in asset-intensive industrial plants.

Keywords: Maintenance program · Value chain · Smart Maintenance

1 Introduction

The view on maintenance has changed: it is now recognized as an opportunity for gaining a competitive edge by predicting failures and staying one step ahead of them. Traditionally, maintenance has been seen as a costly and unwanted evil. However, improvements in maintenance can be of significant value, as maintenance costs are one of the biggest contributors to total operating costs for all manufacturing and production plants [1]; specifically, 15% to 40% of costs are due to maintenance [2]. This percentage is expected to increase with the fourth industrial revolution, as more automation and new technology increase complexity [2]. To face these challenges, the strategic development of Industry 4.0 is considered an important measure, offering an industrial internet coined as Cyber-Physical Systems (CPS) [3]. Enabled by technologies such as the Internet of Things (IoT) and Big Data, it will be possible to create networks that incorporate the entire manufacturing process into a
© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 317–324, 2020.
https://doi.org/10.1007/978-981-15-2341-0_39

smart environment. To ensure a successful implementation of Industry 4.0, it is important to follow the vision for Smart Maintenance, where maintenance data is shared between the manufacturer, industrial service providers, and the operator [4]. Other important elements for delivering the Industry 4.0 goals are digital end-to-end engineering across the entire value chain and the development of inter-company value chains and networks through horizontal integration [3]. Several technologies for Smart Manufacturing have also been identified in Industry 4.0 [5]. One of these is Artificial Intelligence (AI) for predictive maintenance. Figure 1 positions this technology in a maturity index. This model evaluates the succession of maturity stages in Industry 4.0 [6], where predictive maintenance can be positioned in Stages 5 and 6 [7]. In these stages, both prediction of the future technical condition and decision support in maintenance planning are performed.

Fig. 1. Six stages in the development towards Smart Maintenance, adopted from [6, 7].

Although predictive maintenance has been demonstrated with remaining useful life (RUL) predictions supporting maintenance planning activities [7], further development of predictive maintenance is expected. In fact, predictive maintenance seems to fall short of delivering what it promises [8]. A possible explanation is that the maintenance system cannot rule out all operating errors. To cope with this challenge, it is of interest to explore other modules for predictive maintenance. In addition to the RUL module, it is possible to apply anomaly detection as a module for predictive maintenance [9], along with the use of AI in maintenance programs to make them adaptable, which is one focus area of this article. Figure 1 can also describe the maturity levels for maintenance programs, where Stage 1 concerns the integration of maintenance programs into computerized maintenance management systems (CMMS) and, at the other end, Stage 6 presents the use of AI in maintenance programs for enabling adaptability.
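As a toy illustration of what an anomaly-detection module could look like, the sketch below flags a "high" or "low" anomaly level with a simple z-score rule. This is a minimal stand-in written for this text, not the cloud-based framework of [9] or any specific ML model; all names and the majority-vote rule are assumptions.

```python
from statistics import mean, stdev

def anomaly_level(baseline, window, threshold=3.0):
    """Compare the most recent readings (`window`) with baseline data and
    return "high" when most of them deviate from the baseline mean by more
    than `threshold` standard deviations, otherwise "low".
    A minimal z-score sketch, not a production anomaly detector."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return "low"
    outliers = [x for x in window if abs(x - mu) / sigma > threshold]
    # Call the level "high" when most of the recent window is out of band.
    return "high" if len(outliers) > len(window) // 2 else "low"

baseline = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.02, 0.98]
level_ok = anomaly_level(baseline, [1.0, 1.03, 0.97])    # "low"
level_bad = anomaly_level(baseline, [2.5, 2.6, 2.4])     # "high"
```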

A statement of what a maintenance program should include is given in NORSOK Z-008:2017 [10]: "The maintenance program includes maintenance interval and written procedures for maintaining, testing, and preparing the various components within the plant as well as minimum qualification of personnel." The same standard also states [10]: "The purpose of a maintenance program is to control or mitigate risks associated with degradation of failure." Maintenance programs have traditionally been static and inflexible [11–14]. An inefficient maintenance program can significantly restrict a maintenance operation from meeting its objective of serving the organization's needs, and cause reduced availability, product quality, and fulfillment of safety requirements, as well as higher maintenance costs [15, 16]. In summary, today's maintenance programs face several problems [11–14]:
• The maintenance program provided by the machine supplier may not be adjusted to the customer's usage of the machine
• Once implemented, the maintenance program is rarely updated, even though the use of the machine changes
• The maintenance program is poorly aligned with the actual maintenance need, because it is based on simple parameters such as calendar time or hours in operation
• Updating and evaluating maintenance programs requires a great amount of manual labor
To the best of the authors' knowledge, there is a lack of literature and research on how to address these problems, although some research on Machine Learning (ML), an application of AI, combined with maintenance programs has been performed. For example, in [17] the authors presented a multiple-classifier ML methodology for predictive maintenance (PdM) of integral-type faults in semiconductor manufacturing, which allows the user to dynamically change the maintenance policy based on current costs and production needs. Research on flexible maintenance is conducted in [11], where an ML model combined with constraint programming and route optimization is presented. The overall goal was to maximize the ability of heavy trucks to perform transport assignments, with respect to maintenance.
In terms of value chain and maintenance, a lot of research has been carried out, but mainly independently. Research on how these two fields can be combined and utilized to strengthen each other is still in its early stages. Further, the original value chain concept presented by Michael Porter does not include maintenance [19]; nor is maintenance included in more recent value chain frameworks [20, 21]. The need for research in this area is underpinned by [18], which claims there is a lack of developed literature within value chain management. The aim of this paper is therefore to investigate how maintenance can benefit from including a value chain perspective and, additionally, to present a concept for how ML can be used for balancing maintenance programs in asset-intensive industrial plants.
The structure of this article is as follows: Sect. 2 presents a value chain perspective for Smart Maintenance, Sect. 3 presents the concept for a balanced maintenance program, and Sect. 4 concludes the paper.

2 A Value Chain Perspective for Smart Maintenance

A definition of value chain is given in [20]: "The value chain is a tool to disaggregate a business into strategically relevant activities. This enables identification of the source of competitive advantage by performing these activities more cheaply or better than its competitors. Its value chain is part of a larger stream of activities carried out by other members of the channel: suppliers, distributors and customers." Until recently, maintenance has been seen as a cost center, but findings from [22, 23] show that maintenance is a profit-generating function. Further, maintenance is said to have a significant impact on capacity, quality, costs, environment, and safety [22], and the introduction of Smart Maintenance is expected to increase this impact [4]. The authors of [22–24] also suggest further investigating the relationship between maintenance and overall organizational performance to provide a more holistic view of maintenance performance benefits. In summary, the definition of value chain and the findings proving maintenance to be value-adding support the potential of seeing maintenance as an activity in the value chain.
Based on experiences from a process company, the internal and external value chain effects of six maintenance program objectives have been identified. This underpins the relationship between maintenance and the value chain, and, in terms of Smart Maintenance, adaptability is one desirable objective of a maintenance program. Figure 2 shows the external and internal value chain effects of six maintenance program objectives: cost, dependability, health, safety and environment (HSE), adaptability, quality, and effectiveness.

Fig. 2. Internal and external value chain effects of maintenance program objectives.

The maintenance program objective cost focuses on having a cost-efficient maintenance program and is affected by all of the other objectives; improving the other objectives improves value chain productivity, which enables lower product prices for the customers and/or a higher margin for the company. Second, a maintenance program should enable dependability. This provides reliable processes, which the customer experiences as dependable deliveries. Third, a maintenance program is an important element in ensuring compliance with HSE regulations. Internally, a focus on HSE supports a safe work environment, while externally the company's image as an attractive workplace and business partner is strengthened. Fourth, adaptability, a part of Smart Maintenance, facilitates properly dimensioned maintenance and high availability in a maintenance program, which benefits the customer through increased flexibility in product, volume, and delivery. Fifth, quality in maintenance execution supports zero-defect manufacturing internally and on-specification products and services for the customer. Lastly, effectiveness in maintenance execution increases availability, which internally means increased throughput; shorter delivery time is the external value chain effect.

3 Balanced Maintenance Program

Experience from a process company shows that most CMMS today do not allow maintenance programs to be individualized and balanced to the degree that they could be. Currently, data from, for example, condition monitoring, vibration, temperature, speed, level, flow, and pressure measurements can be integrated into the maintenance program. In case of deviations in these data, several maintenance programs can generate an error report that maintenance personnel follow up with a work order. This can be seen as corrective maintenance. However, there is a need and desire to move to predictive maintenance, as performing maintenance before an error occurs improves plant cost-effectiveness and increases availability.
The proposed concept addresses the abovementioned challenges by utilizing ML to create a balanced maintenance program. The concept provides decision support for how the maintenance program should be adjusted, based on the parameters "machine anomalies" and "machine load." Combining these parameters enables adaptability in the maintenance program, meaning a more flexible, dimensioned, and predictive maintenance is possible. In terms of Fig. 1, the concept is positioned in Stage 6: Adaptability. Figure 3 shows the concept for balancing a maintenance program with ML, based on machine load and machine anomalies.

Fig. 3. Concept for balanced maintenance program.

The concept takes the existing maintenance program as its basis; the original maintenance program remains the reference for normal conditions. The machine anomalies parameter takes the current and expected technical condition into account and gives a high or low indication. A high level of machine anomalies indicates a need for maintenance, while a low level indicates the opposite. To calculate this parameter, data from the machine, along with sensor data, historical data, and work orders, are gathered in a data lake; ML is then used to process these data and give a high or low indication. The machine load parameter considers the current and expected machine load. A high machine load indicates that the machine is either running over its design capacity or that something in the process increases the load, for example raw materials out of specification. A low machine load indicates that the machine is running under its design capacity. Data from production planning and process control are the main sources for calculating the machine load level.
Four different actions are possible. First, "reduced maintenance" can be beneficial if machine anomalies and machine load are both low, meaning the need for maintenance is lower than under normal conditions. Second, "maintenance call-out" is proposed for low machine load combined with a high level of machine anomalies: a machine running under its design capacity that still shows numerous anomalies warrants an extra inspection to find the root cause. Third, "increased maintenance" is proposed if both machine anomalies and machine load are high, indicating that the need for maintenance is higher than under normal conditions. Fourth, "control process" is suggested when machine load is high and the machine anomalies level is low. This control can check whether the high machine load is due to over-production or whether other elements in the production are causing the extra load.
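The four actions form a simple two-by-two decision table over the high/low indications. A direct transcription in Python (the string labels are assumptions about how the concept might be encoded; the paper describes the logic only in prose):

```python
def suggest_action(anomalies, load):
    """Map the two high/low indications to one of the four proposed
    actions from the balanced maintenance program concept."""
    table = {
        ("low", "low"): "reduced maintenance",      # less need than normal
        ("high", "low"): "maintenance call-out",    # anomalies despite low load
        ("high", "high"): "increased maintenance",  # more need than normal
        ("low", "high"): "control process",         # check source of the load
    }
    return table[(anomalies, load)]
```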
The four actions can be linked to Fig. 2, and candidate measures may be evaluated against how they affect the value chain internally and externally. For example, if the machine load is high and the suggested action is to control the process, an evaluation of dependability must be done: the high machine load is weighed against the risk of reduced process reliability and of a less dependable delivery. This gives a more holistic view of how maintenance affects the value chain.

4 Conclusion

The aim of this article was to investigate how maintenance programs can benefit from including a value chain perspective, and to present a concept for balanced maintenance programs in asset-intensive industrial plants.
Predictive maintenance and adaptability in maintenance programs are discussed, and their linkage to the six development stages towards Smart Maintenance is presented. Current challenges with maintenance programs are that they are static, inflexible, and inefficient. The concept presented in this article addresses these challenges by utilizing ML to create a balanced maintenance program; it provides decision support for how the maintenance program should be adjusted, based on the parameters machine anomalies and machine load. The value chain perspective for Smart Maintenance is also discussed, and six maintenance program objectives, based on experiences from a process company, are presented together with their effects on the internal and external value chain. This gives a more holistic view of the maintenance function and challenges the original value chain concept presented by Michael Porter. Figure 4 provides an overview of the challenges, work, and results presented in this article.

Fig. 4. Overview of challenges, work, and results presented in this article.

Overall, it is concluded that the concept proposed in this article should be developed further. Research on how a value chain perspective can support and improve the maintenance function in a company is another suggestion for future work.

References
1. Fordal, J.M., Rødseth, H., Schjølberg, P.: Initiating industrie 4.0 by implementing sensor
management – improving operational availability. In: International Workshop of Advanced
Manufacturing 2018, Advanced Manufacturing and Automation VIII. Lecture Notes in
Electrical Engineering, vol. 484, pp. 200–207. Springer, Changzhou (2019)
2. Chan, F.T.S., Lau, H.C.W., Ip, R.W.L., Chan, H.K., Kong, S.: Implementation of total
productive maintenance: a case study. Int. J. Prod. Econ. 95, 71–94 (2005)
3. Kagermann, H., Wahlster, W., Helbig, J.: Recommendations for implementing the strategic
initiative INDUSTRIE 4.0 (2013)
4. DIN: German Standardization Roadmap - Industry 4.0 (2018)
5. Frank, A.G., Dalenogare, L.S., Ayala, N.F.: Industry 4.0 technologies: implementation
patterns in manufacturing companies. Int. J. Prod. Econ. 210, 15–26 (2019)

6. Schuh, G., Anderl, R., Gausemeier, J., ten Hompel, M., Wahlster, W.: Industrie 4.0 maturity index. Managing the Digital Transformation of Companies (acatech STUDY) (2017)
7. Rødseth, H., Schjølberg, P., Marhaug, A.: Deep digital maintenance. Adv. Manuf. 5, 299–
310 (2017)
8. Staufen: Industry 4.0 - German Industry 4.0 Index 2018: A study from Staufen AG and
Staufen Digital Neonex GmbH (2018)
9. Nur Adi, T., Wahid, N., Sutrisnowati, R., Choi, Y., Bae, H., Seo, C.S., Jeong, S.H., Seo,
T.Y.: Cloud-based predictive maintenance framework for sensor data analytics (2018)
10. Standards Norway: Risk based maintenance and consequence classification, NORSOK Z-008, p. 14. Standards Norway (2017)
11. Biteus, J., Lindgren, T.: Planning flexible maintenance for heavy trucks using machine
learning models, constraint programming, and route optimization. SAE Int. J. Mater. Manuf.
10, 306–315 (2017)
12. Lindgren, T., Biteus, J.: Expert guided adaptive maintenance. In: European Conference of
the Prognostics and Health Management Society, Nantes, France, 8th–10th July (2014)
13. Lindgren, T., Warnquist, H., Eineborg, M.: Improving the maintenance planning of heavy
trucks using constraint programming. In: ModRef 2013: The Twelfth International
Workshop on Constraint Modelling and Reformulation, Uppsala, Sweden, 16th September
2013, pp. 74–90. Université Laval (2013)
14. Prytz, R., Nowaczyk, S., Rögnvaldsson, T., Byttner, S.: Predicting the need for vehicle
compressor repairs using maintenance records and logged vehicle data. Eng. Appl. Artif.
Intell. 41, 139–150 (2015)
15. Cholasuke, C., Bhardwa, R., Antony, J.: The status of maintenance management in UK
manufacturing organisations: results from a pilot survey. J. Qual. Maint. Eng. 10, 5–15 (2004)
16. Basak, D.: Integrating maintenance activities and quality assurance in a research and
development (R&D) system. Qual. Assur. J. 10, 249–254 (2006)
17. Susto, G.A., Schirru, A., Pampuri, S., McLoone, S., Beghi, A.: Machine learning for
predictive maintenance: a multiple classifier approach. IEEE Trans. Ind. Inform. 11, 812–
820 (2015)
18. Al-Mudimigh, A.S., Zairi, M., Ahmed, A.M.M.: Extending the concept of supply chain: the
effective management of value chains. Int. J. Prod. Econ. 87, 309–320 (2004)
19. Porter, M.E.: Competitive Advantage - Creating and Sustaining Superior Performance. Free
Press, New York (1985)
20. Walters, D., Lancaster, G.: Implementing value strategy through the value chain. Manag.
Decis. 38, 160–178 (2000)
21. Ambrosini, V., Bowman, C.: How value is created, captured and destroyed. Eur. Bus. Rev.
22, 479–495 (2010)
22. Alsyouf, I.: The role of maintenance in improving companies’ productivity and profitability.
Int. J. Prod. Econ. 105, 70–78 (2007)
23. Maletič, D., Maletič, M., Al-Najjar, B., Gomišček, B.: The role of maintenance in improving
company’s competitiveness and profitability: a case study in a textile company. J. Manuf.
Technol. Manag. 25, 441–456 (2014)
24. Rødseth, H., Fordal, J.M., Schjølberg, P.: The journey towards world class maintenance with
profit loss indicator. In: International Workshop of Advanced Manufacturing 2018,
Advanced Manufacturing and Automation VIII. Lecture Notes in Electrical Engineering,
vol. 484, pp. 192–199. Springer (2019)
Construction Design of AGV Caller System

Zhang Xi(✉), Wang Xin, and Yuanzhi Xu

College of Mechatronics Engineering and Automation, Shanghai University, Shanghai, China
{xizhang,13120116,xuyuanzhi}@shu.edu.cn

Abstract. To realize call requests from shop-floor stations to the AGV (Automatic Guided Vehicle), the caller came into being. However, the existing wireless physical caller is mostly based on a microcontroller and is hard to extend, so it cannot reflect the information exchanged between the station and the AGV. A set of caller and CCS (Control Center System) software realizing the basic call function was designed in this paper. Moreover, the caller can display the task status, queue situation, and vehicle status fed back from CCS, and CCS has a location management function. In the experiment, a workshop caller system was constructed as proposed in this paper, and the results show that the flexibility of the AGV caller system is greatly enhanced compared with the traditional caller system. Finally, the validity and stability of the AGV caller system are verified by experiments.

Keywords: Caller · CCS · AGV · Task

1 Introduction

With the rapid development of flexible production in factories, material distribution becomes a significant part, and the AGV (Automatic Guided Vehicle) is widely used for internal transportation of factory materials [1, 2]. Compared with a traditional automated production line, a flexible production line does not require fixed beats, and transport requests arrive in no fixed order. It breaks the inherent work cycle of the production line, and each station transports materials or workpieces as needed. The blocking job shop problem involves transferring jobs between different machines with a limited number of AGVs, called the BJS-AGV problem [3, 4]. To meet the real-time AGV demand of each station, the caller came into being. An order is generated in response to a call from any shop floor requesting an AGV to load a job at a pick-up machine location M and then deliver it to machine location N for unloading [5]. The caller improves production efficiency: it not only increases the AGV utilization rate but also reduces waiting steps in the production process. However, the existing physical caller based on a microcontroller is so simple that it can only request a task for an AGV, and it is hard to extend. Compared with the development of the AGV, it falls behind and cannot meet the demands of modern enterprises.
To improve on this situation, a set of AGV caller and CCS software that expands the function of the caller was designed in this paper. A workshop LAN is set up to realize information exchange over a large area. The AGV, CCS, and caller are all

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 325–332, 2020.
https://doi.org/10.1007/978-981-15-2341-0_40

loaded with wireless modules. The caller software is based on the Android system and can be installed on any Android mobile device, which gives it great portability.

2 Construction of AGV Caller System

2.1 Overall Framework


The AGV caller system comprises the caller, the AGV, and CCS, as shown in Fig. 1. Each station has an independent caller. The role of the caller is to send information to CCS via TCP/IP when the corresponding call button is pressed. CCS has three major functions: processing the information sent from the caller and delivering corresponding instructions to the AGV; receiving the information returned from the AGV and feeding it back to the caller; and monitoring the AGV and caller in real time. The AGV is the executor of the system, performing tasks after receiving instructions from CCS.

Fig. 1. AGV caller system diagram

2.2 Design of Caller


The AGV caller system is the primary stage of an AGV intelligent system, mainly used in applications where the number of AGVs is smaller than the number of stations. In this design, the caller and CCS communicate over a TCP/IP socket: the caller acts as the client and CCS acts as the server. The caller interface is shown in Fig. 2.

Fig. 2. Caller interface screenshot: 1-connection state, 2-AGV state, 3-pause/go on button, 4-IP
configuration button, 5-Destination, 6-withdraw button, 7-caller number

In this design, the caller has the following functions:

1. Communication with CCS
To connect to CCS over TCP/IP, the caller needs the IP address and port of CCS, so the caller interface provides an input box. After the IP is configured for the first time, the information is saved and the connection is established automatically; afterwards, the caller connects to CCS without repeated configuration. The caller maintains a long-lived connection to CCS and automatically reconnects once the connection is lost, to ensure that the caller keeps working normally.
2. Sending and receiving information
After the connection with CCS is set up, data must be transmitted. The caller sends tasks with a fixed destination, pause instructions, withdraw instructions, etc. to CCS, and CCS returns the task status and task queuing status. To prevent incorrect operation, the caller interface adds a confirmation button after the user presses a request button.
3. Real-time monitoring
The caller obtains the task status and AGV status from CCS and uses indicator lights to display the different states on the caller interface.
4. Voice guide
Since the number of AGVs in the system is smaller than the number of callers, a task may not be executed immediately after the caller sends it. To remind the operator to prepare before the AGV arrives, the caller gives a voice guide: TASK IS BEGINNING. When the AGV arrives, the operator still has a short time to adjust and can pause or resume the current task to match the current station beat. To remind the operator to resume the current task after a pause, the caller gives another voice guide, TIME OUT, which alerts the operator to overtime occupancy of the AGV.
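The request/reply exchange in the list above could, for example, be framed as newline-delimited JSON over the TCP socket. The paper does not specify a wire format, so the message fields and function names below are purely illustrative assumptions:

```python
import json

def encode_message(caller_id, action, destination=None):
    """Serialize a caller request (call/pause/withdraw) as one
    newline-delimited JSON message; an assumed wire format, since the
    paper states only that caller and CCS communicate over TCP/IP."""
    msg = {"caller": caller_id, "action": action}
    if destination is not None:
        msg["destination"] = destination
    return (json.dumps(msg) + "\n").encode("utf-8")

def parse_message(raw):
    """Parse one newline-delimited JSON message back into a dict."""
    return json.loads(raw.decode("utf-8").strip())

# A call request for a station, and the kind of status reply CCS might return.
wire = encode_message("caller-07", "call", destination="station-3")
reply = parse_message(b'{"task": "T042", "status": "queued", "position": 2}\n')
```

Newline delimiting keeps message boundaries explicit on a long-lived socket, which suits the caller's persistent connection described in item 1.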

2.3 Design of CCS


The logical framework of CCS is shown in Fig. 3.

Fig. 3. CCS logical framework

CCS has the following functions:

1. Communication with the caller
The communication module of CCS processes and parses the information sent from the caller. In the other direction, this module continuously feeds the task status and AGV status back to the caller.
2. Communication with the AGV
CCS constantly queries the AGV's battery power, abnormal conditions, task status, etc.
3. Querying the task in the matching module
After the communication module obtains a task, CCS queries the AGV controller to check whether the task is pre-stored in the task list. If it exists, CCS adds the task to the do-list. These two modules are constrained by AGV battery power, which exerts cross-control over task execution: when the AGV battery is below a bad level, or there is no task, the AGV automatically charges at the charging post; on the contrary, priority is given to the task when the battery is above a good level.
4. Message log
The connection/disconnection status of each device and the task completion status are output as a log and saved as a txt file for subsequent data analysis.
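The matching and battery cross-control in item 3 can be condensed into a small dispatch rule. A sketch under assumed thresholds: the paper names "bad" and "good" battery levels but gives no numbers, so the 20% low level, the function name, and the return labels are all illustrative.

```python
def dispatch(task, prestored_tasks, battery_pct, low_level=20):
    """Decide the AGV's next step, mirroring the CCS matching module:
    charge when the battery is low or there is no task, reject tasks that
    are not pre-stored in the AGV controller, otherwise execute.
    Thresholds and names are assumptions, not from the paper."""
    if battery_pct < low_level or task is None:
        return "charge"       # low battery or idle: head to the charging post
    if task not in prestored_tasks:
        return "reject"       # only pre-stored tasks can enter the do-list
    return "execute"          # sufficient battery: the task takes priority

action_busy = dispatch("T1", {"T1", "T2"}, battery_pct=90)   # "execute"
action_low = dispatch("T1", {"T1"}, battery_pct=10)          # "charge"
action_unknown = dispatch("T9", {"T1"}, battery_pct=50)      # "reject"
```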

2.4 Introduction to AGV


The AGV uses laser navigation. The AGV controller is the core module of the AGV, providing localization, mapping, and navigation. In addition, the AGV controller includes a series of software and hardware interfaces. To cooperate with the caller system, the key software interfaces include performing pre-stored tasks, pausing tasks, continuing tasks, withdrawing tasks, and so on.

2.5 Stock Management


The destination stations offered by the caller are limited, and in practice there are situations where material needs to be shipped to a stock. A stock usually contains a large number of stations, some of which are not limited to a planar structure and may contain several layers. This paper proposes a "one-to-many" approach that resolves this problem and extends the functions of the caller.
A destination on the caller can represent not only a single station but also a stock containing a large number of stations; a destination corresponding to multiple stations is called the "one-to-many" approach. CCS needs to provide a corresponding stock module. After the user inputs the number of stations in each layer of the stock, the stock is initialized. The stock provides three basic functions: acquiring an empty station, occupying a station, and clearing the occupancy status of all stations in the stock. Figure 4 is a flow chart of the stock module of CCS. Besides, a special caller has the function of emptying the stock: when all stations in the stock are occupied, the operator can empty the stock with one button after the material on the stations has been manually processed. Tasks arriving at each station within the stock need to be pre-stored in the AGV controller.
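The three basic stock functions can be sketched as a small class. The (layer, index) naming of stations and the method names are assumptions made for illustration; the actual CCS module is initialized from the per-layer station counts, as described above.

```python
class Stock:
    """Minimal sketch of the CCS stock module: acquiring an empty station,
    occupying a station, and clearing the occupancy of all stations.
    Names and station keys are illustrative assumptions."""

    def __init__(self, stations_per_layer):
        # stations_per_layer, e.g. [2, 1]: two layers with 2 and 1 stations.
        self.occupied = {}
        for layer, count in enumerate(stations_per_layer, start=1):
            for idx in range(1, count + 1):
                self.occupied[(layer, idx)] = False

    def acquire_empty(self):
        """Return the first free station, or None when the stock is full."""
        for station, busy in sorted(self.occupied.items()):
            if not busy:
                return station
        return None

    def occupy(self, station):
        self.occupied[station] = True

    def clear_all(self):
        """The special caller's one-button 'empty stock' maps to this."""
        for station in self.occupied:
            self.occupied[station] = False

stock = Stock([2, 1])
first = stock.acquire_empty()     # (1, 1)
stock.occupy(first)
second = stock.acquire_empty()    # (1, 2)
```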

Fig. 4. Flow chart of the stock module of CCS

3 Experiment

The caller system was tested in the Fidabo workshop, which contains 10 workstations. The workpieces of each workstation may be sent to 2 or 3 other workstations, so each workstation must be equipped with a caller. Table 1 lists the hardware needed on site.

Table 1. Main hardware of AGV caller system.


Number Item Hardware Quantity
1 CCS Dell OptiPlex 3060 Desktop 1
2 Caller 7-inch Android pad 10
3 AGV Forklift truck 2
4 Communication equipment Router 1
5 Other devices Charging pile 2

The steps to set up the experimental environment of the AGV caller system are as follows:
1. Set up the LAN so that the AGV controller, caller, and CCS are in the same LAN.
2. Arrange the charging pile to prepare for automatic charging.
3. Make the factory map. The AGV controller uses the controller of Serer Robotics. The controller processes the lidar scan data to obtain a map of the surrounding environment, builds the paths between the stations through the teaching method, establishes the tasks (each task's name corresponds to the matching module in CCS), and saves all tasks to the AGV controller. Figure 5 is a map of the experimental environment of the AGV system, including stations and paths.

Fig. 5. Environmental map

4. Since CCS is the server of the TCP/IP communication, CCS should be started
   before the callers are configured. After CCS establishes a connection with an AGV,
   the AGV first performs an automatic charging task. Next, the stock is initialized in
   CCS based on the number of stations established in the AGV controller.
5. After a caller is connected to the network, it is configured with the IP address and
   port of CCS. As soon as the connection with CCS is established, the caller can send
   tasks.
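Since CCS acts as the TCP/IP server and each caller as a client, the connection and task-sending step can be sketched as below (the message format and function names are assumptions; the paper does not specify the actual protocol):

```python
import socket
import threading

def start_ccs_stub(host="127.0.0.1", port=0):
    """Stand-in for CCS as the TCP server; accepts one caller and records its task."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))          # port=0 lets the OS pick a free port
    server.listen()
    received = []

    def serve_once():
        conn, _addr = server.accept()
        with conn:
            chunks = []
            while (data := conn.recv(1024)):
                chunks.append(data)
            received.append(b"".join(chunks).decode("utf-8"))

    worker = threading.Thread(target=serve_once, daemon=True)
    worker.start()
    return server, server.getsockname()[1], received, worker

def caller_send_task(port, task_name, host="127.0.0.1"):
    """Caller side: after being configured with CCS's address and port, send a task."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(task_name.encode("utf-8"))
```

In the real system the task name sent by the caller would match a task pre-stored in the AGV controller.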
The experimental environment is shown in Fig. 6.

(a) caller (b) AGV (c) CCS

Fig. 6. AGV caller system experiment site



The caller system ran continuously for 2 h while the 10 callers sent tasks in random
order. The AGVs performed the call tasks and charging tasks in order without any
abnormality, the status of each task was displayed correctly on the caller interface, and
the log file was output automatically after CCS was shut down.

4 Conclusion

This paper focused on the caller and CCS. The caller realizes the basic call function
and can also display the task status, queue situation, and vehicle status fed back from
CCS. To handle stocks, a “one-to-many” approach was proposed that extends the
meaning of a destination. Finally, the feasibility and reliability of the proposed scheme
were verified by experiments.

Acknowledgements. The authors thank the National Natural Science Foundation of China
(51205243) and the National Key Research and Development Plan (2016YFC0302402) for their
financial support.

A Transfer Learning Strip Steel Surface Defect
Recognition Network Based on VGG19

Xiang Wan1(✉), Lilan Liu1, Sen Wang2, and Yi Wang3
1 Shanghai Key Laboratory of Intelligent Manufacturing and Robotics,
Shanghai University, Shanghai, China
[email protected]
2 Shanghai Baosight Software Corporation, Shanghai, China
3 Norwegian University of Science and Technology, Trondheim, Norway

Abstract. Strip steel surface defects are of many types and have complex gray-
gradation structures. Existing image detection technology based on machine vision
still suffers from low recognition efficiency and poor generalization in strip steel
defect detection, while image detection technology based on deep learning needs
large amounts of image data to train its networks. A typical multi-class, small-
sample data set with low-quality pixels cannot support training a deep neural
network, and on such data traditional convolutional neural networks show low
recognition rates for small samples and poor generalization for large samples.
Combining deep learning with transfer learning, this paper proposes a transfer
learning strip steel defect recognition network based on VGG19. The frozen pre-
trained layers of VGG19 are not trained, and the learning rates of the remaining
layers are set according to their actual roles in the network. The convergence speed
and accuracy of the model are both taken into account, and the recognition rate and
generalization ability on small-sample data are greatly improved. On the NEU
surface dataset 2, the recognition accuracy of our model is 97.5%, much higher
than that of traditional machine learning algorithms. Moreover, the network model
requires neither data preprocessing nor model parameter tuning, nor manual design
of the classifier. It is a simple and effective method for identifying the surface
defects of strip steel, and it has practical value for surface recognition of other
products as well.

Keywords: Strip steel surface defects · Multi-class and small samples · Deep
learning · Transfer learning

1 Introduction

Strip steel is one of the main products of the steel industry. It is an indispensable raw
material for aerospace, shipbuilding, automobile, machinery manufacturing and other
industries. The quality of the strip will directly affect the quality and performance of the
final product. In the strip steel manufacturing process, due to various factors such as
raw materials, rolling equipment and processing technology, defects such as cracks,
crusting, roll marks, holes, skin delamination, pitting, inclusions and oxidation occur
on the surface of

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 333–341, 2020.
https://doi.org/10.1007/978-981-15-2341-0_41

the strip steel. Defects such as skin delamination, scratches and welds not only affect
the appearance of the product, but also reduce its corrosion resistance, wear resistance
and fatigue strength. Surface defects are thus one of the important factors affecting the
quality of strip steel: according to statistics, more than 60% of the quality objection
incidents concerning domestic strip steel products are caused by surface defects [1].
Therefore, in the rolling process it is very important to find these defects in time, adjust
the control parameters, and sort strip steel of different grades to improve the quality of
strip steel products.
Traditional steel companies test the surface quality of strip steel by manual visual
inspection under flash lighting, integrating information on the severity of all defects
with probability and statistics and the testers’ experience to form a comprehensive
quality report. Because this method is limited by the detection means and the testers’
subjective judgment, its real-time performance is poor and the rates of missed and false
detections are high. For surface defect identification of high-speed products on a
conveyor belt, the human eye can detect only about 60% of defects, and even in the
best case the product width should not exceed 2 m and the moving speed should not
exceed 30 m/s [2]. The most difficult task is to locate the defects exactly and determine
their types [3].
With the development of industrial technology, methods using sensors instead of the
naked eye gradually appeared, such as eddy-current, capacitance and ultrasonic
inspection. These methods can detect not only surface defects but even some small
internal defects. The maximum detection accuracy can reach 5 × 10−4 mm3, greatly
improving on the low accuracy of naked-eye inspection. However, due to the
limitations of their detection principles, these methods cannot detect all products and
defect types: the defects of a tested product must differ significantly from the non-
defective parts to be detected effectively.
After entering the 1970s, machine vision applications became possible in steel plate
inspection due to the development of CCD cameras and the rapid development of
image processing technology. Machine vision is a non-contact, non-destructive test
technology. It has the advantages of high resolution, strong classification ability, little
influence from environmental electromagnetic fields, large working distance, high
measurement accuracy and low cost, and it has become the mainstream of current
research. The mainstream classification algorithms for surface defect recognition
based on machine vision include the decision tree method, the KNN algorithm,
genetic algorithms, neural network methods, support vector machine methods and so
on [4]. However, when identifying and classifying defect images with many
categories, complex shapes and overlapping class boundaries, the classifiers are often
designed as multiple algorithms connected in series and parallel, and their robustness
and real-time performance are poor [5]. Especially when the image contains noise or a
textured background, real defect edges may be missed due to the interference, which
cannot meet the requirements of on-line detection of surface defects [6].

After 2010, with the popularization of high-performance computers and the
improvement of hardware performance, deep learning has been widely used in
industry, providing a new approach for computer vision to solve strip steel surface
defect identification. Convolutional neural networks (CNNs) have been widely and
successfully used in fault diagnosis, defect detection and image recognition. Since the
steel plate moves rapidly on the production line, a large amount of image data (e.g.
25 frames/s) is generated every second for the system to process, which demands
highly real-time defect recognition. However, not all images contain defect
information: under normal circumstances, the defect area ratio of steel is less than 5%
(Steel Acceptance Standard GB.1965/1978). This results in low-resolution images and
a small number of qualified images from the CCD cameras, a typical multi-class,
small-sample data set with low-quality pixels. In terms of both quantity and quality,
such data cannot support training a deep neural network from scratch. Traditional
convolutional neural networks then suffer from low recognition rates on small
samples, poor generalization on large samples, and long recognition times.
Building on previous research results, this paper proposes a strip steel defect
recognition network based on transfer learning for multi-class, small-sample data with
low-quality pixels. The VGG19 network is used as the basic structure of our network.
The first 15 layers are frozen and not trained; the remaining layers are divided into
regions by the rule of 2:4:6 and given learning rates of 10−6, 10−4 and 10−2
respectively. Normalization layers, activation layers and pooling layers are then added
as appropriate. Finally, a softmax classifier matching the number of target task classes
is connected to complete the identification and classification of strip steel defects. The
model effectively addresses the over-fitting of traditional transfer learning models on
small target data sets, for which data augmentation is not always effective. The
method has strong generalization and good convergence; it is a non-contact, non-
destructive machine vision detection method with high resolution and strong
classification ability.

2 Strip Steel Surface Defect Recognition Based on Convolutional Neural Network

2.1 Data Background Introduction and Model Performance Index Evaluation System
In this paper, the NEU surface strip steel defect data set 2 is taken as the experimental
object. The data set contains six major types of strip defects: crazing, inclusions,
patches, pitted surfaces, rolled-in scale and scratches. Each category contains 300
images, for a total of 1800 images, each of which is 200 × 200 pixels. The images are
in bmp format, and each is a 40.1 KB grayscale image. Figure 1 shows the strip steel
defect categories.

Fig. 1. Strip steel defect category

Model prediction accuracy and error are closely related. Since each class in the NEU
surface strip steel defect data set 2 is balanced, accuracy is an ideal indicator for
evaluating and comparing the performance of the various classification algorithms;
there is no need to consider the recall rate or the F1 value.

Accuracy = Number of Correctly Classified Images / Total Number of Input Images  (1)
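Equation (1) amounts to the following check over predicted and true labels:

```python
def accuracy(y_true, y_pred):
    """Eq. (1): correctly classified images divided by total input images."""
    assert len(y_true) == len(y_pred) and y_true
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)
```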

2.2 Convolutional Neural Network Structure Design and Performance Analysis
Traditional machine vision methods mainly design feature extractors for the
characteristics of a specific data set, which depends strongly on subjective human
judgment, involves complicated processes, and adapts poorly to other fields. In 2012,
the deep learning model AlexNet won the famous ImageNet image recognition
contest. In the same year, a deep neural network (DNN) project led by Stanford
University’s well-known professor Andrew Ng and world-class computer expert Jeff
Dean achieved remarkable results in image recognition, successfully reducing the
error rate in the ImageNet evaluation from 26% to 15%. Deep learning algorithms
thus stood out in world competitions, once again attracting academic and industrial
attention to the field.
Based on the above research results, in this paper we also use deep learning
convolutional neural network technology to design a CNN that automatically extracts
features and detects strip steel surface defects. 80% of the data is used to train the
network classifier and 20% to test network performance. As shown in Fig. 2, the
network has 3 convolutional layers and 3 max-pooling layers to extract features from
images, followed by 3 fully connected layers and 1 softmax classification layer. Since
the neural network learning process is essentially learning the distribution of the data,
the generalization ability of the network is seriously reduced if the distributions of the
training and test data differ. To address this, a batch normalization layer is added after
each independent layer to avoid vanishing gradients, standardize the weights, improve
the gradient flow, and speed up network convergence.
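To make the layer stack concrete, the following framework-free sketch traces tensor shapes through the described layers (the filter counts and fully connected widths are illustrative assumptions, since the paper specifies only the layer types; 'same' padding is assumed for the convolutions):

```python
# Layer stack of the CNN in Fig. 2; widths are assumptions, layer types are not.
LAYERS = [
    ("conv3x3", 32), ("batchnorm", None), ("maxpool2x2", None),
    ("conv3x3", 64), ("batchnorm", None), ("maxpool2x2", None),
    ("conv3x3", 128), ("batchnorm", None), ("maxpool2x2", None),
    ("fc", 256), ("fc", 128), ("fc", 6),   # last FC feeds softmax over 6 classes
]

def propagate_shape(side=200, channels=1, layers=LAYERS):
    """Trace a (side x side x channels) image through the stack.

    Returns the final fully connected width, or (side, channels) if the
    stack ends before any fully connected layer.
    """
    units = None
    for kind, width in layers:
        if kind == "conv3x3":
            channels = width        # 'same' padding keeps the spatial size
        elif kind == "maxpool2x2":
            side //= 2              # 2x2 pooling halves each spatial dimension
        elif kind == "fc":
            if units is None:       # implicit flatten before the first FC layer
                units = side * side * channels
            units = width
    return units if units is not None else (side, channels)
```

For the 200 × 200 grayscale inputs this yields 25 × 25 feature maps after the third pooling stage and a 6-way output matching the six defect classes.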

Fig. 2. Convolutional neural network structure

Figure 3 shows the training and validation accuracy of the network. It can be seen that
the convergence and accuracy of the network are poor: on the training set the model
over-fits, and on the validation set the accuracy fluctuates greatly. According to the
experience of the ImageNet image recognition contest, training an effective
convolutional neural network from scratch requires at least 1000 high-quality images
per category. The data set of this paper contains six types of images with only 300
images each, for a total of 1800 images, each a low-quality grayscale image of only
40.1 KB. The existing strip steel defect images thus have low resolution and small
sample sizes, a typical multi-class, small-sample data set with low-quality pixels.
Some defect types are also very difficult to distinguish even with the naked eye (for
example Class 1: inclusions), which makes the data unsuitable for training a deep
neural network.

Fig. 3. The accuracy rate of CNN network



From the above research, it is not feasible to construct and train an effective strip steel
defect identification network from scratch with the existing data volume; a new
method must be sought.

3 Strip Steel Surface Defect Recognition Network Based on Transfer Learning

3.1 Transfer Learning Network Based on VGG19


Transfer learning is a way to learn from previous tasks and apply the result to new
tasks. Its purpose is to extract knowledge and experience from one or more source
tasks and then apply them to a new target domain. Since 1995, transfer learning has
attracted the attention of many researchers. Deep learning requires a large amount of
high-quality annotated data, but in some fields high-quality data is extremely limited
and precious, and traditional deep learning does not work well on it. Combining
transfer learning with deep learning solves this kind of problem well. A very popular
strategy in current deep learning is to use a model pre-trained on a big data set as the
new network’s backbone and then fine-tune the later layers to fit the specific domain
data set. Especially in the image field, most transfer learning networks initialize their
models with ImageNet pre-trained parameters and achieve excellent results [7].
Since the target data set in this paper is small, models can over-fit. Traditional
image data augmentation techniques, such as horizontal or vertical flipping, random
scaling, random sampling, cropping and the addition of various noises to the original
image, rest on the assumption that the ROI regions of sampled images have geometric
and semantic correlation. Strip steel defect images do not satisfy this assumption, so
traditional data augmentation does not solve the problem well. In view of these
difficulties, the VGG19 network is used as the backbone of our new network. The first
15 layers are non-trainable frozen layers; the learning rate is set to 10−4; and
normalization layers, activation layers and pooling layers are added. Finally, a softmax
classifier matching the number of output classes is connected, and the strip steel defect
recognition transfer learning network based on VGG19 is constructed.
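VGG19 has 19 weight layers (16 convolutional layers in blocks of 2, 2, 4, 4 and 4, plus 3 fully connected layers), so freezing the first 15 layers can be expressed as a simple trainability plan. This is a sketch of the rule only, not the authors' code:

```python
# The 19 weight layers of VGG19: 16 convolutions in 5 blocks, then 3 FC layers.
VGG19_LAYERS = [f"block{b}_conv{c}"
                for b, convs in ((1, 2), (2, 2), (3, 4), (4, 4), (5, 4))
                for c in range(1, convs + 1)] + ["fc1", "fc2", "fc3"]

def freeze_first(layers, n_frozen=15):
    """Return {layer_name: trainable} with the first n_frozen layers frozen."""
    return {name: i >= n_frozen for i, name in enumerate(layers)}
```

With `n_frozen=15`, every convolution up to `block5_conv3` keeps its pre-trained weights, while `block5_conv4` and the replaced top layers are fine-tuned.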

Fig. 4. The accuracy and loss error of network



It can be seen from the network’s accuracy in Fig. 4 that the transfer learning network
based on VGG19 is more complex than the previous model. Although the accuracy
and convergence of the network are greatly improved, the validation loss is quite
turbulent in the early stage of training, and in the middle stage there are occasional
spikes in loss. Although the loss converges after training, the unstable loss curve
indicates something unreasonable in the network design. A node that suddenly
increases the loss during training is called a dead ReLU node in deep learning. This
problem is mainly due to an excessively large gradient update, which makes the
weight adjustment of some ReLU nodes so large that subsequent training no longer
affects them; such a node is effectively permanently dead. Worse, after some batch
loss updates, the loss can suddenly rise to infinity and cause training to fail.

3.2 Improvement of Transfer Learning Network Based on VGG19


The current learning rate of the VGG19 transfer learning network decays with the
training step. The initial learning rate is set to lr = 10−4, the attenuation rate to
decay = 10−6 and the momentum to 0.9. As discussed above, the benefit of transfer
learning is higher accuracy when the target data set is relatively small. Most deep
learning frameworks allow selectively thawing the last n layers of a deep neural
network while freezing the weights learned in the rest, but this method is not effective
for strip steel defect identification, where the data volume is particularly small, so the
network needs further improvement.
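The quoted settings (lr = 10−4, decay = 10−6) correspond, under the classic Keras SGD convention we assume here, to time-based decay of the learning rate with the update step:

```python
def decayed_lr(step, lr0=1e-4, decay=1e-6):
    """Time-based decay: lr_t = lr0 / (1 + decay * t), with t the update step."""
    return lr0 / (1.0 + decay * step)
```

After one million updates the rate has roughly halved to 5 × 10−5; the momentum of 0.9 is applied separately by the SGD optimizer.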
It has been found through research that using different learning rates, determined per
layer, can be much more effective than thawing specific layers. The learning rates of
the lower layers are smaller, because these layers mainly respond to edges, simple
shapes and fine geometry, while layers that respond to more complex features use
higher learning rates. This effectively improves the convergence speed of the network.
Finally, a classifier matching the number of target task classes is used as the output
layer and the entire network is fine-tuned. In view of the above difficulties, this paper
still uses the VGG19 transfer learning network as the backbone. The first 15 frozen
layers are non-trainable, but the remaining layers set their learning rates to 10−6, 10−4
and 10−2 for different layer regions by the rule of 2:4:6. The attenuation rate is set to
decay = 10−6 and the attenuation momentum to 0.9.
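One way to realize the 2:4:6 rule is to split the trainable layers into three consecutive groups sized by that ratio and assign a rate per group. This grouping is our reading of the rule, not a published implementation:

```python
def layerwise_learning_rates(layers, ratios=(2, 4, 6),
                             rates=(1e-6, 1e-4, 1e-2)):
    """Assign a learning rate to each trainable layer.

    The layers are split into consecutive groups whose sizes follow `ratios`
    (the 2:4:6 rule); shallow layers that respond to edges and fine geometry
    get the smallest rate, deeper layers the largest.
    """
    total = sum(ratios)
    n = len(layers)
    # Cumulative group boundaries, e.g. for 12 layers: [2, 6, 12].
    bounds, cum = [], 0
    for r in ratios:
        cum += r
        bounds.append(round(n * cum / total))
    assigned, group = {}, 0
    for i, layer in enumerate(layers):
        while i >= bounds[group]:
            group += 1
        assigned[layer] = rates[group]
    return assigned
```

For 12 trainable layers this gives 2 layers at 10−6, 4 at 10−4 and 6 at 10−2.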
As seen in Fig. 5, the convergence and accuracy of the network are greatly improved
over the basic network. Although the model occasionally shows a sudden increase in
loss during training, the validation loss decreases and tends to converge as training
progresses. After testing, the final accuracy of the model converges to 97.5%, and its
generalization and robustness are greatly improved, giving it practical value.

Fig. 5. The accuracy and loss error of network

4 Comparison with Machine Learning Algorithms

There are many kinds of surface defect detection algorithms based on machine
learning, such as k-nearest neighbors (KNN), the support vector classifier (SVC),
gradient boosting, the random forest classifier, AdaBoost and decision trees. The idea
is to manually design and train feature extractors that extract texture features such as
contrast, dissimilarity, uniformity and asymmetry from the image data to complete
image recognition and classification. Because the feature extractors of machine
learning are very sensitive to the data and parameter settings, they must be designed
according to the corresponding data set and prior knowledge, so data preprocessing
and model parameter tuning are key steps in building the model. However, steel plate
defects depend highly on the surface texture, and any pretreatment (such as smoothing
or sharpening) will change the texture characteristics. Moreover, careful parameter
tuning is very time-consuming and has little relevant theoretical basis, relying mainly
on the experimenter’s experience. Therefore, the recognition rates of machine
learning models on strip steel defects are relatively low.
The transfer learning network model based on VGG19 can take the original
images as direct input without data preprocessing. The model has high robustness and
accuracy on small-sample data sets and is insensitive to parameter settings. It is
practical, flexible to deploy, and well suited to identifying steel surface defects.
Comparison of various algorithms’ performances

No. Classifier Accuracy (%)
1 KNN 75.27
2 AdaBoost 51.11
3 SVC 14.72
4 Decision Tree 88.33
5 Random Forest 89.44
7 Gradient Boosting 92.50
8 Ours 97.50

5 Conclusions

This paper combines deep learning with transfer learning and proposes a transfer
learning strip steel defect recognition network based on VGG19. The frozen pre-
trained network layers are not trained, and the learning rates of the remaining layers
are set by sub-region according to their actual roles, which takes into account both the
convergence speed and the accuracy of the model. The model focuses on learning and
responding to the fine edges of the image, which solves the oscillating loss caused by
dead ReLU nodes on the validation set. The network greatly improves the recognition
rate and generalization ability on small-sample data. Compared with traditional
machine learning algorithms, the network model requires neither data preprocessing
nor model parameter tuning, nor manual design of the classifier. It is a simple and
efficient method.
At present, the network model of this paper is only suitable for identifying small-
sample image data sets with balanced classes. In the future, it will be necessary to
consider small-sample image data sets with unbalanced categories, together with
image data augmentation algorithms that differ from the traditional techniques (such
as horizontal or vertical flipping of the ROI area, random scaling, random sampling
and cropping, or adding various noises to the original image), to realize automatic
segmentation, labeling, autonomous learning and classification of strip surface
defects.

Acknowledgements. The authors would like to express their appreciation to their mentors at
Shanghai University and Shanghai Baosight Software Corporation for their valuable comments
and other help, and to thank the Ministry of Industry and Information Technology for its support
of the key project “The construction of professional CPS test and verification bed for the
application of steel rolling process” (No. TC17085JH).

References
1. Tian, S.: Research on object detection and classification algorithms for surface defects of steel
plates and strips. University of Science and Technology, Beijing (2019)
2. Srinivasan, K., Dastoor, P.H., Radhakrishnaiah, P., et al.: FDAS: a knowledge-based
framework for analysis of defects in woven textile structures. J. Text. Inst. 83(3), 431–448
(1992)
3. Karayiannis, Y.A., Stojanovic, R., Mitropoulos, P., et al.: Defect detection and classification
on web textile fabric using multiresolution decomposition and neural networks. In: The 6th
IEEE International Conference on Electronics, Circuits and Systems 1999, Proceedings of
ICECS 1999. IEEE (1999)
4. Tang, B., Kong, J., Wang, X., et al.: Steel surface defect recognition based on support vector
machine and image processing. China Mech. Eng. 22(12), 1402–1405 (2011)
5. Zhao, Y.: Research on segmentation of oil contamination region on silicon steel surface based
on superpixels. Northeastern University (2014)
6. Zhao, J., Yan, Y., Liu, W., et al.: A multi-scale edge detection method of steel strip surface
defects online detection system. J. Northeast. Univ. (Nat. Sci.) 31(3), 432–435 (2010)
7. Yosinski, J., Clune, J., Bengio, Y., et al.: How transferable are features in deep neural
networks? In: Advances in Neural Information Processing Systems 27, pp. 3320–3328 (2014)
Visual Interaction of Rolling Steel Heating
Furnace Based on Augmented Reality

Bowen Feng1(✉), Lilan Liu1, Xiang Wan1, and Qi Huang2
1 Shanghai Key Laboratory of Intelligent Manufacturing and Robotics,
Shanghai University, Shanghai, China
[email protected], [email protected],
[email protected]
2 Shanghai Baosight Software Corporation, Shanghai, China
[email protected]

Abstract. Based on augmented reality technology, this paper studies the augmented
reality visualization of the rolling steel heating furnace and the equipment interaction
it enables, by means of virtual-real fusion and digital twinning. The heating furnace
is a key piece of equipment in the hot rolling stage of steel metallurgy. In actual
production, it is difficult for operators to check the operating status of the heating
furnace and its heat flow parameters at any time and place. In response, this paper
proposes a visual interactive platform for the rolling steel heating furnace built with
augmented reality technology, presenting the operating status of the furnace together
with its heat flow parameters.

Keywords: Augmented reality · Rolling furnace · Digital twinning · Visual
interaction

1 Introduction

The informationization and digitization of the manufacturing industry continue to
develop and deepen. Since production equipment is the main carrier of industrial
production activities, using digital technology to understand its current operating
status quickly and efficiently is a prerequisite for improving production efficiency and
achieving safe production.
The heating furnace is an important piece of thermal equipment in the hot rolling
stage of steel production and has a great impact on it [1]. Its main task is to heat the
billet so that its temperature distribution meets the rolling requirements and the
subsequent rolling mill can roll finished steel of good quality. The slab is heated so
that its surface and internal temperatures satisfy the rolling requirements; its
temperature should be uniform to avoid overheating and over-burning, and the
oxidation and decarburization of the slab during heating should be reduced as much
as possible, so as to produce finished steel of reliable quality. Therefore, the smooth
progress of the entire rolling process is greatly affected by the operating state of the


furnace [2]. However, the production environment of the hot rolling mill is relatively
complicated, and unfavorable factors such as noise and high temperature mean that
operators cannot inspect the state of the heating furnace equipment or of its internal
flow field during production. Traditional monitoring methods also require large
numbers of electronic and monitoring instruments, so operators can only read the
current device status at the fixed locations where the instruments are installed, making
it difficult to check and operate anytime, anywhere; and traditional monitoring lacks a
grasp of the overall information of the entire heating furnace. Therefore, a system is
needed that allows an operator to view the operational status of the equipment
anytime, anywhere and to operate it in a timely manner.
Augmented reality (AR) technology is a product of the rapid development of
computer graphics and was developed on the basis of virtual reality technology [3].
Like virtual reality, it generates virtual objects, images and even sounds through
computer graphics; but unlike virtual reality, augmented reality superimposes the
computer-generated virtual objects, scenes, sounds or system prompts onto real
scenes, thus enhancing the real scene and increasing the user’s perception of the real
world. In the industrial sector, some developed countries already apply augmented
reality technology. Boeing assists maintenance personnel in wiring harnesses with
augmented reality maintenance aids developed for head-mounted displays. Sony’s
TransVision augmented reality prototype system [4] can display a variety of auxiliary
information to the user through the helmet display, including virtual instrument
panels, the internal structure of the equipment being repaired, and its parts drawings.
Augmented reality applications in the domestic industrial sector are just beginning [5].
This paper proposes a visual interaction system for the rolling steel heating furnace
realized with augmented reality technology, combined with computational fluid
dynamics to simulate the internal flow field of the furnace. Based on a 3D digital
model of the hot rolling furnace, the Unity 3D engine together with the Vuforia SDK
is used as the augmented reality development platform. The platform is developed
according to the equipment’s working information and the actual workflow to realize
device interaction, operation and information review on the augmented reality device.
Through this platform, operators can monitor the heating furnace anytime and
anywhere, which benefits production efficiency and safety.

2 System Framework

The visual interaction of the rolling steel heating furnace based on augmented reality
described in this paper can be divided into three levels, namely mobile augmented
reality application, AR data source server and heating furnace equipment. The overall
framework of the system is shown in Fig. 1.
344 B. Feng et al.

Fig. 1. System overall scheme design drawing

The main functions of each part of the system are as follows.


(1) Mobile augmented reality application: the core of the visual interaction for the entire furnace. Relying on a smartphone or head-mounted display device, it presents heating furnace parameter information, interactive control and dynamic simulation in augmented reality form. Technically it comprises five parts on four levels: marker recognition and model superposition, parameter information display, interactive control, and data communication. Marker recognition and model superposition form the basic level, the key to realizing augmented reality; the parameter information display part is the display level, used to present the necessary information and images; the interactive control part is the control level, which carries out the user's operation commands; the data communication part is the communication level, handling communication between the AR client and the data server.
(2) Data server: the data center serving the AR application. Sensing information from the device is gathered here together with analysis and control information from the user; the processed data are fed back to the user, and control commands are forwarded to the device.
(3) Heating furnace equipment: the main target of this application, the carrier of actual production and the real-world source of the augmented reality digital twin.

3 System Structure Function Design and Implementation


3.1 Equipment Motion Control
The equipment motion control module gives the rolling mill operator the ability to manipulate workshop equipment through a mobile intelligent device, solving the problem that the control room is far from the actual equipment and therefore inconvenient for operation. In addition to interactively controlling the device, the system also provides virtual operation of the digital production line through augmented reality, giving operators trial-and-error, trial operation and test output before actual operation. The module presents the steel rolling furnace equipment through augmented reality technology; with traditional paper materials, motion simulation of the equipment is very difficult, which is exactly where augmented reality can contribute. The created 3D model of the device is turned into a static augmented reality display model in the Unity 3D engine. Through the animation system (its schematic diagram is shown in Fig. 2) and C# scripting, and with reference to the actual running steps of the device under real working conditions, motion simulation and practical manipulation of the device are realized through augmented reality technology.

Fig. 2. Schematic diagram of the animation system

3.2 Equipment Parameter Information Monitoring


The parameter information module addresses the problem that the operator cannot obtain monitoring room parameters on the spot. With the AR client, the operator in effect carries a mobile device status monitor, which is extremely important for on-site front-line operators who need to read device information in real time, and it is an important part of the whole augmented reality system. The module displays the key parameter information of the device currently shown in the application: through a parameter panel it presents device operation parameters, fault alarms, operation suggestions and other information. Through this module, users can query, browse and give feedback anytime, anywhere. The parameter monitoring panel is shown in Fig. 3.

3.3 Virtual Line Scheduling


The virtual line scheduling module addresses the entire hot rolling line. If an augmented reality application can only display one device or product, its vitality in the industrial field is very limited; if a digital workshop or digital production line can be presented through AR, industrial AR applications open up a new field. In this module, the set of hot rolling furnaces in the steel metallurgical production process is presented through augmented reality technology. Staff can view the entire production process anytime and anywhere with a handheld or head-mounted terminal, and the module has built-in selection switches for device simulation, so that a particular device is operated by selecting the corresponding button. The operator can thus carry out scheduling simulation of the entire production line and inspect the production line operation simply by using the augmented reality application on a smart terminal. This module is a significant attempt to apply augmented reality technology in enterprises' production processes. The schematic diagram of virtual production line scheduling is shown in Fig. 4.

Fig. 3. Parameter monitoring panel
Fig. 4. Schematic diagram of virtual production line scheduling

4 Effect Analysis of Practical Application

The visual interaction system for the rolling steel heating furnace based on augmented reality designed in this paper has been successfully applied in the AR laboratory of the Shanghai Intelligent Manufacturing and Robot Key Laboratory, where it addresses equipment data monitoring and maintenance operations in workshop production. The furnace model device is connected through the network to obtain the real production data stored in the device and to perform interactive control.
By scanning the identification marker with a smartphone, the system displays the data of the device in real time, including operation data and production data. For abnormal data it can call up historical alarm data to help personnel analyze the cause, and the system raises alarms promptly, helping to detect equipment failures and avoid accidents. The mobile phone AR production line display is shown in Fig. 5.

Fig. 5. Mobile phone AR production line display



When the system is used on a head-mounted display device, the operator can view the current furnace alarm information and operation status in real time, consult the required maintenance manual and operation guidance in time, inspect the internal working state of the furnace, and examine the whole equipment process in detail, which greatly reduces the difficulty and volume of daily operation and inspection work. The visual interactive interface and device status monitoring on the head-mounted display device are shown in Fig. 6.

Fig. 6. Visual interactive interface and device status monitoring on the head-mounted display device

5 Conclusion

In this paper, Unity 3D is used as the augmented reality development platform, and a visualization and interaction system for a rolling steel heating furnace based on augmented reality is designed. Its main functions include AR device control, AR device parameter information display, and AR virtual scheduling. Augmented reality technology is thereby applied to the actual industrial production process: by integrating sensing technology, steel rolling production information is read, the running status of the rolling equipment is clear at a glance, and complex industrial production equipment is monitored and operated in a mixed virtual-real environment. On-site operators' real-time data monitoring and interaction with the equipment are no longer subject to site constraints, which reduces the environmental restrictions on actual operation and improves equipment quality and efficiency in the production environment. This application research lays a foundation for future applications of augmented reality technology in digital workshops and intelligent production.

Acknowledgements. The authors would like to express appreciation to mentors at Shanghai University and Shanghai Baosight Software Corporation for their valuable comments and other help. The work is supported by the Ministry of Industry and Information Technology key project "The construction of professional CPS test and verification bed for the application of steel rolling process" (No. TC17085JH) and by the pillar program of the Shanghai Economic and Information Committee of China (No. 2018-GYHLW-02020).

References
1. Song, J.: Numerical simulation of pressure distribution in a large steel rolling heating furnace. Anhui University of Technology (2017)
2. Zhang, S.: Improved design of regenerative heating furnace combustion system based on CFD
simulation. Shanghai Jiaotong University (2012)
3. Zhao, X., Zuo, H., Xu, X.: Research on key techniques of augmented reality maintenance
guiding system. China Mech. Eng. 19(6), 678–682 (2008)
4. Schwald, B., Figue, J., Chauvineau, E.: STARMATE: using augmented reality technology for
computer guide maintenance of complex mechanical elements. In: Proceedings e200:
eBusiness and eWork, pp. 17–19. Venice Cheshire Henbury (2001)
5. Liu, L., Jiang, C., Gao, Z., Wang, Y.: Research on real-time monitoring technology of
equipment based on augmented reality. In: Wang, K., Wang, Y., Strandhagen, J., Yu, T. (eds.)
Advanced Manufacturing and Automation VIII, IWAMA 2018. Lecture Notes in Electrical
Engineering, vol 484. Springer, Singapore (2019)
Design and Test of Electro-Hydraulic Control
System for Intelligent Fruit Tree Planter

Ranguang Yin¹, Jin Yuan¹(✉), and Xuemei Liu²

¹ College of Mechanical and Electronic Engineering, Shandong Agricultural University, Tai'an 271018, China
[email protected]
² Shandong Provincial Key Laboratory of Horticultural Machinery and Equipment, Tai'an, China

Abstract. Aiming at the problems of high labor intensity and low automation in fruit tree planting, this paper designs an electro-hydraulic control system for an intelligent fruit tree planter. The system adopts an STM32 as the control unit; through intelligent control of multi-channel relays and a PWM governor, it realizes automatic digging, fertilizer discharge, fertilizer mixing, backfilling and irrigation. Bit rotation speed, system pressure and oil temperature data are collected and displayed using interrupt, ADC, IIC and serial communication techniques. A PID control algorithm matches the digging feed to the system pressure, alleviating the problems of high resistance, high system pressure and high oil temperature, and the PID control process is simulated in MATLAB to obtain the pressure-time curve of the feedback operation. Field test results show that the hydraulic system pressure tends to 10 MPa, the oil temperature stays between 50 and 55 °C, and the total planting time is within 3 min; the system greatly reduces labor intensity and improves planting quality.

Keywords: Fruit tree planting · Automation · STM32 · PID control

1 Introduction

The process of fruit tree planting is complicated: digging, fertilizer discharge, fertilizer mixing, backfilling and irrigation involve great labor intensity, and the degree of automation of existing machinery is low [1]. It is therefore of great significance to automate fruit tree planting. Moreover, high hydraulic system pressure easily increases leakage and oil temperature and shortens the service life of the machine [2]. In planting, digging consumes the most energy; soil quality and depth strongly influence digging resistance, and excessive resistance shows up as a sharp drop in speed and a rise in system pressure and oil temperature, so the digging feed should be adjusted according to the digging resistance and system pressure [3].
In view of the above problems, this paper designs an electro-hydraulic control system for an intelligent fruit tree planter, with an STM32F103ZET6 as the main control unit [4], combined with a PID control algorithm to control the matching relationship between
© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 349–357, 2020.
https://doi.org/10.1007/978-981-15-2341-0_43

the digging feed and the system pressure. The system realizes automatic, intelligent fruit tree planting, reduces labor intensity and improves the service life of the machine.

2 Mechanical Structure and Control Principle of Planting System

The mechanical structure of the planting system is shown in Fig. 1. After the synchronous oil cylinder extends, the quadrilateral lifting mechanism drives the quadrilateral lifting frame down so that the bottom of the outer cylinder rests on the ground. Contraction of the lifting oil cylinder moves the fertilizer discharging device, inner cylinder and screw bits, which are fixed to the lifting frame, vertically along the lifting slide track. The bottom screw bit digs the hole and raises the soil, the upper screw bit mixes the soil and bacterial fertilizer evenly, and the fertilizer discharged by the fertilizer discharging device falls inside the inner cylinder in direct contact with the soil. When excavation and fertilizer mixing are complete, contraction of the synchronous oil cylinder places the guide pipe holding the sapling exactly in the middle of the hole. The sapling is put in place, and the baffle is raised to complete backfilling of the soil and bacterial fertilizer.

Fig. 1. Mechanical structure diagram of the planting system: 1. screw bit 2. diversion pipe 3. baffle 4. outer cylinder 5. quadrilateral lifting mechanism 6. inner cylinder 7. hydraulic motor 8. fertilizer discharging device 9. lifting slide rod mechanism 10. vertical lifting frame 11. quadrilateral lifting frame 12. synchronous oil cylinder 13. lifting oil cylinder 14. crawler chassis frame

The electro-hydraulic control system consists of an automatic planting operation subsystem, a data acquisition subsystem, a PID intelligent feedback subsystem and a hydraulic transmission control subsystem. Through level signals, the system controls the on-off states of multiple relays, thereby controlling the positions of the hydraulic solenoid valves and achieving combined control of the hydraulic actuators. The travel switch group, key group and proximity switch determine the operating state, and the controller completes the corresponding operation in time through the on-off state of the relays. In addition, the slotted photoelectric switch sensor, pressure sensor and temperature sensor are

used for data collection, with IIC driving an OLED display screen. The system intelligently controls the digging feed according to the collected pressure value so that soil and fertilizer are mixed evenly in the shortest time. The control flow chart of the planting operation is shown in Fig. 2.

[Flow chart: start → initialize → detect whether "one-click operation" is pressed → the controller sends job signals → relay operation → single planting operation completed, with branches for emergency stop, manual return of each component to its original position, and whether to continue → stop.]

Fig. 2. Plant operation control flow chart

3 Overall Scheme Design of Control System

3.1 Hardware Design of Control System


The hardware of the system and the connections among its components are shown in Fig. 3. The MCU is an STM32F103ZET6 based on the Cortex-M3 core, which has the advantages of low power consumption and high performance [5]. The relay module provides six channels with interface loads of DC 30 V/10 A, and five multi-channel solenoid valves are used to control the start and stop of the motor and the extension and contraction of the oil cylinders. The travel switches detect the travel information of the operating device, and the key group provides the operation input. The pressure sensor, temperature sensor and slot photoelectric switch transmit the detected information to the controller, which communicates with the OLED display screen through IIC. The PWM governor adjusts the speed of the distributing motor according to the duty ratio of the PWM signal from the controller, so as to control the amount of fertilizer. The pump's positive lead is connected to the COM and NO terminals of its relay, and the relay controls the operation of the load.

[Diagram: the pressure sensor, temperature sensor, key group, travel switch set, slot-type photoelectric switch and proximity switch feed the STM32 microcontroller; the microcontroller drives the pump, motor, synchronous ascending/descending and vertical ascending/descending relays, the OLED display screen and the PWM speed governor, and the relays in turn switch the pump, the corresponding solenoid valves and the distributing motor.]

Fig. 3. Hardware connection structure diagram

3.2 Automatic Planting Operation Subsystem


The automatic planting operation subsystem has two operation modes: manual and automatic. In manual mode, the controller drives the multi-channel relays according to the detected key inputs. The key group includes seven keys: reset, one-key operation, motor start/stop, synchronous rise, synchronous fall, vertical rise and vertical fall. The reset button returns the controller to its initial mode, the one-key operation button starts the automatic control mode, and the other keys control the individual solenoid valves, so that if the automatic operation must be stopped urgently, each part can be returned to its initial position under manual control.
In automatic mode, after the slider on the slide track triggers the travel switch group, a single automatic planting operation is carried out through the relays controlled by the controller. The travel switch group includes the upper travel switch (2), middle travel switch (3), lower travel switch (4), synchronous descending travel switch (5) and synchronous ascending travel switch (6), numbered as in Fig. 4. When the backfilling state is reached, the staff lift the baffle, the proximity switch (1) is triggered, and the motor and water pump are switched on. The sensor layout in automatic mode is shown in Fig. 4.

Fig. 4. Schematic diagram of sensor layout in automatic mode

3.3 Data Acquisition Subsystem


The microcontroller collects pressure, temperature and speed data; the structure of the data acquisition subsystem is shown in Fig. 5. The maximum voltage supported by a channel of the ADC module is 3.3 V, so a voltage-dividing circuit is used to halve the collected voltage.
The measuring range of the temperature sensor is 0–150 °C and its output voltage is 1–5 V, so the relationship between temperature y1 and voltage x1 can be written:

y1 = 37.5 (x1 - 1)

The measuring range of the pressure sensor is 0–30 MPa and its output voltage is 0–5 V, so the relationship between pressure y2 and voltage x2 is:

y2 = 6 x2

Temperature and pressure are collected through the ADC module; the relationship between the ADC value and the voltage x is:

x = 3.3 × ADC / 4096

The slotted photoelectric switch sensor supports a detection frequency of up to 1 kHz. A slotted disc with 30 holes is installed on the motor shaft. While the motor rotates, the timer interrupt function measures the time t (in seconds) and the external interrupt counts the pulses z; the external interrupt has the higher priority. Since z pulses correspond to z/30 revolutions, the rotational speed in revolutions per minute is:

n = 60 × (z/30) / t = 2z / t

[Diagram: the temperature and pressure sensors reach the MCU through ADC channels, and the slot-type photoelectric switch sensor through the timer and external interrupts; the MCU drives the OLED over IIC communication and the computer over serial communication.]
Fig. 5. Data acquisition subsystem structure diagram

3.4 PID Intelligent Feedback Subsystem


Real-time closed-loop control of pressure is a technical problem that frequently arises in engineering. Pressure closed-loop control is usually realized with a PID control algorithm, which is relatively simple, robust, stable and easy to tune [6].
In this paper, a position-type PID control algorithm is applied to the excavation feed to regulate the optimal working pressure in real time and realize closed-loop pressure control. The target pressure is compared with the collected system pressure, and the PID algorithm feeds a PWM control signal with the corresponding duty cycle back to the controller to drive the lifting hydraulic cylinder, intelligently adjusting the excavation feed; the PID feedback control scheme is shown in Fig. 6 [7]. The position-type PID control equation is as follows:

out = Kp·Ek + Kp·(T/Ti)·Σ_{k=0}^{n} Ek + Kp·(Td/T)·(Ek - Ek-1) + out0

[Block diagram: the target pressure is compared with the real-time system pressure; the PID controller outputs a duty cycle to the relay and solenoid valve, whose output closes the pressure loop.]
Fig. 6. PID feedback control scheme

CCR is the comparison value and ARR is the loading value, and the duty cycle is the ratio of the two: the comparison value sets the duty cycle, while the loading value sets the PWM period. PSC is the prescaler (pre-frequency division) coefficient; here it is set to 0, and with a 72 MHz clock the PWM frequency is:

f = 72 MHz / ((ARR + 1)(PSC + 1))

The rationality of the design is verified through MATLAB Simulink simulation [8]. The Simulink model is shown in Fig. 7 and the post-processing curve in Fig. 8: over time the system pressure under PID control coincides with the target pressure, achieving the expected design goal.

Fig. 7. Simulink simulation model

Fig. 8. Pressure-time simulation curve of feedback operation



3.5 Hydraulic Transmission Control Subsystem


The hydraulic transmission control subsystem drives the hydraulic components: the relays control the energized state of the solenoid valve coils. The hydraulic multi-way valve is shown in Fig. 9. Extension and retraction of the synchronous and vertical hydraulic cylinders require four solenoid valves, which must be placed in series with a shunt solenoid valve; the working circuit is shown in Fig. 10, where four rectifier diodes prevent a single conducting relay from affecting the other lines.

[Circuit: the 12 V supply feeds the shunt solenoid valve, and through it the synchronous ascending, synchronous descending, vertical lift and vertical drop solenoid valves; each valve is switched by a relay between its COM and NO terminals.]

Fig. 9. Multi-way valve
Fig. 10. Solenoid valve working circuit diagram

The hydraulic transmission oil circuit is shown in Fig. 11. The left three-position four-way electromagnetic change-over valve controls the extension and retraction of the synchronous hydraulic cylinders, whose inlets and outlets are connected in parallel to achieve synchronous movement; the right one controls the vertical rise and fall of the vertical lifting hydraulic cylinder. The two-position four-way electromagnetic change-over valve realizes one-way rotation of the hydraulic motor: when its solenoid is not energized, the oil returns to the tank from port P to port T; when it is energized, the oil drives the hydraulic motor from port P to port A.

Fig. 11. Hydraulic transmission circuit schematic diagram: 1. hydraulic oil tank 2. filter 3. hydraulic pump 4 (11). three-position four-way electromagnetic change-over valve 5. two-way hydraulic lock 6. synchronous hydraulic cylinder 7. vertical lifting hydraulic cylinder 8. hydraulic motor 9. speed regulating valve 10. two-position four-way electromagnetic change-over valve 12. overflow valve 13. cooler

4 Test and Result Analysis

Field experiments were carried out in Hedong District, Linyi; the machine field test is shown in Fig. 12, with the machine operation status in Fig. 12(a) and the fruit tree planting effect in Fig. 12(b). Planting time in automatic mode is within 3 min and in manual mode within 3 min 30 s, so the automatic mode is more efficient than the manual mode; the system greatly reduces labor intensity and automates fruit tree planting.

(a) Machine operation status diagram (b) Effect of fruit tree planting in field

Fig. 12. Machine field test

The temperature and pressure values collected during operation are shown in Fig. 13. The average system pressure (Fig. 13(a)) tends to 10 MPa and the average oil temperature (Fig. 13(b)) to 50–55 °C. Under PID control the system pressure tends to the optimal working pressure and the oil temperature stays in the normal range, which is of great significance for extending the service life of the machine.

[Fig. 13 plots pressure (MPa) and temperature (°C) against time (s) over 0–60 s.]

(a) Average pressure trend (b) Average temperature trend

Fig. 13. Collection values of temperature and pressure during operation period

5 Conclusions

This paper designs an electro-hydraulic control system for an intelligent fruit tree planter that automates fruit tree planting, improving planting speed and quality and reducing labor intensity. A PID control algorithm intelligently adjusts the match between the digging feed and the system pressure, alleviating the problems of high system pressure and high oil temperature. Field test results show that the hydraulic system pressure tends to 10 MPa, the oil temperature stays between 50 and 55 °C, and the total planting time is within 3 min.

Acknowledgements. This work is supported by the National Natural Science Foundation of China (51675317), the Key R&D Plan of Shandong Province (2017GNC12108) and the National Key R&D Program of China (2017YFD0701103-3).

References
1. Liu, P.: Development and experimental study on the recipe of original filling. Shandong
Agricultural University (2017)
2. Tang, K.: Excessive high temperature of oil liquid of machine tool hydraulic system and
preventive measures. Mod. Manuf. Technol. Equip. 2006(06), 68–70+79 (2006)
3. Kudla, L.: Influence of feed motion feature on small holes drilling process. J. Mater. Process.
Technol. 109(03), 236–241 (2001)
4. Sun, Y., Shen, J., et al.: Design and test of monitoring system of no tillage planter based on
cortex M3 processor. Trans. Chin. Soc. Agric. Mach. 49(08), 50–58 (2018)
5. Huang, W., Bian, Y., Feng, Q., et al.: Crop growth parameters monitoring system based on
Cortex M3. J. Agric. Mech. Res. 37(02), 203–205 (2015)
6. Song, S., Ruan, Y., Hong, T., et al.: Self-adjustable fuzzy PID control for solution pressure of
pipeline spray system in orchard. Trans. Chin. Soc. Agric. Eng. 27(06), 157–161 (2011)
7. Wang, Q., Cao, W., Zhang, Z., et al.: Location control of automatic pick-up plug seedlings
mechanism based on adaptive fuzzy-PID. Trans. CSAE 29(12), 32–39 (2013)
8. Qiu, C., Liu, C., Shen, F., et al.: Design of automobile cruise control system based on Matlab
and fuzzy PID. Trans. Chin. Soc. Agric. Eng. 28(06), 197–202 (2012)
Lean Implementing Facilitating Integrated
Value Chain

Inger Gamme¹, Silje Aschehoug², and Eirin Lodgaard¹(✉)

¹ Department of Product and Process Development, SINTEF Manufacturing, Vestre Toten, Norway
{inger.gamme,eirin.lodgaard}@sintef.no
² Department of Material Technology, SINTEF Manufacturing, Vestre Toten, Norway
[email protected]

Abstract. High global competition forces manufacturers to continuously optimize their processes. Identifying and eliminating waste and streamlining the value chain are the main objectives of lean, and many companies therefore use lean to achieve an optimized value chain. However, the interfaces between the process steps are often forgotten. Based on empirical data from five organizations in different industries, the following question is addressed: in what way does lean contribute to achieving value chain integration?

Keywords: Lean · Integration · Value chain · Visual management

1 Introduction

With increasing global competition, manufacturers are forced to continuously seek ways to improve their production processes to sustain competitiveness. A worldwide trend among leading companies is to implement lean to streamline processes, including the interfaces between process steps. For many organizations, significant improvements can be made through effective and precise information flows within the company, and in large, complex organizations mastering these factors will play an increasingly important role in the years to come.
Lean, which has its origins in the Toyota Production System (TPS), has been researched from many angles over the years. Still, it remains an open question whether companies succeed in improving the integration of value chains by implementing lean as a possible driver of improved interaction and collaboration. The overall value creation in a business system may be described as links joined in a chain, in which each link depends on the other connected links to ensure the total chain strength [1]. The purpose of this paper is therefore to investigate in what way lean contributes to achieving value chain integration. Five companies from different sectors have been studied to provide more insight into this field.

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 358–365, 2020.
https://doi.org/10.1007/978-981-15-2341-0_44

2 Theoretical Background
2.1 Integrated Value Chain
The term value chain has been used in a broad range of contexts and meanings [2] and may be defined as "a system of interdependent activities, which are connected by linkages" [3]. Organizations often experience difficulties in the handover between two consecutive process steps, caused by aspects such as weak standardization, documentation or systemization, the presence of functional silos, or differing cultures [4, 5]. Furthermore, if the infrastructure and written procedures are inflexible or cumbersome, employees may create their own parallel routines and contribute to unreliable processes. To achieve a well-managed value chain with aligned and balanced intra-organizational customer demand and supply capabilities, all value-creating processes must act together [6].

2.2 Lean
Lean, in a historical perspective, has its roots in the Japanese industrial success
after World War II [7, 8], when Toyota adapted, customized and developed production
systems to fit the Japanese culture and context. Today, lean is mainly known as an
operational management strategy derived from the Toyota Production System (TPS) in
the early 1980s. These lean manufacturing principles and techniques have later been
adapted to other contexts.
The core of lean may be described in the following five principles [9]: (1) specifying
value creation, (2) identifying the value streams and eliminating waste, (3) creating
flow in the production line from supplier to customer, (4) creating pull by
allowing customer demand to be the driver, and (5) striving to achieve the four
previous principles through a systematic approach towards continuous improvement.
Factors that are emphasized for succeeding with lean include management
commitment, standardization, visualization and employee involvement [10, 11]. These
factors are also referred to as influencing value chain integration [5, 12, 13].

2.3 Factors Influencing Lean and Integration

Management Commitment – has been emphasized as a critical success factor when
implementing and working with lean, lean production systems based on TPS, and
integrated value chains [4, 14–16]. Another study [17] indicates that middle management
and workers mainly point to limited support and commitment from management
as reasons for limited success in implementation. Managers, however, attribute limited
success to shortcomings in methods and technical systems, and do not acknowledge the
importance of their own support and commitment.
Standardization – is important to deliver consistent customer value from a multidisciplinary
workforce with a shared knowledge standard. Standards prescribe how
employees should act and consequently secure coordination of the work [18]. Hence,
standardization may enable coordination and is thereby a mechanism with the
potential to drive integration. Complying with standardized systems is thus essential to
achieve integrated value chains [13]. Between two process steps, the interdependencies
can be characterized as a producer/consumer relationship: what the preceding
process step produces must be usable by the subsequent process step. Standardizing
the work may therefore be essential to ensure that user expectations are fulfilled [19].
Visualization – tools are well known to improve the commitment of employees,
enhance internal and external communication, and improve collaboration and integration
[12, 20, 21]. Furthermore, they may enable a better shared understanding
at any time, as well as create process discipline by contributing to enhanced process
transparency.
Employee Involvement – is one of 10 factors that characterize different dimensions
of a lean production system [22]. In Scandinavia specifically, lean has developed in a
softer direction, in which employee participation is emphasized over specific tools and
techniques [23].

3 Research Design

The data presented in this article stem from a larger research project that aims to create
new insights on lean in the Norwegian context. Data from five case companies are
analyzed and presented here. Table 1 describes the case companies and the data collection.

Table 1. Case company and data collection


Case company | Main products | Type of informants | No. interviews
A | Wafer production | Management, lean coordinators, operators | 11
B | Rolled and extruded products in aluminium | Managers, staff members, union representatives | 11
C | Life and pension insurance, insurance assets, banking | CFO, customer managers, lean managers, staff members, union representatives | 8
D | Billet production, extruded and welded products in aluminium | Lean managers, production managers, operators, union representatives | 16
E | Telecommunications and data services | Managers, customer consultant, union representatives | 13

A research protocol was established describing the methods for data collection,
including the interview guide for semi-structured interviews to gain an understanding
of the informants' experiences and reflections on the topic [24]. By collecting the
same information from multiple sources, such as documents and direct observations,
data triangulation was obtained [25]. Extensive plant tours were made in all companies to
visually observe the effects of lean in the organization. All interviews were recorded,
transcribed and anonymized. To identify issues and themes on how lean influences
value chain integration, the data were coded into categories and analyzed
in tabular form.

4 Results and Discussion

Five case companies were studied to discover in what way lean contributes to achieving
integration. Table 2 describes the companies' main approach to lean implementation,
including the most commonly used tools.

Table 2. Lean approach and tools


Case company | Implemented lean tools
A | Process stabilization, Six Sigma, team organization, team boards, continuous improvements
B | Standard operation descriptions, 5S, removed team leader role, visual management
C | 70 transformation projects during a 12-week period; follow-up by use of team boards and continuous improvement
D | New KPIs, team boards, operator-controlled maintenance, 5S
E | Cross-functional teams and value stream mapping

4.1 Management Commitment


In company A, it was reported that lean implementation led to more mutual understanding
between management and employee representatives. The management now communicates
clear requirements and expectations to the employees on how they can contribute
and establish standards to improve competitiveness. Company B always stressed the
importance of having available management, and used visual management as a
method to be more available for shop-floor operators. This was mainly perceived
positively among the employees, but according to one of the informants: "Visible
management is positive, but because of the shift scheme, it can take months before
you see the leaders. Sometimes they are there all the time, sometimes they are not there
at all". Company C's management policy included creating a common platform and a
common language, so that everyone adhered to the same values. Having the same type
of management standard contributed to building culture and improved performance
management in this company. In company D, the managers had a high lean focus and
dedication. They emphasized the importance of someone being responsible and being
the driving force behind the improvement work. They created a new tool for visual
management, the Walk-Observe-Communicate (WOC) routine: a manager
and a representative from the production team walk through the production area, make
observations based on a checklist, and communicate the findings with those who are
affected by them. However, according to some employees, some managers
consider this routine less important, and the follow-up is not consistent throughout the
company. In company E, the management team focused on actions to create a short
distance from words to action. In the beginning, they worked a lot with what triggers
people; there was a large difference between sellers (action-oriented) and engineers
(subject-oriented). In the lean implementation process, they brought these opposites
together, as a social experiment, to increase cross-functionality. A large gap in
knowledge between these groups, and a lot of (wrong) assumptions, were experienced in
this process (Table 3).

Table 3. Management
Case company | Tools and activities
A | Clear and demanding
B | Clearly anchored. "This is not something one wins once; it is important that continuity is in place. It is important not to 'tick off' prematurely"
C | Focus on creating a common platform and a common language
D | Management-based anchoring over a long time period
E | Initiated by top management and managed as a project. "We have changed the approach along the way. We started with individual processes, but the experience was that we had a one-time effect that was lost on the way. We then focused more on Lean Academy. But the next step is to get a lean culture"

4.2 Standardization
Table 4 presents a summary of the degree of standardization per company. There was a
high degree of standardized work processes in company A. In this company, the
operators were involved in improvement projects, for example by improving standards.
The management perceived that having an operator coach another operator
would have more impact than if, for instance, management or an engineer did the same
work. Management in company B emphasized standardization as the foundation for
continuous improvement processes. The company had different types of processes, and
experienced that it was easier to achieve an understanding of the need to standardize
in processes where direct control of the product was impossible. The implementation of
team boards was the most important new element at company C. They found that
standardization was easier to accomplish with new employees, as opposed to changing
habits among older employees. Company D had a high degree of standardized systems,
but "we have operators who ask critical questions, which is positive, but it could be
challenging to establish standard work sets". In company E, some standards were
available, such as common regulations and routines. The employees had slightly different
reactions to the process of implementing new standardized systems: "Initially, we made
many changes, but then it stopped a bit. So, we have focused on what we can do
ourselves. But it is difficult to maintain that focus. The overall goal was to create a
sense of community, which is critical to succeed".

Table 4. Standardization
Case company | Degree of standardization
A | High degree of standardization
B | Unalterable principles, but possibilities for local adjustments; generally a high degree of standardization at each production site
C | Standardized roll-out, but a large degree of local adaptation of boards
D | High degree of standardization
E | Some standards, such as common regulations, work procedures and routines

4.3 Visualization Tools


Researchers argue for the importance of having common arenas for information sharing
and interaction to improve value chain integration, and of focusing on quality
rather than quantity of the interactions [26].
Company A focuses on visual management by using boards which include, for
instance, information about the company's financial performance, HSE, improvement
projects, other important projects, and team and production performance. Teams get
continuous feedback on their performance, as shift performance and KPIs are presented
on the boards. Company B did not have a standard approach to team-board meetings,
although they had adopted a team-based organization with a focus on stability and
standardization. In company C, one of the important new ways of working was
the team-board meetings. At one point, it was a challenge that people did not show up
to these meetings. As a result, the CEO decided to attend all team-board
meetings to demonstrate their high priority. In company D, a new version
of the team board with different content and a different team composition was implemented.
Earlier, a representative from the process step ahead was part of the team.
The informants from this team pointed out that this led to less process transparency and
more complaints between the process steps. Company E established a "war room" for
visual communication and to support fast and fact-based decisions.

4.4 Employee Involvement


Overall, all companies emphasized the importance of a high degree of employee
involvement, which is typical when working with lean in the Scandinavian context
[23].
Company A stressed that a minimum of 50% of the participants in improvement projects
should be operators. At least 50 employees were involved in the lean and improvement
work. These employees acted as positive ambassadors for change management and
allowed for knowledge sharing throughout the value chain. The standardization process
in company B was largely based on the involvement of operators. As one of the
informants said: "What we appreciate the most about the lean implementation is
the involvement of the employees". In company C, the department managers could choose
how much they involved employees; however, employee involvement was emphasized
as important by the top management. Company D had a culture for allowing everyone
to express their own opinion. When the company was bought by German owners, they
experienced a negative change in employee involvement: a new lean business system
was implemented, but with few employees involved in the process and with
minimal possibilities for adjusting the system to the Norwegian context. In company E,
they reported increased employee involvement after introducing lean. Another interesting
result is the use of value stream mapping and its positive influence on
interaction and collaboration between consecutive process steps. By
addressing challenges that affect other process steps, the people involved gain a better
understanding that all value-creating processes must act together. This is obtained by
using cross-functional teams representing all the affected process steps, collaborating
on improvements to achieve flow in the value chain.

5 Conclusions

In this article, empirical findings from the mapping of five different organizations have been
presented, with focus on their experiences with lean and how it influences value
chain integration. When comparing the five companies, we notice differences in the degree
of lean implementation; however, all the organizations had management that
emphasized employee involvement and the importance of visual management.
There was varying use of standardization, and autonomy with respect to standards
may be a possible threat to value chain integration. The extent of use of visualization
tools differed among the companies, and such tools were mostly used at the company
with the longest lean experience. Furthermore, this study indicates that the use of cross-functional
teams working with value stream mapping across several process steps has a
positive influence on collaboration on improvements to achieve streamlined processes,
and on the understanding that all value-creating processes must act together. To
enhance process transparency, this study shows that there is a need for a common
arena, such as team-board meetings, where people working at different process steps
can work together on improvements.
Further research should aim to attain more empirical results from each sector.

Acknowledgements. The research was funded by the Research Council of Norway and the
participating company. Informed consent was obtained from all individual participants included
in the study.

References
1. Walters, D., Rainbird, M.: The demand chain as an integral component of the value chain.
J. Consum. Mark. 7, 465–475 (2004)
2. Feller, A., Shunk, D., Callarman, T.: Value chains versus supply chains. BP trends, pp. 1–7
(2006)
3. Porter, M.E.: Competitive Advantage: Creating and Sustaining Superior Performance. Free
Press, New York (1985)
4. Basnet, C., Wisner, J.: Nurturing internal supply chain integration. Oper. Supply Chain
Manag. 5, 27–41 (2012)
5. Pagell, M.: Understanding the factors that enable and inhibit the integration of operations,
purchasing and logistics. J. Oper. Manag. 22, 459–487 (2004)
6. Stank, T.P., Keller, S.B., Daugherty, P.J.: Supply chain collaboration and logistical service
performance. J. Bus. Logist. 22, 29–48 (2001)
7. Liker, J.K., Hoseus, M.: Toyota Culture, The Heart and Soul of the Toyota Way. McGraw-
Hill, New York (2008)
8. Liker, J.K., Meier, D.: The Toyota Way Fieldbook: A Practical Guide for Implementing
Toyota’s 4P’s. McGraw-Hill, New York (2006)
9. Liker, J.K.: The Toyota Way: 14 Management Principles from the World’s Greatest
Manufacturer. McGraw-Hill, New York (2004)
10. Womack, J.P., Jones, D.T.: Lean Thinking: Banish Waste and Create Wealth for Your
Corporation. Simon & Schuster, New York (1996)
11. Liker, J.K., Morgan, J.M.: The Toyota way in services: the case of lean product
development. Acad. Manag. Perspect. 20, 5–20 (2006)
12. Lindlöf, L., Söderberg, B.: Pros and cons of lean visual planning: experiences from four
product development organisations. Int. J. Technol. Intell. Plan. 7, 269–279 (2011)
13. Bowersox, D.J., Closs, D.J., Stank, T.P.: 21st Century Logistics: Making Supply Chain
Integration a Reality (1999)
14. Chen, I.J., Paulraj, A.: Towards a theory of supply chain management: the constructs and
measurements. J. Oper. Manag. 22, 119–150 (2004)
15. Lawrence, P.R., Lorsch, J.W., Garrison, J.S.: Organization and environment: managing
differentiation and integration. Division of Research, Graduate School of Business
Administration, Harvard University Boston, MA (1967)
16. Netland, T.H., Schloetzer, J.D., Ferdows, K.: Implementing corporate lean programs: the
effect of management control practices. J. Oper. Manag. 36, 90–102 (2015)
17. Lodgaard, E., Aschehoug, S.H., Gamme, I.: Barriers to continuous improvement:
perceptions of top managers, middle managers and workers. In: Procedia CIRP: 48th
Conference on Manufacturing Systems, CIRP CMS 2015 (2015)
18. Mintzberg, H., Quinn, J.B., Ghoshal, S.: The Strategy Process, European edn. Prentice Hall
(1995)
19. Malone, T.W., Crowston, K.: The interdisciplinary study of coordination. ACM Comput.
Surv. 26, 87–119 (1994)
20. Bateman, N., Philp, L., Warrender, H.: Visual management and shop floor teams–
development, implementation and use. Int. J. Prod. Res. 54, 1–14 (2016)
21. Bititci, U., Cocca, P., Ates, A.: Impact of visual performance management systems on the
performance management practices of organisations. Int. J. Prod. Res. 54, 1–23 (2015)
22. Shah, R., Ward, P.T.: Defining and developing measures of lean production. J. Oper. Manag.
25, 785–805 (2007)
23. Sederblad, P.: Scanias produktionssystem: en framträdande modell i Sverige, Liber (2013)
24. Kvale, S.: Det kvalitative forskningsintervju. Gyldendal akademisk (1997)
25. Yin, R.K.: Case Study Research: Design and Methods. Sage Publications (2014)
26. Ayers, D.J., Gordon, G.L., Schoenbachler, D.D.: Integration and new product development
success: the role of formal and informal controls. J. Appl. Bus. Res. (JABR) 17, 133–148
(2011)
Developing of Auxiliary Mechanical Arm
to Color Doppler Ultrasound Detection

Haohan Zhang1, Zhiqiang Li2(✉), Guowei Zhang3, and Xiaoyu Liu4

1 Faculty of Mechanical and Electrical Engineering, Yunnan Open University, Kunming, China
[email protected]
2 ZHITUO Intelligent Technology Co., Ltd., Hangzhou, China
[email protected]
3 SAIC Volkswagen Automotive Co., Ltd., Shanghai, China
[email protected]
4 School of Culture Tourism and International Exchange, Yunnan Open University, Kunming, China
[email protected]

Abstract. A Color Doppler Ultrasound-test auxiliary mechanical arm was
developed in order to reduce the labor intensity of doctors using the Color Doppler
Ultrasound technique, by reducing the demands on the operator. In this study,
the functional requirements of the Color Doppler Ultrasound detection
auxiliary mechanical arm are first analyzed, then the design of the
mechanical arm is carried out, and finally ADAMS software is used for
simulation analysis of the mechanical arm. The results show that, by
installing 9 force sensors between the ultrasonic probe and the end effector,
the contact state between the ultrasonic probe and the human body can be better
detected. The joints are subjected to larger forces at start-up, especially at the
root joint position. The stress state of the mechanical arm can be improved by
adjusting the start-up duration and reducing the acceleration at start.

Keywords: Robot operating system · Robot arm · Simulation analysis

With the rapid development of ultrasound medicine and the expansion of its scope of
clinical application, the workload of ultrasound departments has increased dramatically.
Ultrasound department doctors work in an unphysiological posture on a
long-term basis, in which the body is in an overloaded state; meanwhile, during
clinical diagnosis they are exposed to light sources, infection, radiation, noise, mental
strain and other physical, chemical and biological factors [1, 2]. These can result in
various occupational injuries that have seriously affected the lives of doctors in ultrasound
departments, and a new solution is needed to alleviate or resolve the present situation [3].
In 1999, the French professor Pierrot and his team first used an industrial robot
(PA-10) to break through the limitations of manual ultrasound scanning [4]. Since then,
similar ultrasonic detection auxiliary robot systems have gradually entered use in
clinical diagnosis, and many domestic and international scholars and
engineers have put effort into developing auxiliary mechanical arm systems for
Color Doppler Ultrasound detection [5]. This study aims to develop a robot mechanism

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 366–372, 2020.
https://doi.org/10.1007/978-981-15-2341-0_45
to substitute for manual operation of the probe of the color Doppler ultrasound testing
equipment commonly used in hospitals, with a view to reducing the labor intensity of
the operator and improving the detection accuracy of the color Doppler ultrasound
examination.

1 Function Requirements Analysis

Health care workers need to hold an ultrasonic probe throughout the working procedure.
Because the body epidermis and body fat attenuate the ultrasound, for heavy
patients with more body fat, operators often need to press the subcutaneous fat aside,
and use cross cutting, vertical cutting, oblique cutting and other operating techniques
to examine the patient's internal organs. These operations often require the Color
Doppler Ultrasound operator to expend a considerable amount of physical strength
to complete the day's work tasks [6].
A commonly used Color Doppler Ultrasound probe is shown in Fig. 1; it is an
abdominal cavity detection probe, used for detecting the internal organs of the
abdominal cavity. Operators need to coordinate the shoulder and elbow joints
to secure the positioning of the probe in the detection area, and use the cooperation of the
forearm and wrist to realize swinging, rotation, pressing and other operations, in order
to achieve clear image monitoring or capturing of the internal organs of the abdominal
cavity. Especially for patients with higher body fat (obese), the operator needs to exert
strong pressure to spread the fat under the skin, so that the ultrasound can penetrate
the skin, reach the internal organs and return to the ultrasound probe to achieve
detection of the state of the internal organs.

Fig. 1. Color Doppler Ultrasound probe

The operation areas on the patient's body for the probe of the Color Doppler Ultrasound
instrument can be summarized mainly as: under the right rib, under the xiphoid, under
the left rib, and within the right rib. Each position targets different organs, and the manual
operation method, the applied strength and the use of the probe have specific requirements [6].

There are mainly three degrees of freedom in the freedom distribution of the
human forearm and wrist, namely: the rotational freedom of the forearm, the pitching
freedom of the wrist, and the deflection freedom of the wrist. Through the interaction of
these three degrees of freedom, the previously introduced actions of swinging, turning
and cutting can be realized. Substituting a mechanical arm for the medical operator's
ultrasound testing requires the above-mentioned forearm and wrist degrees of freedom
to be reproduced exactly by the mechanical arm.
Therefore, for the design of the auxiliary mechanical arm for color Doppler ultrasound
equipment, this study proposes six degrees of freedom plus an end clamping freedom. The
freedoms of the mechanical arm are designed to simulate the operator's shoulder, elbow
and wrist freedoms, while the clamping freedom is used to simulate the freedom
of the hand. The end gripper of the mechanical arm is integrated with multiple
force sensors to detect the magnitude of the force fed back to the end gripper by the Color
Doppler Ultrasound probe, so as to achieve force control at the end of the probe and
avoid discomfort caused by the mechanical arm during the detection procedures.
In the examination process, in order to prevent respiratory movement artifacts from
affecting the image, the patient is required to hold his or her breath during the
detection. The operator must also not press too hard; otherwise areas other than the
lesions (such as the skin or chest wall) can also be shown in red and be mistaken for
high stiffness. This adjustment of force intensity can be handled well by monitoring the
force-sensor data of the mechanical arm.
The AI control system of the mechanical arm analyzes the position information,
coordinate information and operation state information collected by the visual inspection
system, resolves the motion trajectories of the joints of the mechanical arm and the
motion mode of the end joints, and thereby completes the overall movements of the
mechanical arm and achieves the specific operation of the ultrasonic
probe.
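Resolving joint motion from a target probe position, as described above, is an inverse-kinematics problem. As a minimal illustrative sketch, a planar two-link reduction (not the paper's actual six-degree-of-freedom solver; the function name and link lengths are assumptions) shows how joint angles can be computed in closed form:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Resolve joint angles for a planar two-link arm whose tip must
    reach (x, y). Returns (shoulder, elbow) angles in radians for the
    elbow-down solution; raises ValueError if the target is out of reach."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)  # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(c2)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

For example, with unit links a target at (2, 0) is reached with both joints at zero, while (0, 2) requires the shoulder at 90°; a full spatial arm would chain such solutions over all six joints.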

2 Design of the Mechanical Arm

The mechanical arm takes the human arm as its prototype: its joint configuration
completely simulates the way degrees of freedom are distributed in a human arm. The
shoulder of the mechanical arm has two degrees of freedom, namely rotation and
deflection; the elbow has two degrees of freedom, namely rotation and deflection; and
the wrist has two degrees of freedom, namely rotation and deflection. The joints adopt
a modular design: the structural form and transmission scheme of each rotary joint are
exactly the same. Nevertheless, according to the load condition of each joint, the
specific structural sizes and configuration parameters differ slightly (Fig. 2).

Fig. 2. Construction of the mechanical arm

The deflection joints of the mechanical arm are side-mounted and directly driven by a
servo motor with an integrated harmonic reducer. The average accuracy error of the
harmonic reducer is no more than 10 arc minutes, so the combination of servo motor
and harmonic reducer fully meets the positioning accuracy requirements of the end probe.
The rotating joints of the mechanical arm are directly connected and directly driven by
a servo motor with an integrated harmonic reducer; the coaxiality between the two arm
segments is ensured by high-precision rolling bearings at both ends, which realize a
stable connection between the joints.
The end clamping device of the mechanical arm is fitted with force sensors; through
the mutual cooperation of the force sensors and the visual sensors, autonomous control
of the mechanical arm is achieved, preventing secondary injury to the patient caused by
the movements of the mechanical arm.
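The modular joint scheme described above (identical structure and transmission, with per-joint sizing) can be expressed as a shared configuration record; all field names and numeric values below are illustrative assumptions, not the arm's actual specifications:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class JointModule:
    """One modular rotary joint: servo motor plus integrated harmonic
    reducer. Structure and transmission scheme are shared across joints;
    only the sizing parameters differ according to the load condition."""
    name: str
    kind: str                       # "deflection" (side-mounted) or "rotation" (direct)
    reduction_ratio: int            # harmonic reducer ratio (illustrative)
    rated_torque_nm: float          # sized per joint load (illustrative)
    max_error_arcmin: float = 10.0  # harmonic reducer accuracy bound

# Heavier root joint vs. lighter wrist joint share the same record type.
shoulder_pitch = JointModule("joint2", "deflection", 120, 80.0)
wrist_roll = JointModule("joint6", "rotation", 100, 12.0)
```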

3 Simulation of the Mechanical Arm

3.1 Detection Simulation


Figure 3 shows the detection simulation of the mechanical arm testing the patient in
substitution for the medical staff. By simulating various physiological indicator data of
the human body in the mannequin, elasticity, stiffness and other properties of the
human body are reproduced.
From the depth of intrusion between the end probe of the mechanical arm and
the human skin, the contact force between the two is calculated using elastic
mechanics formulas, based on the stiffness characteristics and the contact angle, and
is fed back to the force sensors on the mechanical arm.
The simulated arm is used to examine the human abdominal cavity. The initial
position of the end of the mechanical arm is determined visually; the arm then
drives the color Doppler ultrasound probe to move down and determines its own state
by means of force sensor signal feedback. The detection height and angle are adjustable
through the threshold settings of the system, realizing detection techniques
such as cross cutting and vertical cutting.
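The intrusion-depth-to-force computation described above can be approximated by a simple penalty contact model; this is a hedged sketch, assuming a linear spring law and illustrative parameter names (the paper's exact elasticity formula is not given here):

```python
import math

def contact_force(depth_m, stiffness_n_per_m, contact_angle_rad):
    """Estimate the probe-skin contact force from the intrusion depth.
    Linear penalty model: F_normal = k * depth; the component along the
    probe axis is F_normal * cos(contact_angle). No contact (depth <= 0)
    yields zero force, as the probe is not touching the phantom."""
    if depth_m <= 0.0:
        return 0.0
    f_normal = stiffness_n_per_m * depth_m
    return f_normal * math.cos(contact_angle_rad)
```

With an assumed phantom stiffness of 500 N/m, a 10 mm intrusion at normal incidence gives about 5 N, which would then be compared against the sensor feedback.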

Fig. 3. Scene simulation of the auxiliary mechanical arm for Color Doppler ultrasound

Nine force sensors are installed between the ultrasonic probe and the end effector to
detect the contact state between the probe and the human body, as shown in Fig. 4.

(Sensor positions F1–F9)
Fig. 4. Indication of integrated position of the probe in a Color Doppler ultrasound equipment
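Given the nine readings F1–F9, the contact state can be summarized by the total force and the force-weighted centroid over the sensor layout; an off-center centroid suggests tilted or partial contact. This is a sketch under assumed names, and the 3 × 3 sensor coordinates used in the example are illustrative:

```python
def contact_state(readings, positions):
    """Summarize probe contact from force sensor readings.
    readings: nine forces (N); positions: matching (x, y) sensor
    coordinates (m). Returns (total_force, centroid) where centroid is
    the force-weighted mean position, or None when no contact is felt."""
    total = sum(readings)
    if total <= 0.0:
        return total, None
    cx = sum(f * p[0] for f, p in zip(readings, positions)) / total
    cy = sum(f * p[1] for f, p in zip(readings, positions)) / total
    return total, (cx, cy)
```

For uniform readings on a symmetric grid the centroid sits at the grid center, indicating flat, full contact of the probe face.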

3.2 Dynamics Simulation


The mechanical arm operating system is the executive mechanism of the overall system
and the part in direct contact with the patient. Through an extensive understanding
of the probe operation method and an in-depth study of the mechanical arm, it
was found that optimizing the key parameters and characteristics of the mechanical arm
can effectively increase the smoothness of operation and improve the patient's medical
satisfaction. Therefore, a dynamics simulation of the mechanical arm is performed, as
shown in Fig. 5.
ADAMS dynamics simulation software is utilized for the simulation. The Lagrange
equations of the first kind are used to establish the system's dynamic equations, with
the Cartesian coordinates and Euler angles of the centroid of each rigid body used as
generalized coordinates, i.e.

q_i = [x, y, z, ψ, θ, φ]^T,    q = [q_1^T, q_2^T, …, q_n^T]^T
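For reference, the Lagrange equations of the first kind referred to above take the following constrained form in terms of the kinetic energy T, the constraint equations Ψ, the Lagrange multipliers λ and the generalized forces Q (the notation here is assumed, chosen to match the generalized coordinates q above):

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\!\left(\frac{\partial T}{\partial \dot{q}_j}\right)
- \frac{\partial T}{\partial q_j}
+ \sum_i \lambda_i \frac{\partial \Psi_i}{\partial q_j} = Q_j,
\qquad \Psi(q, t) = 0
```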

Fig. 5. Dynamics simulation results

The process is the dynamics course of the mechanical arm moving from its initial position to the visually positioned location of the first detection position on the human body. At first the mechanical arm is in its initial position; after receiving the motion signal, the arm lifts up and simultaneously retracts fully, then the first joint of the mechanical arm rotates 90° towards the patient and the second joint opens fully, so that the end probe moves exactly to the patient's affected area. During the inspection process, the end of the mechanical arm is monitored through image vision and the probe, which are used to determine the pressure exerted by the probe; once the strength exceeds the threshold of any one of the detection points, the mechanical arm returns to the origin along the initial path, to prevent the arm from harming the patient.
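The safety logic described above — retreat as soon as any of the nine sensor readings exceeds its limit — can be sketched as follows; the limit values and function names here are illustrative assumptions, not the controller's actual interface:

```python
SAFE_LIMITS_N = [5.0] * 9  # per-sensor contact-force limit in newtons (assumed)

def check_contact(forces_n, limits_n=SAFE_LIMITS_N):
    """Return the indices of sensors whose reading exceeds its limit."""
    return [i for i, (f, lim) in enumerate(zip(forces_n, limits_n)) if f > lim]

def control_step(forces_n):
    """One control decision: keep scanning, or retreat along the initial path."""
    return "retreat_along_initial_path" if check_contact(forces_n) else "continue_scan"

print(control_step([1.2] * 9))          # continue_scan
print(control_step([1.2] * 8 + [7.5]))  # retreat_along_initial_path
```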
In the simulation results above, the horizontal axis is the timeline and the vertical axis is the joint hinge torque; the curves are, from top to bottom, the second, third and fifth joint torques. Decomposing the detection movements of medical personnel shows that the pressing-down movements during the detection process depend mainly on the corresponding degrees of freedom of the second, third and fifth joints of the mechanical arm.
Analyzing the above simulation results, the second joint, as the pitching joint at the root of the mechanical arm, simulates the deflection freedom of the shoulder joint of the human arm; it mainly bears the weight of the arm body and the feedback force at the probe end, and therefore bears the largest torque among the three joints. The torque of the second joint gradually decreases because, during the recovery process, the self-weight centroid of the mechanical arm gradually approaches the second joint; the torque then gradually increases in the subsequent opening
372 H. Zhang et al.

process. The second joint torque simulation results are consistent with the theoretical results of the actual process.
The third joint of the mechanical arm simulates the deflection freedom of the human arm and mainly bears the weight of the forearm of the mechanical arm and the feedback force at the probe end, so the torque borne by the third joint is less than that of the second. During the overall recovery of the mechanical arm, the torque of the third joint likewise first increases and then decreases, but because the third joint is closer to the end of the arm, the range of torque fluctuation is smaller. The third joint torque simulation results are also consistent with the theoretical results of the actual process.
The fifth joint of the mechanical arm simulates the deflection freedom of the wrist of the human arm; it bears the feedback force at the probe end, and its absolute torque value, between 10 and 20 mN·m, is relatively low. The torque at the fifth joint comes mainly from the load and from the angle between the end of the fifth joint and the direction of gravity; that angle is affected by the positional relationship between the arm and the elbow, which in turn depends on the overall state of the mechanical arm.

4 Conclusions

Three points can be drawn from the simulation results. First, for better detection of the contact status between the ultrasonic probe and the human body, nine force sensors are installed between the probe and the end effector. Second, at the start-up of the joints of the mechanical arm, cusps occur in the corresponding force curves, so the robot arm requires mechanical strengthening at the joint positions. Third, under the premise of meeting the requirements of medical resources, extending the duration of the start-up phase so as to reduce the start-up acceleration can effectively improve the force state of the mechanical arm.
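The third point can be checked with a simple linear-ramp model: for a fixed target joint speed, start-up acceleration is inversely proportional to the ramp duration. The values below are illustrative, not taken from the simulation:

```python
def ramp_acceleration(target_speed, ramp_time):
    """Start-up acceleration of a linear velocity ramp: a = v / t_ramp."""
    return target_speed / ramp_time

v = 0.5  # rad/s, assumed target joint speed
print(ramp_acceleration(v, 0.5))  # 1.0 rad/s^2
print(ramp_acceleration(v, 2.0))  # 0.25 rad/s^2: 4x longer start-up, 4x lower acceleration
```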

References
1. Li, F.: Occupational hazards and protective measures of ultrasound doctor. J. Hebei United
Univ. 14(3), 406–407 (2012)
2. Huang, P., Huang, F., Zhao, B., et al.: Attention must be paid to the injury of practicing
ultrasound doctors. J. Chin. Ultrasound Med. 26(12), 1141–1143 (2010)
3. Zhang, J., Li, R., Chen, W., et al.: Research on robot operating system-based robot-assisted
ultrasound scanning system. J. Biomed. Eng. Res. 37(4), 382–387 (2018)
4. Pierrot, F., Dombre, E., Dégoulange, E., et al.: Hippocrate: a safe robot arm for medical
applications with force feedback. Med. Image Anal. 3(3), 285–300 (1999)
5. Swerdlow, D.R., Cleary, K., Wilson, E., et al.: Robotic arm - assisted sonography: review of
technical developments and potential clinical applications. AJR Am. J. Roentgenol. 208(4),
733 (2017)
6. Rusu, R.B.: Semantic 3D object maps for everyday manipulation in human living
environments. KI - Künstliche Intelligenz 24(4), 345–348 (2010)
The Importance of Key Performance
Indicators that Can Contribute
to Autonomous Quality Control

Ragnhild J. Eleftheriadis and Odd Myklebust(&)

SINTEF Manufacturing AS, Trondheim, Norway
[email protected]

Abstract. Industrial manufacturing companies work simultaneously to improve quality and reduce cost in their production processes. The quality level of what is being produced is essential to success. Several types of techniques and applications are used to collect information from manufacturing processes, and the quality of the data is crucial. Today's equipment has embedded systems, databases and sensors that give both quantitative and qualitative information. For planning and control in a factory, this is essential to provide good visual control tools for the production processes.
However, the amount of structured data is growing fast, and goal-oriented performance indicators are key to measuring the success of produced smart products. With a focus on planning and on variant deviations of processes, on performance indicators and on enhancement of equipment, combined with data analytics and AI techniques, we can outline a path towards Autonomous Quality Control.
Autonomous quality control will therefore be a key to success in the digital era; implemented and used in the right way, it will provide excellent quality control, planning and maintenance through the use of Zero-Defect Manufacturing techniques.

Keywords: Zero Defect Manufacturing (ZDM) · Industry 4.0 · Autonomous Quality (AQ) · Key performance indicators (KPI) · Data analytics · Prediction

1 Introduction

Quality control in manufacturing has been built on the same principles over the last decades, and many companies are still using manual inspection, identifying non-conformances and taking corrective actions. Process data and quality planning are covered by many different ISO standards and are defined as requirements, specifications, guidelines or characteristics that can be used consistently to ensure that materials, products, processes and services are fit for their purpose [4, 5]. Such material is today available in databases; the question is therefore how to find suitable methods to structure and apply the information for best practices in industrial settings.
Another important source of information for a data-driven quality vision is structured key performance indicators (KPI's), which can capture feedback from customers and organizations; these performance indicators can give an excellent direction for an organization's quality control [6, 7].

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 373–378, 2020.
https://doi.org/10.1007/978-981-15-2341-0_46

This includes the possibility for management and operators to influence value creation for the customer, but also to enhance the equipment at the site and the product delivered to the customer. This form of smart products and services gives us the new smart factories and brings the view of quality into a new digital development [8].
Quality systems that control the plant in real time by monitoring the operating equipment are crucial. For real-time monitoring, different types of data on key components need to be collected, including temperature, vibration, water level, pressure, flow, etc.
This paper presents some performance indicators that are important for achieving autonomous quality control. It shows in which ways Zero Defect Manufacturing methods are important for exploiting harvested data, both for the process and for the product (equipment in use), and for the enhancement of new products.
Further, the paper relates ZDM to the set-up of data collection, data mining and analytics for smart manufacturing. Section 3 gives a detailed list of some available performance indicators and their role in creating new hardware, sensors and software for the prediction of faults, fault propagation and quality non-conformances in the production line, with emphasis on planning, management and operation for connected production. Finally, some conclusions and advice are given in Sect. 5.

2 Zero Defect as a Purpose for Quality Planning and Data Collection

The introduction of Industry 4.0 in industrial manufacturing shows a rapid development of the factory of tomorrow. In the framework of Industry 4.0, plants will look more like integrated hardware and software systems, due to the integration of embedded systems in equipment and of analytical systems that can identify and extract insights and opportunities from information, providing intelligent machines that learn, act and work with highly skilled humans in safe environments. Within strategies for Zero Defect Manufacturing (ZDM), with emphasis on data collection, data mining and extraction of best-fit features, focusing on quality, planning and maintenance, we have been working on extracting feasible goals for these strategies [10].
The ZDM approach can be implemented in different ways: as a product-oriented study of defects or deficiencies in a part or a product [11], or as a process study, where the equipment itself, in a production line or at a place in the process line, causes a defect or a fault either in the machine or in the production process of a product. In the latter case it can also generate several faults that cause serious damage to other products, since the produced fault will be repeated. Schlegel et al. [8] have documented that a decrease in latency corresponds to a disruption in processes for Additive Manufacturing; an important target will be to quickly compensate for disruptions in process quality in order to avoid negative impact on product quality.
In both cases there is a need to evaluate the problem and to find a solution; for most manufacturers this relies on long-lasting historical documentation of quality checks based on available historical data, plans and internal information about their product or process in operation.

A workflow engine is a software application or tool designed to help users enforce a series of recurring tasks that make up a 'business process' or 'workflow'. The workflow engine takes cues from the workflow's design and guides the process through its various steps [9].
Typically, it makes use of a database server to organize and track the implementation of business workflows as software or similar code. Because they sort out business goals and repeatable workflows, these engines are also known as orchestration engines. Simply put, the engine can be made for a conceptual wording system; in a software system, however, it needs numerical tracks for pre-processing or post-processing patterns in order to recognize complex features or anomalies of failure that can occur in a dataset.
A workflow engine will evaluate performance indicators (PI's) based on a model that monitors measurements and risks associated with the KPI in a numerical system. The engine needs to process its decisions as feedback, updating the most relevant KPI's in a feedback loop [12].
A workflow performance engine to monitor a system will comprise:
• An analysis engine to mine contextual decisions, structure the dataset and identify patterns, based on current and historical workflow data, identified patterns and data-mined information;
• A statistical modelling engine to dynamically create contextual performance indicators, or KPI's, based on the context and pattern information, including an ordering of events in the workflow.
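A minimal sketch of these two engines working in a feedback loop, assuming an illustrative z-score anomaly test and weight-update rule (neither is specified in the text):

```python
import statistics

def is_anomalous(history, latest, z_limit=3.0):
    """Analysis engine: flag a KPI reading that deviates from its own
    history by more than z_limit standard deviations (assumed rule)."""
    mean = statistics.fmean(history)
    std = statistics.stdev(history)
    return std > 0 and abs(latest - mean) / std > z_limit

def feedback_update(weight, anomalous, step=0.1):
    """Statistical modelling engine: feed the decision back by raising the
    weight of a misbehaving KPI so the next cycle watches it more closely."""
    return min(1.0, weight + step) if anomalous else weight

scrap_rate_history = [0.02, 0.021, 0.019, 0.020, 0.022]  # illustrative KPI stream
weight = 0.5
weight = feedback_update(weight, is_anomalous(scrap_rate_history, 0.08))
print(weight)  # increased, since 0.08 lies far outside the history
```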
The system realizes some of the above points: for the workflow decision engine to work with a result, an effectiveness measure or an analysis, the engine needs to monitor measurements and process feedback.
The engine can also be part of a connected production system, where it receives predictions for further refinement of the workflow from contextual analysis and autonomous quality control, and maintenance indicators from planning systems.
In the workflow engine described above, a decision engine will use artificial intelligence and statistical modelling to evaluate and modify contextual or numerical performance indicators; the workflow decision engine and the statistical modelling engine will automatically adjust one or more parameters of a performance indicator based on the statistical modelling of the data and optimize the variation of the process.

3 KPI’s for Autonomous Quality Control in Relation


to a Workflow Software Engine

Tables 1 and 2 below show some contextual data taken from the harvested KPI workflow described in an EU research project, where the KPI's are linked to processes and products. Similar lists can be found in the literature and in standards [11]. They give a view of some standard control KPI's for planning and management, and of KPI's for connected production, which can lead to a smart manufacturing system.

Table 1. Possible proposed KPI’s for Planning and Management for autonomous quality in
smart manufacturing.
Autonomous quality Planning and management for monitoring and real-time control
control loops services
Plug & produce ZDM production facility
equipment
Augmented human centred Critical Process Visually inspection of tools and
decision Monitoring parts
Control loop services Process analysis and root
cause-spiralling marks
Multistage deep analysis Reduction of: Production Number of defects and repaired
control loop services cost, manufactured per total volume
Defective parts, Rework manufactured
time, Frequency of system or
Waste, Lead time component failure
ZDM orchestration & Production optimization Costs of tests performed in
simulation-based with aggressive machine connection to the manufacturing
composition control loops configuration line
Digital Twin that can simulate the
whole production in Realtime
closing the loop
ZDM embedded Reduction of: Defective Number of scraps per volume of
intelligence and real time parts, rework time, waste multiple times inspected
control loop services components

Table 2. Possible proposed KPI's for Connected Production for autonomous quality in smart manufacturing.
Autonomous quality Connected production
control loops For maintenance operations in smart factories
Plug & produce ZDM Production Facility
Equipment
Augmented human Number of times a Predictive Quality
centred decision control specific part of a Predictive Maintenance on-line
loop services machine has failed
Total cost of
maintenance on the
manufacturing line
Multistage ZDM deep Digital twin of the Number of scraps per volume
analysis control loop machine in Realtime manufactured
services Simulation at planning Scrap reduction
stages Rework reduction
ZDM orchestration & Adaptive machining Holistic system that builds on real-
simulation-based system time data mining in production, by
composition control loops Integrated in an open Big Data and ZDM over the whole
cloud and data value chain
analytics solution

An aggregation engine combines streams of data into single values using a selected
aggregation function. Built-in aggregation functions include options like count (*), sum,
mean, median, P95, first/last values, min/max, etc. The engine takes time range, aggre-
gation function, compare groups, and filter conditions as inputs. It figures out which
columns to scan based on the stored and virtual columns used in the query (Fig. 1).
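A minimal sketch of such an aggregation step, assuming illustrative row fields and a simple nearest-rank P95 estimator (not the actual engine's API):

```python
import statistics

AGG_FUNCS = {  # built-in aggregation functions, as listed above
    "count": len,
    "sum": sum,
    "mean": statistics.fmean,
    "median": statistics.median,
    "min": min,
    "max": max,
    # nearest-rank P95 estimator (illustrative choice of method)
    "p95": lambda xs: sorted(xs)[max(0, -(-95 * len(xs) // 100) - 1)],
}

def aggregate(rows, value_col, func, group_col=None, keep=lambda row: True):
    """Apply one aggregation function to filtered rows, per compare group."""
    groups = {}
    for row in rows:
        if keep(row):
            groups.setdefault(row.get(group_col, "all"), []).append(row[value_col])
    return {g: AGG_FUNCS[func](vals) for g, vals in groups.items()}

rows = [  # illustrative sensor rows; the field names are assumptions
    {"line": "A", "temp": 61.0}, {"line": "A", "temp": 63.0},
    {"line": "B", "temp": 70.0}, {"line": "B", "temp": 74.0},
]
print(aggregate(rows, "temp", "mean", group_col="line"))  # {'A': 62.0, 'B': 72.0}
```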

Fig. 1. An engine of knowledge, patterns and data, where the use of quantitative and qualitative detection based on risk evaluation criteria gives an autonomous selection of KPI's for autonomous quality.

In a complementary way, operation- or process-based monitoring will be carried out as well, performing online anomaly detection and aggregating machine usage data in batch processes.

4 Results and Discussion

In many recent research papers, the pre-processing stage now includes feature collection and algorithms; however, the preparation and effort of collecting and structuring the data acquisition are complex, and new ways of structuring KPI's in combination with data processing are an interesting research approach, since performance has previously often been post-processed, with faults and their propagation discovered only after the product was finished. With the introduction of AI and deep learning into the ZDM domain, the perspectives on machine learning and on the data provided by AI algorithms are described as descriptive, diagnostic, predictive and prescriptive [11], describing, respectively, the current state, why something happened, what will happen, and a desired status of goals. The goal is thus an autonomous decision-making process that assures the quality of production processes and the related output in an autonomous way.

5 Conclusions

Detection of anomalies in data-driven quality is becoming increasingly important for understanding the shift of decision points that accompanies production development. Since many decision points are now moving from plant management to the operators of the machines, operators need to solve problems and take action at an earlier phase and in a more efficient way than before. Quality is crucial for customer satisfaction, and failures in a product need to be tracked before they occur.

This means that an operator needs more domain skills, value chain insight and technological understanding of what is happening and what will happen if something goes wrong, because operators will be the key players in deciding what should be done and in taking operational action.

Acknowledgements. The work is supported by KPN CPS Plant, which is granted by the Research Council of Norway (grant no. 267752).

References
1. American Society for Quality: What are quality standards. https://asq.org/quality-resources/learn-about-standards. Accessed 08 Dec 2019
2. CEN: ISO9000:2015 Quality Management System Requirement. CEN, Brussel (2015)
3. Eleftheriadis, R., Myklebust, O.: A quality pathway to digitalization in manufacturing thru
zero defect manufacturing practises. In: IWAMA. Atlantis Press, Manchester (2016)
4. Rødseth, H., Eleftheriadis, R.J.: Successful asset management strategy implementation of
Cyber Physical systems
5. Porter, M., Heppelman, J.E.: How smart connected product are transforming Competition.
Harv. Bus. Rev. 92, 23 (2014)
6. Mogos, M.F., Eleftheriadis, R.J., Myklebust, O.: Enablers and disabler of Industry 4.0:
results from a survey of industrial companies in Norway. In: CIRP, Elsevier, Ljubliana
(2019)
7. Psarommatis, F., May, G., Dreyfus, P., Kiritsis, D.: Zero defect manufacturing: state-of-the-
art review, shortcomings and future directions in research. Int. J. Prod. Res. 20, 1–17 (2019)
8. Schlegel, P., Briele, K., Schmitt, R.H.: Autonomous data-driven quality control in self-
learning production systems. In: Advances in Production Research, pp. 679–689 (2019)
9. Kissflow Software Company: Is a Workflow Engine the Same as a Business Rule Engine? Kissflow. https://kissflow.com/workflow/workflow-enginge-businessrule-engine-diference. Accessed 03 Jan 2019
10. Appelbaum, D., et al.: Impact of business analytics and enterprise systems on managerial
accounting. Int. J. Account. Inf. Syst. 25, 29–44 (2017)
11. Rad, J.S., Zhang, Y., Chen, C.: A novel local time-frequency domain feature extraction
method for tool condition monitoring using S-transform and genetic algorithm. Int. Fed.
Autom. Control 47, 3516–3521 (2014)
12. Wang, J., et al.: Deep learning for smart manufacturing: methods and applications. J. Smart
Manuf.: Methods Appl. 48, 144–156 (2018)
13. Crosby, Philip: Quality is Free. McGraw-Hill, New York (1969)
14. Deming, W.E.: Out of the Crisis: Quality, Productivity and Competitive Position. MIT Press,
Cambridge (1982)
15. Kaplan, R.S., Norton, D.P.: Translating Strategy into action, The Balanced Score Card.
Harvard Business School Press, Boston (1996)
16. Halpin, J.F.: Zero Defects: A New Dimension in Quality Assurance, vol. 166. McGraw-Hill,
New York
Collaborative Fault Diagnosis Decision
Fusion Algorithm Based on Improved
DS Evidence Theory

Xiue Gao1, Bo Chen1(&), Shifeng Chen1, Kesheng Wang2, Yi Wang3, Wenxue Xie1, Jin Yuan2, Kristian Martinsen4, and Tamal Ghosh4

1 College of Information Engineering, Lingnan Normal University, Zhanjiang, China
[email protected], [email protected], [email protected], [email protected]
2 Department of Mechanical and Industrial Engineering, NTNU, Trondheim, Norway
[email protected]
3 The School of Business, Plymouth University, Plymouth, UK
[email protected]
4 Department of Manufacturing and Civil Engineering, NTNU, Gjøvik, Norway
{kristian.martinsen,tamal.ghosh}@ntnu.no

Abstract. Because DS evidence theory has difficulty in obtaining a correct diagnosis when confronted with highly conflicting evidence, a collaborative fault diagnosis decision fusion algorithm based on an improved version of DS evidence theory is proposed. The algorithm builds upon the closeness of certain kinds of evidence produced by existing DS evidence theory algorithms. According to the importance of the diagnostic information, weights are assigned to reduce the conflicting information while retaining the important diagnostic information. A simulated example shows that the algorithm can reduce the impact of conflicts in diagnostic information and improve the accuracy of the decision fusion process.

Keywords: Decision fusion · Collaborative fault diagnosis · DS evidence theory · Closeness

1 Introduction

Collaborative fault diagnosis technology decomposes a complex fault diagnosis into multiple sub-fault tasks that are easy to handle and completes the collaborative diagnosis of each sub-fault task with multiple diagnostic resources. The main processes undertaken by the technology include data acquisition, task decomposition, task assignment, and decision fusion. The decision fusion process is particularly responsible for redundancy, conflict and cooperation issues in relation to the diagnostic information of each sub-fault task, and is the focus of this research [1, 2].

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 379–387, 2020.
https://doi.org/10.1007/978-981-15-2341-0_47

Common decision fusion methods include neural networks, Bayesian inference, fuzzy probability and DS (Dempster–Shafer) evidence theory [3, 4]. As the prior knowledge required by DS evidence theory is more intuitive and easier to obtain, DS evidence theory has great advantages for decision fusion. However, in the case of highly conflicting evidence, DS evidence theory can arrive at conclusions that are not consistent with common sense or even correct. In response to this problem, a number of researchers have proposed improved algorithms, which can be roughly divided into two categories. One category focuses on potential problems with the fusion rules and recommends modification of the rules (Yager [5], Ali [6] and Cui [7]). Although such methods have achieved good fusion results, they destroy the mathematical characteristics of the original fusion rules. The other category focuses on potential problems with the source of the evidence itself; studies here typically recommend modifying the evidence model, with particular emphasis placed upon pre-processing of the evidence before fusion is conducted using the fusion rules (Carlson [8], Sun [8] and Deng [9]).
The above two types of approach have their own advantages and disadvantages. Although there may be conflicts or inconsistencies between multiple objects, most decision problems relating to individual objects have effective solutions; in that case it helps to introduce a notion of 'closeness' to indicate the degree of similarity between objects. Some researchers have combined closeness and DS reasoning [10–12], with valuable results. However, most of these studies focus on decision fusion in situations of strong informational conflict, where it is difficult to reflect the degree of similarity between objects and the attribute weights. This paper therefore proposes a collaborative fault diagnosis decision fusion algorithm based on improved DS evidence theory, in which DS evidence theory and closeness analysis are combined for the decision fusion of diagnostic information. This overcomes the errors in fusion results caused by conflicts between diagnostic information, while improving the effectiveness of the decision fusion.

2 Collaborative Fault Diagnosis Decision Fusion Model Based on DS Evidence Theory

The description of any problem requires a subjective description of objective problems, and the probability of the occurrence of an event is obtained on the basis of subjective and objective analysis. At first, this subjective factor results in a deviation from the objective description of the problem. However, as the objective problem is understood more deeply and the amount of information increases, subjective understanding and its knowledge structure become more complete, and subjective judgments are more likely to yield an accurate representation of probability. Shafer proposed the concept of evidence theory to explain this new approach to probability. DS evidence theory has received the most attention in the fields of decision fusion and expert systems.
In DS evidence theory, H = {h1, h2, …, hn} is a recognition framework, where H is composed of objects that are independent and mutually exclusive. The objects can be the collection of targets to be recognized. If the number obtained by rolling a die is represented by H, the recognition framework can be expressed as H = {1, 2, 3, 4, 5, 6}. The recognition framework is the collection of targets for all situations, which are independent and mutually exclusive, thereby turning abstract problems into mathematical problems.
(1) Basic probability assignment function
The power set 2^H of the elements in H represents the possible combinations of targets, where any element of it is called a focal element of H. Assuming that H is the recognition framework, the target problem can be represented by a function m: 2^H → [0, 1] that must satisfy the following:
① The basic probability of an impossible event is 0, i.e. m(∅) = 0.
② The sum of the basic probabilities of all the elements in 2^H is 1, i.e. Σ_{A⊆H} m(A) = 1.
Here, m is the BPA (basic probability assignment) function of H, which is also called the basic reliability assignment function. m(A) represents the basic probability assigned to the target problem A, i.e. the degree of support for the occurrence of target problem A, rather than the support for the true subsets of A. m(∅) = 0 indicates that the BPA of the empty set is 0. Σ_{A⊆H} m(A) = 1 indicates that each proposition has its own confidence level, but the sum of the confidence in the propositions in the recognition framework is 1.
(2) Confidence function
For any proposition A, a confidence (belief) function Bel(A) is defined as the sum of the basic probabilities corresponding to all of its subsets, namely:

Bel: 2^H → [0, 1],   Bel(A) = Σ_{B⊆A} m(B),   A ⊆ H   (1)

The difference between m(A) and Bel(A) is mainly that m(A) assigns confidence only to the subset A itself, while Bel(A) represents the sum of the confidence relating to all subsets of A.
(3) Likelihood function
The likelihood (plausibility) function represents the degree of trust that A is not false, i.e. a measure of the uncertainty as to whether A is possible, namely:

Pl: 2^H → [0, 1],   Pl(A) = Σ_{B∩A≠∅} m(B),   A ⊆ H   (2)

Pl(A) is the sum of the BPAs that do not support subsets of A^c, i.e. Pl(A) = 1 − Bel(A^c).
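Eqs. (1) and (2) can be transcribed directly, representing focal elements as frozensets over the frame; the example masses are illustrative:

```python
FRAME = frozenset({"h1", "h2", "h3"})

m = {  # illustrative BPA over the frame; masses sum to 1
    frozenset({"h1"}): 0.5,
    frozenset({"h2"}): 0.2,
    frozenset({"h1", "h3"}): 0.3,
}

def bel(A, m):
    """Eq. (1): sum the mass of every focal set contained in A."""
    return sum(mass for B, mass in m.items() if B <= A)

def pl(A, m):
    """Eq. (2): sum the mass of every focal set intersecting A."""
    return sum(mass for B, mass in m.items() if B & A)

A = frozenset({"h1", "h3"})
print(bel(A, m))              # 0.5 + 0.3 = 0.8
print(pl(A, m))               # only {h2} misses A, so also 0.8
print(1 - bel(FRAME - A, m))  # Pl(A) = 1 - Bel(A^c) = 0.8
```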

(4) Fusion rule
The diagnostic information of each subtask is used for decision fusion according to certain rules, until the final result is obtained. Within the same identification framework there may be several different evidence functions. For example, when there are two pieces of evidence, they can be fused by the fusion rule of DS evidence theory. The fusion formula can be expressed as follows:

m(A) = 0,   A = ∅
m(A) = (1/(1 − K)) Σ_{A_i∩B_j=A} m_1(A_i) m_2(B_j),   A ≠ ∅   (3)

where K = Σ_{A_i∩B_j=∅} m_1(A_i) m_2(B_j), with a range of [0, 1], is called the conflict factor.
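Eq. (3) can be transcribed directly; the two example BPAs are illustrative:

```python
from itertools import product

def dempster(m1, m2):
    """Eq. (3): combine two BPAs; mass landing on the empty set is the
    conflict K, and the rest is renormalized by 1 - K."""
    combined, K = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            K += a * b  # conflicting mass
    if K >= 1.0:
        raise ValueError("total conflict: fusion undefined")
    return {A: v / (1.0 - K) for A, v in combined.items()}

m1 = {frozenset({"h1"}): 0.6, frozenset({"h2"}): 0.4}  # illustrative BPAs
m2 = {frozenset({"h1"}): 0.7, frozenset({"h2"}): 0.3}
fused = dempster(m1, m2)
print(round(fused[frozenset({"h1"})], 3))  # 0.778, with conflict K = 0.46
```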

3 Decision Fusion Algorithm Based on Improved DS Evidence Theory

The improved algorithm allows us to reduce the impact of conflicting information while retaining more valuable evidence information and obtaining more accurate decision fusion results. Although there is sometimes conflict between pieces of evidence, there are also varying degrees of closeness between them. This paper introduces the notion of closeness to indicate the proximity between pieces of evidence information and to reflect the degree of conflict between them, thus improving the accuracy of decision fusion.
Let us assume the identification framework is H = {h1, h2, …, hn}. For any proposition h_k, the BPAs obtained from two pieces of evidence are m_i(h_k) and m_j(h_k), respectively. Then the closeness of the two pieces of evidence for the proposition is:

a_ij(k) = min(m_i(h_k), m_j(h_k)) / max(m_i(h_k), m_j(h_k))   (4)

where min(m_i(h_k), m_j(h_k)) is the smaller BPA of the two pieces of evidence and max(m_i(h_k), m_j(h_k)) is the larger. Therefore a_ij(k) has a value greater than 0 and less than or equal to 1. Given a limiting value P, if a_ij(k) < P it means that the two pieces of evidence are not close, which can be expressed as:

a_ij(k) = a_ij(k) if a_ij(k) ≥ P;   a_ij(k) = 0 if a_ij(k) < P   (5)

Here a_ij(k) represents the closeness between m_i(h_k) and m_j(h_k), but does not express the closeness between evidence i (or j) and the other m_n(h_k) in the recognition framework. The closeness of the propositions between m_i(h_k) and the other evidence is a_i1(k), a_i2(k), …, a_in(k).

A matrix can be used to visually express the closeness between the individual evidence
for the propositions:
2 3
1 a12 ðkÞ . . . a1n ðkÞ
6 a21 ðkÞ 1 . . . a2n ðkÞ 7
A¼6
4 ...
7 ð6Þ
... 1 ... 5
an1 ðkÞ an2 ðkÞ ... 1

The closeness between the two pieces of evidence in Eq. (6) is 1 and the basic
probability assignment function mi ðhk Þ of a certain piece of evidence i can be obtained
by analyzing the closeness between one particular proposition in the matrix, A, and
other evidence propositions, i.e. Asum ðiÞ ¼ ai1 ðkÞ þ ai2 ðkÞ þ . . . þ aim ðkÞ.
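The closeness computation of Eqs. (4)–(6), including the thresholding at P and the row sums A_sum(i), can be sketched as follows; the BPA values and the limit P are illustrative:

```python
def closeness(mi, mj, P=0.2):
    """Eqs. (4)-(5): ratio of the smaller to the larger BPA, set to 0
    when it falls below the limiting value P."""
    lo, hi = min(mi, mj), max(mi, mj)
    a = lo / hi if hi > 0 else 0.0
    return a if a >= P else 0.0

def closeness_matrix(bpas, P=0.2):
    """Eq. (6): pairwise closeness matrix A with unit diagonal."""
    n = len(bpas)
    return [[1.0 if i == j else closeness(bpas[i], bpas[j], P)
             for j in range(n)] for i in range(n)]

bpas_k = [0.6, 0.5, 0.1]  # illustrative m_1(h_k), m_2(h_k), m_3(h_k); P assumed
A = closeness_matrix(bpas_k)
row_support = [sum(row) for row in A]  # A_sum(i): total support for evidence i
print(A[0])  # [1.0, 0.8333..., 0.0]: evidence 3 conflicts with evidence 1
```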
Pieces of evidence i and j have the same closeness to the same proposition h_k, i.e. a_ij(k) equals a_ji(k), so the matrix A is symmetric. As the values in matrix A are positive, the rules of linear algebra dictate that A has an eigenvalue λ (λ > 0) and a corresponding eigenvector R, namely:

AR = λR   (λ > 0)   (7)

For a given piece of evidence, the closeness of the proposition reflects the credibility of the evidence associated with the proposition. Thus the weight of the evidence for a proposition can be expressed through its closeness. The weight of a proposition, w_i(h_k), can be expressed as follows:

w_i(h_k) = c_1 a_i1(h_k) + c_2 a_i2(h_k) + … + c_n a_in(h_k),   (i = 1, 2, …, n)   (8)

w_i(h_k) can be obtained from Eq. (8), but should also satisfy:

Σ_{i=1}^{n} w_i(h_k) = 1   (9)

The following matrix representation can be obtained through further simplification:

W = AC \qquad (10)

where W = [w_1(θ_k), w_2(θ_k), ..., w_n(θ_k)]^T and C = [c_1, c_2, ..., c_n]^T. According to
AR = λR (λ > 0) in Eq. (7), Eq. (10) can be linearly transformed, namely:

W = \lambda P \qquad (11)

The matrix P in Eq. (11) includes p_1(k), p_2(k), ..., p_n(k). Subsequently, the
weight of the proposition can be obtained according to the following solution:

W_i(\theta_k) = \frac{p_i(k)}{p_1(k) + p_2(k) + \cdots + p_n(k)} \qquad (12)
384 X. Gao et al.

The weight, w_i(θ_k), of each probabilistic BPA function, m_i(θ_k), for the proposition
can be calculated in turn, according to Eq. (12). To solve for w_i(θ_k), the values
p_1(k), p_2(k), ..., p_n(k) must be obtained first. The matrix P can be obtained by multiple
transformations using the rules of linear algebra. In this paper, we use a membership
function with a normal distribution:
u(x) = e^{-\left(\frac{x_i(k) - a}{b}\right)^2}, \quad (a > 0,\ b > 0) \qquad (13)

The function x_i(k) in Eq. (13) represents the BPA function, m_i(θ_k), of the
proposition, where a is the mean and b is the standard deviation. Substituting into
Eq. (13) gives:
p_i(k) = e^{-\left(\frac{x_i(k) - a}{b}\right)^2}, \quad (a > 0,\ b > 0) \qquad (14)

where a = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i(k) and b = s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i(k) - \bar{x}\right)^2}.
p_i(k) can be obtained and substituted into Eq. (12). Subsequently, the weight,
w_i(θ_k), of the evidence for the proposition can be obtained. The BPA is then recal-
culated and, finally, the new BPA can be used to carry out the decision fusion.
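As a sketch, the per-proposition weighting of Eqs. (12)-(14) can be written out as follows. All function names are ours, and the final weighted-average step is an assumed reading, since the text only says that the BPA is "recalculated" from the weights:

```python
import numpy as np

def proposition_weights(bpa_column):
    """Weights w_i(theta_k) for one proposition theta_k across n pieces of
    evidence. Each BPA value x_i(k) gets a Gaussian membership value
    p_i(k) (Eqs. 13-14), which is then normalised (Eq. 12)."""
    x = np.asarray(bpa_column, dtype=float)
    a = x.mean()                      # a = sample mean of the x_i(k)
    b = x.std(ddof=1)                 # b = sample standard deviation
    if b == 0.0:                      # identical evidence -> equal weights
        return np.full(len(x), 1.0 / len(x))
    p = np.exp(-((x - a) / b) ** 2)   # membership value p_i(k), Eq. (14)
    return p / p.sum()                # normalised weight, Eq. (12)

def reweighted_bpa(bpas):
    """bpas: (n_evidence, n_propositions) array of BPA values.
    Returns a single recalculated BPA as the weighted average of the
    original ones (an assumed reading of the 'recalculation' step)."""
    bpas = np.asarray(bpas, dtype=float)
    w = np.column_stack([proposition_weights(bpas[:, k])
                         for k in range(bpas.shape[1])])
    m = (w * bpas).sum(axis=0)        # weight each column, then combine
    return m / m.sum()                # renormalise to a valid BPA
```

Note that an outlying piece of evidence (e.g. the 0.0 in a BPA column such as [0.9, 0.0, 0.8]) receives the lowest weight, which is exactly the conflict-suppressing behaviour the weighting is designed to achieve.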

4 Numerical Simulation Analysis

Assume that a recognition framework, Θ = {θ_1, θ_2, θ_3}, is used to indicate three
possible causes of an equipment fault. DS evidence theory-based decision fusion can
now be performed on the newly generated BPA functions. This can be compared with
the outcomes of traditional DS theory, Yager's method [5], Sun Quan's method [8],
and Deng Yong's method [9]. The comparison results are shown in Tables 1, 2 and 3.

Table 1. Comparison of fusion results for two diagnostic nodes


Fusion of m1, m2       m12(θ1)   m12(θ2)   m12(θ3)   m12(u)
DS evidence theory     0.0       0.01      0.99      0.0
Yager's method         0.0       0.0001    0.0099    0.99
Sun Quan's method      0.18      0.004     0.194     0.622
Deng Yong's method     0.18      0.004     0.194     0.622
Method in this paper   0.1859    0.0044    0.0392    0.7715

Table 2. Comparison of fusion results for three diagnostic nodes


Fusion of m1, m2, m3   m123(θ1)  m123(θ2)  m123(θ3)  m123(u)
DS evidence theory     0.0       0.0       0.0       1.0
Yager's method         0.0       0.0       0.00099   0.999
Sun Quan's method      0.321     0.003     0.188     0.488
Deng Yong's method     0.3594    0.0038    0.2103    0.4255
Method in this paper   0.4686    0.0027    0.0518    0.4769

Table 3. Comparison of fusion results for four diagnostic nodes


Fusion of m1, m2, m3, m4   m1234(θ1)  m1234(θ2)  m1234(θ3)  m1234(u)
DS evidence theory         0.0        0.0        0.0        1.0
Yager's method             0.0        0.0        0.00099    0.999
Sun Quan's method          0.42       0.003      0.181      0.369
Deng Yong's method         0.4557     0.0033     0.1967     0.3442
Method in this paper       0.6481     0.0017     0.0511     0.2991

It can be seen from Table 3 that the accuracy of the decision fusion is greatly
improved when compared with traditional DS evidence theory and Yager’s method.
When compared with Sun Quan’s method, the improvement in the accuracy of the
decision fusion is about 22% and it is about 19% higher than Deng Yong’s method.
Thus, the proposed algorithm improves DS accuracy to a significant degree.
Figure 1 shows the change in the degree of support for the three faults in the
identification framework Θ = {θ_1, θ_2, θ_3} according to the amount of evidence, with u
indicating the mass assigned to the unknown (universal) set.
It can be seen from Fig. 1 that traditional DS evidence theory always assigns a
degree of support of 0 to θ_1 when dealing with conflicting information. This is obvi-
ously not consistent with common sense, indicating that traditional DS evidence theory
is not able to deliver a correct decision fusion result in the face of highly conflicting
evidence. If Yager’s method is compared with traditional DS evidence theory, it simply
discards the conflicting information when it appears. In other words, it is assigned to
m(u) and no other processing is performed, so the correct decision fusion result cannot
be obtained. Sun Quan’s method and Deng Yong’s method obtain a decision fusion
result, but the accuracy is low and a large proportion of evidence is assigned to m(u).
By contrast, the algorithm proposed in this paper effectively overcomes the problem of
conflicting evidence. The larger the number of diagnosis nodes, the greater the degree
of support for fault θ_1, according to the algorithm proposed in this paper. In comparison
to other algorithms, it effectively overcomes the disruption of decision fusion caused by
conflicting information and can converge rapidly.
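A minimal sketch of Dempster's rule (restricted to BPAs over the singletons θ_1, ..., θ_n plus the universal set u) reproduces this failure; the input BPA values below are illustrative stand-ins, since the paper's actual m_1 and m_2 are defined on its earlier pages:

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule for BPAs over singletons theta_1..theta_n plus the
    universal set u (the last entry). The conflict K is discarded through
    the 1/(1-K) normalisation, which is the source of the counter-intuitive
    behaviour discussed above."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    n = len(m1) - 1                       # number of singleton propositions
    m = np.zeros(n + 1)
    for k in range(n):                    # intersections that yield theta_k
        m[k] = m1[k] * m2[k] + m1[k] * m2[n] + m1[n] * m2[k]
    m[n] = m1[n] * m2[n]                  # u intersected with u
    K = 1.0 - m.sum()                     # total conflicting mass
    return m / (1.0 - K)                  # normalise (assumes K < 1)

# Highly conflicting evidence in a Zadeh-style example:
m1 = [0.99, 0.01, 0.00, 0.0]              # almost certain it is theta_1
m2 = [0.00, 0.01, 0.99, 0.0]              # almost certain it is theta_3
fused = dempster_combine(m1, m2)
# fused == [0., 1., 0., 0.]: theta_1 gets zero support despite m1's
# near-certainty, and all mass collapses onto the barely-supported theta_2.
```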

Fig. 1. Degree of support for the faults θ_1, θ_2, θ_3 and u

5 Conclusions

In this paper, the collaborative fault diagnosis decision fusion model based on DS
evidence theory has been discussed. Its limitations have been noted and an improved
DS evidence theory-based algorithm has been proposed. An example simulating use of
the algorithm was used to verify the performance of the algorithm in comparison to
other related algorithms. The results have shown that the improved algorithm performs
better and can obtain more accurate decision results.

Acknowledgements. This work is supported by the Special Funds for Science and Technology
Innovation Strategy in Guangdong Province of China (No. 2018A06001).

References
1. Huang, D., Chen, C., Zhao, L., Sun, G., Ke, L.: Hybrid collaborative diagnosis method for
rolling bearing composite faults. J. Univ. Electron. Sci. Technol. China 47(6), 853–863
(2018)
2. Ge, J., Liu, Q., Wang, Y., Xu, D., Wei, F.: Gearbox fault diagnosis method supporting the
combination of tensor and KNN-AMDM decision. J. Vib. Eng. 31(6), 1093–1101 (2018)
3. Gai, W., Xin, D., Wang, W., Liu, X., Hu, J.: A review of data fusion and decision making
methods in situational awareness 40(5), 21–25 (2019)
4. Yager, R.R.: Belief structures, weight generating functions and decision-making. Fuzzy
Optim. Decis. Mak. 16(1), 1–21 (2016)
5. Tajeddini, M.A., Aalipour, A., Safarinejadian, B.: Fusion method for bearing faults
classification based on wavelet denoising and dempster-shafer theory. Iran. J. Sci. Technol.-
Trans. Electr. Eng. 439(2), 295–305 (2019)
6. Ali, T., Dutta, P.: Methods to obtain basic probability assignment in evidence theory. Int.
J. Comput. Appl. 38(4), 46–51 (2013)

7. Cui, J., Li, B., Li, Y.: Conflict evidence combination based on the applicable conditions of
Dempster combination rule. J. Inf. Eng. Univ. 16(1), 59–65 (2015)
8. Carlson, J., Murphy, R.R.: Use of Dempster-Shafer conflict metric to detect interpretation
inconsistency. Comput. Sci. 38(4), 46–51 (2012)
9. Deng, Y., Wang, D., Li, Q., Zhang, Y.: A new method of evidence conflict analysis. Control
Theory Appl. 28(6), 839–844 (2011)
10. Wang, A., Zhang, L.: Grid fault diagnosis algorithm based on selection criteria and
closeness. Control Decis. 31(1), 155–159 (2016)
11. Denoeux, T., Li, S., Sriboonchitta, S.: Evaluating and comparing soft partitions: an approach
based on Dempster-Shafer theory. IEEE Trans. Fuzzy Syst. 26(3), 1231–1244 (2018)
12. Zhang, N., Yang, Y., Wang, J., et al.: Identifying core parts in complex mechanical product
for change management and sustainable design. Sustainability 10(12), 4480–4494 (2018)
Hybrid Algorithm and Forecasting Technology
of Network Public Opinion Based on BP
Neural Network

Ronghua Zhang1,2, Changzheng Liu1,2(&), and Hongliang Ma2


1 Intelligent and Distributed Computing Lab, School of Computer Science and Technology,
Huazhong University of Science and Technology, Wuhan 430074, China
[email protected]
2 College of Information Science and Technology, Shihezi University,
Shihezi 832000, China

Abstract. The forecasting of network public opinion is studied based on a
hybrid algorithm and a BP neural network. After a detailed introduction of the
research background and significance, the concept and development of the
hybrid algorithm are analyzed in detail. A user network public opinion clas-
sification model under the clustering algorithm is established, and a data mining
experiment on Sina Weibo users' network public opinion is conducted. Using an
improved BP neural network algorithm, unsupervised clustering of the user
networks is performed and the results are evaluated; finally, the users' Internet
public opinion is divided into six categories. The characteristic preferences of
each user group are analyzed, and recommendations are provided for operation
recommendation and promotion.

Keywords: Data mining · BP neural network · Internet public opinion ·
Forecasting technology

1 Introduction

Internet public opinion is a public opinion that regards the Internet as a carrier of com-
munication. It is a new form of expression of public opinion. With the continuous
popularization of China’s Internet, the Internet has become an important channel for the
public to express their opinions [1]. Netizens can express their attitudes, opinions, and
sentiment toward various social issues through the Internet. The Internet public opinion
has the characteristics of anonymity and openness, among others. According to the
different characteristics of the Internet public opinion, how to realize preprocessing of
web information, public information collection, classification, clustering, and hotspot
acquisition are all key steps in the prediction of Internet public opinion [2]. How
reasonably network public opinion information is collected and organized will affect
the accuracy of forecasting Internet public opinion. As a branch of the content of
public opinion research, Internet public opinion integrates the knowledge of journalism,
computers, and sociology. Nowadays, netizens, as the main body of online public
opinion, mainly use instant messaging tools, BBS, search engines, news follow-ups, post
© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 388–395, 2020.
https://doi.org/10.1007/978-981-15-2341-0_48
Hybrid Algorithm and Forecasting Technology of Network Public Opinion 389

bars, microblogs, and e-mails to transmit their wishes [3]. As is well known, the media
participate in the dissemination of public opinion: each additional medium makes public
opinion spread faster, and the large number of media provided by the Internet leads to the
rapid spread of online public opinion [4]. Surveys show that the timing of media
coverage affects the speed of obtaining public opinion information, and the propagation
speed is directly proportional to the Internet penetration rate. For Internet public opinion,
a new type of public opinion, determining the source of data and obtaining information
from it is a prerequisite [5]. Therefore, the timely collection of information on Internet
public opinion, the analysis of its development trend, and the accurate identification and
screening of Internet public opinion hotspots are of great practical significance for
strengthening network monitoring and correctly guiding public opinion on the Internet.

2 State of the Art

China began to study Internet public opinion in 2005. As of 2015, there were
nearly 5000 related publications, mainly in computing, education, journalism
and communication. This shows that although research on Internet public
opinion started late, with the increasing attention paid by scholars in recent
years the number of related studies is increasing rapidly. About the development of
the basic theory of Internet public opinion: Internet public opinion is a new discipline,
but many scholars have carried out relevant learning: Liu Yi is the first writer to publish
books on Internet public opinion in China and discusses in detail the theoretical and
practical applications of Internet public opinion [6]. Xu and others believe that the
analysis of Internet public opinion should start with the content and empirical evidence
it contains [7]. Gu Mingyi analyzes the development mode of Internet public opinion.
He analyzes the development trend of Internet public opinion from three perspectives:
media, public opinion escalation and audience. Research on the network public
opinion forecasting index system: Internet public opinion itself is a complex and
intersecting system. The amount of Internet public opinion data is huge. After data
collection, it still needs a lot of work to process the data to meet different needs. There
are many kinds of indicators which affect the network public opinions. They are
complex and changeable. The demand for network public opinion and the purpose of
monitoring are different. In order to predict Internet public opinion, it is necessary to set
up relevant index system for Internet public opinion [8].

3 Methodology

3.1 Clustering Method Analysis


First, clustering, as the name implies, gathers objects with similarity in a data set, so
as to realize the classification of the data set. Each class is a collection of data objects;
the data objects within a class have very high similarity, while data objects in different
classes have lower similarity. Second, as an unsupervised learning method, clustering
can divide data objects automatically, without manual processing.
390 R. Zhang et al.

Currently, cluster analysis has been applied in many industries in multiple fields,
including business intelligence, retail, information search, medicine, and security. In
business intelligence, a large number of users can be grouped, and users in the group
are similar in behavioral characteristics. Such group processing can provide accurate
target users for the development of marketing activities. In terms of information search,
documents of the same type can be clustered. When users query documents in a
class, they can also be offered the other documents in that class.
Clustering analysis is an important process for user classification in the research of
social network users’ public opinion behavior characteristics. By clustering the interest
characteristic data of different users, the classification of users with similar interest
features is realized, and the interest characteristics of all kinds of users are analyzed.
At present, researchers in various fields have proposed many clustering methods and
algorithms, but it is difficult to propose an effective clustering method suitable for all
fields. In spite of this, the clustering methods still have a general classification:
partition-based methods, hierarchy-based methods, density-based methods and
grid-based methods, each of which has its typical and commonly used algorithms, as
shown in Fig. 1.

Fig. 1. Structure diagram of clustering methods: partition-based (K-means, K-medoids, PAM,
CLARA, CLARANS, nearest-neighbor clustering, maximum-distance clustering); hierarchy-based
(splitting and agglomerative approaches, e.g. BIRCH, GAS, CURE); density-based (DBSCAN,
DBCLASD, OPTICS, FDC); grid-based (CLIQUE, COBWEB, CLASSIT).

3.2 Algorithm and Optimization of BP Neural Network


J.B. MacQueen proposed the k-means algorithm, which was selected as one of the ten
classical data mining algorithms. The algorithm is used in a large number of academic
studies and industry analyses because of its convenient application and simple
implementation. The basic idea of the algorithm is: set k classes and, through continuous
iteration, adjust the cluster centers so that the total distance of the data objects to their
cluster centers is minimized, approaching a globally optimal solution.
In the BP neural network algorithm, the first is the determination of k initial cluster
centers. Through continuous algorithm iteration, the clustering center is continuously
optimized. The data objects are adjusted back and forth in different clusters, and finally
the optimal division of the data set is achieved. The flow of the algorithm is shown in
Fig. 2.
The selection of the initialization center is performed by using the cutting distance
in the sample data set, mainly by calculating the two data points farthest from each
other. The difference between these two sample points is the largest, and almost all
points of the data set can be contained. The initial cluster center of the data can be
Hybrid Algorithm and Forecasting Technology of Network Public Opinion 391

found by equidistant segmentation. First, the distance between all points in the data set
is calculated using the Euclidean distance formula (1), and the two points farthest from
each other are found. Here, v and w are any two points in the data set, i indexes the
dimensions of a data point, and d is the distance between the two points.

d(v, w) = \sqrt{\sum_{i=1}^{n} (v_i - w_i)^2} \qquad (1)

Then, the cutting distance in each dimension is calculated. The formula is shown in
(2); it mainly uses the known number of clusters k.

d_{cut}^{i} = \frac{v_i - w_i}{k - 1}, \quad i = 1, \ldots, n, \ k > 2 \qquad (2)
In Eq. (2), d_{cut}^{i} is the cutting distance in dimension i, i is the dimension of the
sample point, and k is the number of clusters. If k = 2, no further calculation is needed:
the initial cluster centers are simply the two points farthest apart. If k > 2, the initial
cluster centers need to be calculated via the cutting distance. Since each dimension must
be calculated independently, the cutting distance is computed separately in each
dimension, and the cutting distances in different dimensions are independent. The
formula is shown in (3).

z_j^i = z_{j-1}^i + d_{cut}^{i}, \quad j = 2, \ldots, k - 1 \qquad (3)

In formula (3), z represents a cluster center; for example, v and w could be two
three-dimensional sample points.

Determining the similarity between users is an important step that must be performed
before the users are clustered, so the similarity calculation between users is necessary.
Many methods for calculating the similarity between vectors have been proposed in
previous research. Here, the similarity of the Internet public opinion characteristics
between two users is calculated by cosine similarity. Cosine similarity can be computed
for multi-dimensional vectors, which conforms to the multidimensional nature of the
public opinion characteristics of social network users. In n-dimensional space, the
similarity between two users is evaluated by the cosine of the angle between their two
vectors; the angle represents the degree of similarity between the two user networks: the
smaller the angle, the higher the degree of similarity, and otherwise the lower.

Fig. 2. Flow chart of the K-means algorithm (start → input data set → determine the number of
clusters → initialize k cluster centers → assign points to the k cluster centers → recompute the
cluster centers → if a cluster center changed, repeat; otherwise output the clustering results → end)

In the process of implementation, the BP neural network algorithm first determines the
number of clusters k,

but for the Sina Weibo user data there is no prior clustering experience, so it is not
possible to determine the optimal number of classes in advance; the optimal k value
needs to be determined by clustering the data several times and inspecting the
clustering effect. In
order to deal with this problem, the clustering number k is evaluated by the cluster
validity function DBI to determine the optimal clustering number k. The DBI
(Davies–Bouldin Index) is an evaluation index for crisp (non-fuzzy) clusterings based on
the principle of geometric operations. Its evaluation is based on the closeness of data
points in the same cluster and the degree of dispersion between different clusters.
Intra-class dispersion is positively correlated with the index value, and between-class
separation is inversely related to it. When the distance between data points in
a cluster is smaller and the distance between clusters is larger, the DBI value is smaller.
Indicates that the differences between clusters are large and the differences within
clusters are very small. Therefore, the smaller the DBI index for different clustering
numbers is, the more clustering number is close to the number of clusters in the dataset
itself.
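A compact sketch of the procedure described above (cutting-distance initialisation per Eqs. (1)-(3), k-means iteration as in Fig. 2, and DBI-based selection of k) might look like the following. This is our own minimal illustration, not the authors' code:

```python
import numpy as np

def init_centers(X, k):
    """Initialisation of Eqs. (1)-(3): find the two farthest points,
    then place the remaining centers by equidistant cuts between them."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # Eq. (1)
    ia, ib = np.unravel_index(d.argmax(), d.shape)
    v, w = X[ia], X[ib]
    if k == 2:
        return np.stack([v, w])
    d_cut = (w - v) / (k - 1)                                  # Eq. (2)
    return np.stack([v + j * d_cut for j in range(k)])         # Eq. (3)

def kmeans(X, k, iters=100):
    """Plain k-means with the cutting-distance initialisation."""
    centers = init_centers(X, k)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(1)
        new = np.stack([X[labels == c].mean(0) if (labels == c).any()
                        else centers[c] for c in range(k)])
        if np.allclose(new, centers):   # centers stopped moving
            break
        centers = new
    return labels, centers

def dbi(X, labels, centers):
    """Davies-Bouldin index: small when clusters are tight and far apart,
    so the k with the smallest DBI is preferred."""
    k = len(centers)
    s = np.array([np.linalg.norm(X[labels == c] - centers[c], axis=1).mean()
                  if (labels == c).any() else 0.0 for c in range(k)])
    total = 0.0
    for c1 in range(k):
        total += max((s[c1] + s[c2]) / np.linalg.norm(centers[c1] - centers[c2])
                     for c2 in range(k) if c2 != c1)
    return total / k
```

Looping k over 2..10 and keeping the k with the smallest DBI mirrors the selection rule applied to the Weibo data in Sect. 4.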
The above content gives a method for clustering Weibo users according to their
interests. By referring to the domain classification system of Sina Weibo's well-known
figures, the users' Internet public opinion is divided into ten categories, and these ten
categories contain almost all of the users' points of interest. Each category serves as an
analysis factor for examining the interest characteristics of each class of users, providing
support and evidence for follow-up operation recommendation and promotion. When analyzing
the public opinion characteristics of the user network, if there are certain linear rela-
tionships between the two analysis factors, then the two analysis factors will have very
similar similarities or connections between the two topics. Therefore, whether there is a
linear relationship between the two analysis factors is very important for the study. The
Pearson correlation test was used to calculate the correlation coefficient of each analysis
factor. If the correlation between two analysis factors is weak, the absolute value of the
Pearson correlation coefficient stays within 0.3, and there is no strong correlation between
the factors. Therefore, these ten categories of user interest analysis factors can be used as
factors for user analysis after classification.
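The two vector computations used in this section, cosine similarity between users' interest vectors and the Pearson check on the analysis factors, can be sketched as follows (helper names are ours; `factors` is assumed to be an n_users × n_factors matrix):

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine of the angle between two users' interest vectors; values
    close to 1 mean the two users' public opinion interests are similar."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def factors_weakly_correlated(factors, threshold=0.3):
    """Pearson check described in the text: every pair of analysis factors
    should have a correlation coefficient of magnitude below `threshold`
    before the factors are used as independent analysis variables."""
    r = np.corrcoef(np.asarray(factors, float), rowvar=False)
    off_diagonal = r[~np.eye(r.shape[0], dtype=bool)]
    return bool(np.all(np.abs(off_diagonal) < threshold))
```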

4 Result Analysis and Discussion

This section uses the characteristics of Weibo user network sentiment extracted by the
UR-LDA model above as the input data set, and processes the data according to the
improved method in this chapter. The number-of-clusters parameter of the BP neural
network algorithm is set during the experiment: because the network public opinion of
the microblog users has been classified into 10 categories, the maximum number of
user clusters is k_max = 10; k starts from 2 and increases by 1 in each cycle until k = 10.
Through the implementation of the clustering algorithm, the initial centers are randomly
generated, and the value of the evaluation index is calculated at the end of clustering for
each k value. The experimental environment is Matlab R2011. The results are
shown in Figs. 3 and 4:

Fig. 3. The DBI value of the number of clusters (y-axis: DBI value, 0–30; x-axis: cluster
number, 1–10)

Fig. 4. Comparison trend diagram of actual value and simulated value

From the experimental results, it can be concluded that when k = 6 the DBI value
is 16, which is the number of clusters that meets the requirements. Therefore, when
using k-means clustering, the micro-blog users are clustered into 6 categories. The
improved BP neural network algorithm above is used to cluster the data of micro-blog
users' Internet public opinion. Taking the 5000 micro-blog users' Internet public
opinion topics in the third chapter as the classified data set, the users are divided into 6
categories. The data set is shown in Table 1.

Table 1. Cluster result dataset


Classification Text quantity Classification Text quantity
Data set A 520 Data set D 1110
Data set B 715 Data set E 185
Data set C 640 Data set F 1830

The distance between each cluster center is also calculated, as shown in Table 2.

Table 2. Class center distance


Clustering   A      B      C      D      E      F
A            –      0.313  0.421  0.334  0.264  0.380
B            0.313  –      0.580  0.465  0.386  0.556
C            0.421  0.580  –      0.492  0.457  0.541
D            0.334  0.465  0.492  –      0.341  0.384
E            0.264  0.386  0.457  0.341  –      0.512
F            0.380  0.556  0.541  0.384  0.512  –

According to the data in Table 2, there are significant differences in the distances
between the 6 final cluster centers. The distance between center 2 (B) and center 3 (C)
reached 0.580, the largest difference, while the distance between center 1 (A) and
center 5 (E) is the smallest, at 0.264. Based on the user classification result, the
improved clustering algorithm is evaluated; the F-measure was used to evaluate the results.

The components of the F-measure are recall and precision. Evaluating the clustering
results by recall and precision not only measures the clustering results accurately, but
also allows the clustering effect to be observed simply and intuitively.
Experiments show that the improved BP neural network method has improved the
recall, precision and F value compared with the traditional BP neural network algo-
rithm. It proves that the improved BP neural network algorithm is feasible and can
better achieve the clustering of micro-blog users’ network public opinion.
After completing the clustering analysis of BP neural network, micro-blog user
network public opinion obtained by crawler is divided into six categories. And the six
categories of interest variables are calculated. According to the results, a general
explanation is given to these six categories first, and then the characteristics of interest
variables of micro-blog users after classification are introduced. In terms of recom-
mendation and promotion, different treatment strategies are proposed for each category
of users. To facilitate observation and analysis, we will cluster the results and generate
the average values of the 6 attribute values of each class, as shown in Table 3.

Table 3. Average of interest variables in six categories of users


Category A B C D E F
Life fashion 0.15 0.14 0.17 0.47 0.13 0.12
Gourmet entertainment 0.05 0.07 0.13 0.13 0.04 0.04
Photography tourism 0.11 0.08 0.11 0.06 0.07 0.07
Sports 0.32 0.00 0.03 0.01 0.02 0.02
Film 0.03 0.07 0.12 0.03 0.13 0.04
Music 0.07 0.02 0.02 0.05 0.07 0.02
Anime game 0.12 0.02 0.03 0.02 0.05 0.01
Read 0.10 0.43 0.10 0.10 0.11 0.15
Industry current affairs 0.01 0.13 0.11 0.09 0.05 0.45
Digital electronic 0.01 0.01 0.05 0.03 0.37 0.08
Sample size 520 715 640 1110 185 1830

By discretizing the result data after clustering, the users' interest preferences become
more intuitive. By defining value ranges for the interest variables, we define weak
interest as [0–0.1], medium interest as [0.1–0.2], and strong interest as [0.3–1].
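As a small illustration, this discretisation can be written as a helper function (the naming is ours, and the gap between 0.2 and 0.3, which the text does not name, is labelled explicitly):

```python
def interest_level(avg):
    """Map an average interest value from Table 3 to the levels defined in
    the text: weak [0-0.1], medium [0.1-0.2], strong [0.3-1]. Values in
    (0.2, 0.3) fall in a range the text does not name."""
    if avg < 0.1:
        return "weak"
    if avg <= 0.2:
        return "medium"
    if avg >= 0.3:
        return "strong"
    return "unspecified"

# Category D's "life fashion" average (0.47) is a strong interest, while
# category A's "photography tourism" average (0.11) is a medium interest.
```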
From Table 3, all six categories of users show interest in the two categories
"life fashion" and "reading", which indicates that these two types of interest are
widespread among ordinary users. "Life fashion" mainly includes fashion brands and
bloggers, shopping, and popular influencer micro-blogs, covering current fashion
trends. "Reading" mainly includes literary reading, life encyclopedias, inspirational
writing, and various skill-oriented micro-blogs, covering reading, literary resonance,
and the need for practical knowledge. These two kinds of interest are closely related to
people's lives, and the medium interest level shows that micro-blog users are
concerned about their own

life and constantly improve their quality and knowledge. Therefore, in the
recommendation push process, some representative micro-blog accounts in "life
fashion" and "reading" can be recommended in addition to "people you may know".
Apart from these two interest variables, each category performs differently on the
other kinds of interest variables, without such universality.

5 Conclusions

The improved BP neural network algorithm is used to cluster the users in an unsu-
pervised way, and the results are evaluated. Finally, the characteristics of each user
group’s network public opinion are analyzed. Suggestions for operation recommen-
dation and promotion are provided. Based on existing research results and methods,
when extracting the characteristics of Sina micro-blog users' online public opinion,
both user behavior information and the interaction characteristics between users are
taken into account. Because the interests reflected by users' direct posts and by their
reactions differ, a UR-LDA-based user network public opinion feature extraction
model is proposed and the results are evaluated. The user data described by the
extracted interest-topic features is clustered using the improved BP neural network.
Six clusters of similar users are obtained with good clustering quality, and the
classification results are analyzed. The recommendation and pro-
motion of six types of users is only an introduction and analysis of sample data. There
is still a lot of work to be done to achieve the full promotion of micro-blog.

Acknowledgements. This work was financially supported by Fund Project of XJPCC (13QN11,
2016AF024 and 2017CA018).

References
1. Nie, F., Zhang, P.: Study on prediction and early warning model of public opinion basing on
K-harmonic means and particle swarm optimization. Inform. Res. 55(102), 111–115 (2017)
2. Wei, J., Zhu, H., Song, R., et al.: Link prediction analysis of internet public opinion transfer
from the individual perspective. New Technol. Libr. Inform. Serv. 36(52), 847–853 (2016)
3. Yan, W.U., Huang, Y., Wei, W.U., et al.: Research on college internet public opinion
prediction based on hybrid artificial neural network. J. Gannan Norm. Univ. 10(78), 441–445
(2016)
4. Lin, L., Wei, D.: Research and application of the internet public opinion assessment model
about government. J. Chongqing Univ. Sci. Technol. 32(12), 110–112 (2016)
5. Chen, Y., Amp, I., Center, N.: Application of internet public opinion monitoring system in
campus network. Comput. Knowl. Technol. 9(46), 958–964 (2016)
6. Fukui, K.: Internet public opinion and a new risk on copyright. J. Inform. Process. Manag. 59
(85), 41–44 (2016)
7. Jin, Y., Xu, H.: On the system construction of government’s governance of internet public
opinion in the era of big data era. J. Party School Tianjin Comm. CPC 36(74), 471–473
(2018)
8. Wang, N., Zhang, W., Niu, L.: Emotion prediction of public sentiment based on ARIMA and
BP neural network model. Electron. Sci. Technol. 5(62), 210–218 (2016)
Applying Quality Function Deployment
in Smart Phone Design

Taylor Barnard and Yi Wang(&)

Plymouth Business School, University of Plymouth, Drake Circus,


Plymouth, Devon PL4 8AA, UK
[email protected], [email protected]

Abstract. Smartphones are ever-developing and it is clear that Samsung, along


with other smartphone manufacturers, will face many obstacles throughout the
design process. As such, the application of the HOQ could be of great relevance
to Samsung and smartphone manufacturers alike. The product itself being a
smartphone opens up a range of features and characteristics that can easily fit
with the application of HOQ. Due to this, the problem of how to design a new
smartphone is a viable and realistic theme for this paper.

Keywords: Maintenance program · Value chain · Smart maintenance

1 Introduction

In 2018, approximately 1.4 billion smartphones were sold worldwide [1]. In a market
landscape where designs are getting more and more outlandish, such as Huawei’s
foldable screens [2], the design process in smartphone production is central to the
success of a smartphone life-cycle. This paper aims to solve the decision-making
problem of how to set about designing a new smartphone. For context, the paper will
take on the perspective of Samsung and how they can tackle product design of a new
flagship smartphone to rival that of Apple’s.
Samsung is currently the world's biggest smartphone manufacturer [3]. This is owed
to the extensive range of smartphones they offer; from the premium S10, to the entry
level J6 [4]. The decision-making problem of how to best design a new phone is a
plausible scenario.
More specifically, the solution to this decision-making problem is how Samsung
can utilise Quality Function Deployment [QFD], focussing on the House of Quality
[HOQ], to analyse a range of variables that can affect smartphone design. Although
there are many possible solutions to the decision-making problem, the HOQ is the focus
of this paper. The paper aims to define the HOQ, critically evaluate its effectiveness,
and continually apply it to the context of Samsung’s decision-making problem.
Moreover, the structure of the paper is as follows. The paper will begin with a
review of existing literature on the HOQ, where the concept will be defined, and
differing concepts discussed. Within the literature review, the current research land-
scape on the HOQ will be identified. Following the literature review, the paper will

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 396–401, 2020.
https://doi.org/10.1007/978-981-15-2341-0_49
Applying Quality Function Deployment in Smart Phone Design 397

then discuss the relevance of the HOQ to Samsung’s decision-making problem. HOQ
will be critically evaluated using appropriate theory and research findings. Lastly, a
conclusion will summarise key points discussed and include suggestions for Samsung
with regards to the use of HOQ to tackle its decision-making problem. To begin, it is
key to define the HOQ.

2 Literature Review

Hauser [5] defines the HOQ as one of four houses within Quality
Function Deployment [QFD]. Hauser explains that the HOQ is used to ‘understand the
voice of the customer and to translate it into the voice of the engineer.’ [p. 61]. The
HOQ consists of matrices between customer requirements [CR], design
requirements [DR], and the DRs themselves. Each matrix uses a scoring system to
show the importance of each CR, the relationships between CRs and DRs, and the
relationships among the DRs themselves. Ultimately, the outcome of completing the HOQ
is to give engineers – or whoever is designing a product – a ‘blueprint’ of design
features and their relation to one another, all in the context of customer needs.
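As a concrete illustration of these matrices, the short sketch below combines CR importance ratings with CR–DR relationship scores (using the common 0/1/3/9 none/weak/moderate/strong convention) into DR priority scores. All CR and DR names and numbers are invented for illustration; they are not taken from any real Samsung HOQ.

```python
# Minimal House of Quality prioritisation sketch (hypothetical data).
# CR importance uses a 1-5 scale; relationship strengths use the
# common 0/1/3/9 QFD convention (none/weak/moderate/strong).

crs = {"long battery life": 5, "high-quality screen": 4, "slim body": 3}
drs = ["battery capacity (mAh)", "display panel grade", "body thickness (mm)"]

# relationships[cr][j] = strength of the link between that CR and DR j
relationships = {
    "long battery life":   [9, 1, 3],
    "high-quality screen": [0, 9, 0],
    "slim body":           [3, 1, 9],
}

def dr_priorities(crs, drs, relationships):
    """Absolute DR importance: sum over CRs of (CR weight x strength)."""
    scores = [0] * len(drs)
    for cr, weight in crs.items():
        for j, strength in enumerate(relationships[cr]):
            scores[j] += weight * strength
    return dict(zip(drs, scores))

print(dr_priorities(crs, drs, relationships))
```

In this toy example the battery-related DR scores highest, telling the design team where engineering effort matters most to customers.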
As the HOQ was applied to the manufacturing of more and more products over time,
the concept was scrutinised and challenged. One ongoing theme is
how the CRs of the HOQ should be decided. It is up to a firm’s QFD team to
construct the HOQ, which introduces a level of subjectivity. If subjective inputs into the
HOQ are not accurate, the whole process will be flawed, and the final product can
misrepresent CRs. This underlying problem is often described as ‘fuzzy’. Fuzzy data,
databases, or concepts are those that are uncertain and can vary according to context
or conditions [6].
Kim et al. [7] produced a mathematical process that helps handle the fuzzy data
surrounding the HOQ. The aim of the article was to tackle the issue of fuzzy data and
introduce an extra step practitioners could take when producing the HOQ to ensure
variables remained consistent and representative of CRs. In conclusion, Kim et al. said
their work ‘could allow the design team to mathematically consider trade-offs among the
various performance characteristics and the inherent fuzziness in the system.’ [p. 517].
A big part of Kim et al.’s process was customer competitive analysis – how
competitors’ design features compare with your own in relation to CRs. However,
other research shows that customer competitive analysis is insufficient on its own.
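Kim et al.’s exact model is not reproduced here, but the flavour of fuzzy arithmetic can be sketched with triangular fuzzy numbers (TFNs): an uncertain rating (a, b, c) carries a lower bound, a most likely value and an upper bound, and a weighted combination of ratings is finally defuzzified to one crisp score. All numbers below are illustrative assumptions.

```python
# Sketch of handling "fuzzy" ratings with triangular fuzzy numbers.
# A TFN (a, b, c) encodes an uncertain score: b is the most likely
# value, a and c its lower/upper bounds. Generic illustration only,
# not the exact formulation of Kim et al.

def tfn_weighted_sum(weights, scores):
    """Component-wise weighted sum of TFN scores with crisp weights."""
    a = sum(w * s[0] for w, s in zip(weights, scores))
    b = sum(w * s[1] for w, s in zip(weights, scores))
    c = sum(w * s[2] for w, s in zip(weights, scores))
    return (a, b, c)

def defuzzify(tfn):
    """Centroid of a triangular membership function: (a + b + c) / 3."""
    return sum(tfn) / 3

# Two customer ratings of one DR, weighted 0.6 / 0.4:
total = tfn_weighted_sum([0.6, 0.4], [(3, 5, 7), (2, 4, 6)])
print(total, defuzzify(total))
```

The crisp centroid value lets fuzzy customer ratings feed into the ordinary HOQ scoring while the spread (a, c) records how uncertain the input was.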
Klochkov et al. [8] suggest that those constructing the HOQ should consider other
factors besides competitor analysis, such as how customers graded the satisfaction of
their needs. Chen and Weng [9] support this notion too, explaining that relying solely
on competitor analysis would be difficult where a new product is being developed –
there would be no competitors to analyse. Chen and Weng’s approach is a step forward
in the process of creating the HOQ, as it puts the relationships between CRs and DRs,
and among the DRs themselves, into ‘linguistic terms’ [p. 569], which accounts for
‘uncertainty in the stage of product design’ [p. 569].

3 Critical Evaluation

One of the biggest benefits to Samsung in producing a new flagship smartphone
is the potential competitive advantage gained [5]. As the HOQ ‘links the
voice of the customer to the product design attributes’ [10, p. 343], Samsung will know
which features customers want the new smartphone to have. This can
create a competitive advantage if Samsung spots a new trend emerging. For example,
one CR may be a variety of ways to take photos; the designers can then create a DR of
a back camera with multiple interchangeable lenses, meeting the CR. If a rival, such as
Apple, has not seen this emerging trend, Samsung’s new smartphone will give it a
competitive edge, encouraging new smartphone sales.
However, rival smartphone manufacturers will most likely be conducting their own
ongoing research into customer requirements. This means any emerging trends may be
picked up by them too, and any competitive advantage to Samsung could be
minimised. On the other hand, this does not rule out the usefulness of the HOQ. Chen
et al. [11] shed light on the human factor of the HOQ: Samsung’s smartphone designers
can apply the human factor by interpreting the CRs differently from rivals and
reflecting this in their DRs. As such, a competitive edge can still be gained so long as
the CRs are still met. In contrast, the HOQ’s human factor can also tarnish its
usefulness.
The human factor can hinder the HOQ’s usefulness [12]. All aspects of the
traditional HOQ involve the human factor, which raises the issue of subjectivity. For
example, there is no set rulebook on how to score each matrix, especially for a new
product such as Samsung’s new smartphone; when it comes to scoring the
relationships between DRs, the scoring may simply be incorrect. This is just one example,
and the human factor recurs throughout the HOQ process. As a result, Samsung’s
new smartphone may not meet its CRs and be deemed unsuccessful. This could be
detrimental to Samsung’s performance, as it opens a gap for rival firms to move into.
Although the human factor appears on Samsung’s side, it can also appear on the
customer’s side through the fuzzy nature of CRs.
CRs can be fuzzy [13]. Samsung could construct the best HOQ possible, but if the
CRs are difficult to interpret, the knock-on effect is huge. For smartphones, this can
easily happen because it is a mass market. One common request from consumers in the
smartphone market is longer battery life. To the designers, many DRs can meet this
CR: a bigger battery, a smaller phone, or a lower-quality screen could all be
implemented. But these can easily conflict with another CR [a DR of a lower-quality
screen conflicts with a CR for a high-quality screen]. Even though Samsung is the
market leader in smartphones, its design team can only be as good as its market
research. If CRs are not clear enough, the end product may not meet customer needs.
Particularly with smartphones, the HOQ can contain many conflicting CRs and DRs
[14]. This puts significant pressure on Samsung to navigate the CRs and DRs and
arrive at a new smartphone that compromises between the two yet satisfies the CRs as
far as possible.
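Such conflicts surface in the ‘roof’ of the HOQ, the DR-versus-DR correlation matrix. A minimal sketch of scanning the roof for negative (conflicting) pairs, with invented DRs and scores:

```python
# Sketch: scanning the HOQ "roof" (DR-vs-DR correlation matrix) for
# conflicts. Scores: +1 supporting, 0 independent, -1 conflicting.
# The DR names and scores are illustrative, not from any real HOQ.

roof = {
    ("bigger battery", "slimmer body"): -1,              # direct conflict
    ("bigger battery", "higher-resolution screen"): -1,  # screen drains battery
    ("slimmer body", "higher-resolution screen"): 0,
}

def conflicts(roof):
    """Return the DR pairs whose correlation score is negative."""
    return [pair for pair, score in roof.items() if score < 0]

for a, b in conflicts(roof):
    print(f"trade-off to resolve: {a} <-> {b}")
```

Flagging these pairs early is exactly what lets conflicting DRs be compromised deliberately instead of discovered late in prototyping.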
Nonetheless, the benefits of Samsung using the HOQ overshadow the drawbacks.
The HOQ does solve the problem of how to begin designing a new smartphone.

The HOQ shows trade-offs between design requirements [15], which helps designers
at Samsung know what will be affected if they change a design feature of a
smartphone. Seeing this can save money and help the smartphone succeed: fewer
prototypes have to be built to see how DRs relate to one another. Manufacturers can
also navigate the design process using the HOQ as a point of reference, without
repeatedly conducting research and meetings to discuss each step of the design process.
Considerable time and money are saved by using the HOQ.
As the HOQ shows where efforts should be focused, resources can be allocated
efficiently [16]. In designing a new smartphone, Samsung can see from the HOQ where
to focus its efforts. For example, if the biggest CR is a good-quality screen, Samsung
knows it should invest the time and money in developing a top-quality new screen.
A big part of the design process is allocating resources efficiently; the HOQ solves this
decision-making problem as it signals what customers want most and which DRs meet
those CRs. Table 1 summarises the main findings.

Table 1. Benefits and limitations of HOQ

Benefits                                                  | Limitations
Competitive advantage gained [17]                         | Human factor can skew results [12]
Design trade-offs are recognised [15]                     | Fuzzy nature of CRs [13]
Resources in manufacturing can be allocated efficiently [16] | Disconnection between CRs and DRs [14]

Evidently, the HOQ is well suited to solving Samsung’s decision-making problem.
The benefits of the HOQ discussed address various issues that may arise during the
design process of a new smartphone, such as cost. Implementing the HOQ can help
ensure that Samsung’s next flagship smartphone meets CRs as fully as possible. With
rivals such as Apple in close competition, Samsung should consider any opportunity,
such as the HOQ, to maintain its market lead.

4 Conclusion

Throughout this paper, the HOQ has been discussed thoroughly to help solve
Samsung’s decision-making problem. The concept was defined in its earliest version,
and the HOQ has been critically analysed. An ongoing theme has been fuzzy data.
Seen in both older and newer literature, fuzzy data has been highlighted as an Achilles
heel of the HOQ. For both CRs and DRs there is no set formula for construction,
creating an opportunity for subjectivity. Future research into the HOQ could focus on
better managing its fuzzy nature, although this could be difficult as the application of
the HOQ changes constantly.
Similarly, the human factor is also a drawback of using the HOQ. The paper has
discussed the human factor, and a cure to this weakness is emerging. A new dynamic
HOQ uses digital algorithms for the data in the HOQ instead of manual interpretation.
As discussed, this dynamic HOQ looks set to be a driver of future digital revolutions.
Samsung can utilise this, as certain CRs relate to digital features of smartphones.
However, for other CRs, and for users of the HOQ that are not concerned with digital
features, the new dynamic HOQ is not suitable. As such, the human factor needs
further research to ensure it does not undermine the usefulness of the HOQ.
As a whole, the paper does support the use of the HOQ to solve Samsung’s
decision-making problem; the benefits simply outweigh the drawbacks. The opportunity
to see CRs and gain a competitive advantage is invaluable to Samsung in such a
competitive market. Without the HOQ, it is difficult to see how Samsung could optimise
its smartphone design process. From the initial creation of the HOQ, Samsung can
begin to quantify the resources it needs to allocate throughout the design process.
Any conflicting DRs can be spotted early on and accounted for, saving both time
and money.
As a final recommendation, Samsung should utilise the HOQ to its fullest potential
and take advantage of any new developments, such as the dynamic HOQ. The
drawbacks discussed should be accounted for, and future research should be conducted
on how to minimise and manage these issues.

References
1. Bradshaw, T.: Are foldable phones more than just a gimmick? Financial Times, 26 February
2019. https://www.ft.com/content/a21b8dae-3941-11e9-b72b-2c7f526ca5d0. Accessed Apr
2019
2. Bradshaw, T.: Huawei unveils Mate X foldable phone at Mobile World Congress. Financial
Times, 24 February 2019. https://www.ft.com/content/d5e7aaf2-3840-11e9-b856-
5404d3811663. Accessed Apr 2019
3. Jung-a, S.: Samsung Electronics warns of weak earnings after 30% profit drop. Financial
Times, 31 January 2019. https://www.ft.com/content/b241f728-24fa-11e9-8ce6-
5db4543da632. Accessed Apr 2019
4. Samsung: Our Smartphones. Samsung (2019). https://www.samsung.com/uk/smartphones/
all-smartphones/. Accessed Apr 2019
5. Hauser, J.R.: How Puritan-Bennett used the house of quality. Sloan Manag. Rev. 34, 61–70
(1993)
6. IGI Global: What is Fuzzy Database. IGI Global (2008). https://www.igi-global.com/
dictionary/introduction-trends-fuzzy-logic-fuzzy/11718. Accessed 28 Apr 2019
7. Kim, K.-J., Moskowitz, H., Dhingra, A., Evans, G.: Fuzzy multicriteria models for quality
function deployment. Eur. J. Oper. Res. 121, 504–518 (2000)
8. Klochkov, Y., Klochkova, E., Volgina, A., Dementiev, S.: Human factor in quality function
deployment. In: 2016 Second International Symposium on Stochastic Models in Reliability
Engineering, Life Science and Operations Management (SMRLO), pp. 466–468 (2016)
9. Chen, L.-H., Weng, M.-C.: A fuzzy model of exploiting quality function deployment. Math.
Comput. Model. 38, 559–570 (2003)
10. Verma, R., Maher, T., Pullman, M.: Effective product and process development using
quality function deployment. School of Hotel Administration Collection, pp. 339–354
(1998)

11. Chen, A., Dinar, M., Gruenewald, T., Wang, M., Rosca, J., Kurfess, T.R.: Manufacturing
apps and the dynamic house of quality: towards an industrial revolution. Manuf. Lett. 13,
25–29 (2017)
12. Olewnik, A., Lewis, K.: Limitations of the house of quality to provide quantitative design
information. Int. J. Qual. Reliab. Manag. 25, 125–146 (2007)
13. Bouchereau, V., Rowlands, H.: Methods and techniques to help quality function
deployment. Benchmarking: Int. J. 7, 8–20 (2000)
14. Poel, I.V.: Methodological problems in QFD and directions for future development. Res.
Eng. Design 18, 21–36 (2007)
15. Vonderembse, M.A., Raghunathan, T.: Quality function deployment’s impact on product
development. Int. J. Qual. Sci. 2, 253–271 (1997)
16. Silva, F.L., Cavalca, K.L., Dedini, F.G.: Combined application of QFD and VA tools in the
product design process. Int. J. Qual. Reliab. Manag. 21, 231–252 (2004)
17. Hauser, J.R., Clausing, D.: The house of quality. The Harvard Business Review, May 1988.
https://hbr.org/1988/05/the-house-of-quality. Accessed 25 Apr 2019
eQUALS: Automated Quality Check System
for Paint Shop

Angel Dacal-Nieto1, Carmen Fernandez-Gonzalez1,
Victor Alonso-Ramos1, Gema Antequera-Garcia1(&),
and Cristian Ríos2

1 Processes and FoF Department, CTAG – Centro Tecnológico de Automoción
de Galicia, Pol. Ind. A Granxa, 36475 O Porriño, Spain
{angel.dacal,Carmen.fernandez,victor.alonso,
gema.antequera}@ctag.com
2 Ledisson AIT, Pol. Ind. A Granxa Parc. 259 Nave 15, 36400 Porriño, Spain
[email protected]

Abstract. Paint quality assessment is a necessary step in many industries,


especially in automotive. Checking parameters such as thickness, gloss, levelling
and colour is a mandatory task to guarantee aspect quality for the customer and to
ensure that only the necessary material is used during manufacturing. This paint
quality assessment has usually been done manually, using hand-held contact devices.
However, controlling a complete car in movement means dozens of measurement
points to perform in a very narrow cycle time. Under these conditions, repeatability,
speed and data transference are challenging. In this paper, we present the eQUALS
system, an automated quality assessment solution for the paint shop. It is composed of
a robot that mounts low-cost contact measurement devices and performs an automatic
inspection of paint quality magnitudes at a series of points on a car in movement,
guaranteeing that neither the device nor, especially, the car is damaged during the
process. It is a modular solution which can adapt to different needs, line speeds,
devices and architectures. eQUALS answers one of the challenges of the ESMERA
project, an H2020 consortium devoted to robotics applications for SMEs.

Keywords: Industry 4.0 · Collaborative robotics · Quality assessment ·
Automation · Paint shop · Automotive

1 Introduction

Controlling paint quality is a well-known operation in manufacturing. It includes the
measurement and analysis of a set of magnitudes and features of the paint result, which
objectively define the quality a customer perceives on the painted surface.
The control process has traditionally been performed by hand, using contact
devices that obtain a measure which is then analysed. Nevertheless, today's
requirements in, for instance, the automotive industry demand a high number of inspection
points in a very limited cycle time. Additionally, it is now very common for the car to
stay in movement throughout the manufacturing process, without stopped stations.
This means that repeatability, speed and data transference are challenging issues.

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 402–409, 2020.
https://doi.org/10.1007/978-981-15-2341-0_50
eQUALS: Automated Quality Check System for Paint Shop 403

In this context, a coordinated partnership composed of the Spanish SME
Ledisson AIT and the research and technology centre CTAG has developed the
eQUALS solution, a robotised system based on low-cost measurement devices in
which the control operations are automated.
This paper focuses on the first phase of the project, in which a physical
proof of concept and a deep analysis of simulated scenarios were developed.
This first stage has validated the main technical challenges, showing that a
factory implementation is feasible.
eQUALS is a project funded under an open call of the H2020 initiative
ESMERA (European SMEs Robotic Applications) [1]. ESMERA defined a series of
industry challenges to be solved using robotics. Among them, one sought a solution
for automated inspection of a car body in the paint shop; this was the trigger for the
development of eQUALS.
In this paper, Sect. 2 contextualises the work and presents the state of the art,
Sect. 3 presents the main components and functionalities of the solution, Sect. 4
shows the results, and Sect. 5 discusses these results to extract the main conclusions
of the work.

2 State of the Art

Paint application is one of the most demanding processes in automobile manufacturing.
Paint is not only a protective coating for the body surface but also a visual appearance
component, especially through colour and gloss, and is the main contact point
for the car user. Painting is performed on the electrostatic principle by automated
robots in explosion-proof, conditioned paint booths, where the total paint coating thickness is
usually between 84–280 µm [2]. Besides thickness, levelling (orange peel), colour
measurement and gloss rate are other important factors that affect car paint quality [3].
Automatic control of paint quality is a challenging task, partially addressed in the
past [4], and not very different from operations such as welding or bead forming [5],
which require great accuracy for aspect and functional reasons [6]. A device is mounted
on a robot, and a group of points is inspected using a measurement device. However,
these operations are usually executed with the car body in a static position. In-movement
applications, on the other hand, usually rely on non-contact systems, mainly
machine vision based, in which accuracy is not so critical [7].
eQUALS aims to perform contact quality control in movement, which is one
of the main challenges of this application. This has traditionally been achieved using
mechanical solutions [8]. However, such solutions lack the flexibility to control
different references on the same manufacturing line.
The new generation of collaborative robots [9] provides sensitive capabilities [10];
this is the case of the Kuka LBR iiwa and the Fanuc CR-35iA. An equivalent method for a
standard robot would be a force sensor [11]. This makes it possible to access
torque data from the six axes of the robot and know accurately whether the robot is
touching an object, and how strong that contact is.
Additionally, there are complementary sensors which help to identify the car body
position with accuracy, such as line encoders and geometric machine vision
systems [12].

The main contribution of eQUALS to the state of the art is its multimodal
approach, which joins the sensitive capabilities of a collaborative robot with
external sensing from the manufacturing line.
Being a contact application, another challenge is avoiding damage to the measurement
device or the car body. This is closely related to performing the process in movement,
but it raises additional issues, such as guaranteeing that the applied force can be
measured. Again, the sensitive capabilities of new collaborative robots are the key to
achieving this: the application can actually measure the force applied during the
measurement [13].
Finally, regarding cost, there are two main groups of solutions on the market for
controlling paint features: manual and automatic. In the first group we identify
devices such as [14–16]: inexpensive devices designed for hand use, although many of
them provide an SDK to handle and invoke measurements automatically.
On the other hand, there are automatic solutions based on controllers and probes,
where the probes are mounted on a robot or similar equipment. These systems
are usually more expensive, but help to partially automate the control process;
examples are the Fischer models “MMS INSPECTION” and “MP0/MP0R SERIES”
[17]. eQUALS aims to use inexpensive devices to allow rapid integration in the
factory.

3 The eQUALS System


3.1 Proof of Concept
As a first step, a proof of concept (POC) was performed to validate the
technologies in a controlled scenario, assessing a limited version of a real in-line
automotive use case.
The main idea (Fig. 1) is that one robot inspects a set of inspection points on a car
body using one hand-held measurement device. A PLC masters the process, and a PC
registers the results in an information system.

Fig. 1. Architecture of the system



The inspection cycle is:

(1) A sensor detects that the car body is in the expected initial position.
(2) The robot moves to the first point of a predefined trajectory, bringing the sensor
closer to the measurement point in two stages: the first stops a little away from the
point at high speed, and the second stops 10 mm from it at reduced speed. At this
point the robot starts to accompany the car movement (thanks to the line encoder).
(3) Once the PLC and the robot consider that the position has been reached, the PC
invokes the measurement on the device via USB.
(4) The same loop is repeated for the remaining points.
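The four-step cycle can be sketched as a simple sequential controller. The stub classes below are hypothetical stand-ins for the real PLC, robot and USB device interfaces, which the paper does not specify:

```python
# Sketch of the eQUALS inspection cycle. The Stub* classes are
# hypothetical stand-ins; the real system drives a PLC, a Kuka LBR
# iiwa and a USB-connected measurement device.

class StubPLC:
    def wait_for_car_at_start(self): pass      # (1) presence sensor fires
    def position_reached(self): return True

class StubRobot:
    def approach(self, point, standoff_mm, speed): pass
    def track_conveyor(self): pass             # follow car via line encoder
    def position_reached(self): return True

class StubGauge:
    def measure(self): return 120.0            # e.g. thickness reading in um

def inspect_car(plc, robot, gauge, points):
    results = {}
    plc.wait_for_car_at_start()                        # (1) car at start position
    for p in points:
        robot.approach(p, standoff_mm=10, speed="high")  # (2) two-stage approach
        robot.approach(p, standoff_mm=0, speed="low")
        robot.track_conveyor()
        if plc.position_reached() and robot.position_reached():
            results[p] = gauge.measure()               # (3) trigger via USB
    return results                                     # (4) same loop per point

print(inspect_car(StubPLC(), StubRobot(), StubGauge(), ["roof", "hood"]))
```

Swapping the stubs for real PLC and robot drivers preserves the control flow while keeping the sequencing logic testable offline.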
This proof of concept was performed at the facilities of CTAG, where a pilot
manufacturing line is available for pre-industrialisation projects involving
in-movement applications. It offers configuration options such as car height and line
speed, and it is independent of robot and PLC brands and architectures.
In this proof of concept, the collaborative robot is a Kuka LBR iiwa [18], the hand
device is an Elcometer, and the PLC is a Siemens S7-1200 [19]. An ad hoc physical
interface between the measurement device and the robot was designed and
manufactured by 3D printing (Fig. 2); this gripper and the attached device can be
seen in Fig. 3.

Fig. 2. 3D design of the POC gripper

Fig. 3. Image of the POC gripper.

Figure 4 shows a measurement at one point on a car body in movement. It is
relevant to note that the sensitivity data from the collaborative robot are used both to
reach a good measurement position and to apply the adequate force between the
device and the car body.

Fig. 4. System measuring.

3.2 Virtual Validation


Using the approach presented in Sect. 3.1, an industrialisation analysis was
performed in a robotics simulation environment. Different configurations for solving a
real in-line challenge were simulated, such as using 2 robots (one per side), 4 robots
(two per side), or more than one measurement device at a time. Each option
was studied using a cyclogram, a complete simulation and an approximate cost.
Depending on each OEM's needs, a different solution would be valid.
Assuming a cycle time of around 1 min and 100 external points on a car body, at least 4
robots are necessary. However, it seems more reasonable to measure different point sets
on different cars, so fewer resources are needed, still achieving a huge improvement
over the manual approach but with limited investment.
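A rough capacity estimate illustrates this trade-off. Assuming, as reported in Sect. 4, roughly 10 points per robot per 1-min cycle with a single device (a robot mounting several devices would do better), covering all 100 points on every car is expensive, while a smaller cell can cover the full point set over a few consecutive cars:

```python
# Rough capacity estimate for the virtual-validation trade-off. The
# figures (100 points, ~10 points per robot per cycle) are assumptions
# taken from this paper's scenario, not OEM requirements.
import math

def robots_for_full_coverage(total_points, points_per_robot_cycle):
    """Robots needed to measure every point on every single car."""
    return math.ceil(total_points / points_per_robot_cycle)

def cars_to_cover_all_points(total_points, robots, points_per_robot_cycle):
    """Consecutive cars needed to sample all points with fewer robots."""
    points_per_car = robots * points_per_robot_cycle
    return math.ceil(total_points / points_per_car)

print(robots_for_full_coverage(100, 10))     # 10 robots to cover one car fully
print(cars_to_cover_all_points(100, 4, 10))  # 4 robots: all points over 3 cars
```

This is why sampling different point sets across cars gives near-complete statistical coverage with a fraction of the investment.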
In this phase, Robcad [20] was used, although other solutions such as Delmia or
Process Simulate [21] would also work, depending on the OEM standard (Fig. 5).

Fig. 5. Offline virtual simulation.

3.3 Phase II
Using the concepts analysed in the virtual validation, a further phase II will
develop a final inspection system composed of a number of robots, each with specific
inspection points to measure using one or more measurement devices. In this phase,
any robot, PLC, device, etc. could be adapted for use in the solution.
In this second phase, some additional features have been identified as relevant,
taking into account manufacturing constraints:
– Flexibility: eQUALS will be developed so that specific inspections can be executed
for specific references.
– User-friendliness: a GUI will allow easy interaction, maintainability and the
launching of additional operations, such as system calibration.
– Data storage: phase II will include a data monitoring system to store process
execution data.
– Communication: the PLC-PC link will be adapted to the specific needs of the OEM,
using established solutions such as OPC, OPC-UA, MQTT or ROS as messaging
tools.
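Whichever messaging layer is chosen, each measurement can travel as a small self-describing record. The sketch below builds such a payload in JSON; the field names (and the example MQTT topic in the comment) are assumptions, not an OEM or eQUALS standard:

```python
# Illustrative JSON payload a PC could publish (e.g. over MQTT or
# OPC UA) for each measurement. Field names are assumptions only.
import json
from datetime import datetime, timezone

def measurement_payload(vin, point_id, magnitude, value, unit, force_n):
    record = {
        "vin": vin,                  # car identifier
        "point": point_id,           # inspection point on the body
        "magnitude": magnitude,      # thickness, gloss, levelling, colour...
        "value": value,
        "unit": unit,
        "applied_force_N": force_n,  # traceability of the contact quality
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

payload = measurement_payload("VIN123", "door-front-left-3",
                              "thickness", 118.4, "um", 27.5)
print(payload)
# e.g. client.publish("equals/measurements", payload) with an MQTT client
```

Keeping the record format independent of the transport makes swapping OPC-UA for MQTT (or ROS topics) a configuration change rather than a redesign.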

4 Results

During the execution of the proof of concept, the main validation points were
assessed:
• Force applied during measurement: the tests confirmed that it is possible to
maintain the applied force between 20 N and 40 N. The low value is the minimum
the measurement device needs to perform a measurement; the high value is a
confidence margin considered sufficient to keep both car and device safe.
• Line speed limit: the current approach can inspect in movement at a maximum
line speed of 60 cars per minute. Above this threshold, the approximation
procedure is insufficient to achieve a fine measurement position.
• Measurement time: this is highly dependent on the measurement device. However,
there is a prior approximation step that requires 4 s. This means that, in an average
1-min cycle time, no more than 10 inspection points can be performed.
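The first and third validation points translate directly into simple acceptance checks. The 20–40 N window and the 4 s approach time come from the results above; the helper functions themselves are an illustrative sketch, not part of the eQUALS software:

```python
# Acceptance checks derived from the validation results: contact force
# must stay inside the 20-40 N window, and with a ~4 s approach per
# point only so many points fit in a 60 s cycle. Thresholds are the
# paper's; the helper names are our own.

FORCE_MIN_N, FORCE_MAX_N = 20.0, 40.0
APPROACH_S, CYCLE_S = 4.0, 60.0

def force_ok(samples_n):
    """True if every force sample stays in the safe measurement window."""
    return all(FORCE_MIN_N <= f <= FORCE_MAX_N for f in samples_n)

def max_points(measure_s):
    """Inspection points per cycle, given the per-point measure time."""
    return int(CYCLE_S // (APPROACH_S + measure_s))

print(force_ok([24.1, 31.7, 38.9]))   # within the 20-40 N window
print(max_points(2.0))                # with a 2 s measurement: 10 points
```

A faster measurement device raises the point budget only modestly, since the 4 s approach dominates the per-point cost.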

5 Conclusions

In this paper, a new automated system for paint quality assessment has been
presented.
A multimodal system using sensitivity data from the robot has been developed to
accurately measure the force applied by the measurement device on the car body.
This has been key to measuring without damaging the device or the car, which is
especially important in aspect zones.
Offline robot programming has been used to simulate the number and position of
robots so that the system can inspect all the control points in a short cycle time.

Finally, applying this process in movement has been the main challenge.
Measuring in movement has provided knowledge that can be transferred to other
robotic applications, such as bead forming or welding, which are currently performed
only when the car is stopped.
The second phase of eQUALS will demonstrate the application of this new
knowledge in a real in-line automotive scenario. In the future, eQUALS will serve as a
basis for further robotic developments in in-movement applications, paint quality
assessment and similar applications.

Acknowledgments. The authors want to thank the ESMERA project (European SMEs Robotic
Applications), which has received funding from the European Union Horizon 2020 Research and
Innovation program, under grant agreement No. 780265 and supports the work of this project.

References
1. ESMERA project website. http://www.esmera-project.eu/. Accessed 30 July 2019
2. Moore, J.R.: Automotive paint application. In: Wen, M., Dušek, K. (eds.) Protective
Coatings. Springer (2017)
3. Svejda, P.: Paint shop design and quality concepts. In: Streitberger, H., Dossel, K. (eds.)
Automotive Paints and Coatings. Wiley (2008)
4. Javad, J., Alborzi, M., Felor, G.: Car paint thickness control using artificial neural network
and regression method. J. Ind. Eng. Int. 7(14), 1–6 (2011)
5. Pires, J.N., Godinho, T., Ferreira, P.: CAD interface for automatic robot welding
programming. Ind. Robot: Int. J. 31(1), 71–76 (2004)
6. Jones, M.G., Erikson, C.E., Mundra, K.: U.S. Patent No. 6,521,861. U.S. Patent and
Trademark Office, Washington, DC (2003)
7. Wang, Y., Chen, T., He, Z., Wu, C.Z.: Review on the machine vision measurement and
control technology for intelligent manufacturing equipment. Control Theory Appl. 32(3),
273–286 (2015)
8. Taniguchi, N.: Current status in, and future trends of, ultraprecision machining and ultrafine
materials processing. CIRP Ann. 32(2), 573–582 (1986)
9. Ferraguti, F., Pertosa, A., Secchi, C., Fantuzzi, C., Bonfè, M.: A methodology for
comparative analysis of collaborative robots for industry 4.0. In: 2019 Design, Automation
and Test in Europe Conference and Exhibition, pp. 1070–1075. IEEE (2019)
10. Chawda, V., Niemeyer, G.: Toward torque control of a KUKA LBR IIWA for physical
human-robot interaction. In: 2017 IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS), pp. 6387–6392. IEEE (2017)
11. Fanuc force sensor product website. https://www.fanuc.eu/es/en/robots/accessories/robot-
vision/force-sensor. Accessed 29 July 2019
12. Pérez, L., Rodríguez, Í., Rodríguez, N., Usamentiaga, R., García, D.: Robot guidance using
machine vision techniques in industrial environments: a comparative review. Sensors 16(3),
335 (2016)
13. Magrini, E., Flacco, F., De Luca, A.: Control of generalized contact motion and force in
physical human-robot interaction. In: 2015 IEEE International Conference on Robotics and
Automation (ICRA), pp. 2298–2304. IEEE (2015)
14. Elcometer website. https://www.elcometer.com. Accessed 29 July 2019

15. Fischer MMS hand device product website. https://www.fischer-technology.com/en/united-states/products/coating-thickness-measurement/portable-measurement-instruments/mms-inspection-designed-for-corrosion-protection/. Accessed 29 July 2019
16. BYK website. https://www.byk.com/es/instrumentos/productos/?a=1&b=14&f=0&faction.
Accessed 29 July 2019
17. Fischer MMS auto product website. https://www.fischer-technology.com/en/united-states/
products/coating-thickness-measurement/automated-measuring-systems/mms-auto-2/.
Accessed 29 July 2019
18. Kuka LBR iiwa product website. https://www.kuka.com/es-es/productos-servicios/sistemas-
de-robot/robot-industrial/lbr-iiwa. Accessed 29 July 2019
19. Siemens S7 product website. https://w5.siemens.com/spain/web/es/industry/automatizacion/
simatic/controladores_modulares/controlador_basico_s71200/pages/s7-1200.aspx. Accessed
29 July 2019
20. Robcad Tecnomatix product website. https://www.plm.automation.siemens.com/global/es/
products/tecnomatix/. Accessed 29 July 2019
21. Gan, Y., Dai, X., Li, D.: Off-line programming techniques for multirobot cooperation
system. Int. J. Adv. Robot. Syst. 10(7), 282 (2013)
Equipment Fault Case Retrieval Algorithm
Based on Mixed Weights

Yonglin Tian1, Chengsheng Pan1, Yana Lv1, and Bo Chen2(✉)

1 Key Laboratory of Communication and Network, Dalian University, Dalian, China
[email protected], [email protected], [email protected]
2 College of Information Engineering, Lingnan Normal University, Zhanjiang, China
[email protected]

Abstract. Case-based reasoning (CBR) has been applied in many fields as an
emerging intelligent fault diagnosis method. In existing fault diagnosis
practice, feature weights are mainly assigned by experts, which suffers from
strong subjectivity and low accuracy of retrieval results. A case retrieval
algorithm based on mixed weights (CRAMW) is proposed. First, the entropy
weight method is used to obtain objective weights for the feature attributes of
each fault case, and these are combined with expert experience to form mixed
attribute weights. Second, the similarity between cases is calculated using the
mixed weights and the Euclidean distance, and the case base is searched for the
case most similar to the case under test. Finally, a simulation of the algorithm
for engineering equipment fault diagnosis is designed. The simulation results
show that the algorithm has higher resolution and accuracy than traditional
algorithms. Applying the algorithm to equipment fault diagnosis supports the
smooth progress of maintenance work and improves equipment support efficiency.

Keywords: Case-based reasoning · Fault diagnosis · Case retrieval · Entropy
weight method · Euclidean distance · Equipment support

1 Introduction

With the continuous enhancement of China’s military strength and the gradual
advancement of military informatization, the complexity of various military
equipment has continuously increased, and the requirements for equipment
support have become higher and higher. Equipment fault diagnosis technology is one
of the important links of equipment support. It provides equipment maintenance
personnel with fault information, ensures the smooth operation of equipment support,
enables equipment to be restored to combat readiness as soon as possible, and lays a
solid foundation for the ultimate victory of a war.
Due to the complexity of the battlefield environment and the particularity of
weapons and equipment, it is not only difficult to establish an accurate fault diagnosis
model, but also difficult to effectively analyze the input and output signals. Artificial

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 410–417, 2020.
https://doi.org/10.1007/978-981-15-2341-0_51
intelligence has machine-learning capability, and several case-based reasoning fault
diagnosis methods based on artificial intelligence have been proposed. The core idea
of CBR (Case-Based Reasoning) is to use past problem-solving experience to solve
emerging problems: through case retrieval, one finds the case most similar to the new
situation in the historical case database [1]. CBR consists of four basic processes: case
retrieval, case reuse, case revision, and case retention.
CBR fault reasoning has been widely used in various industries. As early as 1971,
Kling [2] proposed a memory network model and case retrieval algorithm, which is
considered the first scheme to develop analogical information and use it to solve
problems. Jaime [3–5] proposed the concepts of transformational analogy and
derivational analogy in case-based reasoning, and developed the PRODIGY system in
1991. In recent years, domestic and foreign scholars have conducted a great deal of
research on fault diagnosis and case retrieval. Baogang [6] constructed an intelligent
diagnosis model for airborne missiles based on case-based reasoning. Yong [7] applied
the case-based reasoning method to the fault diagnosis of aerospace measurement and
control equipment, effectively improving the efficiency of equipment fault diagnosis.
Mingju [8] applied formal concept analysis to the fault case retrieval process and
proposed an improved similarity calculation method. Bin [9] proposed an overall
similarity calculation method based on edit distance.
Most of the above-mentioned studies assign the weights of fault case features
directly from expert experience, which brings disadvantages such as strong
subjectivity and low accuracy of fault case retrieval. On this basis, an equipment
fault case retrieval algorithm based on mixed weights is proposed and applied to
equipment fault case retrieval to improve the accuracy and resolution of the search
results.

2 CBR Based Fault Diagnosis Method

The CBR-based fault diagnosis method essentially searches a library of historical
failure cases, finds a case similar to the fault to be diagnosed, and then diagnoses
the faulty equipment after appropriate revision. In the CBR-based fault diagnosis
method, case retrieval is the most important step, and the choice of case retrieval
algorithm directly affects the accuracy of fault diagnosis. CBR retrieval usually
calculates the degree of similarity between each problem attribute of the query case
and a fault case to obtain the local similarity (LS), and then calculates the global
similarity (GS) according to the weight and local similarity of each feature attribute.
Finally, the most similar historical case is chosen according to the similarity. The
formula is as follows:
formula is as follows:

X
n
GSðX; YÞ ¼ Wj  LSðX; YÞ ð1Þ
j¼1

where, X; Y represents two different cases, n represents the number of case feature
attributes, Wj represents the weight of the feature attribute of item j.

3 CRAMW Algorithm
3.1 Determination of Attribute Weight Based on Entropy Weight
Method
In the past, methods for weighting case attributes have mainly included the domain
expert scoring method, the analytic hierarchy process, and the comprehensive
evaluation method. In these methods the weights are mainly assigned by experts
based on their experience; although the principle is simple and easy to understand,
it is subjective. The entropy weight method based on information entropy can be
used to determine the objective weight of each symptom attribute of a case, which
can further improve the accuracy of fault diagnosis [10]. The calculation steps of
the entropy weight method are as follows:
(1) Build the failure symptom matrix Q of m fault cases and n symptom attributes:

$$Q = (q_{ij})_{m \times n} \quad (i = 1, 2, \ldots, m;\; j = 1, 2, \ldots, n) \qquad (2)$$

(2) Normalize the columns of the fault symptom matrix Q to obtain the normalized matrix B:

$$b_{ij} = \frac{q_{ij}}{\sqrt{\sum_{i=1}^{m} q_{ij}^2}} \qquad (3)$$

(3) Determine the information entropy $H_j$ of the $j$-th symptom attribute:

$$H_j = -\frac{1}{\ln m} \sum_{i=1}^{m} f_{ij} \ln f_{ij} \qquad (4)$$

where $0 \le H_j \le 1$ and

$$f_{ij} = \frac{b_{ij}}{\sum_{i=1}^{m} b_{ij}} \qquad (5)$$

(4) Determine the weight $w_j$ of the $j$-th symptom attribute:

$$w_j = \frac{1 - H_j}{\sum_{j=1}^{n} \left(1 - H_j\right)}, \quad \text{with} \quad \sum_{j=1}^{n} w_j = 1 \qquad (6)$$
The final failure symptom attribute weights given in this paper combine two sets of
weights. The comprehensive weight is calculated as follows:

$$W_j = \frac{W_j^{(1)} W_j^{(2)}}{\sum_{j=1}^{n} W_j^{(1)} W_j^{(2)}} \qquad (7)$$

where $W^{(1)}$ is the objective weight calculated from the fault symptom data using
the information entropy weight method, and $W^{(2)}$ is the subjective weight given
directly by expert experience.
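The entropy-weight steps (3)–(7) can be sketched as follows. This is an illustrative NumPy implementation; the function names are ours, not from the paper:

```python
import numpy as np

def entropy_weights(Q):
    """Objective weights per Eqs. (3)-(6): column-normalize the symptom
    matrix, compute the information entropy H_j of each attribute, and
    weight each attribute by 1 - H_j (lower entropy = more discriminative)."""
    Q = np.asarray(Q, dtype=float)
    m, _ = Q.shape
    B = Q / np.sqrt((Q ** 2).sum(axis=0))            # Eq. (3)
    F = B / B.sum(axis=0)                            # Eq. (5)
    # convention: f * ln f = 0 when f = 0
    H = -np.where(F > 0, F * np.log(F), 0.0).sum(axis=0) / np.log(m)  # Eq. (4)
    return (1.0 - H) / (1.0 - H).sum()               # Eq. (6)

def mixed_weights(w_objective, w_subjective):
    """Eq. (7): blend entropy weights with expert weights and renormalize."""
    p = np.asarray(w_objective) * np.asarray(w_subjective)
    return p / p.sum()
```

Note that a constant attribute column has entropy $H_j = 1$ and therefore receives zero weight — exactly the behaviour that makes the entropy weights objective.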

3.2 Equipment Fault Case Retrieval Algorithm Based on Mixed Weights

Describe the case library, against which a new equipment failure to be diagnosed is
matched, as a multidimensional array C:

$$C = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{bmatrix} \qquad (8)$$

where the $i$-th row $c_i$ of $C$ represents the $i$-th case in the case library, and
$x_{i1}, x_{i2}, \ldots, x_{in}$ are its feature attribute values.

The similarity matching algorithm based on the Euclidean distance is the most
classic and practical choice when computing similarity. The complete failure case
similarity calculation includes two steps.

First, calculate the Euclidean distance between the symptom values of the case to
be diagnosed and each case in the case library:

$$d = \sqrt{(x_{ij} - x_{0j})^2} \qquad (9)$$

where $d$ is the Euclidean distance between the corresponding fault symptom
attribute values of case $C_i$ and case $C_0$.
Second, according to the local Euclidean distances calculated in the previous step,
combined with the weights of the fault symptom attributes, calculate the global
similarity $sim(C_0, C_i)$ between the cases:

$$D(C_0, C_i) = \sqrt{\sum_{j=1}^{n} w_j \left(x_{ij} - x_{0j}\right)^2} \qquad (10)$$

$$sim(C_0, C_i) = \frac{1}{1 + D(C_0, C_i)} \qquad (11)$$
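Equations (10)–(11) amount to a weighted Euclidean distance mapped into (0, 1]. A minimal sketch, with our own naming and assuming the attribute values have already been normalized as in Sect. 4:

```python
import numpy as np

def global_similarity(case_library, query, weights):
    """Eqs. (10)-(11): weighted Euclidean distance between the query case
    and every row of the case library, mapped to a similarity in (0, 1]."""
    C = np.asarray(case_library, dtype=float)
    c0 = np.asarray(query, dtype=float)
    w = np.asarray(weights, dtype=float)
    D = np.sqrt((w * (C - c0) ** 2).sum(axis=1))   # Eq. (10)
    return 1.0 / (1.0 + D)                          # Eq. (11)
```

An identical case has distance 0 and similarity 1; larger distances decay toward 0, which is what gives the method its resolution between candidate cases.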
4 Results and Discussion

To verify the feasibility of the mixed-weight equipment fault case retrieval
algorithm, this paper applies different case retrieval algorithms to the diagnosis of a
certain piece of engineering equipment. Fifteen typical cases ($C_1$–$C_{15}$) are
selected for the case library; each case has nine symptom attributes ($V_1$–$V_9$).
The symptoms of the fault to be diagnosed and of the fault cases are shown in Table 1:

Table 1. Failure data of a project equipment failure


Cases V1 V2 V3 V4 V5 V6 V7 V8 V9
C1 730 2 112 155 77 21 311 16 123
C2 651 11 50 309 58 79 15 100 251
C3 1175 12 107 229 463 34 21 6 55
C4 1214 3 48 217 673 53 49 80 77
C5 385 8 117 334 804 20 18 83 45
C6 797 6 58 132 122 58 44 87 42
C7 736 5 43 282 493 28 32 13 235
C8 1009 10 125 280 409 62 48 42 158
C9 1094 7 83 161 23 64 14 29 150
C10 1155 1 69 220 296 68 27 78 44
C11 1318 12 84 100 590 98 55 104 275
C12 131 8 24 55 13 6 12 6 7
C13 240 10 51 42 94 10 13 10 9
C14 1466 11 128 157 863 84 51 59 113
C15 655 7 102 364 98 18 20 19 27
C0 456 10 77 93 170 15 23 32 30

The attribute weight vector determined by expert scoring is (0.25, 0.025,
0.025, 0.1, 0.2, 0.05, 0.05, 0.1, 0.2); the attribute weight vector determined by the
entropy weight method is (0.0695, 0.0629, 0.0685, 0.0857, 0.1651, 0.1108, 0.1390,
0.1482, 0.1502); and the mixed attribute weight vector determined jointly by expert
scoring and the entropy weight method is (0.1453, 0.0131, 0.0143, 0.0717, 0.2761,
0.0463, 0.0581, 0.1239, 0.2512). After normalizing the data in Table 1, the Expert
Weighted Euclidean Algorithm (EWEA), the Entropy-Euclidean Algorithm (EEA) and
the proposed CRAMW algorithm are used to perform similarity calculations on the
equipment failure cases. Table 2 gives the fault diagnosis results of the different
similarity algorithms, and Fig. 1 shows their comparison.
Table 2. Fault diagnosis results of different similarity algorithms


Cases EWEA algorithm EEA algorithm CRAMW algorithm
C1 0.9053 0.9034 0.9037
C2 0.8098 0.8097 0.7984
C3 0.8742 0.8953 0.8802
C4 0.8306 0.8346 0.8259
C5 0.8298 0.8351 0.8144
C6 0.9019 0.8812 0.9009
C7 0.8378 0.8510 0.8284
C8 0.8506 0.8532 0.8491
C9 0.8682 0.8880 0.8694
C10 0.8725 0.8693 0.8841
C11 0.7705 0.7742 0.7593
C12 0.9246 0.9214 0.9249
C13 0.9459 0.9430 0.9462
C14 0.7924 0.8039 0.7851
C15 0.9033 0.9083 0.9157

Fig. 1. Comparison results of different similarity algorithms

It can be seen from Table 2 and Fig. 1 that the results obtained by different sim-
ilarity algorithms are consistent, that is, the fault case to be tested has the highest
similarity with Case C13 . By comparison, the CRAMW algorithm has higher resolution
than other algorithms.
In this paper, “accuracy” is defined as the number of retrieved cases whose
similarity is not less than 0.9 divided by the total number of cases retrieved.
Different similarity algorithms and different cases to be diagnosed are selected for
comparison experiments, where the total number of cases retrieved is the number of
cases with similarity not less than 0.85. When the number of cases is N = 15, the
search results are shown in Table 3; when N = 30, the search results are shown in
Table 4. As can be seen from the tables, the CRAMW algorithm has higher accuracy
than the other algorithms. Meanwhile, the total number of cases retrieved by the
CRAMW algorithm is smaller than for the other two algorithms, so with the same
effect, retrieval redundancy is effectively avoided and case retrieval time is
shortened. In addition, as the number of cases in the case library increases, the case
retrieval capability also improves.
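This accuracy definition can be reproduced directly from the CRAMW column of Table 2:

```python
# CRAMW similarities for C1..C15, transcribed from Table 2
sims = [0.9037, 0.7984, 0.8802, 0.8259, 0.8144, 0.9009, 0.8284, 0.8491,
        0.8694, 0.8841, 0.7593, 0.9249, 0.9462, 0.7851, 0.9157]

retrieved = [s for s in sims if s >= 0.85]    # total cases retrieved
hits = [s for s in retrieved if s >= 0.9]     # retrieved cases with Sim >= 0.9
accuracy = len(hits) / len(retrieved)
print(len(retrieved), len(hits), round(accuracy, 3))  # 8 5 0.625
```

The counts 8, 5 and the accuracy 0.625 match the CRAMW row of Table 3.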

Table 3. Comparison of case retrieval accuracy between different case retrieval algorithms at N = 15

Algorithm          Total cases retrieved   Retrieved cases with Sim ≥ 0.9   Accuracy   Maximum similarity of retrieved cases
EWEA algorithm     9                       5                                0.556      0.9459
EEA algorithm      10                      5                                0.4        0.9430
CRAMW algorithm    8                       5                                0.625      0.9462

Table 4. Comparison of case retrieval accuracy between different case retrieval algorithms at N = 30

Algorithm          Total cases retrieved   Retrieved cases with Sim ≥ 0.9   Accuracy   Maximum similarity of retrieved cases
EWEA algorithm     24                      17                               0.708      0.9442
EEA algorithm      28                      15                               0.536      0.9321
CRAMW algorithm    19                      7                                0.789      0.9514

Using the CRAMW algorithm proposed in this paper, the fault case data of Table 1
are retrieved, and the top 8 cases ranked by similarity to the case to be diagnosed
are $C_{13} > C_{12} > C_{15} > C_{1} > C_{6} > C_{10} > C_{3} > C_{9}$. After
corresponding case revision and adjustment of the retrieved results, the optimal
diagnosis strategy for the case to be diagnosed is obtained, so that the equipment
can return to normal as soon as possible and support the final victory of the war.

5 Conclusions

To improve the accuracy of equipment fault case retrieval, this paper proposes an
equipment fault case retrieval algorithm based on mixed weights. The entropy weight
method is used to calculate the objective weights of the case symptom attributes,
which are combined with the subjective weights determined by expert experience;
the similarity between the case to be diagnosed and the cases in the fault case library
is then calculated using the Euclidean distance. The algorithm overcomes the
subjectivity of determining weights purely from expert experience. The calculation
results show that, for a given case base size, the algorithm has higher resolution than
the other two algorithms, and for different case base sizes its case retrieval accuracy
is higher. Applied to equipment fault diagnosis, it allows equipment to be repaired in
time, which is conducive to the smooth implementation of equipment support work.

References
1. Bingyang, L., Liting, H., Tingmei, Z.H.: Fault diagnosis system for locomotive based on
case-based reasoning. J. Wuhan Univ. Technol. (Inf. Manage. Eng. Edn.) 37(01), 38–42
(2015)
2. Robert, E.K.: A paradigm for reasoning by analogy. Artif. Intell. 2(2), 147–178 (1971)
3. Jaime, G.C.: A computational model of analogical problem solving. In: The 7th International
Joint Conference on Artificial Intelligence, pp. 147–152. Morgan Kaufmann Publishers Inc,
San Francisco (1981)
4. Jaime, G.C.: Derivational analogy and its role in problem solving. In: The Third AAAI
Conference on Artificial Intelligence (AAAI 1983). AAAI Press, Washington, pp. 64–69
(1983)
5. Jaime, G.C., Oren, E., et al.: PRODIGY: an integrated architecture for planning and learning.
SIGART Bull. 2(4), 51–55 (1991)
6. Baogang, L.: Design of aviation missile fault intelligent diagnosis model based on CBR.
Ordnance Autom. 34(03), 13–17 (2015)
7. Yong, L., Haichao, L., Xizhong, G., Bingguang, Z.H.: Fault diagnosis of space TT&C
equipment based on case-based reasoning. Telecommun. Technol. 57(02), 236–242 (2017)
8. Mingju, F., Yun, Z.H., Zhengguo, X.: Case retrieval for faults of steam turbines based on
formal concept. J. Shandong Univ. Sci. Technol. (Nat. Sci. Edn) 36(04), 24–30 (2017)
9. Bin, S.H., Shuyu, Z.H.: A case-based reasoning method for fault diagnosis of CNC machine
tools based on edit distance. Chin. J. Constr. Mach. 15(04), 359–364 (2017)
10. Barnum, H., Barrett, J., Clark, L.O., et al.: Entropy and information causality in general
probabilistic theories. New J. Phys. 3, 1–32 (2010)
Cascaded Organic Rankine Cycles (ORCs)
for Simultaneous Utilization of Liquified
Natural Gas (LNG) Cold Energy
and Low-Temperature Waste Heat

Fuyu Liu1, Xiangping Hu2(✉), Haoshui Yu3(✉), and Baosheng Zhang1

1 School of Economics and Management, China University of Petroleum, Beijing 102249, China
2 Industrial Ecology Program, Department of Energy and Process Engineering, Norwegian University of Science and Technology, 7491 Trondheim, Norway
[email protected]
3 Department of Energy and Process Engineering, Norwegian University of Science and Technology, 7491 Trondheim, Norway
[email protected]
Abstract. Liquified Natural Gas (LNG) is a good way to transport natural gas
from suppliers to end consumers. LNG contains a huge amount of cold energy
due to the energy consumed in the liquefaction process. Generally, the LNG cold
energy is lost during the regasification process at the receiving terminal. Power
generation with LNG as the heat sink is an energy-efficient and environment-
friendly way to regasify LNG. Among different kinds of power generation
technologies, Organic Rankine Cycle (ORC) is the most promising power cycle
to recover LNG cold energy. ORC has been widely used to convert low-
temperature heat into electricity. If low-temperature waste heat and LNG cold
energy are utilized simultaneously, the efficiency of the whole system
can be improved significantly. However, due to the large temperature difference
between the low-temperature waste heat source and LNG, one stage ORC cannot
exploit the waste heat and LNG cold energy efficiently. Therefore, a cascaded
ORC system is proposed in this study. The optimization of the integrated system
is challenging due to the non-convexity and non-linearity of flowsheet and the
thermodynamic properties of the working fluids. A simulation-based optimiza-
tion framework with Particle Swarm Optimization algorithm is adopted to
determine the optimal operating conditions of the integrated system. The maxi-
mum unit net power output of the integrated system can reach 0.096 kWh per
kilogram LNG based on the optimal results.

Keywords: Organic Rankine Cycles · LNG cold energy · Waste heat · Process
optimization

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 418–423, 2020.
https://doi.org/10.1007/978-981-15-2341-0_52
1 Introduction

Due to the increasing attention to the environmental effects of human activities,
improving energy efficiency and reducing CO2 emissions are more and more
important for the conventional energy industry. Natural gas plays an increasingly
important role in the global energy market. Due to the low energy density of natural gas
compared with oil, natural gas has to be liquified to Liquified Natural Gas (LNG) for
global trade. LNG contains huge amounts of cold energy, which is obtained at the cost
of considerable mechanical work during the liquefaction process. Liquefying one ton
of LNG consumes about 850 kWh of electricity [1]. However, the cold energy of LNG is generally
discarded to seawater or air directly without reutilization. If the cold energy of LNG
can be recovered properly, the energy efficiency and profit of LNG industry can be
improved substantially. Due to the cryogenic temperature level of LNG, it is an ideal
heat sink for Organic Rankine Cycle (ORC). Therefore, ORC could be a promising
technology to recover LNG cold energy [2]. ORC has been widely investigated for the
applications in solar energy [3], engine waste heat recovery [4], biomass energy [5],
industrial waste heat recovery [6], etc. However, there are limited studies focusing on
the ORC system for the LNG cold energy recovery. Yu et al. [7] investigated 22
working fluids for the ORC recovering LNG cold energy. Lin et al. [8] proposed a
transcritical CO2 cycle to recover LNG cold energy and waste heat from the gas turbine
exhaust. However, only one stage power cycle is considered in these studies. To
improve the utilization efficiency of both waste heat and LNG cold energy, the flow-
sheet of the system should be more integrated to avoid too large temperature approach
in the heat exchanger. Cascaded cycles can improve the efficiency of the system to
some extent. However, the process synthesis and optimization of the system become
more complex. Therefore, this study proposes to use an evolutionary optimization
algorithm to optimize the cascaded ORC system for simultaneous utilization of low-
temperature waste heat and LNG cold energy.

2 Process Description

The layout of the proposed cascaded ORC system is illustrated in Fig. 1. The higher
temperature cycle converting waste heat into electricity is called Top Cycle (TC) and
the lower temperature cycle utilizing LNG cold energy is called Bottom Cycle (BC).
Low-temperature waste heat acts as the heat source in the top ORC, and the conden-
sation heat is the heat source of the bottom ORC. LNG is pumped to the evaporation
pressure, which is a key variable in the system. And then the LNG evaporates in the
condenser of the bottom ORC acting as the heat sink. Since the temperature of LNG is
still below ambient temperature after evaporation, LNG is heated up by seawater. It is
assumed that the LNG is heated up to 10 °C by the seawater in this study. To improve
the utilization efficiency of low-temperature waste heat, natural gas superheater is set
between the waste heat and LNG as shown in Fig. 1. Therefore, the top cycle mainly
focuses on recovering the low-temperature waste heat and the bottom cycle aims at
recovering the LNG cold energy. In this study, the LNG is assumed to be used for a
power plant, and the waste heat is assumed to be the treated flue gas at 150 °C. Since
the LNG mass flowrate is assumed to be 1620 kg/h, the flowrate of flue gas
(mostly CO2) should be 9509 kg/h based on the combustion of natural gas. The
composition of LNG is the same as that in [7]. Due to the different operating tem-
perature ranges of the top cycle and bottom cycle, working fluid should be chosen
carefully for the top and bottom cycle, respectively. The top cycle is like a conventional
ORC for low-temperature waste heat recovery. Based on a new pinch based working
fluid selection study [9] and the waste heat conditions, R600 is chosen as the working
fluid for the top cycle in this work. For the bottom cycle, the condensation temperature
of the working fluid should be as close as possible to the temperature of LNG. Based on
the saturation temperature at 1 bar, R1150 has the lowest saturation temperature among
22 working fluids investigated by Yu et al. [7]. Therefore, R1150 is chosen as the
working fluid for the bottom cycle. The integrated process is simulated in
Aspen HYSYS, and Peng-Robinson equation of state is chosen for the thermodynamic
property’s calculation of working fluids and LNG.

Fig. 1. Flowsheet of the cascaded ORC system



3 Process Optimization

Due to the high non-linearity and non-convexity of the problem, a derivative-based
optimization algorithm is inappropriate in this case. An evolutionary algorithm is more
suitable for solving this problem. In this study, we adopt the Particle Swarm Opti-
mization (PSO) algorithm, which was originally developed by Eberhart and Kennedy
[10], to optimize the integrated system. PSO is a population-based optimization
technique inspired by the social behavior of bird flocking or fish schooling. This
optimization algorithm has been successfully applied in energy system, such as heat
exchanger network design [11], distillation column design [12], and ORC system for
engine waste heat recovery [13]. Therefore, it can be adopted in this study to optimize
the cascaded ORC system. Based on the analysis of the degree of freedom, there are 7
independent variables for the integrated system as listed in Table 1. The lower and
upper bounds of these variables are given in Table 1 as well. The upper bound of the
evaporation pressure of both top and bottom ORC is set as the 90% of the critical
pressure of the working fluids to guarantee the stable operation of subcritical ORC
system [14]. To avoid too high capital cost and guarantee the stable operation of the
system, the following constraints should be added in the optimization model. (1) The
minimum temperature approach of TC evaporator and natural gas superheater should
be greater than 5 °C. (2) The minimum temperature approach of TC condenser/BC
evaporator and LNG evaporator/BC condenser should be greater than 3 °C. (3) The
vapor fraction of TC pump and BC pump inlet streams must be 0. (4) The vapor
fraction of TC turbine and BC turbine outlet streams must be greater than 95%. Once
these constraints are violated during the optimization, a large penalty number will be
added to the objective function to drive the search direction within the feasible region.
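The optimization loop described above can be sketched as follows. This is only an illustrative sketch: in the study, the objective evaluates the Aspen HYSYS flowsheet and returns minus the net power, with the four constraints enforced through the penalty term; here a toy quadratic with one made-up constraint stands in for the simulator.

```python
import numpy as np

rng = np.random.default_rng(42)

def pso_minimize(f, lb, ub, n_particles=30, n_iter=100,
                 inertia=0.7, c1=1.5, c2=1.5):
    """Basic particle swarm optimization over the box [lb, ub]."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    x = rng.uniform(lb, ub, size=(n_particles, lb.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)          # respect the variable bounds
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()

def objective(p):
    """Toy stand-in for the flowsheet evaluation: adds a large penalty
    when an (illustrative) constraint is violated, which pushes the
    search back into the feasible region."""
    penalty = 1e6 if p[0] + p[1] > 10.0 else 0.0
    return (p[0] - 2.0) ** 2 + (p[1] - 3.0) ** 2 + penalty
```

The penalty approach keeps the objective well-defined everywhere, which is why it suits simulation-based optimization where constraints cannot be expressed algebraically.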

Table 1. The lower and upper bounds and optimal values of independent variables
Variables Lower bound Upper bound Optimal value
Condensation pressure of TC (bar) 1 5 1.37
Evaporation pressure of TC (bar) 5 34.1 14.69
Working fluid flowrate of TC (kmol/h) 5 50 18.43
Condensation pressure of BC (bar) 1 5 3.24
Evaporation pressure of BC (bar) 10 45.5 38.96
Working fluid flowrate of BC (kmol/h) 10 50 32.17
LNG evaporation pressure (bar) 5 150 79.52
Heat load of TC evaporator (kW) 50 400 181.69
Heat load of natural gas superheater (kW) 0 150 77.20

4 Results and Discussion

The optimal values of the independent variables are listed in Table 1. The maximum
power output of the integrated system is 155.5 kW. Since the mass flowrate of LNG is
assumed as 1620 kg/h, the unit power output is 345.6 kW with the LNG flowrate being
1 kg/s. Therefore, the power output is 0.096 kWh/kg (LNG based metrics). Compared
with the electricity consumed during the liquefaction process, the power output of the
system is still quite low. The Logarithmic Mean Temperature Difference (LMTD) of
top cycle evaporator, top cycle condenser, bottom cycle condenser, and natural gas
superheater are 20.44 °C, 11.85 °C, 19.72 °C, and 15.67 °C respectively. The LMTD
of the top cycle evaporator is larger than other heat exchangers. The final temperature
of waste heat is 44.17 °C, which means that the waste heat recovery ratio is very high.
The top cycle power output, bottom cycle power output and natural gas expander
power output are 29.56 kW, 36.88 kW and 102.30 kW respectively. It is clear that the
expansion of natural gas generates 61% of the total power output. However, the LNG
pump consumes 10.07 kW pumping work. The direct expansion of natural gas is an
effective way to recover the LNG cold energy as well. These are the optimal operating
conditions obtained from the PSO algorithm with a maximum of 100 generations. If the
population size and the iteration numbers are increased, better results could be
obtained. There is still room to improve the efficiency of the system. Besides the
cascaded ORC system, a series ORC system could probably achieve higher efficiency.
However, the series ORC system is out of the scope of this paper and will be
investigated in future work.
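The unit-power figures quoted above follow directly from the reported net power and the assumed LNG flowrate:

```python
net_power_kW = 155.5                 # maximum power output of the system
lng_flow_kg_s = 1620.0 / 3600.0      # 1620 kg/h expressed in kg/s

specific_work = net_power_kW / lng_flow_kg_s   # kJ per kg of LNG (= kW per kg/s)
kwh_per_kg = specific_work / 3600.0            # 1 kWh = 3600 kJ
print(round(specific_work, 1), round(kwh_per_kg, 3))  # 345.6 0.096
```

Set against the roughly 850 kWh consumed per ton of LNG during liquefaction, this confirms that only a small fraction of the liquefaction work is recovered.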

5 Conclusion

In this study, a cascaded ORC system to recover low-temperature waste heat and LNG
cold energy is simulated and optimized. Cascaded ORC system can improve LNG cold
energy recovery efficiency. R600 and R1150 are chosen as the working fluids for the
top and bottom cycle respectively since they have appropriate critical properties in their
corresponding operating temperature range. Process optimization is performed based
on a simulation-based optimization framework, which adopts PSO as the optimization
algorithm. The optimal operation conditions of the system are derived based on this
optimization framework. The maximum net power output is 343.2 kW with the LNG
flowrate being 1 kg/s, which is equivalent to 0.096 kWh/kg. The rigorous techno-
economic optimization of the integrated energy system and series ORC system will be
investigated in future work.

References
1. Liu, H., You, L.: Characteristics and applications of the cold heat exergy of liquefied natural
gas. Energy Convers. Manage. 40, 1515–1525 (1999)
2. Sung, T., Kim, K.C.: LNG cold energy utilization technology. In: Zhang, X., Dincer, I. (eds.)
Energy Solutions to Combat Global Warming, pp. 47–66. Springer, Cham (2017)
3. Patil, V.R., Biradar, V.I., Shreyas, R., Garg, P., Orosz, M.S., Thirumalai, N.C.: Techno-
economic comparison of solar organic Rankine cycle (ORC) and photovoltaic (PV) systems
with energy storage. Renew. Energy 113, 1250–1260 (2017)
4. Wang, E., Zhang, H., Fan, B., Liang, H., Ouyang, M.: Study of gasoline engine waste heat
recovery by organic rankine cycle. Adv. Mater. Res. 383, 6071–6078 (2012)
5. Drescher, U., Brüggemann, D.: Fluid selection for the organic rankine cycle (ORC) in
biomass power and heat plants. Appl. Therm. Eng. 27, 223–228 (2007)
6. Yu, H., Feng, X., Wang, Y., Biegler, L.T., Eason, J.: A systematic method to customize an
efficient organic rankine cycle (ORC) to recover waste heat in refineries. Appl. Energy 179,
302–315 (2016)
7. Yu, H., Kim, D., Gundersen, T.: A study of working fluids for organic rankine cycles
(ORCs) operating across and below ambient temperature to utilize liquefied natural gas
(LNG) cold energy. Energy 167, 730–739 (2019)
8. Lin, W., Huang, M., He, H., Gu, A.: A transcritical CO2 Rankine cycle with LNG cold
energy utilization and liquefaction of CO2 in gas turbine exhaust. J. Energy Res. Technol.
131, 042201 (2009)
9. Yu, H., Feng, X., Wang, Y.: A new pinch based method for simultaneous selection of
working fluid and operating conditions in an ORC (Organic Rankine Cycle) recovering
waste heat. Energy 90, 36–46 (2015)
10. Eberhart, R., Kennedy, J.: A new optimizer using particle swarm theory. In: Proceedings of
the Sixth International Symposium on Micro Machine and Human Science MHS, pp. 39–43.
IEEE (1995)
11. Pavão, L.V., Costa, C.B.B., Ravagnani, M.A.S.S.: Heat exchanger network synthesis
without stream splits using parallelized and simplified simulated annealing and particle
swarm optimization. Chem. Eng. Sci. 158, 96–107 (2017)
12. Javaloyes-Antón, J., Ruiz-Femenia, R., Caballero, J.A.: Rigorous design of complex
distillation columns using process simulators and the particle swarm optimization algorithm.
Ind. Eng. Chem. Res. 52, 15621–15634 (2013)
13. Liu, H., Zhang, H., Yang, F., Hou, X., Yu, F., Song, S.: Multi-objective optimization of fin-
and-tube evaporator for a diesel engine-organic Rankine cycle (ORC) combined system
using particle swarm optimization algorithm. Energy Convers. Manage. 151, 147–157
(2017)
14. Braimakis, K., Preißinger, M., Brüggemann, D., Karellas, S., Panopoulos, K.: Low grade
waste heat recovery with subcritical and supercritical organic rankine cycle based on natural
refrigerants and their binary mixtures. Energy 88, 80–92 (2015)
Manufacturing Technology
Effect of T-groove Parameters on Steady-State
Characteristics of Cylindrical Gas Seal

Junfeng Sun¹, Meihong Liu¹(✉), Zhen Xu², Taohong Liao³, and Xiangping Hu⁴(✉)

¹ Faculty of Mechanical and Electrical Engineering,
Kunming University of Science and Technology, Kunming 650504, China
[email protected], [email protected]
² Faculty of Mechanical and Electrical Engineering, Yunnan Open University,
Kunming 650500, China
[email protected]
³ Department of Marine Technology, Norwegian University of Science
and Technology, Trondheim, Norway
[email protected]
⁴ Industrial Ecology Programme, Department of Energy
and Process Engineering, Norwegian University of Science and Technology,
Trondheim, Norway
[email protected]

Abstract. Gas film seal technology is becoming increasingly important as an
advanced new rotary shaft seal technology in aviation engines and industrial
gas turbines. In this paper, the impacts of several parameters of the T-groove
cylindrical gas seal, such as groove number, groove depth, groove width ratio,
dam groove width ratio and floating ring length, on the steady-state
characteristics of the cylindrical film seal are studied in detail by the
control variable method using computational fluid dynamics software; the focus
is on the pressure distribution, the gas film stiffness, and the leakage.
Results show that as the number of grooves increases, the gas film stiffness
increases gradually, but the leakage and the leakage-stiffness ratio decrease.
The results also show that as the groove depth increases, there is a maximum
value for the gas film stiffness and a minimum value for the leakage. This
research plays an important role in guiding the design and the application of
cylindrical gas seals.

Keywords: Gas cylinder film seal · T-groove parameters · Gas film stiffness ·
Leakage

1 Introduction

In the recent development of sealing technology, the new and advanced gas film
seal technology is gaining attention in industrial gas turbines and aviation
engines [1], since, compared with other technologies, it offers lower leakage,
wear, and energy consumption, together with longer life expectancy and simpler,
more reliable operation [2]. Ma et al. carried out a series of studies on the
cylindrical gas seal and obtained results for the gas film reaction force, the
film stiffness, the friction torque, and the seal leakage [3]. The dynamic
characteristics of the cylindrical gas seal were
© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 427–433, 2020.
https://doi.org/10.1007/978-981-15-2341-0_53
also studied by the perturbation method [4]. Ma et al. further compared the
performance of several common spiral groove forms and found that the
cylindrical gas film seal is more suitable for sealing the key parts of aero
engines [5].
As a new structure of cylindrical gas seal, the T-groove cylindrical gas seal
has received relatively little research attention and is therefore studied in
this paper. The focus is on the influence of the T-groove parameters on the
steady-state characteristics of the cylindrical gas film, investigated by the
control variable method. This research plays an important role in guiding the
design and the application of cylindrical gas seals.

2 Model

2.1 T-groove Cylinder Gas Film Seal

Fig. 1. The sketch of the expansion diagram of T-groove cylinder gas film seal

Figure 1 illustrates the sketch of the expansion diagram of the T-groove
cylinder gas film seal. The sealing medium is air, and the detailed description
of each parameter is presented in Table 1.

Table 1. Parameter definitions

Parameter type         Parameter (unit)               Value
Slot parameters        Wg (mm)                        5.73
                       Wr (mm)                        17.19
                       Br (mm)                        0.91
                       Bg (mm)                        9.09
Structural parameters  Static ring radius Rk (mm)     58.4
                       Moving ring radius Rj (mm)     58.41
                       Seal clearance c (µm)          10
                       Eccentricity rate ε            0.5
                       Eccentricity e (µm)            5
Operating parameters   Rotational speed n (r/min)     8000
                       Pressure difference ΔP (MPa)   0.01
                       Viscosity µ (Pa·s)             1.79 × 10⁻⁵
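As a quick cross-check of Table 1 (an illustrative sketch, not part of the authors' model), the absolute eccentricity equals the eccentricity rate times the seal clearance, e = ε·c:

```python
# Cross-check of Table 1: eccentricity e = eccentricity rate * seal clearance.
c_um = 10.0      # seal clearance c in micrometres
epsilon = 0.5    # eccentricity rate (dimensionless)
e_um = epsilon * c_um
print(e_um)      # 5.0, matching the e = 5 µm entry in Table 1
```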
2.2 Model Structure


First, the static and moving rings of the T-groove gas film seal were designed.
The inner surface of the static ring was machined with the T-groove, whose
parameters are shown in Table 1. The model of the T-groove gas film is then
built, as illustrated in Fig. 2.

Fig. 2. The model of the T-groove cylinder gas film seal

2.3 Mesh Structure


To mesh the T-groove cylinder gas film seal, HYPERMESH and ANSA are used in
this paper. One important characteristic of the cylinder gas film seal is the
large difference in scale between the overall structure and the film thickness:
the maximum dimension of the model is tens of millimeters, but the minimum
dimension is only tens of microns. Therefore, the mesh must be refined in the
thickness direction. Furthermore, a simplistic meshing approach cannot be used,
since it might lead to a poor-quality mesh that does not satisfy the
requirements of the FLUENT calculation, and results simulated on such a mesh
might not be accurate. As shown in Fig. 3, using HYPERMESH and ANSA, the model
is divided into two parts, the groove and the platform. In this paper, a
hexahedral mesh structure is used, and the quality of the mesh is controlled by
meshing from surface to body. After the model is built, it is then not
difficult to set up the boundary conditions.

(a) Macro grid (b) Local grid

Fig. 3. The mesh structure


2.4 Model Assumptions


To perform the simulation and analysis, some model assumptions are necessary
and described here. The first assumption is that the fluid film between the
moving and stationary rings is a continuous medium satisfying the Newtonian
viscosity law. The second is that no relative sliding exists between the film
and the surfaces of the moving and static rings. The third is that there is no
disturbance or vibration, in other words, the film is stable. The fourth is
that there are no pressure or thermal deformations during operation. The last
assumption is that both body forces and inertial forces are negligible.

2.5 Boundary Conditions


Several boundary conditions are required, such as the rotational speed of the
moving ring, the moving and static ring walls, and the designated z axis. In
this paper, the rotational speed is set to ω = 8000 r/min. The moving ring wall
is set as a moving wall and the static ring wall as a stationary wall. The
rotation axis of the moving ring is used as the designated z axis.
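For reference (an illustrative conversion, not stated in the paper), a rotational speed of 8000 r/min corresponds to an angular velocity of ω = 2πn/60 ≈ 837.8 rad/s:

```python
import math

n_rpm = 8000.0                        # rotational speed from Table 1 (r/min)
omega = 2.0 * math.pi * n_rpm / 60.0  # angular velocity in rad/s
print(round(omega, 1))                # 837.8
```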

2.6 Solution Set


Using FLUENT to simulate and analyze the sealing performance of the gas film is
feasible. The analytical method is accurate and its results are reliable, as
has been verified in the literature [6–10].

3 Results

3.1 Effect of Number of Grooves


Figure 4 illustrates the relationship between the maximum pressure, the gas
film stiffness, the leakage, and the number of grooves. Figure 4(a) shows that
the maximum pressure decreases gradually as the number of grooves increases;
when the number of grooves is 22, the maximum pressure tends to stabilize. The
film stiffness, however, increases with the number of grooves and also tends to
stabilize when the number of grooves reaches 22. Figure 4(b) shows the effect
of the number of grooves on the leakage: the leakage decreases as the number of
grooves increases. This is because the hydrodynamic pressure effect is enhanced
when the number of grooves is increased; the pressure at the outlet therefore
rises, and leakage of the sealing gas is impeded. Since the leakage gradually
decreases, the sealing performance is improved.
Fig. 4. Effect of number of T-groove on the maximum pressure and the film stiffness (a) and the
leakage (b)

3.2 Effect of Groove Depth


Figure 5 shows the variation of the maximum pressure and the film stiffness
(a) and the leakage (b) with the groove depth. The results in Fig. 5(a) show
that the maximum pressure and the film stiffness reach their maximum values
simultaneously at a groove depth of about 20 µm, and then gradually decrease.

Fig. 5. Effect of groove depth on the maximum pressure and the film stiffness (a), and the
leakage (b)

The impact of the groove depth on the leakage is shown in Fig. 5(b). It can be
seen from the figure that the leakage reaches its minimum value when the groove
depth is also about 20 µm. This is because, as the groove depth increases, the
hydrodynamic pressure effect is enhanced and the leakage is reduced. However,
when the depth exceeds 20 µm, a negative pressure zone forms as the gas flows
through the groove, and the formation of this zone prevents the hydrodynamic
pressure effect from increasing further. Meanwhile, the sealing gap between
the moving ring and the stationary ring grows as the groove depth increases.
Therefore, the leakage begins to increase once the groove depth exceeds about
20 µm.

3.3 Effect of Groove Width Ratio (c)


Figure 6 shows the response of the maximum pressure and the film stiffness
(a) and the leakage (b) to the groove width ratio. The results show that the
maximum pressure reaches its maximum value at a groove width ratio of 0.6 and
then gradually decreases. The film stiffness increases with the groove width
ratio; when the groove width ratio reaches about 0.5, the film stiffness tends
to stabilize.

Fig. 6. Effect of groove width ratio on the maximum pressure and the film stiffness (a), and the
leakage (b)

Figure 6(b) shows the impact of the groove width ratio (c) on the leakage. The
leakage increases with the groove width ratio, because increasing the groove
width ratio enlarges the sealing gap in the circumferential direction, which
leads to higher leakage.

4 Summary

The effect of several T-groove parameters on the sealing performance is
investigated, and from the analysis the following conclusions can be drawn:
1. The film stiffness increases with the number of grooves and tends to
stabilize when the number of grooves reaches 22, while the leakage decreases as
the number of grooves increases. This indicates that, with the other parameters
held constant, there is an optimal number of grooves at which both the leakage
and the film stiffness reach stable values.
2. With increasing groove depth, there is a maximum value of the film stiffness
and a minimum value of the leakage; these two extreme values are reached
simultaneously at a groove depth of 20 µm. This shows that, with the other
parameters held constant, there is an optimal groove depth at which the leakage
is reduced to its minimum and the film stiffness rises to its maximum.
3. The film stiffness increases with the groove width ratio, tending to
stabilize at a groove width ratio of 0.5, while the maximum pressure peaks at a
groove width ratio of 0.6. The leakage increases with the groove width ratio,
because increasing the groove width ratio enlarges the sealing gap in the
circumferential direction, which leads to higher leakage.

Acknowledgement. This research was fully supported by the National Natural Science Foun-
dation of China (No. 51765024) and the Youth Project of Science and Technology Department of
Yunnan Province (No. 2017FD132). We gratefully acknowledge the relevant organizations.

References
1. Liu, J., Zhang, Z.: Prospect of aero engine power transmission system in the 21st century.
J. Aerosp. Power 16(2), 108–114 (2001). (in Chinese)
2. Cai, R., Gu, B., Song, P.: Process Equipment Sealing Technology, 2nd edn. Chemical
Industry Press, Beijing (2006). (in Chinese)
3. Ma, G., Xi, P., Shen, X., Hu, G.: Analysis of quasi-dynamic characteristics of compliant
floating ring gas cylinder seal. J. Aerosp. Power 25(5), 1190–1196 (2010). (in Chinese)
4. Gang, M., Guangzhou, X.U., Shen, X.: Design and analysis for spiral grooved cylindrical
gas seal structural parameter. Lubr. Sealing 32(4), 127–130 (2007)
5. Nakane, H., Maekawa, A.: The development of high-performance leaf seals. J. Eng. Gas
Turbine Power 126(3), 42–350 (2004)
6. Ma, G., Sun, X.-J., Luo, X.-H., He, J.: Numerical simulation analysis of steady-state
properties of gas face and cylinder film seal. J. Beijing Univ. Aeronaut. Astronaut.
40(4), 439–443 (2014)
7. Ma, G., Cui, X., Shen, X., Hu, G.: Analysis of performance and interface structure of
cylindrical film seal. J. Aeronaut. Power 26(1), 2610–2615 (2011)
8. Ma, G., Sun, X., He, J., Shen, X.: Simulation analysis of gas face and cylinder film seal by
parametric modeling. Lubr. Sealing 38(7), 8–11 (2013)
9. Jing, X.U.: Analytical and Experimental Investigations on the End Face Deformation
Mechanisms for High Pressure Spiral Grooved Dry Gas Seals. Zhejiang University of
Technology, Hangzhou (2014)
10. Wang, X., Liu, M., Hu, X., Sun, J.: The Influence of T groove layout on the performance
characteristic of cylinder gas seal. In: Wang, K., Wang, Y., Strandhagen, J., Yu, T.
(eds) Advanced Manufacturing and Automation VIII. Lecture Notes in Electrical Engineer-
ing, vol. 484, pp. 701–707 (2018)
Simulation Algorithm of Sample Strategy
for CMM Based on Neural Network Approach

Petr Chelishchev(✉) and Knut Sørby

Department of Mechanical and Industrial Engineering,
NTNU, 7491 Trondheim, Norway
[email protected]

Abstract. This paper proposes an algorithm for the analysis of sampling
strategies. A back-propagation artificial neural network approach is employed
to approximate CMM measurements of circular features of aluminum workpieces
machined by milling. The discrete data are transformed into continuous
nondeterministic profiles, which are used in simulations to estimate the
maximum possible error of different sample strategies for various diameters.

Keywords: Neural network · Deep learning · Nondeterministic profile ·
Sample strategy · CMM

1 Introduction

Coordinate Measuring Machines (CMMs) play an important role in part inspection
and quality control [1]. One of the important parameters in a CMM measuring
strategy is the sample size, i.e., the number of measuring points. The
sample-point measurements provide discrete coordinates of the workpiece
surface, which are used to assess whether a form or dimension deviation is
inside or outside the tolerance limits. The optimal choice of discrete points
also depends on the applied evaluation methods and tolerance types [2, 3]. The
reliability and quality of a CMM sample assessment depend on the density and
location of the measured points [4]. Thus, the inspection is often a compromise
between the required time, the cost, and the measuring uncertainty.
The result of a measuring inspection depends on manufacturing process errors
as well. Mesay et al. [5] classified the process error into systematic and
random components. In another paper, Qimi, Mesay et al. [6] estimated the
frequency of systematic errors by means of Fourier analysis. Other authors have
investigated the measuring uncertainty due to the sample size based on the
approximation of an aperiodic deterministic profile with a Fourier series
[7, 8].
Moschos et al. [9] suggested a Bayesian regularized artificial neural network
(BRANN) model trained with relatively small sample size to predict a variability of
large data sample. Other authors determine an optimal inspection sample size based on
measuring errors approximated by ANN for various machine processes and nominal
sizes [10].

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 434–441, 2020.
https://doi.org/10.1007/978-981-15-2341-0_54
A workpiece profile cannot be prescribed before it is measured; thus, the
profile is characterized as nondeterministic. This paper deals with
nondeterministic profiles derived from coordinate measurements of real
workpieces. In order to investigate the influence of the sample size, an
Artificial Neural Network (ANN) was employed to create continuous
nondeterministic profiles based on the discrete CMM data.

2 Artificial Neural Network Approach

The state of the art in Artificial Neural Networks is based on our current
understanding of how biological neurons function [11]. One of the most
important advantages of an ANN is that it can mimic processes whose
input-output relation is unknown. In the case of limited information about a
complex process, an ANN can provide a relatively precise solution based on
limited experimental data. An artificial network is composed of interconnected
artificial neurons, called processing elements (PEs). The PEs are organized
into an input layer, hidden layers, and an output layer to create the
artificial network [12]. A multilayer ANN can include many layers, but to
reduce the computation time, most commercial systems do not exceed two hidden
layers. It is important to notice that the final solution of an ANN is not
unique, but is one that satisfies the minimal error requirements.
A multilayer feed-forward back-propagation (BP) ANN [13] was created in the
MATLAB programming environment (Fig. 1). The design of a BP ANN includes a
number of steps [14]: preparing and pre-processing the training data; creating
the network structure; configuring the network; initializing weights and
biases; and training, validating, and testing the network. Let us look at each
of these steps in detail.
We denote u_k as the network input and R_k as the target; thus we have a
network with one input and one output. In order to achieve better accuracy, we
apply a deep learning strategy in this work. There are two hidden layers, with
260 neurons in the first and 12 neurons in the second layer. The number of
layers and neurons was chosen by trial and error.
In our case, we utilize the tan-sigmoid transfer function (tansig) for both
hidden layers. The tansig function has the following form:

y = f(a) = 2 / (1 + e^(−2a)) − 1,   (1)

where the argument a is the weighted sum of the inputs u_j with weights w_ij,
minus the bias s_i (threshold value):

a = Σ_j w_ij · u_j − s_i,   (2)

where u_j is the input.
The linear transfer function (purelin) was used for the output layer. The
activation functions are defined over the interval [−1, 1].
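The two transfer functions and a single processing element can be sketched as follows (a minimal illustration with made-up weights, not the trained 1-260-12-1 network; note that tansig(a) is mathematically identical to tanh(a)):

```python
import math

def tansig(a):
    """Tan-sigmoid transfer function of Eq. (1); output lies in [-1, 1]."""
    return 2.0 / (1.0 + math.exp(-2.0 * a)) - 1.0

def purelin(a):
    """Linear transfer function used for the output layer."""
    return a

def neuron(u, w, s, f):
    """One processing element: weighted sum of inputs minus threshold s
    (Eq. (2)), passed through the transfer function f."""
    return f(sum(wj * uj for wj, uj in zip(w, u)) - s)

# A toy 1-2-1 network with illustrative weights only.
h = [neuron([0.3], [1.5], 0.1, tansig), neuron([0.3], [-0.8], 0.0, tansig)]
y = neuron(h, [0.5, 0.5], 0.0, purelin)
```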
Fig. 1. A 1-260-12-1 feed forward ANN architecture for approximation of the measuring
profiles

The Levenberg-Marquardt training algorithm was used for ANN learning [15].
Although this method has larger memory requirements than other approaches, it
is the fastest supervised optimization algorithm with an efficient
implementation in MATLAB. The algorithm regulates whether the Newton or the
gradient descent method is performed. In this case we use the mean squared
error (MSE), obtained as follows:

MSE = (1/m) · Σ_{i=1..m} (y_i − t_i)²,   (3)

where m is the total number of entries.


The measurement data were divided as follows: training, 85%; validation, 10%;
test, 5%. The procedure described above was repeated until the maximum absolute
error |e_max| reached a certain value; in this work, |e_max| < 2 µm was applied.
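Equation (3) and the 85/10/5 data split can be sketched as follows (hypothetical helpers for illustration, not the authors' MATLAB code):

```python
def mse(y, t):
    """Mean squared error of Eq. (3): averaged squared residuals over m entries."""
    m = len(y)
    return sum((yi - ti) ** 2 for yi, ti in zip(y, t)) / m

def split_indices(m):
    """Partition m samples into training (85%), validation (10%), test (rest)."""
    n_train = int(0.85 * m)
    n_val = int(0.10 * m)
    return (range(n_train),
            range(n_train, n_train + n_val),
            range(n_train + n_val, m))

# Example: 480 measured points -> 408 training, 48 validation, 24 test.
tr, va, te = split_indices(480)
print(len(tr), len(va), len(te))  # 408 48 24
```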

3 Model Implementation

For the implementation of our approach, we measured circular holes milled in a
20 mm aluminium plate. There are 9 holes of various diameters, from 40 mm to
500 mm. The inspection was performed on a Leitz PMM-C-600 coordinate measuring
machine with an analogue probe and PC-DMIS software. The middle section of each
hole was
measured with 480 uniformly distributed points. The least squares circle (LSC)
method was used to calculate the circle centre coordinates (X_c, Y_c) and the
radius values of each section. Usually, the LSC method overestimates the
roundness value compared to the minimum zone (MZ, Chebyshev) method, which is
the method recommended by the ISO 1101 standard. However, in the case of small
sample sizes, the true value might be underestimated; thus, the LSC may be
preferable. Besides, the LSC method is set as the default in all previous
versions of PC-DMIS.
The radial distance R_k from the circle centre (X_c, Y_c) to each individual
measured point (X_k, Y_k) was calculated as follows:
R_k = √((X_k − X_c)² + (Y_k − Y_c)²).   (4)

The value of the radial angle was set as φ_k = 2π·k/480, k = 0…479, where k is
the index number of the measured point. A few examples of the estimated radius
versus the angle are shown in the polar plots in Fig. 2.
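Equation (4) and the angle definition can be sketched as follows (an illustrative helper; the actual centre coordinates come from the LSC fit in PC-DMIS):

```python
import math

def radial_profile(points, xc, yc):
    """Radius R_k of Eq. (4) and angle phi_k = 2*pi*k/N for each measured point."""
    N = len(points)
    R = [math.hypot(xk - xc, yk - yc) for xk, yk in points]
    phi = [2.0 * math.pi * k / N for k in range(N)]
    return R, phi

# Example: a point 3 mm right and 4 mm above the centre lies at R = 5 mm.
R, phi = radial_profile([(3.0, 4.0)], 0.0, 0.0)
print(R[0], phi[0])  # 5.0 0.0
```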

Fig. 2. The measured profiles of three radial sections

These estimated profiles were approximated with the ANN model. An example of an
approximated nondeterministic profile is illustrated in Fig. 3. The lowest
graph of Fig. 3 shows the approximation error at each particular point; the
total range of the fit errors is within 0.9 × 10⁻⁴ mm for this profile. The
nondeterministic profile is equivalent to a continuous function, which provides
an opportunity to simulate measuring strategies based on real measurements
under perfect repeatability conditions.

4 Simulation Procedure

In order to estimate the maximum measuring error due to the sample size, an
additional procedure was developed. A common practice in CMM measuring is to
use a sample of equally distributed measuring points and the LSC as the default
method. An example of the simulation, using a five-point sample (n = 5), is
illustrated in Fig. 4. A sample of n equally distributed points is taken from
the ANN profile (Sect. 3), and the n-point sample is rotated clockwise over
m = 10³ iterations. In each iteration the sample is rotated by the angular step
s = 2π/(n·m). When the position of the first point p₁ is defined, the other
(n − 1) sample points (p₂, p₃, …, p_n) are determined uniquely with equal
spacing 2π/n. The sample of n radius values r_k^ANN (k = 1, …, n) is generated
from the trained network at the corresponding uniform point locations.
Fig. 3. The continuous nondeterministic profile approximated with ANN (D₃ = 100 mm)

The corresponding coordinates x_k, y_k are calculated from the radius variables
r_k^ANN by the following equations:

x_k = X_c + r_k^ANN · cos(φ_k)
y_k = Y_c + r_k^ANN · sin(φ_k)   (5)

A new circle centre was calculated with the transformed coordinates as:

(x_c, y_c) = (u_c, v_c) + (x̄, ȳ),   (6)

where the centre (u_c, v_c) of the "best fit" least squares circle of the
simulated points was found from the following system:

u_c · Σ_k u_k² + v_c · Σ_k u_k·v_k = ½ (Σ_k u_k³ + Σ_k u_k·v_k²)
u_c · Σ_k v_k·u_k + v_c · Σ_k v_k² = ½ (Σ_k v_k³ + Σ_k v_k²·u_k)   (7)

where u_k = x_k − x̄ and v_k = y_k − ȳ are the transformed coordinates. Then
the radius value for each point can be calculated as follows:

ρ_k = √((x_k − x_c)² + (y_k − y_c)²).   (8)

The circle centre (x_c, y_c) and the radius values ρ_k are calculated in each
iteration. The radius variation range of the n-point sample at each particular
location is then estimated by the residual Δρ = ρ_max − ρ_min. Eventually,
after all iterations are completed, the smallest estimated radius variation
range Δρ_min for the particular sample size n is found. The maximum estimation
error δ_max due to the sample size n was calculated as δ_max = ΔR^ANN − Δρ_min,
where ΔR^ANN = R_max^ANN − R_min^ANN is the precise radius variation range
estimated from the 480 variables simulated with the continuous virtual profile.
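Under the definitions above, the whole procedure — the LSC fit of Eqs. (6)–(8) plus the rotated n-point sampling and the residual Δρ — can be sketched as follows. Here `profile` is a stand-in for the trained ANN and the function names are illustrative, not the authors' implementation:

```python
import math

def lsc_fit(xs, ys):
    """Least squares circle: solves the 2x2 system of Eq. (7) in centred
    coordinates, shifts back via Eq. (6), and returns radii per Eq. (8)."""
    N = len(xs)
    xb, yb = sum(xs) / N, sum(ys) / N
    u = [x - xb for x in xs]
    v = [y - yb for y in ys]
    Suu = sum(ui * ui for ui in u)
    Svv = sum(vi * vi for vi in v)
    Suv = sum(ui * vi for ui, vi in zip(u, v))
    b1 = 0.5 * (sum(ui ** 3 for ui in u)
                + sum(ui * vi * vi for ui, vi in zip(u, v)))
    b2 = 0.5 * (sum(vi ** 3 for vi in v)
                + sum(vi * ui * ui for ui, vi in zip(u, v)))
    det = Suu * Svv - Suv * Suv
    uc = (b1 * Svv - b2 * Suv) / det
    vc = (b2 * Suu - b1 * Suv) / det
    xc, yc = uc + xb, vc + yb                                   # Eq. (6)
    rho = [math.hypot(x - xc, y - yc) for x, y in zip(xs, ys)]  # Eq. (8)
    return xc, yc, rho

def max_sampling_error(profile, n, m=1000, dense=480):
    """delta_max = Delta_R^ANN - Delta_rho_min for a rotated n-point sample."""
    ref = [profile(2.0 * math.pi * k / dense) for k in range(dense)]
    d_R = max(ref) - min(ref)            # precise range Delta R^ANN
    s = 2.0 * math.pi / (n * m)          # angular step per iteration
    d_rho_min = float("inf")
    for j in range(m):
        phis = [j * s + 2.0 * math.pi * k / n for k in range(n)]
        xs = [profile(p) * math.cos(p) for p in phis]
        ys = [profile(p) * math.sin(p) for p in phis]
        _, _, rho = lsc_fit(xs, ys)
        d_rho_min = min(d_rho_min, max(rho) - min(rho))
    return d_R - d_rho_min
```

For a perfectly circular profile both ranges vanish, so the estimated error is zero; for a real profile, a small n can miss most of the radius variation, which is what δ_max quantifies.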
Fig. 4. The five-point sample: p₂, p₃, …, p_n – the measured points; ρ₁, ρ₂, …,
ρ₅ – the estimated radius variables; ref.lsc – the reference least squares
circle; r₁^ANN – a radius of the original reference circle; (X_c, Y_c) – the
reference circle centre based on 480 points; (x_c, y_c) – the new circle centre
based on 5 points.

5 Results and Discussion

The simulation procedure described in Sect. 4 was applied with different sample
sizes from 5 to 400 measuring points, and for 9 circle sections with nominal
diameters from 40 mm to 500 mm. The final simulation results are tabulated in
Table 1.

Table 1. The maximum estimated error δ_max due to the sample size n for various diameters D_i

Sample size n   5     15    30    60    93    150   200   300   400
D1 = 40 mm*     11.2  7.5   5.0   4.3   3.1   2.8   2.5   1.3   0.7
D2 = 80 mm      10.2  5.4   5.0   4.1   2.5   1.9   1.6   1.1   0.5
D3 = 100 mm     7.1   4.9   4.0   3.1   2.5   2.0   1.3   0.7   0.4
D4 = 150 mm     21.4  11.5  9.1   7.3   6.6   6.6   4.5   2.1   0.8
D5 = 200 mm     15.1  4.4   2.0   0.9   0.7   0.3   0.2   0.1   0.1
D6 = 250 mm     30.2  7.8   2.6   1.9   1.1   0.7   0.7   0.4   0.0
D7 = 300 mm     22.2  8.7   5.9   3.7   2.1   1.4   1.3   1.0   0.8
D8 = 400 mm     21.5  8.7   4.1   1.8   1.6   1.1   0.7   0.4   0.2
D9 = 500 mm     24.3  10.6  7.8   7.8   5.0   3.2   3.2   1.9   0.8
*δ_max is given in µm

The graphical interpretation of the results (see Fig. 5a) shows that the
relation between the maximum estimated error δ_max and the sample size n has a
nonlinear, asymptotic behavior, which appears relatively predictable. However,
the relation between the maximum estimated error and the diameter size for a
given sample size does not follow a clear trend when a five-point sample is
used (see Fig. 5b): the maximum estimated error for different diameters varies
between 7.1 µm and 30.2 µm. In our tests, the maximum estimated error is up to
6.6 µm for a 93-point sample, and up to 2.1 µm for 300 measuring points.
(a) The maximum estimated error vs sample size for diameters D₁, D₂, …, D₉
(b) The maximum estimated error vs diameter size for samples of 5, 93 and 300 points

Fig. 5. The relationship of the maximum estimated error with the sample size and the diameter
size of circle profiles

6 Conclusion

According to the simulation results, the error due to the sample size can be a significant
contributor to the measurement uncertainty and thus it must be considered in the
measuring strategy for CMM.
The simulation procedure presented in this paper is a new algorithm for
estimating the maximum error due to the number of measuring points. As shown
with the test pieces, the diameter size is not the main factor in defining the
sample strategy.
The presented ANN approach can be adapted to profile forms generated by any
machine operations. The approximated nondeterministic profile can be further used as
the continuous function for other simulations regarding the sample strategy, alignment,
filtration methods and measuring uncertainty estimation.

References
1. De Chiffre, L.: Geometrical Metrology and Machine Testing. DTU Mech. Eng. (2015)
2. Summerhays, K.D.: Optimizing discrete point sample patterns and measurement data
analysis on internal cylindrical surfaces with systematic form deviations. Precis. Eng. 26(1),
105–121 (2002)
3. Changcai, C.: Research on the uncertainties from different form error evaluation methods by
CMM sampling. Int. J. Adv. Manuf. Technol. 43(1), 136–145 (2009)
4. Moroni, G.: Coordinate measuring machine measurement planning. Springer, London
(2010)
5. Desta, M.T.: Characterization of general systematic form errors for circular features. Int.
J. Mach. Tools Manuf. 43(11), 1069–1078 (2003)
6. Qimi, J.: A roundness evaluation algorithm with reduced fitting uncertainty of CMM
measurement data. J. Manuf. Syst. 25(3), 184–195 (2006)
7. Cho, N.: Roundness modeling of machined parts for tolerance analysis. Precis. Eng. 25(1),
35–47 (2001)
8. Ruffa, S.: Assessing measurement uncertainty in CMM measurements: comparison of
different approaches. Int. J. Metrol. Qual. Eng. 4, 163–168 (2013)
9. Papananias, M.: A novel method based on Bayesian regularized artificial neural networks for
measurement uncertainty evaluation. In: EUSPEN Proceedings of the 16th International
Conference of the European Society for Precision Engineering and Nanotechnology,
Nottingham, UK, pp. 97–98. EUSPEN (2016)
10. Zhang, Y.F.: A neural network approach to determining optimal inspection sampling size for
CMM. Comput. Integr. Manuf. Syst. 9, 161–169 (1996)
11. Grossberg, S.T.: Studies of the Mind and Brain. Reidel Press, Dordrecht (1982)
12. Wang, K.: Applied computational intelligence in intelligent manufacturing systems. In:
International Series on Natural and Artificial Intelligence, vol. 2, 2nd edn. Advanced
Knowledge International, Adelaide (2007)
13. Jones, W.: Back-propagation: a generalized delta learning rule. Byte 12(11), 155–162 (1987)
14. Beale, M.H.: Neural Network Toolbox, User guide (2017)
15. Marquardt, D.W.: An algorithm for least-squares estimation of nonlinear parameters.
SIAM J. Appl. Math. 11, 431–441 (1963)
Digital Modeling and Algorithms for Series
Topological Mechanisms Based on POC Set

Lixin Lu¹, Hehui Tang¹, Guiqin Li¹(✉), and Peter Mitrouchev²

¹ Shanghai Key Laboratory of Intelligent Manufacturing and Robotics,
Shanghai University, Shanghai 200072, China
[email protected]
² University Grenoble Alpes, G-SCOP, 38031 Grenoble, France

Abstract. The algorithm and software implementation of automatic POC set
analysis for serial mechanisms, based on position and orientation
characteristic (POC) equations, are studied. Firstly, a digitized matrix that
can describe the complete topological structure of a serial mechanism is
proposed. The POC set of the serial mechanism is represented by a
two-dimensional binary matrix, which not only describes the dimension of the
motion output, but also gives the relationship between the orientation of the
output characteristics and the axes of the motion pairs. Based on this digital
description method, a suitability calculation is proposed on the basis of
binary AND and OR operations. An algorithm for automatic computer analysis of
the POC set is realized, enabling the automatic calculation and analysis of the
POC set of serial mechanisms. Finally, the feasibility of the algorithm is
verified by digital modeling analysis.

Keywords: Series mechanism · POC set · Digital matrix · Algorithm

1 Introduction

According to the topology design theory and the decoupling-reducing design
method of parallel mechanisms (PMs) based on the position and orientation
characteristic (POC) equations, the complex positional relationships of
parallel topological mechanisms can be solved. Computer-aided analysis can be
traced back to the analysis of planar parallel mechanisms. In 1963,
Dobrjanshyj [1] and Freudenstein [2] first proposed a graph-theory-based
analysis theory and analyzed the automatic configuration of planar mechanisms.
Olson [3] and Belfiore [4] carried out related research on the automated
mapping of planar kinematic chains. Saura [5] studied the automatic
configuration of planar structures including lower and higher pairs, and
Mrutyunjay [6] studied an automatic program algorithm for planar kinematic
chains based on digital configuration. Li [7, 8], building on the Assur rod
group theory and using the rod group as the basic element of the mechanism,
created a mechanism topology matrix, the rod-group adjacency matrix, to
describe planar bodies composed of Assur rod groups. There is not much research
on the computer-aided analysis of spatial mechanism topology. Han Yali [9]
proposed a method for the synthesis of parallel mechanism configurations using
VB and used it to synthesize three-degree-of-freedom parallel mechanisms.
Liao [10] proposed a symbolic representation method for parallel mechanisms
that expresses each mechanism as a symbolic polynomial, together with algebraic
operation functions between the symbolic polynomials; however, this method is
complicated, which is not conducive to computer reading and recognition.

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 442–450, 2020.
https://doi.org/10.1007/978-981-15-2341-0_55

In this paper, following the symbolic conventions of mechanism science, a
digital matrix is proposed to represent the order of the f motion pairs in a
mechanism and the positional relationships between them. At the same time, a
2 × 6 matrix is established to represent the POC set of the mechanism [11]:
the first and second rows represent the numbers of translational and rotational
elements of the mechanism, respectively. An automatic generation algorithm for
the mechanism's POC set is then established to realize the automatic analysis
of the POC set. The automatic analysis of the POC set of serial mechanisms is
implemented by programming, and the correctness of the program is verified by
an example.

2 Digital Modeling of Series Mechanism Topologies


2.1 Motion Pair Types and Dimensional Constraints
The most common motion pairs in serial mechanisms are the prismatic pair (P pair) and the revolute pair (R pair). The others, such as the cylindric pair (C pair), the helical pair (H pair) and the spherical pair (S pair), can be regarded as combinations of P and R pairs. For example, the S pair can be regarded as three R pairs whose axes intersect at a common point, and the C pair can be regarded as an R pair and a P pair sharing the same axis. Since the C pair, the S pair, etc. can be regarded as consisting of R and P pairs, the serial mechanism can be regarded as consisting only of single-degree-of-freedom R and P pairs. The motion pairs are stored sequentially as a character string, as shown in Table 1.
The positional relationships between motion pairs can be divided into six basic dimensional constraint types: parallel, perpendicular, coaxial, coplanar, spatial (co-point) and general position. To facilitate programming, they are digitally coded as follows: parallel is 1, perpendicular is 2, coaxial is 3, coplanar is 4, spatial is 5, and the general position is recorded as 0. Thus, the type, arrangement and orientation of the motion pairs can be represented by an ordered topology matrix L, namely:
$$L = \begin{bmatrix} J_{11} & N_{12} & \cdots & N_{1f} \\ N_{21} & J_{22} & \cdots & N_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ N_{f1} & N_{f2} & \cdots & J_{ff} \end{bmatrix} \qquad (1)$$

J_ii — diagonal element, indicating the type of the i-th motion pair, J_ii = R or P;
f — number of motion pairs of the mechanism;
N_ij — orientation relationship between motion pairs i and j.
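The construction of the ordered topology matrix L of Eq. (1) can be sketched in code. This is not from the paper; the function name `topology_matrix` and the representation of L as a nested list with pair-type letters on the diagonal are assumptions made for illustration.

```python
# Sketch: build the ordered topology matrix L of Eq. (1) from the pair-type
# string and the pairwise orientation codes (parallel=1, perpendicular=2,
# coaxial=3, coplanar=4, spatial=5, general position=0).

def topology_matrix(pairs, orientations):
    """pairs: string such as 'PRRR'; orientations: dict {(i, j): code}, i < j.
    Returns L as a nested list with the pair types on the main diagonal."""
    f = len(pairs)
    L = [[0] * f for _ in range(f)]
    for i in range(f):
        L[i][i] = pairs[i]                  # J_ii = 'R' or 'P'
    for (i, j), code in orientations.items():
        L[i][j] = L[j][i] = code            # N_ij = N_ji
    return L

# First example of Table 1: SOC{P1 || R2 || R3 || R4} -> all off-diagonal codes 1
L = topology_matrix("PRRR", {(i, j): 1 for i in range(4) for j in range(i + 1, 4)})
```

Because the orientation relationship is symmetric, only the upper-triangular codes need to be supplied.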

Table 1. Digital modeling of the topological structure of serial mechanisms

Example 1 — topology SOC{P1∥R2∥R3∥R4}; string: PRRR; azimuth relation matrix:
[P 1 1 1; 1 R 1 1; 1 1 R 1; 1 1 1 R]

Example 2 — topology SOC{P1–R2, R3∥R4} (per the matrix, P1–R2 is in general position and R2, R3 are in spatial position); string: PRRR; azimuth relation matrix:
[P 0 0 0; 0 R 5 0; 0 5 R 1; 0 0 1 R]

2.2 A Digital Matrix Description of the Series Mechanism POC Set

The POC set matrix must contain not only the dimension information of the end output motion but also the direction or reference of the output motion axes. On the basis of the ordered topology matrix L, the mechanism POC set matrix is defined as a digital matrix with 2 rows and f columns (f is the number of motion pairs), whose columns correspond one-to-one with those of the ordered topology matrix L:
$$P = \begin{bmatrix} t_1 & t_2 & \cdots & t_f \\ r_1 & r_2 & \cdots & r_f \end{bmatrix} \qquad (2)$$

t_i — translational (moving) output element;
r_i — rotational output element.
The description rules for the two-dimensional matrix P are as follows:
(1) The elements in the first and second rows represent the independent translational and independent rotational outputs, respectively;
(2) The non-zero elements in the i-th column (i = 1, 2, …, 6) give the directions of the translational and rotational outputs: when t_i = 1, there is one independent translational output along the axis of the i-th motion pair (if r_i = 0) or perpendicular to the axis of the i-th motion pair (if r_i = 1); when t_i = 2, there are two independent translational outputs in the plane perpendicular to the axis of the i-th motion pair;
(3) A rotational or translational element in any given direction is described only once, and the sum of the translational and rotational elements equals the degree of freedom of the serial mechanism;
(4) When the serial mechanism contains both revolute and prismatic pairs, the axis of the revolute pair is preferentially used as the reference direction of the motion output.
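The 2 × f structure of Eq. (2) and rule (3) above can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the function names `poc_single_pair` and `dof` are assumptions.

```python
# Sketch of Eq. (2): the POC set matrix P is stored as two rows, translational
# outputs t_i and rotational outputs r_i, one column per motion pair.

def poc_single_pair(pair_type):
    """POC set matrix (2 x 1) of a single-degree-of-freedom pair:
    a P pair contributes one translation, an R pair one rotation."""
    return [[1], [0]] if pair_type == "P" else [[0], [1]]

def dof(poc):
    """Rule (3): the sum of translational and rotational elements equals the
    degree of freedom of the serial chain."""
    return sum(poc[0]) + sum(poc[1])
```

For example, `dof([[2, 0, 0], [1, 0, 0]])` returns 3, matching the three-degree-of-freedom planar substrings of Table 2.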

3 Planar Substring and Its POC Set

A plane substring is a sub-chain of the serial mechanism in which two or more motion pairs connected in series produce planar motion. The 10 commonly used plane substrings are therefore stored in the computer so that they can be identified and called when the mechanism POC set is analyzed automatically. The motion pair types, orientation relationship matrices LC and POC set matrices of common plane substrings are shown in Table 2.

Table 2. Planar series sub-mechanisms

G21 (R∥R):   LC21 = [R 1; 1 R],             PC21 = [1 0; 1 0]
G22 (R⊥P):   LC22 = [R 2; 2 P],             PC22 = [1 0; 1 0]
G23 (P⊥R):   LC23 = [P 2; 2 R],             PC23 = [0 1; 0 1]
G31 (R∥R∥R): LC31 = [R 1 1; 1 R 1; 1 1 R],  PC31 = [2 0 0; 1 0 0]
G32 (R∥R⊥P): LC32 = [R 1 2; 1 R 2; 2 2 P],  PC32 = [2 0 0; 1 0 0]
G33 (P⊥R∥R): LC33 = [P 2 2; 2 R 1; 2 1 R],  PC33 = [0 2 0; 0 1 0]
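The stored-substring lookup described above can be sketched as a dictionary keyed on the flattened LC matrix. Only the six entries of Table 2 are included here (the paper stores 10); the names `SUBSTRING_LIBRARY` and `identify` are illustrative assumptions.

```python
# Sketch of the substring library of Table 2: keys are the topological matrices
# LC flattened to tuples; values are the associated POC set matrices PC.

SUBSTRING_LIBRARY = {
    ("R", 1, 1, "R"):                  [[1, 0], [1, 0]],        # G21  R||R
    ("R", 2, 2, "P"):                  [[1, 0], [1, 0]],        # G22  R⊥P
    ("P", 2, 2, "R"):                  [[0, 1], [0, 1]],        # G23  P⊥R
    ("R", 1, 1, 1, "R", 1, 1, 1, "R"): [[2, 0, 0], [1, 0, 0]],  # G31  R||R||R
    ("R", 1, 2, 1, "R", 2, 2, 2, "P"): [[2, 0, 0], [1, 0, 0]],  # G32  R||R⊥P
    ("P", 2, 2, 2, "R", 1, 2, 1, "R"): [[0, 2, 0], [0, 1, 0]],  # G33  P⊥R||R
}

def identify(sub):
    """Flatten an extracted submatrix and look it up; None if not stored."""
    key = tuple(x for row in sub for x in row)
    return SUBSTRING_LIBRARY.get(key)
```

For example, `identify([["R", 1], [1, "R"]])` returns PC21, while a submatrix that is not in the library returns `None` and is skipped during identification.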

(1) For a single-degree-of-freedom motion pair, the POC set matrix has size 2 × 1. (2) For a serial mechanism with a degree of freedom less than 6, the POC set matrix has size 2 × f (f is the number of motion pairs). (3) The first motion pair of a plane substring is not necessarily the first motion pair of the serial mechanism; it may be the second, third or fourth motion pair. The rules for the various possible cases are as follows:

① The two-degree-of-freedom plane substrings correspond to the numbers G21, G22 and G23 in Table 2, that is, R∥R, R⊥P and P⊥R. If their first motion pair is also the first motion pair of the serial mechanism, the POC set matrix is the same as in Table 2. If their first motion pair is the second motion pair of the serial mechanism, the elements of the revolute and prismatic pairs in the POC set matrix are shifted right by one column and the first column is filled with 0. If their first motion pair is the third motion pair of the serial mechanism, the elements are shifted right by two columns and the first and second columns are filled with 0. If their first motion pair is the fourth motion pair of the serial mechanism, the elements are shifted right by three columns and the first, second and third columns are filled with 0.
As shown in Fig. 1, R4∥R5 is a plane substring of the serial mechanism, and its first revolute pair R4 is the fourth motion pair of the serial mechanism, so the POC set matrix of the plane substring R4∥R5 is

$$P_{R \parallel R} = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{bmatrix} \qquad (3)$$

t4 = 1 — one-dimensional translation perpendicular to the R4 axis;
r4 = 1 — one-dimensional rotation around the R4 axis.

Fig. 1. R//R//R-R//R series mechanism
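The right-shift rule of case ① (and the R4∥R5 example above) can be sketched in code. The function name `place_substring_poc` is an illustrative assumption; the shift amount i − 1 and the 2 × 6 target size follow the rules stated in the text.

```python
# Sketch of rule ①: if a substring's first motion pair is the i-th pair of the
# whole serial chain, its POC columns are shifted right by i-1 positions inside
# the 2 x 6 mechanism-level matrix, with the remaining columns filled with 0.

def place_substring_poc(pc, i, f=6):
    """pc: 2 x m POC matrix of a substring; i: 1-based index of its first motion
    pair in the serial chain. Returns the zero-padded 2 x f matrix."""
    m = len(pc[0])
    return [[0] * (i - 1) + row + [0] * (f - (i - 1) - m) for row in pc]

# R4//R5 substring (PC21 = [[1,0],[1,0]]) starting at the 4th pair -> Eq. (3)
P_R_par_R = place_substring_poc([[1, 0], [1, 0]], i=4)
# -> [[0, 0, 0, 1, 0, 0], [0, 0, 0, 1, 0, 0]]
```

The same function reproduces case ② below when given a 2 × 3 substring matrix.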

② The three-degree-of-freedom plane substrings correspond to G31–G33 in Table 2, that is, R∥R∥R, R∥R⊥P and P⊥R∥R. If their first motion pair is also the first motion pair of the serial mechanism, the POC set matrix is that of Table 2 with the last three columns filled with 0. If their first motion pair is the second motion pair of the serial mechanism, the elements of the revolute and prismatic pairs in the POC set matrix are shifted right by one column, and the first, fifth and sixth columns are filled with 0. If their first motion pair is the third motion pair of the serial mechanism, the elements are shifted right by two columns, and the first two columns and the sixth column are filled with 0.
As shown in Fig. 2, the serial mechanism contains the plane substring R3∥R4∥R5, whose first revolute pair R3 is the third motion pair of the serial mechanism, so the elements of the revolute and prismatic pairs in its POC set matrix are shifted right by two columns. The POC set matrix of the plane substring R3∥R4∥R5 is

$$P_{R \parallel R \parallel R} = \begin{bmatrix} 0 & 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}$$

t3 = 2 — two-dimensional translation perpendicular to the R3 axis;
r3 = 1 — one-dimensional rotation around the R3 axis.

Fig. 2. R//R//R-R//R series mechanism

4 Automatic Generation of Series Mechanism POC Sets

4.1 Algorithm Flow: Identification of Planar Substrings in Series Branches and Extraction of the POC Set Matrix
The topology matrix L is split into sub-matrices along the main diagonal as follows.
(1) For a two-degree-of-freedom series branch, L is a 2 × 2 matrix, so it yields one 2 × 2 submatrix.
(2) For a three-degree-of-freedom series branch, L is a 3 × 3 matrix, so it can be split into one 3 × 3 submatrix and two 2 × 2 submatrices.
(3) For a four-degree-of-freedom series branch, L is a 4 × 4 matrix, so it can be split into two 3 × 3 submatrices and three 2 × 2 submatrices.
(4) For a five-degree-of-freedom series branch, L is a 5 × 5 matrix, so it can be split into three 3 × 3 submatrices and four 2 × 2 submatrices.
Now take the five-degree-of-freedom serial mechanism as an example to illustrate how the planar substrings in a serial mechanism are extracted and identified. The ordered topology matrix of the five-degree-of-freedom serial mechanism is:

$$L = \begin{bmatrix} J_{11} & a_{12} & a_{13} & a_{14} & a_{15} \\ a_{21} & J_{22} & a_{23} & a_{24} & a_{25} \\ a_{31} & a_{32} & J_{33} & a_{34} & a_{35} \\ a_{41} & a_{42} & a_{43} & J_{44} & a_{45} \\ a_{51} & a_{52} & a_{53} & a_{54} & J_{55} \end{bmatrix}$$

Along the main diagonal, this matrix is decomposed into three third-order matrices and four second-order matrices, as shown by the dashed boxes in Fig. 3, which are denoted L31, L32, L33 and L21, L22, L23, L24, respectively.

Fig. 3. Planar substring extraction of series mechanism

The extracted sub-matrices are:

$$L_{31} = \begin{bmatrix} J_{11} & a_{12} & a_{13} \\ a_{21} & J_{22} & a_{23} \\ a_{31} & a_{32} & J_{33} \end{bmatrix}, \quad L_{32} = \begin{bmatrix} J_{22} & a_{23} & a_{24} \\ a_{32} & J_{33} & a_{34} \\ a_{42} & a_{43} & J_{44} \end{bmatrix}, \quad L_{33} = \begin{bmatrix} J_{33} & a_{34} & a_{35} \\ a_{43} & J_{44} & a_{45} \\ a_{53} & a_{54} & J_{55} \end{bmatrix}$$

$$L_{21} = \begin{bmatrix} J_{11} & a_{12} \\ a_{21} & J_{22} \end{bmatrix}, \quad L_{22} = \begin{bmatrix} J_{22} & a_{23} \\ a_{32} & J_{33} \end{bmatrix}, \quad L_{23} = \begin{bmatrix} J_{33} & a_{34} \\ a_{43} & J_{44} \end{bmatrix}, \quad L_{24} = \begin{bmatrix} J_{44} & a_{45} \\ a_{54} & J_{55} \end{bmatrix}$$
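The diagonal splitting step can be sketched as a short function. This is an illustrative sketch; the name `diagonal_blocks` is an assumption, but the counts it produces match items (1)–(4) above (f − 2 third-order blocks and f − 1 second-order blocks for an f × f matrix).

```python
# Sketch of the splitting step: extract all square submatrices of a given size
# that lie on the main diagonal of the f x f topology matrix L.

def diagonal_blocks(L, size):
    f = len(L)
    return [[row[k:k + size] for row in L[k:k + size]]
            for k in range(f - size + 1)]

# For a 5-DOF chain: three 3x3 blocks (L31..L33) and four 2x2 blocks (L21..L24).
```

Each extracted block is then matched against the stored substring library; blocks that match none of the stored LC matrices are discarded.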

Substring identification and POC set matrix extraction:

(1) $L_{31} = \begin{bmatrix} R & 2 & 1 \\ 2 & P & 2 \\ 1 & 2 & R \end{bmatrix} = L_{C34}$; the output POC set matrix is $P_{C34} = \begin{bmatrix} 2 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}$, with i = 1.

(2) $L_{24} = \begin{bmatrix} R & 1 \\ 1 & R \end{bmatrix} = L_{C21}$; the output POC set matrix is $P_{C21} = \begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}$, with i = 4.

Since L31 corresponds to i = 1 (the first motion pair of the substring is the first motion pair of the serial mechanism) and L24 corresponds to i = 4, the zero-padded POC matrices output according to the rules of Sect. 3 are:

$$P_{31} = \begin{bmatrix} 2 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \qquad P_{24} = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{bmatrix}$$

In the bitwise "OR" operation the elements are added column by column; if the sum of the translational (or rotational) elements equals 3, then t_i = 3 (correspondingly r_i = 3). For this serial mechanism the operation gives:

$$P = P_{31} + P_{24} = \begin{bmatrix} 2 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 3 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 & 0 \end{bmatrix}$$

The output is $P = \begin{bmatrix} 3 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 & 0 \end{bmatrix}$.
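The merge step can be sketched as follows. This is a hedged reading of the rule in the text: my interpretation is that when the translational (or rotational) row total reaches 3, the full three-dimensional output is recorded as a single 3 in the leftmost non-zero column of that row; the function name `merge_poc` is an assumption.

```python
# Hedged sketch of the "bitwise OR" merge: add the two padded POC matrices
# column-wise; if a row's total reaches 3, collapse it to a single 3 in the
# leftmost non-zero column (full 3-D output referenced to that pair's axis).

def merge_poc(p, q):
    out = [[a + b for a, b in zip(rp, rq)] for rp, rq in zip(p, q)]
    for row in out:
        if sum(row) >= 3:
            first = next(i for i, v in enumerate(row) if v)
            for i in range(len(row)):
                row[i] = 3 if i == first else 0
    return out

P31 = [[2, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0]]
P24 = [[0, 0, 0, 1, 0, 0], [0, 0, 0, 1, 0, 0]]
# merge_poc(P31, P24) -> [[3, 0, 0, 0, 0, 0], [1, 0, 0, 1, 0, 0]]
```

With these inputs the function reproduces the worked result above: three independent translations and rotations about the first and fourth pair axes.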

5 Conclusion
(1) Based on POC set theory, a digital description matrix of mechanism topology is proposed. It encodes both the motion pair types and the axis orientations of the mechanism, and it is easy for a computer to identify and analyze.
(2) A POC set matrix of the serial mechanism is established. The POC set is represented by a 2 × 6 matrix whose output motion information is complete: it includes not only the translation/rotation type and dimension of the relative motion output but also the axis orientation or reference of the output motion. Its geometric meaning is clear, and it is convenient for computer calculation, storage and query.
(3) By extracting the branched planar substrings (and spherical substrings) in order, the end-of-chain POC set operations are converted into algebraic additions of same-dimensional matrices.

References
1. Dobrjanskyj, L., Freudenstein, F.: Some applications of graph theory to the structural
analysis of mechanisms 89(1), 153 (1967)
2. Sohn, W.J., Freudenstein, F.: An application of dual graphs to the automatic generation of
the kinematic structures of mechanisms, 392–398 (1986)
3. Olson, S.T., Francis, A.M., Sheffer, R., et al.: Parallel mechanisms of high molecular weight
kininogen action as a cofactor in kallikrein inactivation and prekallikrein activation
reactions. Biochemistry 32(45), 12148 (1993)
4. Chakarov, D., Parushev, P.: Synthesis of parallel manipulators with linear drive modules.
Mech. Mach. Theory 29(7), 917–932 (1994)
5. Dasgupta, B., Mruthyunjaya, T.S.: The Stewart platform manipulator: a review. Mech.
Mach. Theory 35(1), 15–40 (2000)
6. Li, S., Dai, J.: Topological description of planar mechanism based on Assur rod group
elements. J. Mech. Eng., 8–13 (2011)
7. Li, S., Dai, J.: The composition principle of the metamorphic mechanism based on the
extended Assur rod group. J. Mech. Eng., 22–30 (2010)
8. Ding, H., Huang, Z.: Motion chain topology diagram and automatic generation of feature
description based on loop characteristics. J. Mech. Eng., 40–43 (2007)

9. Han, Y., Ma, L., Yang, T., et al.: Research on mechanism type of parallel robot based on VB
programming. J. Agric. Mach., 139–142 (2007)
10. Liao, M., Liu, A., Shen, H., et al.: Symbol derivation method for azimuth feature set of
parallel mechanism. J. Agric. Mach., 395–404 (2016)
11. Yang, T.: Robotic Mechanism Topology Design. Science Press (2012)
Optimization of Injection Molding for UAV
Rotor Based on Taguchi Method

Xiong Feng, Zhengqian Li, and Guiqin Li(&)

Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, No. 99 Shangda Road, Shanghai 200072, China
[email protected]

Abstract. Injection molding of fiber-reinforced UAV rotor blades is put forward in this paper. Plastic injection molding is one of the most important industrial operations for fabricating plastic products with complex shapes, owing to its high precision and low cost. The rotor forming process is simulated with Moldflow software, and the excessive deformation is reduced through Taguchi experiments that yield an optimized combination of process parameters. The experimental results show that the deformation of the rotor blade decreases and meets the requirements of UAV flight. The method shows good feasibility for producing fiber-reinforced rotor blades in large quantities.

Keywords: UAV · Injection molding · Process optimization · Taguchi method

1 Introduction

The rapid rise of multi-rotor UAVs drives the development of drone-related industries. The drone rotor is the most delicate part, with the highest replacement rate [1]. Winding forming is one of the most common methods of making drone rotors; however, it cannot produce high-precision parts and is suitable only for small batches, and its cost is relatively high for complicated structures. Compression molding is also used to form fiber-reinforced parts, but it is known for high cost and low efficiency. Plastic injection molding is one of the most important industrial methods for forming plastic products: the mechanical properties of the material are not degraded by the process, and it is efficient and suitable for mass production.

Some domestic researchers have also studied fixed-wing forming. Wang Xiadan [2] optimized the fixed wings of carbon-fiber-reinforced drones using orthogonal experiments and a BP neural network. Wang Xiadan, Li Linyang et al. [3, 4] studied the optimal gate for the UAV fixed-wing injection molding process, which had obvious effects on fixed-wing molding. However, the blades of multi-rotor UAVs have not received enough attention. In this paper, we perform numerical simulations of the drone rotor using Moldflow software and carry out experiments over combinations of process parameters based on a four-level L16 Taguchi orthogonal array.

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 451–456, 2020.
https://doi.org/10.1007/978-981-15-2341-0_56

2 Construction of the Injection Molding System

2.1 Process Analysis
The width and length of the drone rotor were 240 mm and 264 mm, respectively. The nominal thickness of the rotor was 1.2 mm; the maximum wall thickness was 5.18 mm, while the minimum wall thickness was less than 0.5 mm. It was a typical thin-walled, easily warping part. The surface-quality requirement of the rotor was very high, and large deformation was not allowed during injection molding. The blade material was nylon 66 reinforced with carbon fibers; its properties are given in Table 1.

Table 1. Material properties of PA66/30%CF

Melt density: 1.236 g/cm³          Poisson ratio: 0.4735
Water absorption: ≤1.0 %           Shrinkage: 0.2 %
Melt temperature: 280–310 °C       Mold temperature: 90–120 °C

2.2 Establishment of the Finite Element Model

The CAD model was built in CATIA, imported into Autodesk Moldflow CAD Doctor for surface inspection and repair, and then analyzed in Moldflow to detect design defects and predict process parameters. Because the rotor is a thin-walled part, a two-layer (dual-domain) mesh was most suitable. The global mesh edge length was set to 1 mm; edges shorter than 0.1 mm were merged by default. The mesh model shown in Fig. 1 contains 41,790 elements with a maximum aspect ratio of 5.7. The mesh matching percentage of this model was 94% [5], which satisfies the requirement that the matching rate be greater than 90% for deformation analysis. The finite element model of the rotor is shown in Fig. 2.

Fig. 1. Grid model
Fig. 2. The finite element model



3 Experimental Design by the Taguchi Method

Four factors determine the final quality of injected parts: mold design, part design, material, and process parameters such as injection temperature, mold temperature and injection time. If all of them were combined, thousands of experiments would be required and the product development cycle would be extended. The Taguchi method has advantages for this kind of multi-factor optimization problem: it uses a special design of orthogonal arrays to explore the whole parameter space with a small number of experiments, which greatly reduces production cost while obtaining better product quality [6]. In this study, an L16 orthogonal array experiment was conducted to find the optimum levels of the process parameters. The total deformation of the part (index Td) was selected as the experiment index, with the deformation caused by shrinkage (index Sd) and by fiber orientation (index Od) as auxiliary indices. The L16 orthogonal array defines 16 finite element analyses of the rotor blade over five process parameters: melt temperature (factor A), mold temperature (factor B), injection time (factor C), holding time (factor D) and holding pressure (factor E), corresponding to the process parameters of concern in actual production. The ranges of the five process parameters are given in Table 2. In determining the value ranges, the optimal values recommended by the Moldflow material library were considered; the other process parameters were selected from experience in the plastic injection molding industry.

Table 2. The factors and levels of the Taguchi DOE

Level  A/°C  B/°C  C/s   D/s  E/%
1      300   115   0.6   14   50
2      295   110   0.65  12   70
3      290   105   0.7   10   90
4      285   100   0.75  8    110

(A: melt temperature; B: mold temperature; C: injection time; D: holding time; E: holding pressure as % of filling pressure)

4 Analysis of Experimental Data

4.1 Experimental Results
Production practice shows that assuming a constant melt temperature as the initial condition gives a more accurate deformation prediction than assuming a constant mold temperature. The cooling + filling + packing + warping analysis sequence uses a constant melt temperature as its initial condition, so this sequence was selected for the warpage analysis [7]. Each group of the Taguchi experiments was simulated, yielding the target total deformation as well as the deformation values caused by shrinkage and by fiber orientation. The experimental data are shown in Table 3.

Table 3. L16(4^5) orthogonal array of the Taguchi experiments


A B C D E Sd/mm Od/mm Td/mm
Exp. 1 300 115 0.6 14 50 6.833 7.299 0.650
Exp. 2 300 110 0.65 12 70 6.099 6.691 0.732
Exp. 3 300 105 0.7 10 90 5.747 6.353 0.760
Exp. 4 300 100 0.75 8 110 5.301 5.975 0.806
Exp. 5 295 115 0.6 14 50 5.219 5.869 0.778
Exp. 6 295 110 0.65 12 70 5.721 6.365 0.756
Exp. 7 295 105 0.7 10 90 6.357 7.256 1.032
Exp. 8 295 100 0.75 8 110 7.224 8.027 0.964
Exp. 9 290 115 0.7 8 70 7.704 8.568 0.959
Exp. 10 290 110 0.75 10 50 7.812 8.755 1.171
Exp. 11 290 105 0.6 12 110 5.055 5.803 0.902
Exp. 12 290 100 0.65 14 90 5.491 6.276 0.998
Exp. 13 285 115 0.75 12 90 7.543 8.543 1.14
Exp. 14 285 110 0.7 14 110 5.793 6.672 1.093
Exp. 15 285 105 0.65 8 50 7.672 8.69 1.203
Exp. 16 285 100 0.6 10 70 7.744 8.593 1.105

4.2 Effect of Process Parameters on the Deformation of Plastic Parts

Range analysis was performed on the test results. The average value k of the total deformation (Td) at each level of each factor was calculated. For factor A, k1 < k2 < k3 < k4; for factor B, k1 < k2 < k4 < k3; for factor C, k1 < k2 < k3 < k4; for factor D, k4 < k2 < k1 < k3; and for factor E, k4 < k3 < k2 < k1, as shown in Table 4. The objective of this study was to reduce the deformation through the optimal levels of the process parameters, so the smaller-the-better quality characteristic was employed. The optimum forming scheme is therefore A1B1C1D4E4.

Table 4. Data processing in the Taguchi DOE

         A      B      C      D      E
k1       0.737  0.882  0.853  0.943  0.997
k2       0.882  0.938  0.927  0.934  0.957
k3       1.007  0.974  0.944  0.953  0.913
k4       1.135  0.968  1.037  0.931  0.894
Range R  0.398  0.093  0.184  0.022  0.102

Optimal scheme: A1B1C1D4E4; order of influence: A > C > E > B > D
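The range analysis behind Table 4 can be sketched directly from the Table 3 data. This is an illustrative sketch (the function name `level_means` is an assumption); factor A's column means reproduce the k-values of Table 4.

```python
# Sketch of the range analysis: for each factor, average the total deformation
# Td over the four experiments run at each level, then take the range
# (max mean - min mean) as the measure of the factor's influence.

td = [0.650, 0.732, 0.760, 0.806, 0.778, 0.756, 1.032, 0.964,
      0.959, 1.171, 0.902, 0.998, 1.140, 1.093, 1.203, 1.105]   # Td/mm, Table 3
levels_A = [1] * 4 + [2] * 4 + [3] * 4 + [4] * 4   # level of factor A per run

def level_means(values, levels, n_levels=4):
    """Mean response at each level of one factor (the k-values of Table 4)."""
    return [sum(v for v, l in zip(values, levels) if l == lv) / levels.count(lv)
            for lv in range(1, n_levels + 1)]

k_A = level_means(td, levels_A)      # ≈ [0.737, 0.882, 1.007, 1.135]
range_A = max(k_A) - min(k_A)        # ≈ 0.398, the largest range of all factors
```

The same function applied to the level assignments of the other four factors yields the remaining columns of Table 4 and the influence order A > C > E > B > D.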

5 Verification and Analysis

The final optimal combination of process parameters, A1B1C1D4E4, was verified in Moldflow. The result is shown in Fig. 3: the maximum deformation was 0.517 mm. The deformation of the rotor blade is relatively small and meets the production requirement of at most 0.8 mm. Compared with the minimum value in the 16 orthogonal experiments, the deformation decreased by 25.7%. This shows that injection molding has a great advantage in forming fiber-reinforced rotors and provides a certain quality assurance in their manufacture.

Further analysis showed that the deformation caused by shrinkage was mainly distributed in the negative Z direction of the rotor, while that caused by fiber orientation was in the positive direction. From the curves in Fig. 4, the deformations caused by shrinkage and by fiber orientation are consistent with the total deformation. It is therefore judged that the warpage of the rotor is caused by two factors, plastic shrinkage and fiber orientation, and future research can focus on the deformation mechanisms of shrinkage and fiber orientation.

Fig. 3. Deformation of the parts under the optimal process
Fig. 4. Deformation comparison chart

6 Conclusion

The injection molding method for UAV rotor blade forming is put forward to meet the manufacturing requirements. The experiments show that the main factors affecting the deformation are uneven shrinkage and fiber orientation. The melt temperature has the greatest influence on the deformation of the rotor blade, followed by the injection time and holding pressure; the effects of mold temperature and holding time are the smallest. The optimal molding process parameters of the rotor are: melt temperature 300 °C, mold temperature 115 °C, injection time 0.6 s, holding pressure 110% of the filling pressure, and holding time 8 s. Under this optimal parameter combination, the maximum warpage of the rotor blade is 0.517 mm, which is 25.7% lower than the initial simulation results. This proves that injection molding is suitable for manufacturing UAV rotor blades.

References
1. Han, X., Liu, J., Wang, X., Lu, B., Yang, J., Liao, Y., Li, F.: Design and application of carbon
fiber composites on general aviation aircraft. Dual Use Technol. Prod. (07), 8–11 (2015)
2. Wang, X.: Injection molding and process parameters optimization for carbon fiber reinforced unmanned aerial vehicle fixed wing. Xi'an University of Science and Technology (2018)
3. Yu, Y., Wang, X., Li, L., Lu, Y.: Optimization design for gate of UAV fixed-wing based on
MPI. Plast. Sci. Technol. 45(12), 87–91 (2017)
4. Yu, Y., Wang, X.: Optimization of injection molding process for fixed wing of unmanned
aerial vehicle based on BP neural network. Plast. Sci. Technol. 45(09), 74–78 (2017)
5. Amiruddin, H., Mahmood, W.M.F.W., Abdullah, S., Mansor, M.R.A., Mamat, R., Alias, A.:
Application of Taguchi method in optimization of design parameter for turbocharger vaned
diffuser. Ind. Lubr. Tribol. 69(3), 409–413 (2017)
6. Xu, C., Zhou, J.: Mold design and process optimization of automobile clip injection molding.
Plastics (01), 92−96+101 (2019)
7. Anugraha, R.A., Wiraditya, M.Y., Iqbal, M., Darmawan, N.M.: Application of Taguchi
method for optimization of parameter in improving soybean cracking process on dry process
of tempeh production. In: IOP Conference Series: Materials Science and Engineering, vol.
528, no. 1 (2019)
Assembly Sequence Optimization Based
on Improved PSO Algorithm

Xiangyu Zhang1(&), Lilan Liu1, Xiang Wan1, Kesheng Wang2, and Qi Huang3

1 School of Mechanical Engineering and Automation, Shanghai University, Shanghai, China
{shuxyz,lancy,wanxiang}@shu.edu.cn
2 Norwegian University of Science and Technology, Trondheim, Norway
3 Shanghai Baosight Software Corporation, Shanghai, China

Abstract. Considering the structural characteristics of products, the interference matrix is established according to the assembly directions, and the optimization of assembly sequence planning is studied with the aim of maximizing the number of parts that can be assembled without interfering with the already-assembled parts, maximizing stability, and minimizing the number of changes of assembly direction. To address the low convergence speed and precision of the basic PSO algorithm, a population initialization method based on Feigenbaum iteration is proposed, and a new inertia weight update function modeled on the Sigmoid function is designed to improve the basic PSO algorithm. The performance of the proposed algorithm is verified on an assembly example. The results show that the improved PSO (IPSO) algorithm is effective and stable in solving assembly sequence optimization problems.

Keywords: Assembly sequence optimization · Interference matrix · Feigenbaum iteration · Improved PSO algorithm

1 Introduction

Assembly is a crucial link in the process of product manufacturing, accounting for 20%
of the total manufacturing cost and 50% of the total production cycle [1]. Assembly
sequence planning (ASP) is to obtain the optimal assembly sequence of products under
certain constraints. It is a typical NP-hard problem. Reasonable optimization of
assembly sequence can not only reduce the accumulation time of parts and improve the
production line balance, but also have important significance for reducing product cost
and improving production efficiency.
With the increasing complexity of products, the number of parts to be assembled increases and the solution space explodes combinatorially [2]. Intelligent optimization algorithms have therefore been widely applied to assembly sequence optimization. Xie et al. [3] applied the ant colony algorithm to the ASP problem. Zeng et al. [4] proposed an improved firefly-algorithm ASP method and constructed the objective function with the number of tool-part interferences in the assembly sequence as the evaluation index. Zhang [5] used an immune algorithm to overcome the premature convergence of particle swarm optimization (PSO) in solving the ASP problem. Somay'e [6] studied the solution space distribution of the ASP problem and proposed a jump-out local search (BLS) algorithm based on the approximately uniform distribution of near-optimal solutions in the solution space.

In existing swarm intelligence algorithms the diversity of the population is poor, and performance depends strongly on the quality and size of the initial population. In this paper, the interference matrix is established under the condition that the geometric constraints of the product are satisfied. Combining co-evolution theory, a population initialization method based on Feigenbaum iteration is proposed to improve the quality of the initial population. The existing PSO algorithm easily falls into local optima when solving the ASP problem, and its convergence speed is not ideal. To solve this problem, a new inertia weight adjustment function based on the Sigmoid function is proposed, which improves the convergence speed while preserving convergence accuracy and quickly generates the optimal assembly sequence. The improved PSO (IPSO) algorithm is validated on an assembly example, and the experimental results are compared with those of the basic PSO algorithm.

The remainder of this paper is organized as follows. Section 2 establishes the mathematical model of assembly sequence optimization. Section 3 describes in detail the improvement of the PSO algorithm and verifies its performance. Section 4 demonstrates the practicability and superiority of the IPSO algorithm through an example. Finally, the conclusions are given in Sect. 5.

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 457–465, 2020.
https://doi.org/10.1007/978-981-15-2341-0_57

2 Problem Statement
2.1 Constraints
The interference matrix is used to describe the geometric constraints of parts in all
assembly directions, so the interference matrix can be expressed as:
$$IM_k = \begin{bmatrix} I_{11k} & I_{12k} & \cdots & I_{1nk} \\ I_{21k} & I_{22k} & \cdots & I_{2nk} \\ \vdots & \vdots & \ddots & \vdots \\ I_{n1k} & I_{n2k} & \cdots & I_{nnk} \end{bmatrix} \quad (1)$$

where $k \in \{x, y, z\}$ is the assembly direction and $I_{ijk}$ is a binary variable: $I_{ijk} = 1$ if part $p_i$ interferes with part $p_j$ when assembled along direction $k$, and $I_{ijk} = 0$ otherwise. In particular, $I_{iik} = 0$.
The main function of the interference matrix is to determine the next part $p_i$ to assemble and its assembly direction. Assuming that $m$ parts have been assembled, the temporary subassembly can be expressed as $X_{sub} = [p_1, p_2, \ldots, p_m]$; whether part $p_i$ can be assembled smoothly depends on the value of $I_{ik}$, which is calculated by formula (2):
Assembly Sequence Optimization Based on Improved PSO Algorithm 459

$$I_{ik} = I_{i1k} \vee I_{i2k} \vee \cdots \vee I_{imk} \quad (2)$$

where $\vee$ is the Boolean OR operation. If $I_{ik} = 0$, part $p_i$ does not interfere with any assembled part in direction $k$; otherwise, it interferes with at least one assembled part in that direction.
The connection matrix $CM = [C_{ij}]_{n \times n}$ and the support matrix $SM = [S_{ij}]_{n \times n}$ are used to express the assembly stability relationship. Elements $C_{ij}$ and $S_{ij}$ represent the connection type and the support relationship between $p_i$ and $p_j$, respectively.
$$C_{ij} = \begin{cases} 2 & \text{stable connection} \\ 1 & \text{contact connection} \\ 0 & \text{disconnection} \end{cases} \qquad S_{ij} = \begin{cases} 1 & p_i \text{ can support } p_j \text{ stably} \\ 0 & p_i \text{ cannot support } p_j \text{ stably} \end{cases}$$

2.2 Fitness Function


Suppose an assembly sequence is $X_l = [x_{l1}, \ldots, x_{li}, \ldots, x_{ln}]$ and $p_{i+1}$ is the part to be assembled. Let $f_1$ be the number of parts in the sequence that satisfy the geometric constraints, that is, the number of parts assembled without interfering with the already-assembled parts:

$$f_1 = \sum_{i=1}^{n} PI_{ik} \quad (3)$$

where $PI_{ik}$ is given by

$$PI_{ik} = \begin{cases} 1, & I_{ik} = 0 \\ 0, & I_{ik} = 1 \end{cases} \qquad k \in \{x, y, z\} \quad (4)$$

If parts $p_{l,i+1}$ and $p_{l,i}$ in the sequence share the same assembly direction, no reorientation is needed and $Q_{l,i} = 0$; otherwise $Q_{l,i} = 1$, as expressed in formula (5):

$$Q_{l,i} = \begin{cases} 0, & d_{l,i+1} = d_{l,i} \\ 1, & d_{l,i+1} \neq d_{l,i} \end{cases} \quad (5)$$

The number of direction changes the sequence needs to complete assembly is then

$$f_2 = \sum_{i=1}^{n} Q_{l,i} \quad (6)$$

In addition, the rules for judging whether an assembly operation is stable are as follows: (1) $C_{ij} = 2$, $j \in [1, i-1]$ $\Rightarrow$ stable; (2) $C_{ij} = 0$ $\Rightarrow$ unstable; (3) stable if $C_{ij} = 0$ or $C_{ij} = 1$ when $S_{ij} = 1$, $j \in [1, i-1]$; (4) unstable if $C_{ij} = 0$ or $C_{ij} = 1$ when $S_{ij} = 0$. $f_3$ is the number of unstable operations among all assembly operations.
The objective of the assembly sequence optimization problem studied in this paper is to minimize the number of assembly direction changes while maximizing stability and the number of parts assembled under geometric constraints, that is:

$$fitness = \min_l \{ k_1 (n - f_1) + k_2 f_2 + k_3 f_3 \} \quad (7)$$

where $k_1$, $k_2$ and $k_3$ are the weights of the three objectives, each ranging from 0 to 1, with $k_1 + k_2 + k_3 = 1$. To reflect the relative importance of each factor, $k_1 = 0.35$, $k_2 = 0.25$, $k_3 = 0.4$ are taken.
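Formulas (3) to (7) can be combined into one evaluation routine. The sketch below is a hypothetical reading: `interferes`, `dirs` and `unstable_ops` stand in for the interference check, the per-step assembly directions and the stability count $f_3$, with $(n - f_1)$ used as the geometric-constraint penalty and the paper's weights 0.35/0.25/0.4:

```python
def fitness(seq, dirs, interferes, unstable_ops,
            k1=0.35, k2=0.25, k3=0.4):
    """Sketch of formula (7). `seq` is a part sequence, `dirs` the assembly
    direction of each step, `interferes(i)` returns True if step i violates
    the geometric constraints (I_ik = 1), and `unstable_ops` is f3."""
    n = len(seq)
    # f1: number of steps assembled without interference, formulas (3)-(4)
    f1 = sum(0 if interferes(i) else 1 for i in range(n))
    # f2: number of assembly direction changes, formulas (5)-(6)
    f2 = sum(1 for i in range(n - 1) if dirs[i + 1] != dirs[i])
    return k1 * (n - f1) + k2 * f2 + k3 * unstable_ops

# Hypothetical 4-part sequence: all steps feasible, one direction change,
# no unstable operations
val = fitness([0, 1, 2, 3], ['z', 'z', 'x', 'x'],
              interferes=lambda i: False, unstable_ops=0)
print(val)  # 0.25: only the single direction change is penalized
```

A swarm optimizer would call this routine as the objective for every candidate sequence $X_l$ and keep the minimum.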

3 Algorithmic Design

3.1 Basic PSO Algorithms


Mathematically, the basic PSO algorithm can be expressed as follows:

$$v_{id}^{k+1} = \omega v_{id}^{k} + c_1 g_1 \left( p_{id} - x_{id}^{k} \right) + c_2 g_2 \left( p_{gd} - x_{id}^{k} \right) \quad (8)$$

$$x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1} \quad (9)$$

where $c_1$ and $c_2$ are called acceleration coefficients or learning factors; $g_1$ and $g_2$ are random numbers in $[0, 1]$ used to increase the randomness of the search; and $\omega$ is the inertia weight, which balances global and local search.
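The update rules (8) and (9) for a single particle can be sketched as follows; the helper name and the list-based representation are choices of this sketch, not the paper's implementation:

```python
import random

def pso_step(x, v, pbest, gbest, w, c1=1.5, c2=1.5):
    """One velocity/position update per formulas (8)-(9) for a single
    particle; x, v, pbest, gbest are lists with one entry per dimension."""
    new_v, new_x = [], []
    for d in range(len(x)):
        g1, g2 = random.random(), random.random()  # uniform in [0, 1]
        vd = (w * v[d]
              + c1 * g1 * (pbest[d] - x[d])      # cognitive pull
              + c2 * g2 * (gbest[d] - x[d]))     # social pull
        new_v.append(vd)
        new_x.append(x[d] + vd)                  # formula (9)
    return new_x, new_v

x, v = [0.5, -0.2], [0.0, 0.0]
x, v = pso_step(x, v, pbest=[0.0, 0.0], gbest=[0.0, 0.0], w=0.8)
```

With zero velocity and the particle already at both its personal and global best, the update leaves it in place, which is the expected fixed-point behavior of (8) and (9).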

3.2 Improved PSO Algorithm


To enhance the diversity and randomness of the population, and drawing on co-evolution theory, chaos is introduced into the evolutionary computation. Chaos is a distinctive form of motion in nonlinear dynamic systems, characterized by randomness, ergodicity and regularity. This paper uses the Feigenbaum (logistic-map) iteration to construct a chaotic sequence and initialize the positions and velocities of the particles, as shown in formula (10); the initial value $x_0$ must not be a fixed point of the chaotic iteration equation.

$$x_{n+1} = f(\mu, x_n) = \mu x_n (1 - x_n), \quad n = 0, 1, 2, \ldots \quad (10)$$
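Formula (10) is the logistic (Feigenbaum) map. A chaotic initializer can be sketched as below; $\mu = 4$ and the seed 0.31 are illustrative choices kept away from the map's fixed points ($0$ and $1 - 1/\mu$):

```python
def chaotic_sequence(x0, n, mu=4.0):
    """Generate n values of the logistic map x_{n+1} = mu * x_n * (1 - x_n),
    per formula (10). x0 must avoid the fixed points 0 and 1 - 1/mu."""
    seq, x = [], x0
    for _ in range(n):
        x = mu * x * (1 - x)
        seq.append(x)
    return seq

vals = chaotic_sequence(0.31, 5)
print(all(0.0 <= v <= 1.0 for v in vals))  # True: iterates stay in [0, 1]
```

Each generated value can then be scaled to the feasible range of a particle's position or velocity component.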

The inertia weight controls the search behavior of the particles in the search space. To improve computational accuracy and reduce the risk of PSO falling into local optima, the inertia weight adjustment method is improved.
Consider the sigmoid function:

$$f(v) = \frac{1}{1 + e^{-av}} \quad (11)$$
Assembly Sequence Optimization Based on Improved PSO Algorithm 461

For the sigmoid function in formula (11), when $av < -10$, $f(v) \approx 0$; conversely, when $av > 10$, $f(v) \approx 1$. Based on this analysis, an inertia weight adjustment method is proposed, defined as formula (12):

$$\omega(t) = \omega_{max} - \frac{\omega_{max} - \omega_{min}}{1 + e^{-0.08 (t - t_{max}/2)}} \quad (12)$$

where $\omega_{max}$ and $\omega_{min}$ are the maximum and minimum inertia weights, respectively, and $t$ is the iteration number.
For the inertia weight adjustment method of formula (12), with $\omega_{max} = 0.9$ and $\omega_{min} = 0.4$, the inertia weight stays close to 0.9 in the initial stage of evolution and gradually approaches 0.4 in the final stage. Throughout the evolution process, the inertia weights calculated by formula (12) remain in the range (0.4, 0.9), which is consistent with the conclusion in [7] that PSO performs best when the inertia weight lies in [0.4, 0.95].
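Under one plausible reading of formula (12), with sigmoid steepness 0.08 and midpoint $t_{max}/2$, the schedule behaves exactly as described: near $\omega_{max}$ early, near $\omega_{min}$ late, and strictly inside $(\omega_{min}, \omega_{max})$ in between:

```python
import math

def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Sigmoid-shaped inertia weight schedule of formula (12): close to
    w_max at the start of the run, close to w_min at the end."""
    return w_max - (w_max - w_min) / (1 + math.exp(-0.08 * (t - t_max / 2)))

t_max = 1000
print(round(inertia_weight(0, t_max), 3))      # 0.9 at the start
print(round(inertia_weight(t_max, t_max), 3))  # 0.4 at the end
```

At the midpoint $t = t_{max}/2$ the weight is the average $(\omega_{max} + \omega_{min})/2 = 0.65$, so the transition is centered in the run.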
Figure 1 compares several inertia weight adjustment curves, where $\omega_0$, $\omega_1$, $\omega_2$, $\omega_3$ and $\omega_4$ denote, respectively, $\omega$ decreasing linearly with the iteration number, decreasing as a convex function, decreasing as a concave function, decreasing exponentially, and decreasing by the proposed method. Comparing the curves shows that in the initial stage of particle evolution the proposed curve keeps a larger value, which helps PSO avoid local optima and search the whole space, favoring the discovery of better seeds. In the later stage, by contrast, keeping the weight in a small range guarantees the convergence accuracy of the algorithm.

0.9
ω0
ω1
0.8
ω2

ω3
0.7
ω (t)

ω4

0.6

0.5

0.4
0 200 400 600 800 1000
t

Fig. 1. Several inertial weight curves

The acceleration coefficients are controlled so that $c_1$ decreases linearly with time and $c_2$ increases linearly:

$$c_1(t) = \left( c_{1,min} - c_{1,max} \right) \frac{t}{t_{max}} + c_{1,max}, \qquad c_2(t) = \left( c_{2,max} - c_{2,min} \right) \frac{t}{t_{max}} + c_{2,min} \quad (13)$$

To verify the superiority of the improved PSO algorithm, the following two test functions are optimized using both the basic PSO and the improved PSO algorithms.
(1) Rosenbrock function

$$f_1(x) = \sum_{i=1}^{n-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + \left( 1 - x_i \right)^2 \right], \quad -2.048 \le x_i \le 2.048 \quad (14)$$

(2) Griewangk function

$$f_2(x) = \sum_{i=1}^{n} \frac{x_i^2}{4000} - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1, \quad -600 \le x_i \le 600 \quad (15)$$

Both test functions have a global minimum value of 0 (at $x_i = 1$ for Rosenbrock and $x_i = 0$ for Griewangk), and the more variables, the harder it is for PSO to converge when searching for the optimum. The parameters for the standard PSO are: learning factors 1.5, inertia weight 0.8, variable dimension 10, 30 particles, and a maximum of 1000 iterations. For ease of comparison, only the inertia weight of the improved PSO is set by formula (12); the other parameters are the same as for the standard PSO. The optimization results are shown in Table 1, and Fig. 2 shows the iterative optimization process.
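The two benchmarks follow directly from formulas (14) and (15); a direct transcription, checked at their known minima, might look like:

```python
import math

def rosenbrock(x):
    """Formula (14); global minimum 0 at x = (1, ..., 1)."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def griewangk(x):
    """Formula (15); global minimum 0 at x = (0, ..., 0)."""
    s = sum(v * v for v in x) / 4000
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1

print(rosenbrock([1.0] * 10))  # 0.0 at the global minimum
print(griewangk([0.0] * 10))   # 0.0 at the global minimum
```

Rosenbrock's narrow curved valley tests convergence accuracy, while Griewangk's many local minima test the ability to escape local optima, which is why this pair is commonly used to compare PSO variants.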

Fig. 2. Iterative process diagram for Rosenbrock function (top) and Griewangk (bottom)
function optimization by PSO (left) and IPSO (right)

Table 1. Test function optimization results

Functions  | Basic PSO | Improved PSO
Rosenbrock | 5.52E−3   | 1.21E−3
Griewangk  | 5.21E−7   | 9.80E−8

4 Example Verification
4.1 Experimental Settings
The practicability of the proposed algorithm is verified with a seat assembly example. The assembly structure, consisting of 12 parts, is shown in Fig. 3.
The parameters of the algorithm are as follows: learning factor 1.5, inertia weight 0.8, variable dimension 10, 30 particles, and a maximum of 200 iterations. The algorithm is implemented in MATLAB R2014a, and the simulations run on a computer with 64-bit Windows 7, an Intel(R) Core(TM) i5-4460 CPU @ 3.20 GHz, and 8.00 GB of memory.

Fig. 3. Assembly drawing of base

4.2 Running Results


With the same fitness function and program running environment (population size 30, 200 trials), the average fitness curves of the two algorithms are shown in Fig. 4 and Table 2.

Fig. 4. Algorithm comparison: average fitness of the IPSO and PSO algorithms versus number of iterations



Table 2. Comparison of running results

Algorithms             | Population size | Program running time/s | Average fitness value
Improved PSO algorithm | 30              | 42                     | 1.30
Basic PSO algorithm    | 30              | 105                    | 1.41

It can be seen that the improved PSO algorithm converges around the 68th generation with an average fitness value of 1.30, at which point the corresponding assembly sequence is:

p1 ! p11 ! p5 ! p10 ! p4 ! p6 !
p2 ! p8 ! p3 ! p7 ! p9 ! p12

The basic PSO algorithm, by contrast, converges to a solution of 1.41 in the 110th generation. The improved PSO algorithm converges faster, achieves higher accuracy, and is less prone to the local convergence that affects the basic PSO algorithm. Therefore, the quality, performance and efficiency of the improved PSO algorithm are significantly better than those of the basic PSO algorithm. The analysis shows that the resulting assembly sequence satisfies the assembly requirements and the needs of engineering applications.

5 Conclusions

Aiming at the characteristics of the ASP problem, this paper proposes an assembly sequence planning method based on an improved PSO algorithm. Firstly, the mathematical model of assembly sequence planning is
established. In order to improve the diversity and randomness of initial population of
PSO algorithm, the Feigenbaum iteration is used to construct chaotic sequence and
initialize the position and velocity of particles. Aiming at the low convergence effi-
ciency of PSO algorithm, a new inertia weight updating method is proposed to improve
the algorithm. It is applied to the assembly sequence planning of the machine base. The
results show that the assembly sequence planning method based on the improved PSO
algorithm is an effective method and achieves the purpose of improving the opti-
mization ability of PSO algorithm.

Acknowledgements. The work is supported by the Ministry of Industry and Information Technology key project "The construction of professional CPS test and verification bed for the application of steel rolling process" (No. TC17085JH).

References
1. Rashid, M.F.F., Hutabarat, W., Tiwari, A.: A review on assembly sequence planning and
assembly line balancing optimisation using soft computing approaches. Int. J. Adv. Manuf.
Technol. 59(1–4), 335–349 (2012)
2. Wang, K., Wang, Y.: How AI affects the future predictive maintenance: a primer of deep
learning. In: Wang, K., Wang, Y., Strandhagen, J., Yu, T. (eds.) Advanced Manufacturing
and Automation VII, IWAMA 2017. Lecture Notes in Electrical Engineering, vol. 451,
pp. 1–9. Springer, Singapore (2018)
3. Xie, L., Fu, Y.L., Ma, Y.L.: Assembly sequence generation strategy based on ant colony
algorithms. J. Harbin Univ. Technol. 38(2), 180–183 (2006)
4. Zeng, B., Li, M.F., Zhang, Y.: Assembly sequence planning based on improved firefly
algorithms methods. Comput. Integr. Manuf. Syst. 20(4), 799–806 (2014)
5. Zhang, H.Y., Liu, H.J., Li, L.Y.: Research on a kind of assembly sequence planning based on
immune algorithm and particle swarm optimization algorithm. Int. J. Adv. Manuf. Technol.
71(5), 795–808 (2014)
6. Ghandi, S., Masehian, E.: Breakout local search (BLS) method for solving the assembly
sequence planning problem. Eng. Appl. Artif. Intell. 39(3), 245–266 (2015)
7. Wang, T., Li, Q.Q.: Parallel evolutionary algorithm based on spatial contraction. China Eng.
Sci. 5(3), 57–61 (2003)
Influence of Laser Scan Speed on the Relative
Density and Tensile Properties of 18Ni
Maraging Steel Grade 300

Even Wilberg Hovig(&) and Knut Sørby

Department of Mechanical and Industrial Engineering,


Norwegian University of Science and Technology, Trondheim, Norway
{even.w.hovig,knut.sorby}@ntnu.no

Abstract. Laser powder bed fusion (LPBF) enables tool makers to design tools
with complex geometries and internal features. To exploit the possibilities
provided by LPBF it is necessary to understand how the processing parameters influence the properties of the end-product. This study investigates the effect of
laser scan speed on relative density and tensile properties of 18Ni300 in the as-
built condition. The results show that there is a relatively wide processing
window which gives satisfactory relative density and tensile properties. Fur-
thermore, it was shown that the scan speed which produced the highest relative
density in this study did not provide satisfactory tensile properties, indicating that processing parameters cannot be established based on relative density measurements alone.

Keywords: Laser powder bed fusion  Laser melting  Relative density 


18Ni300  Maraging steel  Tensile properties

1 Introduction

18Ni maraging steel grade 300 (18Ni300) is a precipitation hardening tool steel with
excellent mechanical properties in the aged state, which is easy to machine in the
solution annealed state [1]. The high strength and hardness of the material makes it
suited for tooling in applications such as aluminium casting, plastic injection molding,
and extrusion applications [2, 3]. Tooling components are excellent candidates for laser
powder bed fusion (LPBF) processing, since the geometry is often complex, and can
benefit from possibilities enabled by LPBF, such as conformal cooling channels and
vent slots for casting applications [4–6].
In order to process the material efficiently with LPBF, it is necessary to identify processing parameters that result in high-density microstructures and satisfactory mechanical properties. Several authors have investigated the effect of processing parameters on the density of 18Ni300 processed by LPBF [7–10], focusing on the effect of laser power $P$, laser scan speed $v$, hatch spacing $h$, and layer thickness $t$ on the relative density of the end result. A commonly used parameter to relate the laser parameters to each other is the volumetric energy density, $E_d = P/(vht)$. The validity of using volumetric energy density to explain the response of LPBF materials to changes

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 466–472, 2020.
https://doi.org/10.1007/978-981-15-2341-0_58

in the processing parameters has been questioned, however [7, 11]. Suzuki et al. suggest using $Pv^{-1/2}$ to correlate laser parameters with material density [7]. This model does not account for hatch spacing, laser spot diameter, or layer thickness, however. It is not within the scope of this study to derive a model for accurate density prediction based on laser parameters; only the effect of laser scan speed on the relative density and tensile properties of 18Ni300 will be investigated.

2 Method

Twelve flat tensile specimens were prepared as blocks in a Concept Laser M2 Cusing (installed 2009) LPBF machine, and then machined to dog-bone specimens with a cross-section of 6 × 6 mm², a reduced section length of 32 mm, and radii of 6 mm. The tensile specimens were processed with a laser power of 180 W, a hatch spacing of 105 µm, a layer thickness of 30 µm, and scan speeds ranging from 600 mm/s to 725 mm/s in 25 mm/s increments. The scan strategy applied is the 'island' scan strategy by Concept Laser, with 5 × 5 mm² islands with an angular shift of 45° and an XY shift of 1 mm. The tensile specimens were built parallel to the build direction (Z-oriented). In addition to the tensile specimens, cubes with a volume of 10 × 10 × 10 mm³ were prepared for density analysis. The 18Ni300 powder feedstock was supplied by Sandvik Osprey. The chemical composition, as supplied by the material vendor, is listed in Table 1.

Table 1. Chemical composition of 18Ni300 as supplied by Sandvik Osprey.


Fe C Mn Si Cr Ni Mo Co Ti Al
wt% Bal. 0.03 0.1 0.1 0.3 18.0 4.8 9.0 0.7 0.1

The relative density was determined by investigating the cross section of the cubes
with optical microscopy and image manipulation software (similar to the process
described by the current authors in a previous work [11]). The tensile specimens were
tested in an MTS 809 Axial Test System with a 100 kN load cell at room temperature.
The displacement rate was 1 mm/min for all specimens.
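As a quick arithmetic check (not part of the original study), the volumetric energy density $E_d = P/(vht)$ mentioned in the introduction can be evaluated for these parameters; it falls from roughly 95 J/mm³ at 600 mm/s to about 79 J/mm³ at 725 mm/s:

```python
def energy_density(P, v, h, t):
    """Volumetric energy density E_d = P / (v * h * t), in J/m^3 for SI inputs."""
    return P / (v * h * t)

P = 180.0   # laser power, W
h = 105e-6  # hatch spacing, m
t = 30e-6   # layer thickness, m

for v_mm_s in range(600, 750, 25):
    Ed = energy_density(P, v_mm_s * 1e-3, h, t) * 1e-9  # J/m^3 -> J/mm^3
    print(f"{v_mm_s} mm/s -> {Ed:.1f} J/mm^3")
```

The roughly 17% spread in energy density across the scan-speed range gives a sense of how narrow the parameter window explored here actually is.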

3 Results and Discussion

Density
Figures 1 and 2 show contrast images of polished cross-sections of cubes processed at 600 mm/s and 725 mm/s, respectively. In both images spherical pores are visible, and in Fig. 1 there are no signs of cracks or large irregularly shaped pores, as observed by other authors when the processing parameters are sub-optimal [7, 9]. Furthermore, in Fig. 1 the pores appear to be arbitrarily distributed across the cross-section, while at the higher scan speed in Fig. 2 the pores appear to follow a systematic pattern.

Fig. 1. Contrast image of XY cross-section of a cube processed with v = 600 mm/s. The width and height of the cross-section is 10 × 10 mm².

Fig. 2. Contrast image of XY cross-section of a cube processed with v = 725 mm/s. The width and height of the cross-section is 10 × 10 mm².

Figure 3 shows the measured relative density with respect to scan speed. As can be
seen, the highest relative density is measured for the sample processed at 725 mm/s,
and the lowest relative density is for the sample processed at 625 mm/s. The lowest
measured relative density is above 99.9%, however, which indicates that there is a large
processing window which results in satisfactory relative density. There appears to be a
steady increase in relative density with the increase of scan speed, except for the scan
speed of 625 mm/s, which should be considered an outlier. Based purely on the
measured relative density, a scan speed of 725 mm/s appears to be favorable.
It is interesting to note that the periodic pattern in the top right, and bottom right,
corners of Fig. 2 appears to be along straight lines with a length of 5 mm at a 45° angle
to the perimeter of the cube. This corresponds to the perimeter width, length, and angle
of an ‘island’ in the scan strategy. During laser melting, the core of each island is
scanned first, going from one corner to the opposite corner in a zig-zag pattern. Once
the core is melted, the contour is scanned as a continuous line along the perimeter of the
island. Based on the observed porosity in Fig. 2 it appears that as the scan speed
increases, and thus the energy input is reduced, the material fails to completely melt
and bond between the core and contour of the islands. A possible reason for this can be
that the energy input is not sufficiently high to give a wide enough melt pool to form a
dense material in this region. Within the islands the density appears to be higher in
Fig. 2 however. In a future work experiments can be conducted to verify this, either by
changing the hatch spacing between the contour and the core, or by modifying the laser
power or scan speed in the contour scan. It appears that the higher scan speed results in
a dense core, but porous perimeter, which leads to unsatisfactory tensile properties, as
will be demonstrated in the next section.

Fig. 3. Relative density as a function of scan speed.



Tensile Properties
The tensile properties of 18Ni300 with respect to scan speed are shown in Fig. 4. The tensile properties do not vary significantly as the scan speed increases from 600 mm/s to 700 mm/s, but at 725 mm/s they drop. If the processing parameters were evaluated on relative density alone, a scan speed of 725 mm/s would appear to give the best results. For lower scan speeds, when the microstructure is decorated with arbitrarily spaced pores, the tensile properties are not significantly influenced by a small change in relative density. The porosity observed at the perimeters of the scanned islands significantly reduces the tensile properties, however. This is likely due to a concentration of stress in the porous region as the specimen is loaded, leading to premature failure.

Fig. 4. Mechanical properties of 18Ni300 (yield strength, UTS, and elongation at break) as a function of scan speed.

When the yield strength is plotted against the relative density in Fig. 5 there is no
obvious correlation between relative density and tensile properties. If anything, it
appears that the yield strength drops as the relative density increases.

Fig. 5. Yield strength as a function of relative density for 18Ni300.

4 Conclusions

This work investigates the effect of scan speed on the relative density and tensile
properties of 18Ni300 processed by LPBF. The density results show that there is a wide
processing window which results in relative density of above 99.9%. Furthermore, the
tensile results indicate that an increase in relative density is not necessarily accompanied by an increase in tensile properties. On the contrary, the scan speed which resulted in the highest relative density was accompanied by significantly lower yield
tensile properties were satisfactory for scan speeds between 600 mm/s and 700 mm/s
with the laser parameters used in this study.

Acknowledgements. The authors would like to thank SINTEF Industry, Oslo, Norway for
performing the relative density analysis. This work is funded in part by the Norwegian Research
Council through grant number 248243, and by the TROJAM project in the INTERREG A/ENI
program.

References
1. Fortunato, A., Lulaj, A., Melkote, S., Liverani, E., Ascari, A., Umbrello, D.: Milling of
maraging steel components produced by selective laser melting. Int. J. Adv. Manuf. Technol.
94(5–8), 1895–1902 (2017)
2. Pereira, M.F.V.T., Williams, M., Du Preez, W.B.: Application of laser additive manufac-
turing to produce dies for aluminium high pressure die-casting: general article. S. Afr. J. Ind.
Eng. 23(2), 147–158 (2012)

3. Ahn, D.-G.: Applications of laser assisted metal rapid tooling process to manufacture of
molding & forming tools — state of the art. Int. J. Precis. Eng. Manuf. 12(5), 925–938
(2011)
4. Hovig, E.W., Brøtan, V., Sørby, K.: Additive manufacturing for enhanced cooling in moulds
for casting. In: 6th International Workshop of Advanced Manufacturing and Automation.
Atlantis Press (2016)
5. Brøtan, V., Berg, O.Å., Sørby, K.: Additive manufacturing for enhanced performance of
molds. Procedia CIRP 54, 186–190 (2016)
6. Hovig, E.W., Sørby, K., Drønen, P.E.: Metal penetration in additively manufactured venting
slots for low-pressure die casting. In: Wang, K., Wang, Y., Strandhagen, J.O., Yu, T. (eds.)
Advanced Manufacturing and Automation VII, pp. 457–468. Springer, Singapore (2018)
7. Suzuki, A., Nishida, R., Takata, N., Kobashi, M., Kato, M.: Design of laser parameters for
selectively laser melted maraging steel based on deposited energy density. Add. Manuf. 28,
160–168 (2019)
8. Bai, Y., Yang, Y., Wang, D., Zhang, M.: Influence mechanism of parameters process and
mechanical properties evolution mechanism of maraging steel 300 by selective laser melting.
Mater. Sci. Eng. A 703, 116–123 (2017)
9. Casalino, G., Campanelli, S.L., Contuzzi, N., Ludovico, A.D.: Experimental investigation
and statistical optimisation of the selective laser melting process of a maraging steel. Opt.
Laser Technol. 65, 151–158 (2015)
10. Kempen, K., Yasa, E., Thijs, L., Kruth, J.P., Van Humbeeck, J.: Microstructure and
mechanical properties of Selective Laser Melted 18Ni-300 steel. Phys. Procedia 12, 255–263
(2011)
11. Hovig, E.W., Holm, H.D., Sørby, K.: Effect of processing parameters on the relative density
of AlSi10Mg processed by laser powder bed fusion (2019)
Application of Automotive Rear Axle Assembly

Shouzheng Liu1(&), Lilan Liu1, Xiang Wan1, Kesheng Wang2,


and Fang Wu3
1
Shanghai Key Laboratory of Intelligent Manufacturing and Robotics,
Shanghai University, Shanghai, China
[email protected], {lancy,wanxiang}@shu.edu.cn
2
Norwegian University of Science and Technology, Trondheim, Norway
3
Huayu-Intelligent Equipment Technology Co., Ltd., Shanghai, China

Abstract. With the mutual integration of digitalization and information technology, the connection between assembly process planning and on-site process implementation in automotive rear axle assembly is gradually strengthening, and the assembly process is becoming increasingly automated and intelligent. Aiming at the practical problems in the assembly process of the automotive rear axle, and combining geometric model lightweighting technology, heterogeneous data acquisition technology and digital twin technology, this paper focuses on the planning and simulation of the automotive rear axle assembly process in digital space and on the interaction mechanism between digital space and physical space. It realizes real-time monitoring of the rear axle assembly process and assembly guidance for the workers, and puts forward new ideas and methods for closed-loop control of process planning and process execution.

Keywords: Digital twin  Assembly process  Real time monitoring 


Feedback-based optimization

1 Introduction

With the integration and application of new generation information technologies (such
as Cloud Computing, Internet of Things, Big Data, Mobile Internet, Artificial Intelli-
gence, etc.) and manufacturing, countries around the world have successively proposed
their own manufacturing development strategies at the national level, representative
examples are Industry 4.0, Industrial Internet, CPS-based manufacturing, Made in
China 2025 and Internet + Manufacturing, Service Oriented Manufacturing or Service
Manufacturing [1]. As a benchmark in the manufacturing industry, the automotive
industry has a tremendous impact on economic development and social progress.
Automobile assembly is the last stage of automobile manufacturing. The assembly
process of automotive rear axle is an important part of the assembly process of the
whole automobile. Therefore, the assembly quality of the rear axle directly affects the
product quality of the automobile. In the assembly process of automotive rear axle,
many assembly parts are involved, and the assembly process is complicated. In recent

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 473–479, 2020.
https://doi.org/10.1007/978-981-15-2341-0_59

years, with the development of robotics and new-generation information technology, the use of robots instead of workers in some factories has reduced workers' labor intensity, but the large amount of data in the factory has not been effectively utilized, and real-time monitoring of the assembly process and real-time adjustment of assembly tasks have not been implemented.
The concept of digital twin was first proposed by Professor Grieves in 2003 and has
been rapidly developed in recent years. Digital twin is a multi-physical, multi-scale,
multi-probability simulation process that uses historical data and real-time updated data
from sensors to characterize and reflect the full life cycle of physical objects. The
virtual assembly line is established by the 3D modeling method, and the bidirectional
real mapping and real-time interaction between the real assembly line and the virtual
assembly line are realized by digital twin technology, thereby realizing real-time
monitoring of the assembly process. According to the current order, material, and
machine running state, the multi-objective particle swarm optimization algorithm is
used to predict the number of rear axles currently required to be assembled, so as to
achieve optimal production. The key research content of this paper is to realize the real-
time monitoring of the assembly process of automotive rear axle based on digital twin
technology, and provide the basic content for the subsequent in-depth study of the rear
axle assembly line.

2 System Overall Technical Framework

Real-time simulation of the automobile rear axle assembly line based on digital twin technology mainly needs to break through three technical points: lightweight processing of the geometric model of the rear axle assembly line, data acquisition from heterogeneous equipment during assembly execution, and interaction between physical space and virtual space.
The geometric model of the automobile rear axle assembly line has problems such as a large number of geometric vertices, a large number of patches, and many hidden bodies. This makes it difficult to use directly in application system development, so the geometric model must be lightweighted.
Real-time simulation of the automotive rear axle assembly process by the virtual model requires the model to be driven by data in real time; therefore, the collection of heterogeneous data becomes a key technology. The most important contribution of this paper is a method for real-time simulation in which the actual assembly line drives the virtual model. Therefore, the interaction between physical space and virtual space is the most important technical point (Fig. 1).

Fig. 1. System overall framework diagram

2.1 Lightweight Treatment of Automobile Rear Axle Assembly Line


In the design process of the automobile rear axle assembly line, the geometric model generally has high resolution requirements, so modeling software such as SolidWorks and Catia is usually used; as a result, the geometric model has a large number of geometric vertices and patches. Such models demand high computational performance and memory capacity, and cause high latency in the real-time simulation of the actual assembly line by the virtual model, degrading the user experience. Therefore, in actual development, it is necessary to lightweight the geometric model of the rear axle assembly line, reducing the number of vertices and patches of the model to improve the system's performance and the user experience. After the geometric model is lightweighted, the numbers of points and faces are significantly reduced, and the frame rate in the Unity 3D engine is also greatly improved. The tool selected for model lightweighting in this paper is 3dmax, and the lightweighting process is as follows:
Step 1: Export the geometric model created in CATIA V5 to the .stl format that 3ds Max can import.
Step 2: Import the .stl model into 3ds Max, analyze it for redundant points and delete any found, then reduce the surfaces of the model with the PolygonCruncher tool; complex models can be shelled.
476 S. Liu et al.

Step 3: Correct the UV map after polygon reduction against its position before reduction, to ensure the mapping is consistent before and after the patches are reduced.
Step 4: Finally, export the model from 3ds Max in .fbx format. The export parameters include geometry settings, scale factor, .fbx file format, animation settings, and lighting settings [2].
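Step 2 above analyzes the imported model before and after reduction. As a small illustrative check (not part of the authors' toolchain), the triangle count of the intermediate binary .stl file can be read directly, since a binary STL stores an 80-byte header followed by a little-endian uint32 triangle count:

```python
import struct

def stl_triangle_count(path):
    """Read the triangle count from a binary STL file.

    Binary STL layout: an 80-byte header, then a little-endian uint32
    triangle count, then 50 bytes per triangle.
    """
    with open(path, "rb") as f:
        f.seek(80)                      # skip the fixed-size header
        (count,) = struct.unpack("<I", f.read(4))
    return count

def reduction_ratio(before_path, after_path):
    """Fraction of triangles removed by lightweight processing."""
    before = stl_triangle_count(before_path)
    after = stl_triangle_count(after_path)
    return 1.0 - after / before
```

Comparing the counts before and after PolygonCruncher gives a quick measure of how much the model was lightened.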

2.2 Data Acquisition During Assembly Execution


With the application of electronic identification technologies such as RFID, electronic tags, and barcodes in the assembly shop, sensing and collecting workshop resource information has become more convenient [3]. Introducing electronic identification into the automotive rear axle assembly process allows data on the assembly elements (assembly components, assembly aids, quality inspection equipment, etc.) to be collected in real time. Using the collected data in digital twin technology has greatly changed the process execution and data application methods of the traditional rear axle assembly shop, and the production, management, and control of the assembly shop have been greatly improved.
In the assembly process, because the structure of the automotive rear axle is very complicated, the cost is extremely high and the accuracy of the key parts is difficult to ensure with robots alone. The assembly process is therefore completed jointly by workers and robots: the workers mainly perform the pre-assembly of parts and the quality inspection of key parts, and the robots perform the fastening of parts.
At the stations where workers perform assembly tasks, the workers first verify that the required parts are in place. The parts are labeled with barcodes containing the material information, thereby associating each part with its barcode. After the required parts are assembled in the specified position, the worker scans the barcode with a scan gun. If the workpiece is assembled successfully, the fixture beneath the assembly follows the line body to the next station; otherwise the fixture is locked by the line body and is not released until the component is properly assembled. For the key components, infrared detection is also used to judge whether the components are accurately installed (Fig. 2).
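The station release logic described above can be sketched as a simple decision function; the function name and return values below are hypothetical, chosen only to mirror the barcode check and infrared check in the text:

```python
def station_release(scanned_barcode, expected_barcode, infrared_ok=True):
    """Decide whether the line body releases the fixture to the next station.

    The fixture is released only when the scanned part barcode matches the
    barcode expected at this station and, for key components, the infrared
    check confirms accurate installation; otherwise the fixture stays locked.
    """
    if scanned_barcode != expected_barcode:
        return "locked"    # wrong or missing part: line body locks the fixture
    if not infrared_ok:
        return "locked"    # key component not seated correctly
    return "released"      # fixture follows the line body to the next station
```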
The robot workstation uses a data acquisition architecture based on OPC UA, which follows a service-oriented architecture (SOA). OPC UA supports multiple platforms such as Windows, Linux, and microcontrollers; supports information encryption, mutual access authentication, and security monitoring; is highly scalable; can pass data in XML or binary format; and is compatible with multiple communication protocols [4]. The data of multiple robots are transmitted to the controller of the robot workstation, and the digital twin system communicates with that controller through OPC UA to drive the digital twin system.

[Fig. 2 table: part name, picture of the real product, and associated barcode; parts listed: bridge, stabilizer bar, upper control arm, adjustment link, brake caliper, ……]

Fig. 2. Schematic diagram of the rear axle to be assembled



2.3 Interaction Mechanism Between Physical Space and Virtual Space


The core of the interaction mechanism between the physical space and the virtual space of the rear axle assembly process is to synchronize virtual manufacturing, based on digital twin technology, with real manufacturing, based on the manufacturing execution system [5]. Virtual manufacturing is a real-time simulation and analysis of the actual assembly process, with the geometric model driven by data generated during actual manufacturing; the actual assembly process is then optimized using the analysis results to form an effective closed loop.
A production order is generated from the current assembly task, and the rear axle assembly line begins assembling. During assembly, the data acquisition tools deployed on site collect data on assembly line elements such as the line body, robot workstations, parts, and fixtures, and store the data in a database (such as MySQL). The model simulation platform uses the Unity 3D engine to obtain real-time information by continuously querying this database. Combined with programming in Visual Studio, the retrieved single-column data are converted into JSON format, realizing accurate acquisition of real-time information, and the assembly process is monitored in real time as the data-driven virtual assembly line follows the real-time movement of the actual line.
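The row-to-JSON conversion described above can be sketched as follows; the column names and values are hypothetical, and the database query itself (MySQL) is stubbed out:

```python
import json

def row_to_json(row, columns):
    """Convert one database row (a tuple) into a JSON string keyed by
    column name, the format consumed by the virtual assembly line."""
    return json.dumps(dict(zip(columns, row)))

# Hypothetical latest record for one robot workstation:
columns = ["station_id", "joint1_deg", "joint2_deg", "timestamp"]
row = ("R01", 35.2, -12.7, "2019-06-01 10:00:00")
print(row_to_json(row, columns))
```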
The simulation analysis system of the digital twin platform can simulate the assembly process according to the production plan and the rear axle assembly line model and feed the simulation results back to the user, who can optimize the assembly line accordingly. On the actual rear axle assembly line, the assembly plan is optimized considering assembly materials, robot quantity, production cycle, and other factors to reach an optimal plan [6]. Among algorithms for finding optimal solutions in the flexible assembly of mechanical products, multi-objective particle swarm optimization is common. The dispatcher temporarily adjusts the current production plan according to the optimal plan obtained by the multi-objective optimization algorithm. At the same time, the corresponding functional modules can establish and maintain data information, compile working time statistics, and account for equipment load, providing a basis for dynamic scheduling.
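The multi-objective particle swarm optimization mentioned above is not detailed in the paper; the following minimal single-objective PSO (all names and parameter values are ours) illustrates only the velocity and position update at the core of such algorithms, applied to a toy quadratic cost. A multi-objective variant would additionally maintain an archive of non-dominated solutions.

```python
import random

def pso(cost, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal single-objective particle swarm optimization (minimization)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal best positions
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy cost with optimum at (1, 2), standing in for an assembly-plan objective
best, best_cost = pso(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2,
                      dim=2, bounds=(-5.0, 5.0))
```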

3 Framework Application Examples and Effects

The real-time simulation system for the automobile rear axle assembly process based on digital twin technology studied in this paper has been effectively applied to the rear axles of the E2XX, A2XX, and other models at Shanghai Huayu-intelligent Equipment Technology Co., Ltd. Through lightweight processing of the assembly line and product models, collection of heterogeneous data, and interaction between physical and virtual space, assembly process planning and simulation of the rear axle in digital space is realized, and the simulation can be released to wearable devices to guide workers in assembling the rear axle. Furthermore, the simulated assembly process is associated in real time with process execution, real-time monitoring of the assembly process is realized, and process execution can be optimized through simulation; in particular, production orders are optimized to form an effective closed-loop assembly.

4 Conclusions

In this paper, digital twin technology for the automobile rear axle assembly line is studied. The main achievement is virtual-real synchronous simulation of the automotive rear axle assembly process, providing a way to optimize the actual assembly process from the simulation analysis. Through lightweight processing of the assembly line and product geometric models, real-time collection of heterogeneous workshop data, and research on the interaction mechanism between physical and virtual space, the basic structure of the digital twin system of the rear axle assembly line is constructed and real-time monitoring of the entire assembly process is completed, laying a foundation for a more intelligent assembly line in the future.

Acknowledgements. The authors would like to express appreciation to mentors at Shanghai University and Huayu-intelligent Equipment Technology Co., Ltd. for their valuable comments and other help. This work was supported by the pillar program of the Shanghai Economic and Information Committee of China (No. 2018-GYHLW-02009).

References
1. Tao, F., Zhang, M., Cheng, J.: Digital twin workshop: a new paradigm for future
workshop. Comput. Integr. Manuf. Syst. 23(01), 1–9 (2017)
2. Zhang, X.: Design and implementation of workshop management and control system based
on digital twins. Zhengzhou University (2018)
3. Zhang, P.: Research of Digital Twin Based Assembly Process Planning and Simulation of
General Aircraft Product. Hebei University of Science and Technology (2018)
4. Tao, F., Cheng, J., Cheng, Y.: SDMSim: a manufacturing service supply-demand matching
simulator under cloud environment. Robot. Comput. Integr. Manuf. 45, 34–46 (2017)
5. Zhang, J., Gao, L., Qin, W.: Big-data-driven operation analysis and decision-making
methodology in intelligent workshop. Comput. Integr. Manuf. Syst. 22(05), 1220–1228
(2016)
6. Armendia, M., Cugnon, F., Berglind, L., Ozturk, E., Gil, G., Selmi, J.: Evaluation of machine
tool digital twin for machining operations in industrial environment. Procedia CIRP 82, 231–
236 (2019)
Improvement of Hot Air Drying on Quality
of Xiaocaoba Gastrodia Elata in China

Xiuying Tang1, Chao Tan2, Bin Cheng3(&), Xuemei Leng1, Xiangcai Feng1, and Yinhua Luo1

1 College of Mechanical and Electrical Engineering, Yunnan Agricultural University, Kunming, China
2 Key Lab of Process Analysis and Control of Sichuan Universities, Yibin University, Yibin, China
3 College of Mechanics and Transportation, Southwest Forestry University, Kunming, China
[email protected]

Abstract. As a perennial herb, gastrodia elata is distributed in most parts of China. Its tuber is a valuable Chinese herbal medicine, and controlling its dryness is the most critical factor in preserving the quality of stored gastrodia elata. This paper takes gastrodia tuber drying as the research object and attempts to improve the local processing method, coal-heated hot air drying. Our team uses electricity instead of coal as the energy source for hot air drying and dries at a constant temperature of 50 °C. Experimental results show that the appearance and polysaccharide content of gastrodia elata are not improved; however, the processing time is reduced by one third, and the gastrodin content in boiled gastrodia elata is significantly higher than with the local drying method.

Keywords: Hot air drying · Gastrodia elata · Gastrodin · Polysaccharide

1 Introduction

Gastrodia elata is a perennial herb well known for its treatment effect on dizziness, headache, migraine, infantile convulsion, limb spasm, wind-cold dampness, and neurasthenia [1], and has been recorded for medicinal purposes in China for more than 1000 years. Additionally, many experiments have shown that it also has certain antioxidant and hypoglycemic effects and benefits for vascular headache and concussion sequelae [2, 3].
In recent years, market demand for gastrodia elata has gradually increased. However, wild gastrodia elata is near extinction due to excessive exploitation, so large-scale artificial cultivation imitating the wild environment has emerged. Zhaotong city is one of the main production areas in China. The planting area of gastrodia elata increased from 3400 ha in 2012 to 5367 ha in 2017, and the output value increased
from 300 million Yuan in 2011 to 3.97 billion Yuan in 2017 [4–6]. Owing to the special physical geography and climate of Xiaocaoba, including an average altitude of 1710 m, average annual sunshine of 927.3 h, an annual average temperature of 9.8 °C, and 47% forest coverage, gastrodia elata from Xiaocaoba is well known for its excellent quality [7].

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 480–487, 2020.
https://doi.org/10.1007/978-981-15-2341-0_60
Fresh gastrodia elata is difficult to store for a long time and spoils easily, so drying is the most critical factor in preserving its quality in storage. Through field research, our team found that coal-powered hot air drying is widely used in Xiaocaoba. However, with this method the temperature is difficult to control and adjust, and heating is uneven across positions in the local drying room. If the temperature in the drying room is too low, gastrodia elata is prone to decompose; if it is too high, the surface develops many folds and the effective components are partially destroyed, resulting in poor quality. In addition, the method needs much manpower and space, and easily leaves sulfur residues exceeding the standard.
There are many Chinese studies on the drying of gastrodia elata; the methods mainly include hot-air drying, wind drying, sunlight drying, oven drying, microwave drying, vacuum drying, vacuum freeze drying, infrared drying, and various combined techniques [8–12]. Yandao Liu, Changli Wang, Bin Tang et al. compared five methods (wind drying, sunlight drying, oven drying, vacuum drying, and vacuum freeze drying); their results showed that the best method was vacuum drying at 52–58 °C, followed by vacuum freeze drying and sunlight drying [13]. Ji, Ning, Zhang et al. compared sunlight drying, hot air drying, microwave drying, infrared drying, and combined hot-air and microwave drying; based on comprehensive analysis of appearance, active-ingredient content, production cost, and other factors, hot air drying or hot air combined with microwave drying was the preferred method [14].
Therefore, this paper chooses hot air drying to improve the quality of gastrodia elata. One reason is that the method is already widely used in the Xiaocaoba area; another is that electricity-powered hot air drying is clean and safe to operate and thoroughly solves problems such as mildew and the excessive sulphide residue caused by coal smoke. In addition, electric hot air drying can improve the effective components of gastrodia elata and is suitable for large-scale production, which has guiding significance for local gastrodia elata processing and improvement.

2 Materials and Methods

2.1 Raw Materials


The gastrodia elata samples were grown at Xiaocaoba, Zhaotong city, Yunnan Province, China, in the mountain forest, planted in an imitation wild environment. On January 12, 2019, our research team collected gastrodia elata at 10 am. The topsoil was first dug away with tools; close to the gastrodia elata, samples were collected directly by hand to avoid damage. A total of 46 kg was collected, and the soil retained on the surface was not cleaned off after collection. All samples were divided into two parts: one part was dried by coal-powered hot air drying in the local drying room, and the other was brought back to Kunming on the same day to be dried
by an electricity-powered hot air dryer in the laboratory. At 9:00 am on January 13, the soil-covered samples were cleaned with tap water at the same time.

2.2 Main Equipment


The main equipment includes the local drying facilities and equipment, laboratory equipment, and detection equipment.
Local drying facilities and equipment: drying room, AUY220 electronic balance, coal stove, MT-4612-C infrared thermometer gun (Pro'skit), shovel, turnover box, steamer, etc. The drying room (shown in Fig. 1) is 6 m long and 5 m wide, and the coal stove is 1.2 m long and 0.5 m wide. Vertically, the drying room is divided into two layers: the first is a hot air room, and the second holds the gastrodia elata.

Fig. 1. Local drying room
Fig. 2. Electric thermostatic hot air blower dryer

Laboratory equipment at Yunnan Agricultural University: electric thermostatic air blower dryer (model 101), shown in Fig. 2, AUY220 electronic balance, induction cooker, steamer, knife, etc.
Detection equipment and reagents: LC-20A liquid chromatograph, SPD-20A ultraviolet-visible detector, JA2003 electronic balance, AUY220 electronic balance, high-speed multifunctional grinder (JHF-150), p-hydroxybenzyl alcohol, redistilled water, gastrodin standard (No. 0807-200104), ethyl acetate, ethanol, acetonitrile, chromatographic-grade methanol, ultra-pure water, etc.

2.3 Hot Air Drying


Hot air drying uses hot air as the drying medium, exchanging heat and moisture with the product by convective circulation. Water on the product surface diffuses into the main air stream, so the surface becomes dry; this creates a moisture difference between the inside and outside of the material, driving internal moisture to the surface and outward, where it is finally discharged with the main air stream, achieving the purpose of drying [15].

2.4 The Evaluation Index

(1) Appearance of gastrodia elata. The appearance indicators include color, curl degree, and texture. Colors include yellow, dark yellow, transparent yellow, milky white, etc. The curl degree falls into flat, small curls, and large curls. The texture is either hard or loose.
(2) Dry base water content. Material without moisture is usually called absolutely dry material, and the ratio of the moisture content in the wet material to the absolutely dry material mass is called the dry base water content (p) of the wet material. The calculation formula is as follows [16]:

p = (m − n) / n × 100%
where m and n are the initial weight and the dry weight, respectively.
(3) Gastrodin. Gastrodin is one of the critical effective ingredients of gastrodia elata; according to the Chinese Pharmacopoeia (2015 edition), gastrodin is measured by high-performance liquid chromatography (HPLC) [17].
(4) Polysaccharide. Polysaccharide is another important effective ingredient of gastrodia elata; the anthrone-sulfuric acid method is one measurement method [18]. The polysaccharide content of gastrodia elata was calculated as follows:

w = (C × D × f) / m
where C is the glucose concentration of the polysaccharide solution (mg/mL), D is the dilution ratio of the polysaccharide, f is the conversion factor, m is the mass of the raw gastrodia elata sample (g), and w is the polysaccharide content of gastrodia elata (%).
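Both evaluation formulas above transcribe directly into code; the dry-base check below uses the weights of sample D5 from Table 1, while the polysaccharide values are arbitrary illustrations:

```python
def dry_base_moisture(m, n):
    """Dry-base water content p = (m - n) / n * 100%,
    where m is the initial weight and n the dry weight."""
    return (m - n) / n * 100.0

def polysaccharide_content(C, D, f, m):
    """w = C * D * f / m, where C is the glucose concentration (mg/mL),
    D the dilution ratio, f the conversion factor, m the sample mass (g)."""
    return C * D * f / m

# Sample D5 (Table 1): initial 107.28 g, dry 26.98 g
print(round(dry_base_moisture(107.28, 26.98), 2))  # → 297.63 (percent)
```

Expressed as a ratio, D5's value is about 2.98, which falls inside the 1.801–4.77 range reported in Sect. 3.2.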

2.5 The Technological Process


The local drying process includes: (1) classification; (2) boiling or not boiling; (3) a first drying at about 60 °C; (4) gradually dropping the temperature to about 30 °C; (5) placing the samples in plastic bags to release water naturally; (6) a second drying at about 60 °C, then dropping to 30 °C and bagging again, repeating until the samples are dry. During drying, the weight was recorded every 4 h (excluding time in bags).
The laboratory drying process includes: (1) classification; (2) boiling or not boiling; (3) drying at 50 °C, with the weight recorded every 4 h.

3 Results and Discussion


3.1 The Appearance
As can be seen from Fig. 3, Fig. 3(a) shows the samples dried in the laboratory, of which No. L5 was dried without boiling and No. A5 was boiled before drying. Figure 3(b) shows the samples dried in the local drying room, of which No. 1 was dried without boiling and No. 7 was boiled before drying. For both drying methods the boiled samples are obviously lighter in color than the unboiled ones, and the samples from the local method are lighter than those from the laboratory method, showing a transparent yellow; unboiled samples are dark yellow. In appearance, the curl degree is related to sample weight: the lighter the sample, the more easily it curls. In cross section, the samples were free of hollows and mildew. All samples have a hard texture, but the inner color differs between methods: for laboratory samples, the boiled section is yellow while the unboiled section is white in the middle (Fig. 3(c)); for local drying samples, the boiled section is transparent yellow while the unboiled section is milky white (Fig. 3(d)). In summary, although the differences in appearance between the two methods are significant, both are acceptable to the market.

Fig. 3. Appearance, curl degree, and texture of boiled and not-boiled samples under the two methods: (a) samples dried in the laboratory; (b) samples dried in the local drying room; (c) sections of boiled and not-boiled samples dried in the laboratory; (d) sections of boiled and not-boiled samples dried in the local drying room

3.2 Dry Base Moisture Content


According to the calculated results, the water-loss process differs for gastrodia elata of different weights under the two methods; nonetheless, the general trend is a sharp fall followed by a slow one. Taking D5 and XKD4 as examples, the weight of XKD4 dropped sharply from 0 to 48 h and then changed slowly until 190 h, while D5 lost water rapidly at 0–35 h, slowly after 35 h, and stabilized after 133 h. Therefore, hot air drying by electricity is faster than hot air drying by coal.

The dry base moisture content of the samples ranges from 1.801 to 4.77. According to the calculated data, it is largest in the weight range of 80–110 g and decreases gradually above or below this range. The dry base moisture content of unboiled samples is higher than that of boiled ones. Comparing boiled samples of similar weight between the two methods, the dry base moisture content of samples dried by electric hot air was generally smaller than that of samples dried by coal hot air, whereas the unboiled samples from the two methods were not clearly distinguished. Therefore, electric hot air drying is better than coal hot air drying for boiled samples.

3.3 Gastrodin
An appropriate amount of gastrodin reference substance was dissolved in 3% acetonitrile solution, and solutions of different concentrations were injected into the HPLC to obtain the standard curve Area = 17777.57082 × Amt + 8.0580846, R = 0.99999; detailed steps follow the Chinese Pharmacopoeia. The experimental data are shown in Table 1, where "Lab number" denotes the sample number in the laboratory method and "Local number" the sample number in the local drying method.

Table 1. Gastrodin content (%) of samples in two methods

            Lab no.  Initial wt (g)  Dry wt (g)  Gastrodin (%)    Local no.  Initial wt (g)  Dry wt (g)  Gastrodin (%)
Boiled      A5       128.41          45.84       0.27             XKD8       161             55.3        0.51
            A6       120.99          36.04       0.27             XKD7       154             50.2        0.44
            B5       89.22           21.64       0.33             XKD5       112             32          0.18
            B6       81.78           25.14       0.31             XKD4       108             22          0.19
            C5       70.30           15.31       0.38             XKD1       82              15.3        0.11
            C6       62.30           16.21       0.36             XKD2       74              20.2        0.11
Non-boiled  L6       159.70          33.24       0.04             XK2        154             45.5        0.08
            L5       156.12          42.3        0.07             XK1        144             40.4        0.13
            D5       107.28          26.98       0.07             XK4        130             34.3        0.07
            D6       93.87           23.86       0.01             XK6        105             18.2        0.02
            F5       73.12           17.53       0.01             XK8        80              17          0.03
            F6       60.38           14.01       0.12             XK7        75              18.2        0.01
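The gastrodin percentages in Table 1 are derived from the HPLC standard curve given above; inverting that curve recovers the injected amount from a measured peak area (converting the amount to a content percentage further requires the injection volume, dilution, and sample mass per the pharmacopoeia, which are not reproduced here):

```python
SLOPE, INTERCEPT = 17777.57082, 8.0580846  # standard curve: Area = SLOPE * Amt + INTERCEPT

def gastrodin_amount(peak_area):
    """Invert the HPLC standard curve to get the injected gastrodin amount."""
    return (peak_area - INTERCEPT) / SLOPE
```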

For gastrodia elata of the same weight, the gastrodin content of boiled samples is significantly higher than that of unboiled ones, and across different weights, heavier samples have higher gastrodin content. Comparing the two drying methods for boiled samples, the gastrodin content under electric hot air drying is always above 0.20% (according to the Chinese Pharmacopoeia, gastrodin content in gastrodia elata should not be less than 0.20%), whereas under coal hot air drying all samples except XKD8 and XKD7 fail to meet the requirement. Analysis of these two samples suggests the reason is their large volume and thickness: the local drying room runs at 60–30 °C, and the temperature difference promotes rapid dehydration, while the electric constant-temperature dryer stays around 50 °C, which easily causes uneven dehydration inside and thus many surface folds. Therefore, fresh gastrodia elata under 130 g is better dried at a constant 50 °C by electric hot air, while heavier tubers should be dried at variable temperature; the specific process needs further study.

3.4 Polysaccharide
Aliquots of 0, 1, 2, 3, 4, 5, and 6 mL of 1 mg/mL glucose standard solution were each diluted to volume in 50 mL volumetric flasks. Then 1 mL from each flask was placed in a stoppered tube in an ice water bath, and 4 mL of 0.2% anthrone-sulfuric acid reagent was added. The tubes were then placed in a boiling water bath for 10 min, cooled, and kept in the dark for 10 min. The absorbance was measured at 620 nm, and the regression equation y = 30.65x + 0.6617, R² = 0.9925, was obtained.
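The calibration above maps glucose concentration x to absorbance y; in routine use it is applied in reverse, recovering the concentration of an unknown from its measured absorbance:

```python
def glucose_conc(absorbance, slope=30.65, intercept=0.6617):
    """Invert the anthrone-sulfuric acid calibration y = 30.65x + 0.6617
    (absorbance measured at 620 nm) to recover glucose concentration x."""
    return (absorbance - intercept) / slope
```

The recovered concentration C then enters w = (C × D × f) / m from Sect. 2.4.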
The experimental data show that the polysaccharide content of all samples is between 16.21% and 16.49%. In general, the polysaccharide content of boiled samples is slightly higher than that of unboiled samples, but the difference is negligible, so there was no significant difference in polysaccharide content between the two methods. A possible reason is that the drying temperature of both methods is below 60 °C, and the polysaccharide loss of gastrodia elata below this temperature is small.

4 Conclusions

In this paper, constant-temperature electric hot air drying at 50 °C is compared with the local variable-temperature coal hot air drying. Although the appearance and polysaccharide content are not improved, the former method improves the gastrodin content to meet the national standard. It can therefore be concluded that the 50 °C electric hot air drying method is better than the local hot air drying method. The research group offers the following suggestions for local gastrodia elata drying:
(1) All gastrodia elata should be boiled; the suggested temperature is 90 °C, and the boiling time is 15–20 min for 140–175 g, 10–15 min for 105–140 g, and 8–10 min for 70–105 g.
(2) The local drying room should be redesigned as a fully sealed, temperature-adjustable electric hot air drying room. The drying temperature should be controlled at 50 °C for tubers below 130 g. For gastrodia elata over 130 g, backwater treatment is recommended, but the specific process needs further study.

Acknowledgements. This work was supported by the Opening Fund of Key Lab of Process
Analysis and Control of Sichuan Universities of China (2017001).

References
1. Su, S.: Herbal Classic of Materia Medica (annotated). Fujian Science and Technology Press,
Fu Zhou (1988)
2. Yang, S., Lan, J., Xu, J.: Research progress of gastrodia elata. Chin. Tradit. Herbal Drugs 31
(1), 66–69 (2000)
3. Kong, X., Liu, T., Guan, J.: Effect of polysaccharide from the Gastrodia elata Blume on
metabolism of free radicals in subacute aging model mice. J. Anhui Univ. Nat. Sci. Ed. 29
(2), 95–99 (2005)
4. Yunnan "zhaotong gastrodia elata" industry development becoming strong. http://yn.yunnan.cn/html/2018-04/20/content_5172277.htm. Accessed 21 June 2019
5. The investigation of the gastrodia elata industry by National gastrodia conference organizing committee. http://www.emushroom.net/news/201208/08/11700.html. Accessed 21 June 2019
6. Report on the development of gastrodia elata industry by Zhaotong people's government. http://www.ztrd.gov.cn/article/201408/t20140821_1274_1.shtm. Accessed 21 June 2019
7. Xiaocaoba gastrodia elata in Zhaotong City-the world's original factory. https://www.taodocs.com/p-50138411.html. Accessed 21 June 2019
8. Qin, J., Zhang, J., Zhou, H.: Influence on the content of gastrodin of different processing
methods. J. Shanxi Univ. Sci. Technol. 23, 76–79 (2005)
9. Ning, Z., Mao, C., Lu, T., et al.: Effects of different processing methods on effective
components and sulfur dioxide residue in Gastrodiae Rhizoma. China J. Chin. Materia Med.
39(15), 2814–2818 (2014)
10. Huang, X., Qi, C., Zhu, Y., et al.: Determination of Gastrodin and Gastrodigenin in fresh and
different processing Gastrodia elata BI. J. ZhaoTong Univ. 39(5), 43–47 (2017)
11. Yong, W., Zhao, Y., Gu, Y.: Effect of different drying methods on quality of Rhizoma
Gastrodiae. Chin. Trad. Pat. Med. 27(6), 673–676 (2005)
12. Tian, Z., Wang, J., Liu, J., et al.: Effects of different processing methods and steamed time on
quality of Zhaotong Gastrodiae rhizoma. Southwest China J. Agric. Sci. 29(7), 1701–1706
(2016)
13. Liu, Y., Wang, C., Tang, B., et al.: Effect of different drying methods on the content of
gastrodin in gastrodia elata. Mod. Trad. Chin. Med. 33(3), 108–109 (2013)
14. Ji, D., Ning, Z., Zhang, X.: Effects of different drying methods on quality of gastrodiae Rhizoma. China J. Chin. Materia Med. 41(14), 2587–2590 (2016)
15. Yu, M., Zhang, X., Mu, G., et al.: Research progress on the application of hot air drying technology in China. Agric. Sci. Technol. Equip. 8, 14–16 (2013)
16. Du, X.: Xinxing Shipin Ganzao Jishu Ji Yingyong. Chemistry Industry Press, Beijing (2018)
17. National Pharmacopoeia Commission: Chinese Pharmacopoeia, 2015th edn. China Medical
Science and Technology Press, Beijing (2015)
18. Peng, Z.: Research overview of polysaccharide from gastrodia elata. J. Gansu Coll. TCM 25
(4), 49–51 (2008)
Installation Parameters Optimization
of Hot Air Distributor During Centrifugal
Spray Drying

Yunfei Liu and Jingjing Xu(&)

School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
[email protected], [email protected]

Abstract. The location of the hot air distributor plays an important role in centrifugal spray drying. Based on response surface optimization and numerical simulation, the overall desirability of the fine powder ratio and the outlet air moisture is used as the response value, and the installation angle of the hot air distributor and the distance between the atomizer and the hot air distributor are optimized. A better drying effect was obtained by optimizing the parameters of the hot air distributor, which provides guidance for the industrial production of centrifugal spray drying.

Keywords: Hot air distributor · Spray drying · Response surface optimization · Numerical simulation

Nomenclature
Mi indicator value
Mmax, Mmin the maximum and minimum values of each indicator
d1 the normalized value of the fine powder ratio
d2 the normalized value of the outlet air moisture

1 Introduction

Many parameters, such as atomizer speed, hot air inlet temperature, and inlet air volume, affect the centrifugal spray drying effect, and many researchers have studied them. Meanwhile, the hot air distributor, which makes the hot air contact the droplets sprayed from the atomizer fully and uniformly, also influences the drying effect, for example by reducing or avoiding sticking or scorching. Li [1] showed that too large or too small an installation angle of the hot air distributor causes serious sticking, negating the advantages of spray drying. Gao [2] proposed improving the drying effect by improving the hot air distributor. Yang [3] showed that the tangential velocity provided by the hot air can change the dispersion of the droplets. However, these studies did not give specific numerical results. In this paper, the effect of the hot air distributor is fully considered, and the installation angle and distance of the hot air distributor are adjusted to change the hot air tangential speed and
© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 488–494, 2020.
https://doi.org/10.1007/978-981-15-2341-0_61

increase contact time between hot air and droplets. The overall desirability (OD) of fine
powder ratio and outlet air moisture is defined as the goal to optimize the parameters
based on central composite design and response surface methodology (CCD-RSM),
and the optimal installation angle and the distance of the hot air distributor with better
drying performance are obtained, which plays a guiding role in actual industrial
production.

2 CFX Simulation and Verification

According to the principles of fluid dynamics simulation, the model of the
industrialized centrifugal spray drying tower is simplified [4–6], and important
parts such as the hot air inlet, hot air distributor and atomizer are retained. The
following process verifies the validity of the numerical simulation. The 3D model of
the 5 T/d spray drying tower is shown in Fig. 1, and the enlarged hot air distributor
and atomizer are shown in Fig. 2.

Fig. 1. 3D model of the drying tower (air inlet, wall, air outlet and particle outlet marked)

Fig. 2. Enlarged hot air distributor and atomizer (droplet inlet)



1. Hot air inlet: The hot air, with a temperature of 290 °C and a flow rate of
   2.173 kg/s calculated from the heat balance and material balance [7, 8], is
   composed of 0.051 H2O, 0.201 CO2 and 0.748 N2 [9], and its turbulent flow
   intensity is 4.33%.
2. Feed liquid inlet: The feed liquid, with a temperature of 30 °C, a flow rate of
   0.175 kg/s and a moisture content of 67%, is divided into droplets by the atomizer
   rotating at 7780 r/min. The droplets obey the Nukiyama-Tanasawa distribution [10]
   with distribution parameters of 2.8953 and 0, and enter the drying tower at a
   tangential speed of 85.267 m/s and a radial speed of 18.111 m/s.
3. Hot air outlet: The average static pressure generated by the induced draft fan is
−300 Pa.
4. Particle outlet: The flow rate is 0.
5. Wall boundary: Considering that the wall surface of the insulation layer has a
   small amount of heat loss, the wall is set as a surface with a heat dissipation
   coefficient of 0.961 W/(m²·K).
6. The installation angle of the hot air distributor and the distance between the hot
   air distributor and the atomizing disk are 0° and 250 mm, respectively.
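The feed and product figures behind boundary conditions 1 and 2 follow from a simple material balance; the sketch below reproduces it, assuming the dry solids are conserved and that the product leaves at about 2% moisture (the value reported in Table 1). This is an illustrative check, not the full heat-balance calculation of [7, 8].

```python
# Sketch of the material balance behind the boundary conditions above.
# Figures taken from the text: feed 0.175 kg/s at 67% moisture, product
# moisture about 2% (from Table 1).

def evaporation_load(feed_rate, feed_moisture, product_moisture):
    """Return (water evaporated, product rate) in kg/s for a spray dryer."""
    solids = feed_rate * (1.0 - feed_moisture)   # dry solids are conserved
    product = solids / (1.0 - product_moisture)  # wet product leaving the tower
    return feed_rate - product, product

water, product = evaporation_load(0.175, 0.67, 0.02)
print(round(water, 4), round(product, 4))  # ≈ 0.1161 kg/s evaporated, 0.0589 kg/s product
```

The evaporation load is what the 2.173 kg/s of hot air must supply latent heat for, which is why the air flow rate is fixed by the heat balance.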
The numerical simulation results are shown in Table 1.

Table 1. The comparison between numerical analysis and industrial measured data
Unit Industrial data Simulation results Relative error/%
Outlet air temperature °C 140 137.40 1.86
Ratio of 20–150 µm % ≥98 97.56 0.45
Product mean volume diameter µm 60–70 61.23 –
Particle moisture % 2 2.06 3.00

The simulation results, such as the temperature distribution, particle moisture and
particle size distribution, are in good agreement with the industrial data, indicating
that the numerical simulation of centrifugal spray drying is reliable.

3 Optimization and Result Verification

3.1 CCD-RSM Parameters Optimization


In the CCD-RSM optimization [11, 12], the installation angle of the hot air distributor
and the distance between the hot air distributor and the atomizer are regarded as
variables, and the OD of fine powder ratio and outlet air moisture is defined as the
goal. The values of each variable and the test design are shown in Table 2, and the
results for the different parameters and the test design points are shown in Table 3.

OD Calculation [13, 14]: The normalization of fine powder ratio and outlet air
moisture are calculated through Eqs. (1) and (2) based on Hassan method, respectively.
Then the OD of fine powder ratio and outlet air moisture is calculated through Eq. (3).

d1 = (Mmax − Mi) / (Mmax − Mmin)   (1)

d2 = (Mi − Mmin) / (Mmax − Mmin)   (2)

OD = (d1 × d2)^(1/2)   (3)
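The normalization and OD calculation of Eqs. (1)–(3) can be sketched as follows. This is a minimal illustration of the Hassan-method normalization with generic indicator values; the function names are ours, and the example numbers are not from Table 3.

```python
import math

def normalize_smaller_better(m_i, m_max, m_min):
    """Eq. (1) form: d = (Mmax - Mi)/(Mmax - Mmin); 1 is best when smaller is better."""
    return (m_max - m_i) / (m_max - m_min)

def normalize_larger_better(m_i, m_max, m_min):
    """Eq. (2) form: d = (Mi - Mmin)/(Mmax - Mmin); 1 is best when larger is better."""
    return (m_i - m_min) / (m_max - m_min)

def overall_desirability(d1, d2):
    """Eq. (3): geometric mean of the two normalized indicators."""
    return math.sqrt(d1 * d2)

# Example: an indicator one quarter of the way up its range
d1 = normalize_smaller_better(0.25, 1.0, 0.0)   # 0.75
d2 = normalize_larger_better(0.25, 1.0, 0.0)    # 0.25
print(round(overall_desirability(d1, d2), 4))   # 0.433
```

The geometric mean drives the OD to zero whenever either indicator is at its worst value, which is why test 2 in Table 3 has OD = 0.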

Table 2. The factor and level of the CCD


Factor Label Level
−1 0 1
Installation angle θ/° A 0 5 10
Distance d/mm B 210 250 290
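A two-factor central composite design of the kind summarized in Table 2 can be generated in coded units as follows. This is a generic sketch of a face-centered CCD (α = 1) with the helper names chosen by us; it is not the exact point set used in Table 3.

```python
from itertools import product

def ccd_two_factor(alpha=1.0, n_center=1):
    """Coded points of a two-factor central composite design:
    4 factorial corners, 4 axial points at +/-alpha, plus center replicates."""
    factorial = list(product((-1.0, 1.0), repeat=2))
    axial = [(-alpha, 0.0), (alpha, 0.0), (0.0, -alpha), (0.0, alpha)]
    center = [(0.0, 0.0)] * n_center
    return factorial + axial + center

def decode(coded, lo, hi):
    """Map a coded level in [-1, 1] to the actual factor range."""
    mid, half = (hi + lo) / 2.0, (hi - lo) / 2.0
    return mid + coded * half

points = ccd_two_factor()
print(len(points))            # 9 design points
print(decode(-1.0, 210, 290)) # distance level -1 -> 210 mm
```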

Table 3. Design test points and results


Test Distance Installation angle Fine powder ratio Outlet air moisture OD
1 260 10 0.09785 0.05149 0.40813
2 210 10 0.09763 0.05560 0.00000
3 285 5 0.09791 0.04219 0.82540
4 220 5 0.09801 0.04212 0.96250
5 260 5 0.09794 0.04286 0.85266
6 210 5 0.09786 0.04105 0.78593
7 210 0 0.09796 0.04587 0.75959
8 260 0 0.09800 0.04849 0.68701
9 220 0 0.09794 0.04683 0.69848
10 280 10 0.09790 0.04840 0.59299
11 235 10 0.09771 0.05463 0.11636
12 235 0 0.09794 0.05148 0.48122
13 285 0 0.09797 0.05066 0.54840

The data in Table 3 (angle, distance and OD) are imported into the Design-Expert
10.0.7 software for multivariate quadratic fitting. The result is Eq. (4):

OD = 2.5662 − 0.0128A − 0.1265B + 0.0010AB + 0.00002A² − 0.0163B²   (4)
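The quadratic fit behind Eq. (4) can be reproduced in principle with ordinary least squares on the design matrix [1, A, B, AB, A², B²]. The sketch below recovers known coefficients from synthetic data rather than the paper's Design-Expert run; all data in it are generated, not taken from Table 3.

```python
import numpy as np

def fit_quadratic(A, B, OD):
    """Least-squares fit of OD = b0 + b1*A + b2*B + b3*A*B + b4*A^2 + b5*B^2."""
    X = np.column_stack([np.ones_like(A), A, B, A * B, A**2, B**2])
    coef, *_ = np.linalg.lstsq(X, OD, rcond=None)
    return coef

# Synthetic check: generate noise-free data from known coefficients, then recover them.
rng = np.random.default_rng(0)
A = rng.uniform(-1, 1, 30)
B = rng.uniform(-1, 1, 30)
true = np.array([0.8, -0.1, 0.3, 0.05, -0.2, -0.4])
OD = np.column_stack([np.ones_like(A), A, B, A * B, A**2, B**2]) @ true
print(np.allclose(fit_quadratic(A, B, OD), true))  # True
```

With real (noisy) responses the same fit yields the ANOVA quantities of Table 4 (sum of squares, F and p values) via standard regression diagnostics.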



The variance results are shown in Table 4.

Table 4. Variance results


Source Sum of squares df Mean square F value p-value Prob > F
Model 0.874944 5 0.174989 12.83377 0.002056 Significant
A 0.154833 1 0.154833 11.3555 0.011924
B 0.470925 1 0.470925 34.53786 0.000614
AB 0.170281 1 0.170281 12.48849 0.009548
A2 0.001435 1 0.001435 0.105208 0.755145
B2 0.44535 1 0.44535 32.66218 0.000724
Residual 0.095445 7 0.013635
Cor total 0.97039 12

The results in Table 4 show that the model is significant, so the method can be used
to optimize the process parameters of the centrifugal spray drying. The values of R²
and R²adj are 0.9016 and 0.8314, respectively, indicating that the model agrees well
with the simulation. The p-values of B and B² are extremely small, indicating that the
distance has a highly significant influence on the results; A is also significant while
A² is not, indicating that the distance has a greater influence on the spray drying
effect than the installation angle.
The optimization result is an installation angle of 5.741° and a distance of
219.758 mm. Considering the actual situation, the installation angle is set to 6° and
the distance to 220 mm.

3.2 Verify Optimized Process Parameters


The results before and after optimization are compared as shown in Table 5.

Table 5. The results before and after optimization


Unit Optimized Unoptimized
Installation angle ° 6 0
Distance mm 220 250
Ratio of 20–150 µm % 98.04 97.56
Product mean volume diameter µm 63.49 61.23
Particle moisture % 2.03 2.06

The optimization results show that the optimized parameters can improve the drying
effect of spray drying, with lower particle moisture, a larger ratio of 20–150 µm
particles and lower product wear, showing that the optimization contributes to
controlling product wear and ensuring particle moisture.

3.3 Optimization Results

Fig. 3. Air temperature distribution before optimization

Fig. 4. Air temperature distribution after optimization

As can be seen in Figs. 3, 4 and 5, the hot air temperature distribution in the drying
tower after optimization provides a more uniform energy field for the droplets, which
ensures that the droplets are heated evenly and yields particles with good roundness
and moisture content. It can be seen from the particle size distribution that the
proportion of small particles decreases; a lower proportion of fine powder is
beneficial for reducing waste and increasing production.

Fig. 5. Particle size distribution before and after optimization



4 Conclusions
1. The influence of the installation position of the hot air distributor on the
   centrifugal spray drying effect was verified by numerical simulation. A reasonable
   installation position of the hot air distributor is of great significance to
   centrifugal spray drying.
2. The overall desirability (OD) of fine powder ratio and outlet air moisture is defined
as the goal to optimize installation angle and the distance of the hot air distributor
based on CCD-RSM, and the optimal installation angle and the distance are 6° and
220 mm, respectively.
3. The optimization results show that reasonable position of the hot air distributor can
improve the centrifugal spray drying effect and provide guidance for industrial drying.

References
1. Li, Q.: The improvement and optimization of centrifugal spray-drying tower. Dry. Technol.
Equip. 9(5), 268–273 (2011)
2. Gao, Z.L.: Development and application of electric high speed centrifugal spray dryer.
Chem. Eng. 26(3), 58–60 (1998)
3. Yang, S.J., Wei, Y.C., Woo, M.W., Wu, D.: Numerical simulation of mono-disperse droplet
spray dryer under influence of swirling flow. CIESC J. 69(9), 3814–3824 (2018)
4. Xu, J.J., Wang, Z.T., Yuan, K.: Numerical simulation of CFX gas-liquid two-phase flow in
spray drying tower for preparation of FCC catalyst. In: 2014 ANSYS China Technology
Conference, pp. 21–23 (2014)
5. Huang, L.X., Kumar, K., Mujumdar, A.S.: Simulation of a spray dryer fitted with a rotary
disk atomizer using a three-dimensional computational fluid dynamic model. Dry Technol.
22, 1489–1515 (2004)
6. Huang, L.X., Passos, M.L., Kumar, K.: A three-dimensional simulation of a spray dryer
fitted with a rotary atomizer. Dry Technol. 23, 1859–1873 (2005)
7. Wang, B.H., Wang, X.Z.: Two heat balance methods in spray drying process. Dry. Technol.
Equip. 9(2), 76–81 (2011)
8. Liu, G.W.: Spray Drying Technology, pp. 33–36. China Light Industry Press, Beijing (2001)
9. Xu, S.H.: Direct calculation method of flue gas physical properties. J. Suzhou Inst. Silk Text.
E Technol. 19(3), 32–36 (1999)
10. González-Tello, P., Camacho, F., Vicaria, J.M.: A modified Nukiyama-Tanasawa distribu-
tion function and a Rosin-Rammler model for the particle-size-distribution analysis. Powder
Technol. 186, 278–281 (2008)
11. Ma, Y.H., Lu, J.Y., Hu, Z.Z., Wei, S.H.: Preparation of 1-pentene/1-octene/1-dodecene
terpolymer drag reducer by response surface method. CIESC J. 68, 2195–2203 (2017)
12. Wang, X.H., Xia, L.L., Hu, M., Song, Y.: Optimization of extraction process for compound
Tongmai prescription by Box-Behnken response surface methodology combined with multi-
index evaluation. Chin. J. Hosp. Pharm. 37, 712–716 (2017)
13. Wu, W., Cui, G.H., Lu, B.: Optimization of multiple variables: application of central
composite design and overall desirability. Chin. Pharm. J. 35, 532 (2000)
14. Hassan, E.E., Parish, R.C., Gallo, J.M.: Optimized formulation of magnetic chitosan
microspheres containing the anticancer agent, oxantrazole. Pharm. Res. 9, 390–397 (1992)
Wear Mechanism of Curved-Surface Subsoiler
Based on Discrete Element Method

Jinguang Li1, Liangliang Zou1,2, Xuemei Liu1,2, and Jin Yuan1,2


1
College of Mechanical and Electronic Engineering,
Shandong Agricultural University, Tai’an 271018, China
[email protected]
2
Shandong Provincial Key Laboratory of Horticultural
Machinery and Equipment, Tai’an 271018, China

Abstract. The subsoiling technique in conservation tillage can effectively break the
bottom plow layer and improve the tillage structure. However, the wear of the
subsoiler is an important obstacle to the popularization of subsoiling. In this paper,
the surfaces of the subsoiler are defined, and the subsoiling process of the
curved-surface subsoiler at different speeds is simulated by the discrete element
method. By comparison with an actual, heavily worn subsoiler, the results show that
the main wear surfaces are the upper and front surfaces of the subsoiler tip, the
front surface of the subsoiler-surface and the front surface of the subsoiler handle;
the wear degree of the subsoiler is positively correlated with the working speed,
with the subsoiler handle most strongly correlated with speed.

Keywords: Subsoiling · Discrete element method · Simulation

1 Introduction

Subsoiling technology is one of the basic contents of conservation tillage [1]. It can
effectively break the plow pan and deepen the tillage layer, which is beneficial to
the renewal of rain and oxygen in the soil [2]. However, there are many problems in
subsoiling, such as high resistance, serious wear of subsoilers and high energy
consumption, which hinder the development and popularization of subsoiling
technology. Therefore, clarifying the wear mechanism of the subsoiler becomes an
important condition for solving these problems. However, the interaction between soil
and subsoiler is very complicated and difficult to analyze by traditional test
methods.
With the development of modern science and technology, the discrete element method
has been proposed for analyzing the motion laws and mechanical properties of complex
granular systems [3]. It can effectively solve some complex dynamic problems in
agricultural soil cultivation, providing new ways to explore the relationship between
soil particles and agricultural implements.
In this paper, to explore the wear mechanism of the curved-surface subsoiler, the
discrete element method is used to simulate the subsoiling process at three working
speeds. Using the slicing function in the EDEM post-processing module, the main wear
surfaces of the subsoiler are identified. The wear mechanism of the subsoiler is
explored for

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 495–503, 2020.
https://doi.org/10.1007/978-981-15-2341-0_62

reducing wear, resistance and energy consumption, which provides a theoretical basis
for the optimal design of the subsoiler.

2 Establishment of Discrete Element Simulation Model

2.1 Subsoiler Model


Accurate geometric models are a prerequisite for accurate simulation results. To
ensure that the simulation matches the field test, the geometric parameters of the
curved-surface subsoiler are obtained by reverse engineering at a 1:1 ratio. First,
the subsoiler is pre-treated by spraying a coloring penetrant, and then
three-dimensional scanning is performed to obtain the data point cloud. The data
point cloud is repaired, merged, smoothed and physically restored using software such
as Geomagic Studio and Design Modeler. Finally, the geometric entity is exported.
The modeling process of the subsoiler is shown in Fig. 1.

a. 3-D laser scanner b. Data point cloud c. Subsoiler model

Fig. 1. The modeling process of subsoiler

2.2 Soil Model


Establishing an accurate soil particle model is the basis for ensuring the validity of
the simulation results [4, 5]. According to a large number of studies, the basic
structures of soil particles differ and are mainly divided into spherical block
particles, core particles, flake particles and column particles. The four particle
models are shown in Fig. 2.

a. Block particle b. Core particle c. Flake particle d. Column particle

Fig. 2. Soil particle model

Soil compaction is an indicator of the internal strength of the soil and has a great
impact on the subsoiling operation. Consistent compactness between the simulated soil
and the actual soil is one of the important conditions for ensuring simulation
accuracy. In order to simulate the real soil state as closely as possible, six test
points were selected at the test site; the soil compaction at different depths is
shown in Table 1.

Table 1. Soil compaction


Test depths/mm 50 100 150 200 250 300
Soil compaction/kPa Test point 1 10 52.9 89 162 192 332
Test point 2 8 60 95 152 189 341
Test point 3 7 58 97 156 178 314
Test point 4 11 80 102 176 168 324
Test point 5 9 67 121 164 190 350
Test point 6 11 62 99 183 179 310
Average value 9.4 63.6 103.6 163.8 184.6 318.9

Existing soil-related studies have shown that there are bond forces between soil
particles, which have a direct impact on soil compaction. Therefore, there should
also be adhesion between the simulated soil particles [6]. The particles are regarded
as viscous bodies, and the Bonding model in the Hertz-Mindlin contact model is used
as the final particle contact model [7]. After bonding, bonds are formed to provide
the bonding force.
In this paper, two particle factories are set up to produce two layers of soil. The
bottom layer is the plow-bottom layer and the surface layer is the tillage layer. The
particle formation time is 5 s, and a total of 620,000 particles are formed. After 1 s
deposition, the particles are bonded at 6 s to form bonds. The soil model is shown in
Fig. 3.

a. Two layers of the soil b. Bonds

Fig. 3. Soil model
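The Bonding model referenced above updates bond forces incrementally and deletes a bond once a stress limit is exceeded. The sketch below follows only the normal-direction part of the Potyondy-Cundall bonded-particle formulation; all stiffness, radius and strength values are illustrative, not the calibrated soil parameters of this paper.

```python
import math

def bond_step(f_n, v_n, radius, k_n, sigma_crit, dt):
    """One normal-direction update of a bond between two particles.

    f_n: current normal bond force (N, tension positive)
    v_n: relative normal velocity (m/s, separation positive)
    radius: bond radius (m); k_n: normal stiffness (Pa/m)
    Returns (new force, still_bonded)."""
    area = math.pi * radius**2
    f_n = f_n + v_n * area * k_n * dt      # incremental force update
    sigma = f_n / area                     # normal stress carried by the bond
    return f_n, abs(sigma) <= sigma_crit   # bond breaks past the critical stress

f, ok, steps = 0.0, True, 0
while ok and steps < 1000:                 # two particles slowly pulling apart
    f, ok = bond_step(f, 1e-4, 1e-3, 5e8, 250.0, 1e-4)
    steps += 1
print(steps, ok)  # the bond breaks in tension after a finite number of steps
```

In the full model a shear force, bending and torsion moments evolve the same way, and shear stress is checked against its own critical value.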

2.3 Subsoiling Model


Discrete element simulation parameters include material parameters and contact
parameters. These parameters are taken from the literature [8, 9]. Based on the
relevant literature, repeated simulation debugging of the parameter combinations is
carried out, and the final simulation parameters are determined as shown in Table 2.

Table 2. Basic parameters of the discrete element model


Parameter Numerical value
Density of soil particles q1/(kg/m3) 1346
Poisson’s ratio of soil particles v1 0.4
Shear modulus of soil particles G1/Pa 1 × 10⁶
Density of subsoiler q2/(kg/m3) 7830
Poisson’s ratio of subsoiler v2 0.35
Shear modulus of subsoiler G2/Pa 7.27 × 10¹⁰
Coefficient of restitution between the soil and soil e1 0.2
Coefficient of rolling friction between the soil and soil e2 0.3
Coefficient of static friction between the soil and soil e3 0.4
Coefficient of restitution between the soil and subsoiler f1 0.3
Coefficient of rolling friction between the soil and subsoiler f2 0.05
Coefficient of static friction between the soil and subsoiler f3 0.5
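In the Hertz-Mindlin contact model used here, the shear moduli and Poisson's ratios of Table 2 combine into effective pairwise properties for each soil-tool contact. The sketch below shows that standard combination (with E = 2G(1 + ν) for each material); the 2 mm particle radius and the flat-tool (infinite radius) assumption are ours, for illustration only.

```python
def effective_properties(G1, nu1, R1, G2, nu2, R2):
    """Effective Young's modulus and radius for a Hertz contact pair."""
    E1, E2 = 2 * G1 * (1 + nu1), 2 * G2 * (1 + nu2)   # E = 2G(1 + nu)
    inv_E = (1 - nu1**2) / E1 + (1 - nu2**2) / E2
    inv_R = 1 / R1 + 1 / R2
    return 1 / inv_E, 1 / inv_R

# Soil particle (Table 2 values) against the steel subsoiler, assuming a
# 2 mm particle radius and treating the tool locally as a flat surface.
E_eff, R_eff = effective_properties(1e6, 0.4, 2e-3, 7.27e10, 0.35, float("inf"))
print(f"{E_eff:.3e}", f"{R_eff:.1e}")
```

Because the soil is far softer than the steel, the effective modulus is dominated almost entirely by the soil particle term.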

The corresponding simulation parameters are entered in the software, and the
three-dimensional model of the subsoiler is imported into the discrete element
software. The tillage depth is set to 300 mm, in line with the actual subsoiling
operation. Finally, the subsoiling process is simulated at three working speeds:
0.6 m/s, 1.0 m/s and 1.5 m/s. The subsoiling discrete element model is shown in
Fig. 4.

Fig. 4. Subsoiling model

3 Results and Analysis of Discrete Element Simulations


3.1 Analysis of the Wear Surface of the Subsoiler
In order to facilitate the study of soil contact during subsoiler operation, the
surfaces of the subsoiler are marked as shown in Fig. 5.

Fig. 5. The marked figure of curved-surface subsoiler

3.1.1 The Wear Surface of Subsoiler Tip


Taking a point 10 mm behind the subsoiler tip as the starting point, a slice with a
thickness of 400 mm is established. The contact area of the subsoiler tip is shown in
Fig. 6.
Figure 6 shows that the upper surface and front surface of the subsoiler tip are in
close contact with the soil and are the main wear surfaces. The lower surface and the
back surface of the subsoiler tip are non-wear surfaces. The main reason for this
phenomenon is that the soil above the subsoiler tip is uplifted, while the working
object of the subsoiler tip is the plow-bottom layer, which is difficult to deform.

a. 0.6 m/s b. 1.0 m/s c. 1.5 m/s

Fig. 6. Contact areas of subsoiler tip

The backfilling of the soil is not timely, so a confined space forms below the
subsoiler tip. It can be seen that the upper surface and front surface of the
subsoiler tip are worn severely, while the lower surface and the rear surface are
worn lightly. Figure 6 shows that the contact range between the soil and the
subsoiler tip at a working speed of 1.5 m/s is larger than at 1.0 m/s and 0.6 m/s;
therefore, the higher the working speed, the more serious the wear of the subsoiler
tip.
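The qualitative finding that wear grows with working speed can be made quantitative with a sliding-wear law such as Archard's (worn volume proportional to normal load times sliding distance over hardness). This is a generic sketch, not a model used in the paper; the wear coefficient, load and hardness values are illustrative.

```python
def archard_wear_volume(k, load_N, sliding_m, hardness_Pa):
    """Archard wear law: V = k * F * s / H (worn volume in m^3)."""
    return k * load_N * sliding_m / hardness_Pa

# For a fixed tillage time t, sliding distance s = v * t, so predicted wear
# scales linearly with working speed when the contact load is held equal.
t = 3600.0                          # one hour of operation (illustrative)
for v in (0.6, 1.0, 1.5):           # the three simulated working speeds
    V = archard_wear_volume(1e-4, 800.0, v * t, 1.5e9)
    print(v, V)
```

In practice the contact force itself also grows with speed, as the simulations show, so the real speed dependence is stronger than linear.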

3.1.2 The Wear Surface of Subsoiler-Surface


The simulation model is sliced in the vertical direction. The slice center is 100 mm
behind the subsoiler tip and the slice thickness is 30 mm. The contact area of the
subsoiler-surface is shown in Fig. 7.

a. 0.6 m/s b. 1.0 m/s c. 1.5 m/s

Fig. 7. Contact areas of subsoiler-surface

As shown in Fig. 7, the soil is in close contact with the front surface of the
subsoiler-surface, while there is a gap between the back surface and the soil.
Therefore, the front surface of the subsoiler-surface is the wear surface. The gap
between the soil and the subsoiler is largest at a working speed of 1.5 m/s and small
at 0.6 m/s. The main reason for the gap is the special geometric shape and working
angle of the subsoiler, which cause an asymmetric soil bulge around the subsoiler
after the operation; this becomes more obvious as the speed increases.

3.1.3 The Wear Surface of Subsoiler Handle


The slice of the subsoiler handle is centered 300 mm above the subsoiler tip, with a
thickness of 30 mm. The contact areas of the subsoiler handle are shown in Fig. 8.

a. 0.6 m/s b. 1.0 m/s c. 1.5 m/s

Fig. 8. Contact areas of subsoiler handle

Because of the squeezing and shearing action of the subsoiler handle, the soil moves
to both sides, forming contact areas on both sides of the subsoiler. As the speed
increases, the front of the subsoiler handle comes into closer contact with the soil
and is regarded as the main wear surface. At the same time, the gap between the soil
and the back surface of the subsoiler handle increases, the friction surface
decreases, and the wear there decreases. This is due to the special curved shape of
the subsoiler: the higher the speed, the more the soil in front of the shovel is
lifted and the more severe the soil movement. The soil in contact with the subsoiler
handle belongs to the tillage layer, whose density is lower than that of the
plow-bottom layer, so its movement is more intense. This increases the difference in
soil movement between the two sides of the subsoiler handle and ultimately produces a
difference in wear on the two sides.

3.2 Verification of the Subsoiler Wear


Figure 9 shows the wear of the curved-surface subsoiler after long-term subsoiling
operation. Figure 9a shows the back surface of subsoiler, and Fig. 9b shows the front
surface of subsoiler.
It can be seen from the figure that the upper surface of the subsoiler tip has been
worn seriously during operation, and the edge of the subsoiler tip has been
completely worn and deformed. After a long period of operation, the front face of the
subsoiler-surface has also been worn seriously and rubbed to a smooth surface, which
is not conducive to soil debonding and drag reduction.

a. Back surface b. Front surface

Fig. 9. Wear condition of the curved-surface subsoiler

The lower surface and the back surface of the subsoiler tip and the back surface of
the subsoiler handle have less contact with the soil and show basically no wear. In
addition, the paint sprayed on the subsoiler has not been rubbed off by the soil. The
wear state of the curved-surface subsoiler is consistent with the results of the
discrete element analysis in Sect. 3.1 of this paper, which verifies the validity of
the discrete element simulation results.

4 Conclusions

In order to explore the wear mechanism of the curved-surface subsoiler, the various
surfaces of the subsoiler are defined in this paper, and the subsoiling process is
simulated with discrete element simulation software. Combined with an actual
subsoiler after long-term operation, the wear surfaces are analyzed and the following
conclusions are drawn.
(1) The upper surface and the front surface of the subsoiler tip, the front surface
    of the subsoiler-surface and the front surface of the subsoiler handle are the
    friction surfaces of the curved-surface subsoiler. These surfaces are the main
    wear parts of the subsoiler and can be treated with emphasis when optimizing for
    wear reduction.
(2) The wear of the curved-surface subsoiler is related to the working speed. The
    greater the speed, the closer the contact between the subsoiler and the soil, the
    greater the force and the greater the wear. Among the surfaces, the wear of the
    subsoiler handle is most strongly correlated with the change in speed.

Acknowledgements. This work was supported by National Key R&D Program of China
(2017YFD0701103-3) and Key research and development plan of Shandong Province
(2018GNC112017), (2017GNC12108).

References
1. Zhiqiong, W., Weixin, W., et al.: Development status of subsoiling technology under
conservation tillage conditions at home and abroad. Agric. Res. 6, 256–257 (2016)
2. Baumhardt, R.L., Jones, O.R.: Residue management and tillage effects on soil-water storage
and grain yield of dryland wheat and sorghum for a clay loam in Texas. Soil Tillage Res. 68
(02), 71–82 (2002)
3. Cundall, P.A., Strack, O.D.L.: A discrete numerical model for granular assembles.
Geotechnique 29(1), 47–65 (1979)
4. Sadek, M.A., Tekeste, M., Naderi, M.: Calibration of soil compaction behavior using discrete
element method (DEM). In: 2017 ASABE Annual International Meeting, Spokane, WA, USA
(2017)
5. Yuxiang, H., Chengguang, H., Mengzhao, Y., et al.: Discrete element simulation and
experiment on disturbance behavior of subsoiling. J. Agric. Mach. 47(07), 80–88 (2016)
6. Hang, C., Gao, X., Yuan, M., et al.: Discrete element simulations and tests of soil disturbance
as affected by the tine spacing of subsoiler. Biosyst. Eng. 168, 73–82 (2018)
7. Potyondy, D.O., Cundall, P.A.: A bonded particle model for rock. Int. J. Rock Mech. Min.
Sci. 41(8), 1329–1364 (2004)
8. Tamás, K., Jóri, I.J., Mouazen, A.M.: Modelling soil–sweep interaction with discrete element
method. Soil Tillage Res. 134, 223–231 (2013)
9. Liu, X., Du, S., Yuan, J., Yang, L., et al.: Analysis and test on selective harvesting mechanical
end-effector of white asparagus. J. Agric. Mach. 49(04), 110–120 (2018)
Development Status of Balanced Technology
of Battery Management System
of Electric Vehicle

Xiupeng Yan, Jianjun Nie, Zongzheng Ma, and Haishu Ma

School of Mechatronics Engineering, Zhongyuan University of Technology,


Zhengzhou, China
[email protected]

Abstract. In order to solve the problem of lithium battery inconsistency and improve
the service life and utilization rate of lithium batteries, equalization technology
has always been a focus of research in this field. By reviewing the research progress
of lithium-ion battery equalization technology at home and abroad in recent years,
the two main types of equalization methods, active equalization and passive
equalization, are introduced, and the characteristics of the different equalization
techniques are described in terms of equalization circuit structure and equalization
strategy. The challenges facing current equalization technology are identified, and
future research directions are presented.

Keywords: Lithium batteries · Balancing techniques · Active equalization · Equalization strategies

1 Introduction

Compared with other batteries, lithium-ion batteries have the advantages of small
size, light weight, high single-cell voltage, high specific energy, long cycle life,
no memory effect, low self-discharge rate and no pollution [1, 2]. However, long-term
use causes inconsistent performance among lithium battery cells, which may lead to
overcharge, overdischarge, overheating and overcurrent, causing irreparable damage to
the battery; in severe cases it may even cause battery explosion and spontaneous
combustion [3]. In order to give full play to the excellent characteristics of
lithium-ion batteries, many researchers at home and abroad use battery management
systems (BMS) to improve battery utilization and life cycle, and solve the problem of
inconsistent battery performance through the balancing management technology in the
BMS.
This paper summarizes the development and characteristics of equalization technology
in battery management systems at home and abroad in recent years. It analyzes the
advantages and disadvantages of different equalization techniques in terms of the
topology of the equalization circuit and the equalization strategy, and proposes
future research directions for lithium-ion battery equalization technology and the
key technologies that need to be addressed.

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 504–510, 2020.
https://doi.org/10.1007/978-981-15-2341-0_63

2 Equalization Circuit Structure


2.1 Passive Equalization
Passive equalization places a resistor in parallel with each single cell. When the
energy of a cell is higher than that of the other cells, the excess energy is
dissipated by the resistor during the charging process. Equalization of single or
multiple cells can be realized simultaneously; because the excess energy is
dissipated, this approach is also called energy-consuming (dissipative) equalization.
At present, passive equalization control used in the market is divided into software
solutions and hardware solutions. The software solution sets the highest voltage, the
lowest voltage and a threshold between the high and low voltages of the single cell;
if the threshold is exceeded, equalization is started. The hardware scheme decides
whether to equalize based on the comparison between the voltage value detected by a
microcontroller or a dedicated chip and the set value [4, 5].
In the passive equalization circuit, the simplest and most typical circuit is the
switched-resistor circuit. As shown in Fig. 1, each single cell in the circuit is
connected in parallel with a resistor and a switch. The circuit has a simple
structure, is convenient to control and is inexpensive, but is prone to generating
excessive heat; in special cases a heat dissipation mechanism needs to be provided.
Its energy utilization rate is low, and it is now used for equalization during
charging. In addition, there are analog shunt circuits and fixed shunt resistor
structures.
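The switched-resistor scheme above can be sketched as a short simulation: each cell whose voltage exceeds the pack minimum by a threshold has its bleed switch closed and dissipates charge through its resistor. The cell voltages, threshold and discharge step below are illustrative values, not from a real BMS.

```python
def passive_balance_step(voltages, threshold=0.01, bleed=0.005):
    """Close the bleed switch on any cell more than `threshold` volts above
    the weakest cell; that cell loses `bleed` volts of charge as heat."""
    v_min = min(voltages)
    return [v - bleed if v - v_min > threshold else v for v in voltages]

cells = [4.12, 4.05, 4.09, 4.03]
for _ in range(30):                      # iterate until the pack converges
    cells = passive_balance_step(cells)
print([round(v, 3) for v in cells])      # spread reduced to within the threshold
```

Note that the strongest cells are dragged down toward the weakest one, which is exactly the energy waste that motivates the active methods of the next section.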

2.2 Active Equalization


Active equalization is an equalization method applied during both charge and
discharge. It shifts energy from a battery with a high voltage to a battery with a
low voltage to improve the consistency of the single cells, and there is no
deliberate energy dissipation in the whole process, so it can be called
non-dissipative equalization. At present, common active equalization circuits include
the energy storage inductance method, the bypass equalization method, the switched
capacitor method and the DC-DC converter method (transformer method) (Fig. 2).

Fig. 1. Power battery double-layer equalization control circuit schematic

Fig. 2. Principle of modular multilevel energy storage system

Energy Storage Inductance Method
This method is based on an inductive device: energy can be transferred between
individual cells and between a single cell and the battery pack. It is easy to
control, simple in structure, convenient to implement, and the circuit energy loss is
low. The disadvantage is that the inductance balancing efficiency is low when several
single cells are equalized simultaneously.
Junfeng [6] and others use an inductive equalization structure in which each equalization module consists of a storage inductor, a MOSFET, and a Schottky diode; during charging, the surplus energy of a single cell with a high terminal voltage is transferred to the other cells. The circuit structure is simple, the cost is low, it is easy to control, and it is suitable for battery strings with many cells in series.
Switched Capacitance Method
The switched-capacitor method is similar to the energy-storage inductor method: a capacitor serves as the carrier for energy transfer between any two cells in the battery pack, and the circuit contains a switch matrix controlled by a single-chip system. The method offers fast charging and discharging, long cycle life, and a wide operating temperature range, but the large number of control switches increases the control difficulty, and the switches themselves consume part of the energy. Shengshuang [7] improved the traditional Cuk circuit and proposed a battery-pack equalization system using array selection switches, capacitors, and inductors, in which a high-voltage cell and a low-voltage cell form a balanced pair. The switches realize voltage equalization between adjacent cells and balancing across cells with high efficiency and short time consumption, in both charging and non-charging states. The module circuit used for equalization is shown in Fig. 3.
DC-DC Converter Method
In this method the DC-DC converter is the key device of the equalization circuit; the equalization speed is fast, multiple cells can be charged at the same time, and the process has no energy loss in principle. It is currently the most popular and most researched control method, but compared with the capacitor- and inductor-based methods its cost is high, and the presence of multiple transformers or coils makes the circuit control complicated. According to the circuit topology, it can be divided into two types: centralized and distributed equalization.
(1) Centralized equilibrium
The centralized equalization circuit transfers the energy obtained from the battery pack to the low-voltage battery by using a multi-stage coil or a multi-output transformer, and can be divided into one-way equalization and two-way equalization according to the flow direction of the current.
Zhiguo [8] proposed a new two-layer energy transfer control strategy based on a buck-boost converter equalization module circuit. All the cells are divided into multiple groups; equalization is controlled between the groups, and separate equalization controls are also implemented within each group. Taking the 6 × 2 circuit model as an example, a 3 × 2 circuit model is configured to effectively overcome the drawback of low battery-equalization efficiency.

Development Status of Balanced Technology of Battery Management System 507
(2) Distributed
In a distributed equalization circuit, each single cell has its own dedicated equalization module, which gives high charging flexibility, modularity, and good expandability, but requires many control signals and control elements, with high cost and complicated circuits. Widely used circuit structures include Buck-Boost circuits and Cuk circuits.
Kaitao [9] and others formed an energy storage system from a modular multilevel DC converter and supercapacitors. As shown in Fig. 4, the phase-shifted bidirectional DC converter is made up of multiple sub-modules; when an arm is turned on, the supercapacitor starts to charge or discharge. The SOC of the supercapacitor of an independently operated submodule is determined by the duty ratio of that submodule, and a control strategy combining the supercapacitor with the droop-control idea adjusts the average operating current according to the submodule's own energy level to achieve equalization. The structure has high redundancy and system stability, and greatly reduces the SOC inconsistency of the battery pack.

Fig. 3. Equalization module circuit structure

Fig. 4. Principle of modular multilevel energy storage system

3 Balance Criteria

At present, the equalization variables used in the equalization control strategy mainly
include terminal voltage, SOC and battery capacity. The equalization methods used are
maximum value method, average value method and fuzzy control method.

3.1 Equilibrium Variable


Terminal Voltage
The terminal voltage of the single cell is currently the most widely used and intuitive control variable; it is easy to control and has high precision. By setting a cut-off voltage for the single cell, overcharge or over-discharge can be effectively prevented during charging and discharging, and the continuity of the cell terminal voltage is also ensured. However, as the number of charge cycles increases, the internal resistance increases and the capacity becomes smaller, so the corresponding influence on the terminal voltage becomes relatively large, and the method cannot be applied in a parallel circuit. Ruihua [10] of Tongji University and others used the terminal voltage as the equilibrium variable and applied an integrated equalization circuit topology to the active equalization strategy of a series battery string, avoiding overcharging or overdischarging of the battery and improving the equalization speed.
Remaining Power
The equalization strategy with SOC as the equilibrium variable is the most researched and is the future development direction. Le and Jianlong [1, 2, 11, 12] proposed balanced strategies aiming at the SOC consistency of single cells. The battery SOC cannot be measured directly; it can only be calculated from parameters such as current, voltage, and temperature detected by the monitoring circuit. When the difference in remaining capacity within the battery pack exceeds a theoretical value, balancing is started. At present, SOC estimation methods include the open-circuit voltage method, the ampere-hour integration method, the neural network method, and the Kalman filter method, but the accuracy of SOC measurement still needs to be improved, and SOC alone cannot accurately indicate battery overcharge and undercharge. Moreover, as the number of battery cycles increases, polarization effects and aging gradually deepen and the internal resistance grows, which hinders accurate measurement of the battery SOC.
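The ampere-hour integration method mentioned above can be sketched in a few lines: SOC is the initial value minus the integrated current over the rated capacity. The capacity, sampling interval, sign convention (discharge positive), and current trace below are all hypothetical:

```python
def soc_ampere_hour(soc0, currents_a, dt_s, capacity_ah):
    """Coulomb counting: SOC(t) = SOC(0) - (1/C) * integral of i(t) dt.

    `currents_a` is a sampled current trace in amperes (discharge positive),
    `dt_s` the sampling interval in seconds, `capacity_ah` the capacity in Ah.
    """
    soc = soc0
    for i in currents_a:
        soc -= i * dt_s / 3600.0 / capacity_ah  # convert ampere-seconds to Ah
    return soc

# One hour of constant 2 A discharge on a 10 Ah cell removes 20% of charge.
soc = soc_ampere_hour(1.0, [2.0] * 3600, 1.0, 10.0)
assert abs(soc - 0.8) < 1e-6
```

The sketch also makes the method's weakness visible: any bias in the sampled current accumulates without bound, which is why it is usually combined with open-circuit-voltage or Kalman-filter corrections.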
Battery Capacity
The difficulty of measuring battery capacity accurately is the main factor restricting its wide application as an equalization variable. As with SOC measurement, as the number of charge-discharge cycles increases, the effects of factors such as aging, polarization, electrolyte concentration, and temperature deepen, so the accuracy of capacity measurement is hard to guarantee; offline estimation of battery capacity is currently the most common approach.

3.2 Equilibrium Strategy Method


The maximum-value method and the minimum-value method (and their combined form) usually detect the highest and lowest single-cell voltages; when the difference exceeds a set threshold, equalization starts, and energy is transferred from the high-voltage cell to the low-voltage cell. This is repeated over several cycles until the voltage difference between all cells is less than the set value, at which point equalization ends and the battery pack is regarded as consistent. However, if the consistency of the pack is extremely poor, the equalization speed and efficiency are also extremely poor, and logic confusion is likely. The average comparison method, also called the adjacent-battery comparison method, compares the voltage of each single cell with the average voltage of the battery pack and transfers the energy of cells above the pack average to the adjacent low-voltage cells, repeating the comparison until all cells are consistent. Because long-distance energy transfer between batteries is extremely inefficient, this method is only applicable to two adjacent single cells.
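The maximum/minimum-value criterion described above can be sketched as a simple control loop that moves a small charge increment from the highest cell to the lowest whenever the voltage gap exceeds the threshold. The threshold, step size, and voltages are hypothetical:

```python
def max_min_balance(voltages, threshold=0.03, step=0.005, max_cycles=10000):
    """Transfer energy from the highest- to the lowest-voltage cell until
    the max-min gap falls below `threshold` (idealized lossless transfer)."""
    v = list(voltages)
    cycles = 0
    while max(v) - min(v) > threshold and cycles < max_cycles:
        hi = v.index(max(v))
        lo = v.index(min(v))
        v[hi] -= step  # energy leaves the high-voltage cell ...
        v[lo] += step  # ... and, ideally without loss, enters the low one
        cycles += 1
    return v, cycles

v, n = max_min_balance([4.00, 3.92, 3.95])
assert max(v) - min(v) <= 0.03 + 1e-9
```

The `max_cycles` guard reflects the "logic confusion" risk noted above: with a badly inconsistent pack (or a step comparable to the threshold) the loop can chatter between cells instead of converging quickly.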

The fuzzy control method is a relatively complicated control method and is also the digital development direction of online battery equalization control. Taking a battery consistency parameter as the input variable, a nonlinear equalization model of the battery is established by a correlation algorithm, and the output of the controller is used to control parameters such as the equalization voltage, current, and time. Thi Thu Ngoc Nguyen [13] and others combined a neural network with fuzzy logic control, which is both learnable and adaptive; it can use online measurement data to find the optimal control point and equalize the current between the batteries.

4 Conclusion

From a survey of battery equalization technology at home and abroad in recent years, it can be seen that research in the field of active equalization has deepened and achieved good results. Regardless of the method or theoretical basis, the objectives are the same: to improve the equalization speed, accuracy, and efficiency of the battery, to reduce the cost of the equalization circuit, and to simplify its structure. However, problems remain in the design of the topological structure and equalization rules of the equalization circuit, and continued research and improvement are needed.
(1) Active equalization technology consumes less energy, and some topological structures theoretically consume none at all. In an era of resource shortage, active equalization technology therefore remains a hot topic with wide development prospects and practical value.
(2) The topological structures of equalization circuits are diverse, but many problems remain. For example, energy transfer between two distant single cells requires a specific circuit structure, otherwise the equalization is slow, time-consuming, and inefficient. At present, no existing equalization circuit simultaneously achieves low cost, structural simplicity, high efficiency, long service life, and good performance.
(3) With the development of computer technology, the trend toward digitization is deepening, and SOC, as the best target for digital online measurement, will be the focus of research. Improving the acquisition accuracy of battery SOC remains a difficult point in this field.
(4) Research on composite topological structures and control strategies is gradually increasing. By adopting modular equalization control of battery charge and discharge, the equalization speed and efficiency are improved in many respects, so composite equalization technology is also a future research direction.

References
1. Dongjin, Y., Janan, L.: Review of lithium ion battery and its online testing technologies.
Chin. J. Power Sources 42(09), 1402–1403+1419 (2018)
2. Mingli, L., Bingtao, Q.: Research and design of battery management system for pure electric
vehicle. Process. Autom. Instrum. 39(09), 21–24 (2018)
3. Jun, T., Cuijun, T.: Safety test and evaluation method of lithium ion battery. Energy Storage
Sci. Technol. 7(06), 1128–1134 (2018)
4. Guopeng, T., et al.: Research progress of power battery equalization. Chin. J. Power Sources
39(10), 2312–2315 (2015)
5. Ying, P., et al.: Research status of equalization technology for series batteries. Electron.
Meas. Technol. 38(08), 21–24+49 (2015)
6. Junfeng, H., et al.: Research of charging equalization circuit and equilibrium strategy for Li-ion battery series. Chin. J. Power Sources 40(12), 2439–2443 (2016)
7. Shenshuang, Y., Xiangzhong, Q.: Design of equalization control for battery of electric
vehicle. Electron. Des. Eng. 25(22), 154–157+161 (2017)
8. Zhiguo, A., et al.: Design of energy transfer equalization control for electric vehicle power
battery. Comput. Simul. 34(05), 147–150+252 (2017)
9. Kaitao, B., et al.: Distributed energy balancing control strategy for energy storage system
based on modular multilevel. Trans. China Electrotech. Soc. 33(16), 3811–3821 (2018)
10. Ruihua, L., et al.: Voltage equalization optimization strategy for LiFePO4 series-connected
battery packs based on Buck-Boost converter. Electr. Eng. 19(03), 1–7 (2018)
11. Le, Q., et al.: Research on control strategy of energy storage system based on SOC. Coal
Technol. 37(06), 247–249 (2018)
12. Jianlong, H., et al.: Research on equalization strategy of energy storage battery strings based
on SOC. Renew. Energy Resour. 35(12), 1828–1834 (2017)
13. Yoo, H.G., et al.: Neuro-fuzzy controller for battery equalisation in serially connected
lithium battery pack. IET Power Electron. 8(3), 458–466 (2015)
Application Analysis of Contourlet Transform
in Image Denoising of Flue-Cured
Tobacco Leaves

Li Zhang1, Haohan Zhang1(&), Hongbin Liu2, Sen Wang2, and Xiaoyu Liu3

1 Faculty of Mechanical and Electrical Engineering, Yunnan Open University, Kunming, China
[email protected]
2 Faculty of Mechanical and Electrical Engineering, Kunming University of Science and Technology, Kunming, China
[email protected]
3 School of Culture Tourism and International Exchange, Yunnan Open University, Kunming, China
[email protected]

Abstract. Image denoising is one of the most basic and important tasks in
image processing when computer is used for quality inspection of flue-cured
tobacco leaves. The Contourlet transform has the advantages of multiresolution,
anisotropy, and sparsity. Wavelet denoising, median filter, mean filter, gaussian
filter and wiener filter are used to conduct comparative experiments on tobacco
leaf images so as to verify the denoising effect of Contourlet transform. It is
shown that the image denoising method based on the Contourlet transform has
the advantages of high signal-to-noise ratio and good visual effect when applied
to tobacco image denoising, which is effective and feasible for image denoising
of flue-cured tobacco.

Keywords: Tobacco leaves · Image denoising · Contourlet transform · Tower directional filter bank

1 Introduction

After the fresh tobacco leaves are picked from the field and sorted according to different
maturity and shape characteristics, their subsequent baking quality can be improved
[1]. Therefore, the tobacco leaf sorting technology based on computer vision is applied.
Noise arises when acquiring tobacco leaf images, due to dust or stains on the leaf surface, and in severe cases it affects leaf identification. Therefore, the tobacco leaf image needs to be denoised before feature extraction.
The image denoising can be divided into two kinds: spatial domain denoising and
transform domain denoising based on the actual characteristics of image and the
spectral distribution of noise. Among them, the spatial domain denoising mainly
includes mean filter, Gaussian filter and median filter, while the transform domain denoising mainly includes wavelet denoising, multi-scale analysis methods, etc.

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 511–516, 2020.
https://doi.org/10.1007/978-981-15-2341-0_64


Wavelet transform is widely used in various denoising processes [2, 3] due to its multi-resolution and time-frequency characteristics. However, it is isotropic, which shows mainly in its weaker ability to express directionality in image denoising. To overcome this limitation, Do and Vetterli proposed the Contourlet transform in 2002 [4], which retains the advantages of the wavelet transform while offering more directionality when performing image denoising. In 2005, Cunha [5] improved it; the improved method makes good use of the geometric structure of the image to satisfy the anisotropic scale relation of curves, and provides a fast and structured way to decompose the sampled signal. Since then, denoising algorithms based on the Contourlet transform have gradually been applied to agricultural product image processing [6].
In view of the above, this paper proposes a tobacco image denoising algorithm based on the Contourlet transform. The noisy tobacco leaf image is decomposed through the pyramidal directional filter bank (PDFB); the speckle noise variance of the sub-bands in each high-frequency direction and the local mean of the transform coefficient moduli are estimated; and multi-scale shrinkage thresholding is used to determine the Contourlet coefficients and denoise the tobacco leaf image for a better denoising effect.

2 General Model of Image Denoising

Denoising is generally represented by the following model:

y = x + σe (1)

where x is the ideal signal, y the observed noisy signal, e the noise, and σ the noise level. The purpose of denoising is to recover the original signal x from the noisy signal y.
For an image of N × N pixels, the original image is set as:

f_{i,j}, i, j = 1, 2, …, n; n ≤ N (2)

where f_{i,j} is the grayscale at the point (i, j).


Then the image with noise can be expressed as:

g_{i,j} = f_{i,j} + e_{i,j}, i, j = 1, 2, …, n; n ≤ N (3)

where g_{i,j} is the grayscale at the point (i, j) after the noise is superimposed on the original image, e_{i,j} is the noise at the point (i, j), and i, j are the row and column coordinates of the corresponding pixel.
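The additive model g = f + e of Eq. (3) can be sketched with numpy; the 8-bit test image, the Gaussian noise, and the value of sigma are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.integers(0, 256, size=(64, 64)).astype(float)  # clean image f
sigma = 10.0
e = sigma * rng.standard_normal(f.shape)               # noise field e
g = f + e                                              # observed noisy image g, Eq. (3)

assert g.shape == f.shape
assert abs((g - f).std() - sigma) / sigma < 0.1        # measured noise level is near sigma
```

For the salt-and-pepper noise used later in the experiments, e would instead replace a fraction of pixels with 0 or 255 rather than add a Gaussian perturbation.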

3 Image Denoising Based on Contourlet Transform

The Contourlet transform is a multi-scale geometric analysis method with multi-resolution, multi-directional and anisotropic properties. Compared with the wavelet transform, the coefficient energy of an image after the Contourlet transform is more concentrated across directions and scales, and the image denoising effect is better [7].
In Contourlet-domain denoising, the number of decomposition levels J is first determined; then the low-frequency coefficients a_0 and the high-frequency coefficients d_0, d_1, …, d_{J−1} are obtained by the Laplacian pyramid decomposition and the directional filter bank, respectively.
New Contourlet coefficients d̂_t, t = 0, 1, …, J−1 are then obtained by thresholding the coefficients. Finally, the inverse Contourlet transform is applied to a_0 and d̂_0, d̂_1, …, d̂_{J−1} to obtain the estimate f̂_{i,j} of the original f_{i,j}, i.e., the denoised tobacco leaf image.
The choice of threshold and threshold function is the key to the denoising algorithm. In Contourlet-based denoising, the hard-thresholding algorithm better preserves the local and edge details of the image, but after reconstruction the image may show hair-like visual artifacts; soft thresholding gives a smoother denoising effect but easily blurs edge details. Here the hard-thresholding function is used for denoising, as shown in Eq. (4):

d̂_t = d_t if |d_t| ≥ δ, and d̂_t = 0 otherwise, t = 0, 1, …, J (4)

where d_t is the high-frequency coefficient, d̂_t the thresholded coefficient, and δ the shrinkage threshold.
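The hard-thresholding rule of Eq. (4) is a one-liner with numpy: coefficients whose magnitude falls below the shrinkage threshold δ are zeroed, the rest pass through unchanged.

```python
import numpy as np

def hard_threshold(d, delta):
    """Eq. (4): keep coefficients with |d| >= delta, zero the rest."""
    return np.where(np.abs(d) >= delta, d, 0.0)

d = np.array([-3.0, -0.5, 0.0, 0.8, 2.5])
assert np.array_equal(hard_threshold(d, 1.0), [-3.0, 0.0, 0.0, 0.0, 2.5])
```

The discontinuity at |d| = δ (coefficients jump from full value to zero) is what produces the "hair-like" artifacts mentioned above; soft thresholding would instead shrink the surviving coefficients by δ, trading artifacts for blur.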
The denoising effect is directly related to the choice of δ. The larger δ is, the more noise is eliminated, but high-frequency image information is also lost; the smaller δ is, the more image information is retained, but more noise remains as well. For this situation, Donoho proposed the shrinkage-threshold algorithm in 1994 [8, 9]:

δ = σ √(2 ln N₀) (5)

Although this method gives a principled choice of the shrinkage threshold, it only yields an upper bound on the optimal threshold, not the optimal threshold itself. In sub-bands of different scales, the proportions of image and noise information also differ, and the higher the scale, the more obvious the trend. Therefore, a threshold selection method based on multi-scale decomposition is adopted [10]:

T_l = σ √(2 ln N₀) · 2^((l − J)/2) (6)

where σ is the noise level; N₀ the total number of image pixels; l the scale level; and J the number of decomposition layers.
With this formula, the threshold varies with the scale of the coefficients, achieving adaptive threshold selection.
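The scale-adaptive threshold of Eq. (6) is the universal threshold of Eq. (5) scaled by 2^((l − J)/2), so finer levels (larger l) get larger thresholds. A minimal sketch with illustrative parameter values:

```python
import math

def scale_threshold(sigma, n_pixels, level, n_levels):
    """T_l = sigma * sqrt(2 ln N0) * 2**((l - J) / 2), Eqs. (5)-(6)."""
    universal = sigma * math.sqrt(2.0 * math.log(n_pixels))  # Eq. (5)
    return universal * 2.0 ** ((level - n_levels) / 2.0)     # Eq. (6)

# At the finest level l = J the factor is 2**0 = 1, recovering Eq. (5).
t = scale_threshold(sigma=10.0, n_pixels=512 * 512, level=4, n_levels=4)
assert abs(t - 10.0 * math.sqrt(2.0 * math.log(512 * 512))) < 1e-9
```

Each step down in scale halves the squared threshold, matching the intuition above that coarser sub-bands carry proportionally more image information and should be thresholded more gently.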

4 Analysis and Comparison of Simulation Results

In order to verify the denoising effect of the Contourlet transform denoising algorithm, salt-and-pepper noise affecting 10% of all pixels was added to the test images. Six methods, namely mean filter, Gaussian filter, Wiener filter, median filter, wavelet denoising and Contourlet transform, were used for denoising experiments and comparison on the tobacco images. The first four all use filtering windows of 3 × 3 pixels; the wavelet transform adopts global denoising based on the db3 wavelet filter; and the Contourlet transform uses the "9–7" pyramid decomposition and the "pkva" directional filter bank. The denoising effects on the tobacco image are shown in Fig. 1, and the PSNR values in Table 1. PSNR represents the denoising performance of an algorithm: the higher its value, the better the denoising effect.
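The PSNR figure of merit reported in Table 1 can be computed with the common definition PSNR = 10 log₁₀(MAX² / MSE) for 8-bit images (the paper does not spell out its exact formula, so this is the standard form):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((8, 8))
b = np.full((8, 8), 5.0)  # uniform error of 5 gray levels
assert abs(psnr(a, b) - 10.0 * np.log10(255.0 ** 2 / 25.0)) < 1e-9
```

Since MSE appears in the denominator, PSNR rises as the denoised image approaches the clean reference, which is why the larger values in Table 1 indicate better denoising.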

Fig. 1. Tobacco images after denoising



In Fig. 1, it is obvious that most of the noise in the image after mean filter has been
removed, but the details become blurred. Median filter, gaussian filter, wiener filter and
wavelet denoising methods have similar denoising effects, and better texture details are
preserved. Contourlet-based denoising method not only removes all noise, but also
performs better than other methods in preserving texture details of tobacco image,
which also proves the effectiveness of this method in denoising agricultural products.

Table 1. PSNR of different image denoising algorithms

            Mean    Gaussian  Median  Wiener  Wavelet    Contourlet
            filter  filter    filter  filter  denoising  transform
Channel 1   34.00   43.47     40.15   40.12   40.37      46.11
Channel 2   33.96   43.43     39.85   39.87   40.21      46.41
Channel 3   33.16   42.54     40.23   40.62   40.45      44.50

It can be concluded from Table 1 that the PSNR achieved by the Contourlet transform is the highest, followed by the Gaussian filter; the denoising performance of the median filter, Wiener filter, and wavelet method is similar, slightly below that of the Gaussian filter, while the mean filter performs worst.

5 Conclusions

To better realize image denoising of fresh tobacco, this paper proposes an image denoising algorithm based on the Contourlet transform. Comparison with other denoising methods shows that Contourlet threshold denoising is an algorithm better suited to denoising tobacco images.

References
1. Xu, F., Zhang, F., Du, B., et al.: Effects of different fresh leaves classification on tobacco leaf
quality and benefits. J. Anhui Agric. Sci. 41(25), 10429–10432 (2013)
2. Donoho, D.L.: De-noising by soft-thresholding. IEEE Trans. Inf. Theory 41(3), 613–627 (1995)
3. Yang, F., Tian, Y., Yang, L., et al.: Agricultural product image denoising algorithm based on hybrid wavelet transform. Trans. Chin. Soc. Agric. Eng. 27(3), 172–178 (2011)
4. Do, M.N., Vetterli, M.: Contourlets: a directional multi resolution image representation. In:
IEEE International Conference on Image Processing, Rochester, NY, pp. 357–360 (2002)
5. Cunha, L., Zhou, J.P., Do, M.N.: The nonsubsampled contourlet transform theory design and
applications. IEEE Trans. Image Process. 15(10), 3089–3101 (2005)
6. Song, H., He, D., Han, T.: Contourlet transform as an effective method for agricultural
product image denoising. Trans. Chin. Soc. Agric. Eng. 28(8), 287–292 (2012)
7. Dai, W., Yu, S., Sun, S.: Image de-noising algorithm using adaptive threshold based on
Contourlet transform. Acta Electron. Sinica 35(10), 1939–1943 (2007)

8. Donoho, D.L., Johnstone, I.M.: Ideal spatial adaptation via wavelet shrinkage. Biometrika
81, 425–455 (1994)
9. Donoho, D.L.: Denoising by soft-thresholding. IEEE Trans. Inf. Theory 3, 613–627 (1995)
10. Chang, S.G., Yu, B., Vetterli, M.: Adaptive wavelet thresholding for image denoising and
compression. IEEE Trans. Image Process. 9(9), 1532–1546 (2000)
Monte Carlo Simulation of Nanoparticle
Coagulation in a Turbulent Planar Impinging
Jet Flow

Hongmei Liu1,2(&), Weigang Xu1,2, Faqi Zhou1,2, Lin Liu1,2, Jiaming Deng1, Shuhao Ban1, and Xuedong Liu1,2

1 School of Mechanical Engineering, Changzhou University, Changzhou, China
[email protected]
2 Key Laboratory of Green Process Equipment in Jiangsu, Changzhou, China

Abstract. A Monte Carlo method coupled with large eddy simulation is


employed to study nanoparticle evolution in a confined impinging jet. The
transient and discrete particle distributions and the time-averaged particle
number density distributions can be obtained. The results show that the coherent
structure evolution has large effect on the particle dispersion pattern and the
particle diameter distributions.

Keywords: Nanoparticle · Coagulation · Impinging jet · Large eddy simulation · Monte Carlo method

1 Introduction

In many scientific and industrial applications, the phenomenon of nanoparticle


dynamics in turbulent flows is of great interest. For example, particulate matter evo-
lution in ground vehicles [1], nanoparticle synthesis in reactive flows [2] and soot
formation in engine combustion chambers [3].
In these areas, the impingement of nanoparticles on a solid surface occurs in many processes, and the impinging jet is a well-established technique for studying the deposition of nanoparticles onto a solid surface [4]. Furthermore, the particle diameters in the above applications are usually at the nanoscale, and the complicated coherent structures in the stagnation region and wall flow region usually lead to non-deterministic particle distributions in the impinging jet, which makes it difficult to fully understand the particle dynamics in these phenomena [5].
Among the numerical methods for solving the general dynamic equation of nanoparticles, the Monte Carlo (MC) method [6, 7] is increasingly preferred because of its stochastic nature: simulated particles are used to imitate the dynamic behaviors and movement trajectories of real particles. In the present study, the MC method is coupled with large eddy simulation (LES) to study nanoparticle evolution in a confined impinging jet.
The remaining part of this paper is organized as follows. Section 2 introduces the governing equation of the nanoparticle in turbulent flows. Section 3 details the algorithmic implementation of the coupled LES-MC method. Section 4 presents the results and discussion. Finally, the conclusions are given in Sect. 5.

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 517–522, 2020.
https://doi.org/10.1007/978-981-15-2341-0_65

2 Numerical Methodology

2.1 Governing Equations for Gas Phase


The transport of the continuous fluid phase is governed by the well-known Navier-Stokes (N-S) equations. In LES, the turbulent flow is decomposed into two parts, large- and small-scale structures: the large eddies are directly computed on a Eulerian grid, while the small eddies are modelled. In LES, the filtered N-S equations are written as

∂ρ/∂t + ∂(ρu_i)/∂x_i = 0 (1)

∂(ρu_i)/∂t + ∂(ρu_i u_j)/∂x_j = −∂p/∂x_i + ∂/∂x_j (μ ∂u_i/∂x_j) − ∂τ_ij/∂x_j (2)

where u_i is the velocity, p is the pressure, ρ is the density, μ is the viscosity, and τ_ij refers to the subgrid-scale (SGS) stress tensor.

2.2 Governing Equations for Particle Phase


The dispersed particle phase is described by a Lagrangian Monte Carlo method, and the position and velocity of a particle are governed by the following equations:

dx_{p,i}/dt = u_{p,i} (3)

du_{p,i}/dt = (3/4) (ρ / (ρ_p d_p)) c_D (u_i − u_{p,i}) |u⃗ − u⃗_p| + f_s (4)

where x_{p,i} is the position and u_{p,i} the velocity of the particle, u_i is the velocity of the continuous gas phase, and d_p is the diameter of the dispersed particle. The first term on the right-hand side of Eq. (4) denotes the drag force that the carrier flow imposes on the particle; the second term represents the contributions of forces other than drag.
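A one-dimensional explicit-Euler sketch of Eqs. (3)-(4) illustrates the coupling: position advances with the particle velocity, and velocity relaxes toward the local gas velocity under drag. The drag coefficient, densities, diameter, and time step below are hypothetical, and the extra force term f_s is dropped:

```python
def advance_particle(xp, up, u_gas, dt, cd=24.0, rho=1.2, rho_p=2000.0, dp=1e-6):
    """One Euler step of Eqs. (3)-(4) in 1-D, drag term only."""
    # Eq. (4): du_p/dt = (3/4) (rho / (rho_p d_p)) c_D (u - u_p) |u - u_p|
    drag = 0.75 * cd * rho / (rho_p * dp) * (u_gas - up) * abs(u_gas - up)
    up_new = up + dt * drag
    # Eq. (3): dx_p/dt = u_p
    xp_new = xp + dt * up_new
    return xp_new, up_new

x, u = 0.0, 0.0
for _ in range(1000):
    x, u = advance_particle(x, u, u_gas=1.0, dt=1e-5)
assert 0.0 < u < 1.0 and x > 0.0  # particle relaxes toward the gas velocity
```

Because the drag term is proportional to the slip velocity, the particle asymptotically approaches the gas velocity without overshooting it, which is the qualitative behavior the LES-MC coupling relies on.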

3 Algorithmic Implementation

A brief outline of the LES-MC algorithm is given as follows [8]:
(a) Initialization.
(b) Choose a time step Δt for the gas-phase flow.
(c) Solve the gas flow field.
(d) Update the spatial positions and velocities of the particles.
(e) Choose a time step δt for the nanoparticles.
(f) Start M (i.e., M = Δt/δt) Monte Carlo loops.
(g) Treat the coagulation process by the MC method [8, 9].
(h) Update the properties of the simulated particles.
(i) If the current MC loop number R has not reached the predetermined loop number M, start a new MC loop; otherwise, quit the MC loop for the nanoparticles.
(j) If the calculation time t reaches t_stop, output the results of the two-phase flow fields.
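The nested time-stepping of steps (a)-(j) can be sketched as a loop skeleton. The three solver callbacks are stand-in stubs (not the authors' code); only the structure, M = Δt/δt Monte Carlo sub-loops per gas-phase step, follows the outline above:

```python
def les_mc_run(n_gas_steps, m, solve_gas, move_particles, mc_coagulate):
    """Run `n_gas_steps` LES steps; after each, run m = Dt/dt MC sub-loops."""
    for _ in range(n_gas_steps):   # outer loop runs until t reaches t_stop, step (j)
        solve_gas()                # step (c): advance the LES flow field by Dt
        move_particles()           # step (d): update particle positions and velocities
        for _ in range(m):         # steps (f)-(i): M = Dt/dt Monte Carlo loops
            mc_coagulate()         # steps (g)-(h): coagulate, update particle properties

calls = {"gas": 0, "mc": 0}
les_mc_run(10, 5,
           solve_gas=lambda: calls.update(gas=calls["gas"] + 1),
           move_particles=lambda: None,
           mc_coagulate=lambda: calls.update(mc=calls["mc"] + 1))
assert calls["gas"] == 10 and calls["mc"] == 50
```

The split time scale is the point of the design: the fast coagulation process is sub-cycled with the small step δt while the expensive flow solve runs only once per Δt.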
The flowchart of the LES-MC algorithm is shown in Fig. 1.

Fig. 1. Flowchart of the LES-MC algorithm [8].



4 Results and Discussion


4.1 Configuration and Model Description
Figure 2 shows the planar impinging jet flow configuration used in the present study. The nozzle width D is 25 mm and the Reynolds number, Re = DU₀/ν, is 30000 [5]. The nozzle-to-plate distance H is 2D. The computational grid comprises 400 × 800 cells. The fluid in this study is air at a temperature of T = 300 K. The nanoparticles are injected with a diameter of 5 nm; in this size regime, the free-molecule-regime coagulation rate [10] is used.
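The paper cites [10] for the exact coagulation rate; the sketch below uses the standard free-molecule-regime Brownian kernel (Friedlander form), stated in particle volumes v_i, v_j, which may differ in constants from the authors' implementation:

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def beta_free_molecule(v_i, v_j, temp_k=300.0, rho_p=1000.0):
    """Free-molecule coagulation kernel (m^3/s):
    beta = (3/(4 pi))**(1/6) sqrt(6 kB T / rho_p)
           * sqrt(1/v_i + 1/v_j) * (v_i**(1/3) + v_j**(1/3))**2
    temp_k and rho_p (particle material density) are illustrative values."""
    pre = (3.0 / (4.0 * math.pi)) ** (1.0 / 6.0) * math.sqrt(6.0 * KB * temp_k / rho_p)
    return pre * math.sqrt(1.0 / v_i + 1.0 / v_j) * (v_i ** (1 / 3) + v_j ** (1 / 3)) ** 2

# Volume of a 5 nm particle, matching the injected diameter above.
v5 = math.pi / 6.0 * (5e-9) ** 3
assert beta_free_molecule(v5, v5) > 0.0
```

In an MC coagulation step this kernel sets the collision probability of each simulated particle pair, so the injected 5 nm monomers coagulate fastest where the number density is highest, near the jet inlet.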

Fig. 2. A sketch map of a planar impinging jet flow [5].

4.2 Evolution of Coherent Structures


Figure 3 shows the transient evolution of the vorticity in the z-direction. Vorticity is generated at the interface where the jet and the ambient gas mix. The vortex cores first move downward and then split away from the y-axis because of the impinging plate. At time t = 0.02 s, a series of vortices appears near the bottom wall because of the impingement between the flow and the wall.

(a) t=0.0005s (b) t=0.010s

(c) t=0.0015s (d) t=0.020s

Fig. 3. Contours of vorticity in the evolution of vortex



4.3 Evolution of Nanoparticles


The transient states and dispersion characteristics of the particle field, visualized with the discrete simulated particles, are shown in Fig. 4. The transient particle distributions are affected by the vortex structure of the fluid flow and are quite similar to the vorticity distributions shown in Fig. 3. It can also be seen that the particle diameters increase along the stream-wise direction, which illustrates that the coagulation process proceeds over time.

(a) t=0.0005s (b) t=0.010s

(c) t=0.0015s (d) t=0.020s

Fig. 4. Transient particle field distribution coloured by the diameter of particles

The time-averaged normalized particle number density (N/N₀, where N₀ is the particle number density at the jet inlet) distribution at time t = 0.020 s is shown in Fig. 5. The particle number density decreases along the stream-wise direction, the opposite trend to the particle diameter. Near the jet inlet the number density drops dramatically, and the decrease slows down in the wall flow region, because the coagulation rate becomes slower as the particle number density decreases.

Fig. 5. Time-averaged normalized particle number density distribution at time t = 0.020 s



5 Conclusions

A coupled LES-MC method is used to study the transient evolution of nanoparticles in
a turbulent planar impinging jet flow. The results show that the evolution of the
coherent structures strongly affects the transient particle dispersion pattern and the
particle diameter distribution, and that the coagulation rate changes along the
stream-wise direction as the particle number density decreases.

Acknowledgements. The work is supported by the Sinopec Corp. major research project (Grant
No. 417002-2). The research work is based on the MC method developed by H.M. Liu during her
PhD study at the Department of Mechanical Engineering in the Hong Kong Polytechnic
University.

References
1. Chan, T.L., Liu, Y.H., Chan, C.K.: Direct quadrature method of moments for the exhaust
particle formation and evolution in the wake of the studied ground vehicle. J. Aerosol Sci. 41(6),
553–568 (2010)
2. Yu, M., Lin, J., Chan, T.: Numerical simulation of nanoparticle synthesis in diffusion flame
reactor. Powder Technol. 181, 9–20 (2008)
3. Rodrigues, P., Franzelli, B., Vicquelin, R., Gicquel, O., Darabiha, N.: Coupling an LES
approach and a soot sectional model for the study of sooting turbulent non-premixed flames.
Combust. Flame 190, 477–499 (2018)
4. van de Ven, T.G.M., Kelemen, S.J.: Characterizing polymers with an impinging jet.
J. Colloid Interface Sci. 181, 118–123 (1996)
5. Yu, M., Lin, J., Xiong, H.: Quadrature method of moments for nanoparticle coagulation and
diffusion in the planar impinging jet flow. Chin. J. Chem. Eng. 15(6), 828–836 (2007)
6. Liu, H.M., Chan, T.L.: Differentially weighted operator splitting Monte Carlo method for
simulating complex aerosol dynamic processes. Particuology 36, 114–126 (2018)
7. Liu, H.M., Chan, T.L.: Two-component aerosol dynamic simulation using differentially
weighted operator splitting Monte Carlo method. Appl. Math. Model. 62, 237–253 (2018)
8. Liu, H.M., Chan, T.L.: A coupled LES-Monte Carlo method for simulating aerosol
dynamics in a turbulent planar jet. Int. J. Numer. Methods Heat Fluid Flow (2019). https://
doi.org/10.1108/hff-11-2018-0657
9. Zhao, H., Kruis, F.E., Zheng, C.: A differentially weighted Monte Carlo method for two-
component coagulation. J. Comput. Phys. 229(19), 6931–6945 (2010)
10. Zhou, K., He, Z., Xiao, M., Zhang, Z.: Parallel Monte Carlo simulation of aerosol dynamics.
Adv. Mech. Eng. 6, 1–11 (2014)
Structural Damage Detection of Elevator Steel
Plate Using GNARX Model

Jiaxin Ma¹,² and Yan Dou¹,²

¹ School of Mechanical Engineering, Changshu Institute of Technology,
Changshu, China
[email protected]
² Jiangsu Key Laboratory for Elevator Intelligent Safety,
Changshu Institute of Technology, Changshu, China

Abstract. For elevator steel plates, tiny cracks are difficult to find, yet crack
growth may cause fatal accidents. Thus, it is crucial to detect structural damage
in elevator steel plates. With the expression of the GNARX model deduced, the
modified Mahalanobis distance least squares (MMDLS) method is proposed for
parameter estimation. Then, a structure pruning algorithm based on the
parameters' rate of standard deviation (SPRSD) is proposed for structure
identification. With experimental data, the GNARX model is applied to
structural damage detection for elevator steel plates. The results show that the
structural damage detection performance of the GNARX model is better than
those of the AR, ARX, and GNAR models, which indicates the superiority of
the GNARX model for structural damage detection of elevator steel plates.

Keywords: Structural damage detection · GNARX model · Parameter
estimation · Structure identification · Elevator steel plate

1 Introduction

As it is closely related to people's daily life, elevator safety receives more and more
attention. However, for elevator steel plates, initial, tiny cracks are hard to discover,
yet crack growth may cause serious failure, sometimes even leading to fatal disasters.
Hence, it is critical to detect, locate, and estimate the extent of structural damage.
Generally, structural damage detection can be categorized into local-damage detection
and global-damage detection [1]. The local-damage detection techniques [2–4] include
dye penetrant, magnetic particle, eddy current, radiographic, ultrasonic, and strain
methods. Their main advantage is that there is no need to develop a specific model or
obtain baseline data of the undamaged structure. Thus, local-damage detection is very
effective for small and regular structures. However, for large and complex structures
in invisible or closed environments, it is very difficult and time-consuming to inspect
the whole structure using local-damage detection methods. To overcome this
limitation, vibration-based structural damage detection has been proposed as a
global-damage detection technique [5, 6].

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 523–532, 2020.
https://doi.org/10.1007/978-981-15-2341-0_66

Vibration-based methods mostly fall into two categories: finite element model
updating methods [7] and statistical time series model methods [8]. However, most
such methods in the available literature require developing baseline models of the
undamaged structure, which greatly limits their application, e.g., to structural damage
detection of in-service equipment. Thus, to overcome this limitation, a novel approach
is proposed to detect, locate, and estimate the extent of structural damage for
in-service equipment based on the general linear and nonlinear auto-regressive model
with exogenous input (GNARX model) [9].

The remainder of this paper is organized as follows. Section 2 introduces the
expression of the GNARX model. Section 3 details the parameter estimation and
structure identification of the GNARX model. In Sect. 4, the methodology of structural
damage detection for elevator steel plates is proposed and verified. Finally, conclusions
are provided in Sect. 5.

2 Expression of GNARX Model

According to the modeling strategy of time series analysis, the general linear and
nonlinear auto-regressive model (GNAR) takes a zero-mean white noise {a_t} as the
input to the system [10]. When one exogenous input {u_t} is known, the GNAR model
becomes the GNARX model with a single exogenous input.

If the system has two exogenous inputs, u_t and v_t, the GNARX model with double
inputs can be abbreviated as GNARX(p; s_u, s_v; n_{w,1}, n_{w,2}, …, n_{w,p};
n_{u,1}, n_{u,2}, …, n_{u,p}; n_{v,1}, n_{v,2}, …, n_{v,p}), which is expressed as follows:

x_{t,i,1} = \{w_{t-1}, \ldots, w_{t-n_{w,i}},\ u_{t-s_u}, \ldots, u_{t-s_u-n_{u,i}+1},\ v_{t-s_v}, \ldots, v_{t-s_v-n_{v,i}+1}\}    (1)

w_t = \sum_{i_1=1}^{n_{w,1}+n_{u,1}+n_{v,1}} h(i_1)\, x_{t,1,1}(i_1)
    + \sum_{i_1=1}^{n_{w,1}+n_{u,1}+n_{v,1}} \sum_{i_2=1}^{n_{w,2}+n_{u,2}+n_{v,2}} h(i_1, i_2)\, x_{t,2,1}(i_1)\, x_{t,2,1}(i_2)
    + \cdots + \sum_{i_1=1}^{n_{w,1}+n_{u,1}+n_{v,1}} \cdots \sum_{i_p=1}^{n_{w,p}+n_{u,p}+n_{v,p}} h(i_1, \ldots, i_p) \prod_{k=1}^{p} x_{t,p,1}(i_k) + a_t
    = \sum_{j=1}^{p} \sum_{i_1=1}^{n_{w,1}+n_{u,1}+n_{v,1}} \cdots \sum_{i_j=1}^{n_{w,j}+n_{u,j}+n_{v,j}} h(i_1, \ldots, i_j) \prod_{k=1}^{j} x_{t,j,1}(i_k) + a_t    (2)

where x_{t,i} (i = 1, 2, …, p) is the ith-order term; x_{t,i,j} (j = 1, 2, …, i) is the
jth-order transitional term in the derivation of x_{t,i}; x_{t,i,1}(j) is the jth element of
the vector x_{t,i,1}; w_{t-i} is the observation at time t − i; u_{t-s_u-i} is the exogenous
input u_t at time t − s_u − i; v_{t-s_v-i} is the exogenous input v_t at time t − s_v − i;
a_{t-i} is the white noise at time t − i, i = 1, 2, …, n; s_u and s_v are the input delays
of u_t and v_t, respectively; h(i_1), h(i_1, i_2), … are the model parameters; p is the
model order; n_{w,j} (j = 1, 2, …, p) is the memory step of the jth-order term of the
output {w_t}; n_{u,j} and n_{v,j} (j = 1, 2, …, p) are the memory steps of the jth-order
terms of the inputs {u_t} and {v_t}, respectively.

Similarly, Eq. (2) can be generalized to multi-input systems, which need not
be repeated here.
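As a rough illustration of Eqs. (1) and (2), the sketch below assembles a first-order regressor vector and evaluates the noise-free multilinear output sum. It is a simplification, not the authors' implementation: a single regressor vector is reused for every order (whereas Eq. (2) uses order-dependent regressors x_{t,j,1}), and the symmetric parameters h(i_1, …, i_j) are merged into one coefficient per unordered index combination; all lags and data are hypothetical.

```python
from itertools import combinations_with_replacement

def first_order_regressor(w, u, v, t, n_w, s_u, n_u, s_v, n_v):
    """x_{t,i,1} of Eq. (1): lagged outputs and delayed exogenous inputs."""
    return ([w[t - k] for k in range(1, n_w + 1)]
            + [u[t - s_u - k] for k in range(n_u)]
            + [v[t - s_v - k] for k in range(n_v)])

def gnarx_output(theta, x, p):
    """Noise-free part of w_t in Eq. (2): a sum over orders j = 1..p of
    j-fold products of regressor elements, each weighted by a parameter."""
    terms = []
    for j in range(1, p + 1):
        for idx in combinations_with_replacement(range(len(x)), j):
            prod = 1.0
            for i in idx:
                prod *= x[i]
            terms.append(prod)
    assert len(theta) == len(terms), "one parameter per product term"
    return sum(h * term for h, term in zip(theta, terms))
```

For example, with x = [2.0, 3.0] and p = 2 the product terms are [2, 3, 4, 6, 9], one per unordered lag combination.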

3 Identification of GNARX Model


3.1 Parameter Estimation of GNARX Model
The modified Mahalanobis distance least squares (MMDLS) method is proposed and
applied to GNARX model parameter estimation. It improves on the least squares
(LS) method because MMDLS takes the second-moment properties of the whole
sample into account, while LS only considers the sample mean.

Taking the GNARX model with double inputs of Eq. (2) as an example, the
parameter estimation of the GNARX model is deduced as follows:

x_{t,i,1} = \{w_{t-1}, \ldots, w_{t-n_{w,i}},\ u_{t-s_u}, \ldots, u_{t-s_u-n_{u,i}+1},\ v_{t-s_v}, \ldots, v_{t-s_v-n_{v,i}+1}\}
x_{t,i,2} = \{x_{t,i,1}(1)\{x_{t,i,1}(1)\},\ x_{t,i,1}(2)\{x_{t,i,1}(1), x_{t,i,1}(2)\},\ \ldots,\ x_{t,i,1}(m_{i,1})\,x_{t,i,1}\}
  \vdots
x_{t,i,i} = \{x_{t,i,i-1}(1)\{x_{t,i,i-1}(1)\},\ x_{t,i,i-1}(2)\{x_{t,i,i-1}(1), x_{t,i,i-1}(2)\},\ \ldots,\ x_{t,i,i-1}(m_{i,i-1})\,x_{t,i,i-1}\}    (3)

where m_{i,j} = C_{n_{w,i}+n_{u,i}+n_{v,i}+j-1}^{j} (j = 1, 2, \ldots, i).

x_{t,p} = x_{t,p,p}    (4)

x_t = \{x_{t,1}, x_{t,2}, \ldots, x_{t,p}\}    (5)

Accordingly,

X = \{x_t^T, x_{t+1}^T, \ldots, x_{t+k}^T\}^T, \quad w = \{w_t, w_{t+1}, \ldots, w_{t+k}\}    (6)

Thus, the least squares estimate is given as follows:

\hat{\theta} = (X^T X)^{-1} X^T w    (7)

The model residual is given as follows:

e = [e_1, e_2, \ldots, e_n]^T = w - X\hat{\theta}_0    (8)

where \hat{\theta}_0 is calculated with Eq. (7). Thus, e_i is the model residual of the
ith sample. The covariance matrix of e is denoted C_e.

e_{(i)} = [e_{i,1}, e_{i,2}, \ldots, e_{i,n}]^T = w_{(i)} - X_{(i)}\hat{\theta}_0    (9)

C_e = E\left[(e - E[e])(e - E[e])^T\right] = \frac{1}{k}\sum_{i=1}^{k} e_{(i)} e_{(i)}^T    (10)

However, the covariance matrix C_e may be singular, so a banded matrix M is used:

M = \mathrm{diag}(C_e, -l) + \cdots + \mathrm{diag}(C_e, 0) + \cdots + \mathrm{diag}(C_e, l)    (11)

where diag(C_e, 0) is the leading diagonal of C_e and diag(C_e, l) is the lth diagonal.
Thus, the GNARX model parameter estimate with MMDLS is given as follows:

\hat{\theta}_i = (X_{(i)}^T M^{-1} X_{(i)})^{-1} X_{(i)}^T M^{-1} w_{(i)}    (12)

where \hat{\theta}_i is the estimated parameter vector of the ith sample.
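Eqs. (7)–(12) can be sketched numerically as follows. This is an illustration under simplifying assumptions: a single residual vector stands in for the k-sample average of Eq. (10), and a small diagonal jitter keeps the banded matrix of Eq. (11) invertible (a safeguard the paper does not specify).

```python
import numpy as np

def mmdls_estimate(X, w, l=2):
    """MMDLS sketch following Eqs. (7)-(12): an ordinary LS fit yields
    residuals, whose banded covariance re-weights a generalized LS
    re-estimate of the parameters."""
    theta0, *_ = np.linalg.lstsq(X, w, rcond=None)            # Eq. (7)
    e = w - X @ theta0                                        # Eq. (8)
    e0 = e - e.mean()
    Ce = np.outer(e0, e0)                                     # Eq. (10), one sample
    n = Ce.shape[0]
    offsets = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    M = np.where(offsets <= l, Ce, 0.0) + 1e-8 * np.eye(n)    # Eq. (11) + jitter
    Minv = np.linalg.inv(M)
    return np.linalg.solve(X.T @ Minv @ X, X.T @ Minv @ w)    # Eq. (12)
```

With noise-free data the residuals vanish and the estimate reduces to the ordinary LS solution of Eq. (7).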

3.2 Structure Identification of GNARX Model


The structure pruning algorithm based on the parameters' rate of standard deviation
(SPRSD) is proposed and applied to the structure identification of the GNARX model.
The flow chart of the SPRSD is shown in Fig. 1, and the concrete steps are as follows:

Fig. 1. Flow chart of the SPRSD algorithm



Step 1: The data are divided into K groups.
Step 2: An initial model with sufficiently high order and sufficiently long memory
steps is confirmed.
Step 3: Repeat:
    Step 3.1: Parameter estimation for the K groups.
    Step 3.2: The joint AIC value is calculated with Eq. (13).
    Step 3.3: The parameters' rates of standard deviation are calculated, and the
    term with the biggest rate of standard deviation is deleted.
    Step 3.4: When only one term remains, go to Step 4; otherwise, return to
    Step 3.1.
Step 4: The model structure with the smallest AIC value is taken as the optimal
model structure.
Step 5: Finally, the algorithm terminates.
Taking modeling residuals, forecasting error, and model complexity into
consideration, a joint Akaike information criterion (AIC) is proposed. It is calculated
as follows:

AIC = \ln\left(\frac{N_m}{N}\cdot\frac{1}{K}\sum_{i=1}^{K}\sigma_{m,i}^2 + \frac{N_f}{N}\cdot\frac{1}{K}\sum_{i=1}^{K}\sigma_{f,i}^2\right) + \frac{2R}{N}    (13)

where \sigma_{m,i}^2 is the ith sample's variance of the modeling residuals;
\sigma_{f,i}^2 is the ith sample's variance of the forecasting error; R is the number of
model parameters; N is the sequence length; N_m is the modeling sequence length;
N_f is the forecasting sequence length.

4 Structural Damage Detection of Elevator Steel Plate

The GNARX model is applied to steel plate structural damage detection. With
parameter estimation and structure identification, a suitable model is developed, and
the model parameters are taken as the feature vector. With the k-nearest neighbors
(KNN) algorithm, the steel plates with and without a crack are identified.

4.1 KNN Algorithm


The main idea of KNN is to assign a new example to the class to which the majority of
its k nearest neighbors belongs. The Euclidean distance, expressed as follows, can be
used to determine the similarity between samples:

d(a, b) = \sqrt{\sum_{i=1}^{n} (a_i - b_i)^2}    (14)

where a = (a_1, a_2, …, a_n) and b = (b_1, b_2, …, b_n) are two sample data vectors,
and n is the sample length.

To evaluate the accuracy of the classification, the following indicator is used:

Accuracy = \frac{1}{n}\sum_{i=1}^{n} \delta(y_i, c_i)    (15)

where y_i and c_i denote the true category label and the obtained cluster label,
respectively; \delta is a function that equals 1 if y_i = c_i and 0 otherwise.
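Eqs. (14) and (15) translate directly into a minimal KNN classifier, sketched here with hypothetical feature vectors standing in for the GNARX parameter vectors:

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=3):
    """Assign x to the majority class among its k nearest training
    samples, using the Euclidean distance of Eq. (14)."""
    dist = np.sqrt(((train_X - x) ** 2).sum(axis=1))
    nearest = np.argsort(dist)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

def accuracy(y_true, y_pred):
    """Classification accuracy of Eq. (15)."""
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))
```

In this paper the feature vectors would be the estimated GNARX parameters, and k is swept over 3, 5, 7, …, 19 as in Tables 1 and 2.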

4.2 Data Acquisition


Two steel plates of the same size (300 mm × 100 mm × 3 mm) are prepared. One is
undamaged and the other has a through crack (50 mm × 1 mm). The crack position is
shown in Fig. 2. Four PK151 resonant sensors are mounted at four positions on the
steel plate (see Fig. 2). The signals are collected by a SENSOR HIGHWAY acoustic
emission acquisition instrument. The experimental facility is shown in Fig. 3.

The sampling frequency is 1 MHz and the sampling number is 8192. The threshold
value is 26 dB. Five groups of acoustic emission data are collected for each steel plate.
Typically, one group of data for the undamaged steel plate is shown in Fig. 4(a) and
one group for the damaged steel plate with a crack is shown in Fig. 4(b).

Fig. 2. Dimension of the damaged steel plate and the location of four measuring points

Fig. 3. The sketch map of experimental facility and the layout of sensors

Fig. 4. One group of acoustic emission data for each steel plate: (a) undamaged steel plate;
(b) damaged steel plate. Each panel plots sound pressure (µBar) against the sampling index
at measure points #1–#4.

4.3 Result and Discussion


For each of the 10 groups of data, the first 1024 points and the last 3072 points are
deleted, and the middle 4096 points are applied to the GNARX model. Every 256
points are taken as one sample (the first 200 points are used for modeling and the
remaining 56 for forecasting). Thus, the undamaged and the damaged steel plate are
each divided into 80 samples, of which 50 random samples are used for training and
the remaining 30 for testing.

The upper model takes measure point #2 as input and measure point #3 as output.
The bottom model takes measure point #1 as input and measure point #4 as output.
The distance between input and output is 160 mm. As the shear wave velocity in the
steel plate is 3000 m/s, the time delay is about 5.3 × 10⁻⁵ s and the input delay is
taken as 53 samples.
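The delay arithmetic can be checked directly (values from the text):

```python
# Input delay from geometry: 160 mm propagation path, 3000 m/s shear wave
# velocity in the steel plate, 1 MHz sampling frequency.
distance = 0.160      # m, spacing between input and output measure points
wave_speed = 3000.0   # m/s, shear wave velocity in steel
fs = 1.0e6            # Hz, sampling frequency
delay_s = distance / wave_speed         # ~5.3e-5 s
delay_samples = round(delay_s * fs)     # 53 samples -> s_u = 53
```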
For structure identification, GNARX(3; 53; 10,4,2; 8,2,2) is taken as the initial
model. With SPRSD, the upper and bottom model structures of the undamaged and
damaged steel plates are obtained as follows:
The upper model structure of the undamaged steel plate:

x_{t,1}: w_{t-1}, w_{t-3}, w_{t-4}, w_{t-5}, w_{t-7}, w_{t-8}, w_{t-10}, u_{t-s_u-3}
x_{t,2}: w_{t-3}^2, w_{t-3}u_{t-s_u-1}, u_{t-s_u-1}u_{t-s_u-2}    (16)
x_{t,3}: w_{t-1}u_{t-s_u-1}^2

The upper model structure of the damaged steel plate:

x_{t,1}: w_{t-1}, w_{t-3}, w_{t-4}, w_{t-5}, w_{t-7}, w_{t-8}, w_{t-10}, u_{t-s_u-1}, u_{t-s_u-2}, u_{t-s_u-3}
x_{t,2}: w_{t-2}w_{t-3}, w_{t-3}^2, w_{t-1}u_{t-s_u-1}, w_{t-2}u_{t-s_u-1}, w_{t-3}u_{t-s_u-1}, w_{t-3}u_{t-s_u-2}, w_{t-4}u_{t-s_u-2}, u_{t-s_u-2}^2    (17)
x_{t,3}: w_{t-1}^3, w_{t-1}u_{t-s_u-1}^2

The bottom model structure of the undamaged steel plate:

x_{t,1}: w_{t-1}, w_{t-2}, w_{t-3}, w_{t-4}, w_{t-6}, w_{t-8}, u_{t-s_u-3}
x_{t,2}: w_{t-2}^2    (18)
x_{t,3}: w_{t-1}^3

The bottom model structure of the damaged steel plate:

x_{t,1}: w_{t-1}, w_{t-2}, w_{t-3}, w_{t-4}, w_{t-6}, w_{t-8}, w_{t-10}, u_{t-s_u-2}, u_{t-s_u-3}, u_{t-s_u-5}
x_{t,2}: w_{t-1}^2, w_{t-2}^2, u_{t-s_u-1}^2, w_{t-1}u_{t-s_u-2}    (19)
x_{t,3}: w_{t-1}^3, w_{t-1}^2u_{t-s_u-1}, w_{t-1}u_{t-s_u-1}^2, u_{t-s_u-1}^3

To make the feature vectors fully exhibit the nonlinear characteristics and to keep the
vector dimension the same for the undamaged and damaged steel plate data, the final
model structures are taken as follows:
The final upper model structure:

x_{t,1}: w_{t-1}, w_{t-3}, w_{t-4}, w_{t-5}, w_{t-7}, w_{t-8}, w_{t-10}, u_{t-s_u-1}, u_{t-s_u-2}, u_{t-s_u-3}
x_{t,2}: w_{t-2}w_{t-3}, w_{t-3}^2, w_{t-1}u_{t-s_u-1}, w_{t-2}u_{t-s_u-1}, w_{t-3}u_{t-s_u-1}, w_{t-3}u_{t-s_u-2}, w_{t-4}u_{t-s_u-2}, u_{t-s_u-1}u_{t-s_u-2}, u_{t-s_u-2}^2    (20)
x_{t,3}: w_{t-1}^3, w_{t-1}u_{t-s_u-1}^2

The final bottom model structure:

x_{t,1}: w_{t-1}, w_{t-2}, w_{t-3}, w_{t-4}, w_{t-6}, w_{t-8}, w_{t-10}, u_{t-s_u-2}, u_{t-s_u-3}, u_{t-s_u-5}
x_{t,2}: w_{t-1}^2, w_{t-2}^2, u_{t-s_u-1}^2, w_{t-1}u_{t-s_u-2}    (21)
x_{t,3}: w_{t-1}^3, w_{t-1}^2u_{t-s_u-1}, w_{t-1}u_{t-s_u-1}^2, u_{t-s_u-1}^3

With parameter estimation, the model parameters are taken as the feature vectors
and the KNN algorithm is applied. The results are listed in Tables 1 and 2. For
comparison, the results of the AR, ARX, different-order GNAR, and different-order
GNARX models are also listed in Tables 1 and 2.

From the above, the following can be concluded:
(1) For steel plate structural damage detection, the classification accuracy of the
GNARX models is higher than those of the AR, ARX, and GNAR models. This
indicates that the GNARX model is suitable for structural damage detection.
(2) The classification accuracy of the GNARX model with SPRSD is the highest. This
indicates the effectiveness of SPRSD for GNARX model structure identification.
(3) The classification accuracy of the bottom model is clearly higher than that of the
upper model. This indicates that a model closer to the damaged location embodies
more damage information.

Table 1. The KNN classification accuracy of different models applied to the upper data of the
undamaged and damaged steel plates

Model                       Maximum accuracy*   Mean accuracy**
AR(6)                       66.67%              61.11%
GNAR(2;6,1)                 65.00%              61.11%
GNAR(3;8,4,1)               70.00%              65.74%
ARX(8,3,53)                 68.33%              62.78%
GNARX(2;53;7,3;1,2)         73.33%              67.22%
GNARX(3;53;8,4,1;1,2,1)     75.00%              68.89%
GNARX with SPRSD            75.00%              70.19%

* The maximum KNN classification accuracy over different k values (k = 3, 5, 7, …, 19).
** The mean KNN classification accuracy over different k values (k = 3, 5, 7, …, 19).

Table 2. The KNN classification accuracy of different models applied to the bottom data of the
undamaged and damaged steel plates

Model                       Maximum accuracy*   Mean accuracy**
AR(8)                       73.33%              68.89%
GNAR(2;6,2)                 68.33%              62.59%
GNAR(3;8,1,1)               75.00%              71.67%
ARX(10,3,53)                70.00%              67.04%
GNARX(2;53;10,2;2,2)        85.00%              83.15%
GNARX(3;53;10,2,0;5,2,1)    93.33%              86.11%
GNARX with SPRSD            95.00%              91.48%

5 Conclusions

The expression of the GNARX model is deduced. On the basis of the structural
characteristics of the GNARX model, a novel approach to parameter estimation
(MMDLS) and structure identification (SPRSD) for the GNARX model is proposed.
With the experimental data, structural damage of elevator steel plates is detected by
time series models, among which the GNARX model performs clearly better than the
other models.

In this paper, damage detection of simple steel plate structures is studied. For an
elevator car, multiple GNARX models can be developed for subsections of the car.
A parameter matrix can then be obtained, whose changes can reflect the location and
extent of the structural damage of the elevator car. However, this needs further study.

References
1. Ghiasi, R., Fathnejat, H., Torkzadeh, P.: A three-stage damage detection method for large-
scale space structures using forward substructuring approach and enhanced bat optimization
algorithm. Eng. Comput. 35, 1–18 (2019)
2. Janapati, V., Kopsaftopoulos, F., Li, F., et al.: Damage detection sensitivity characterization
of acousto-ultrasound-based structural health monitoring techniques. Struct. Health Monit.
15(2), 143–161 (2016)
3. Souridi, P., Chrysafi, A.P., Athanasopoulos, N., et al.: Simple digital image processing
applied to thermographic data for the detection of cracks via eddy current thermography.
Infrared Phys. Technol. 98, 174–186 (2019)
4. Tabatabaeipour, M., Hettler, J., Delrue, S., et al.: Non-destructive ultrasonic examination of
root defects in friction stir welded butt-joints. NDT E Int. 80, 23–34 (2016)
5. Santos, A., Santos, R., Silva, M., et al.: A global expectation–maximization approach based
on memetic algorithm for vibration-based structural damage detection. IEEE Trans. Instrum.
Meas. 66(4), 661–670 (2017)
6. Loh, C.H., Chan, C.K., Chen, S.F., et al.: Vibration-based damage assessment of steel
structure using global and local response measurements. Earthq. Eng. Struct. Dyn. 45(5),
699–718 (2016)
7. Vahidi, M., Vahdani, S., Rahimian, M., et al.: Evolutionary-base finite element model
updating and damage detection using modal testing results. Struct. Eng. Mech. 70(3), 339–
350 (2019)
8. Vamvoudakis-Stefanou, K.J., Sakellariou, J.S., Fassois, S.D.: Vibration-based damage
detection for a population of nominally identical structures: Unsupervised Multiple Model
(MM) statistical time series type methods. Mech. Syst. Signal Process. 111, 149–171 (2018)
9. Ma, J., Xu, F., Huang, K., et al.: Improvement on the linear and nonlinear auto-regressive
model for predicting the NOx emission of diesel engine. Neurocomputing 207, 150–164
(2016)
10. Huang, R., Xu, F., Chen, R.: General expression for linear and nonlinear time series models.
Front. Mech. Eng. China 4(1), 15–24 (2009)
Production Management
The Innovative Development and Application
of New Energy Vehicles Industry
from the Perspective of Game Theory

Jianhua Wang¹,² and Junwei Ma²

¹ Evergrande School of Management, Wuhan University of Science
and Technology, Hongshan District, Wuhan, China
[email protected]
² School of Economics and Management, Changshu Institute of Technology,
No. 99, 3rd South Ring Road, Changshu, China
[email protected]

Abstract. Since their advent, new energy vehicles have attracted much
attention from many parties, but their market performance has not been very
competitive. In order to make more consumers accept new energy vehicles, and
thus improve the national energy consumption structure, how to effectively
promote new energy vehicles has become an urgent problem for the
government to solve. This paper regards the marketing of new energy vehicles
as a typical game process. By constructing game models for the supply and
demand sides (automobile manufacturer and consumer), the supply side
(automobile manufacturer and competitor), and government and enterprise, the
optimal game solutions of the participants in the new energy vehicle market are
analyzed. Finally, this paper proposes that consumers should actively upgrade
their product demands, that auto manufacturers should focus on improving
product profitability, and that the government should use different support
policies at different stages.

Keywords: New energy vehicles · Innovative development · Market
promotion · Game theory

1 Introduction

Under the impact of the 2008 world financial crisis, oil prices rose, global car sales
fell, and auto makers began to rethink vehicle structure and the upgrading of
production technology. At the same time, with growing public awareness of the crisis
of traditional energy exhaustion and of the increasing pollution of modern society,
such as haze and acid rain, calls for ecological environment protection and sustainable
development have become intense. Under this multifaceted situation, energy-saving
and new energy vehicles have developed rapidly under the intensive technology
innovation support led by governments and have begun to gradually enter the public
view. China is the world's fastest-growing auto market, with more than 23.6 million
vehicles sold in 2016. By 2020, China is projected to have around 300 million
automobiles, which would surpass the current U.S. fleet of 265 million. Some

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 535–544, 2020.
https://doi.org/10.1007/978-981-15-2341-0_67
536 J. Wang and J. Ma

consumers are willing to consume for environmental protection because of the
deterioration of their living environment, but many other consumers still have many
concerns about new energy vehicles, such as cost performance, safety, and
practicality. As a result, all car manufacturers have intensified their technological
research and joined the market competition. In addition, automobile manufacturers
and the government have different demands. Car manufacturers pursue profit
maximization, because shifting from traditional energy vehicles to new energy
vehicles requires a large cost investment. The government hopes to create more social
value and implement the concept of sustainable development, so it needs to publicize
and advocate new energy vehicles and guide market demand. However, because of the
high cost of new energy vehicles, their market price has remained high. Owing to
price, technology, and supporting facilities, the number of consumers willing to buy
new energy vehicles is still small, even though the government has introduced
preferential subsidy policies. As a product that breaks through energy restrictions, the
new energy vehicle is conducive to changing the national energy consumption
structure, and its marketing promotion is therefore particularly important.

2 Literature Review

Game theory was founded in 1944 by John von Neumann and Oskar Morgenstern.
In later studies, foreign scholars (Harsanyi [1], Nash, Selten) continued to enrich the
theoretical framework and contributed important achievements such as the Harsanyi
transformation and the Nash equilibrium.
In the study of new energy vehicle marketing promotion, the foreign scholar Scott
believes that the government's subsidy policy has a positive effect on enterprises, i.e.,
it stimulates their R&D investment [2]. Chinese scholars generally believe that the
development of new energy vehicles is in line with China's national conditions: it can
deal with the impact of the international financial crisis while solving energy and
environmental problems, and it is an important breakthrough point for industrial
upgrading and the establishment of strategic new industries [3].
Shen [4], Zhang [5], Xu [6], and Luo [7], respectively from the perspectives of
product, publicity and marketing strategy, marketing mode, and policy support,
pointed out the current problems in the development of new energy vehicles in China
and put forward solutions and suggestions. Against the background of the
development of new energy vehicles in Japan, Jin [8] put forward optimization
strategies for the promotion of new energy vehicles in China by studying the specific
practices of, and the factors that restrict, new energy vehicle promotion in Japan.
Combining market promotion and game theory, Wang and Miu obtained a more
reasonable government subsidy mode through a game model between government
subsidies and enterprises [9]. Wang and Wang used a game evolution model to study
the income matrix of the government and related enterprises in the process of new
energy vehicle technology R&D, and put forward proposals to promote the technology
development of new energy vehicles by analysing the optimal solution [10].

It can be seen that many scholars have proposed solutions to the promotion of new
energy vehicles from the aspects of products, prices, channels, and promotion. On this
basis, this paper tries to establish game models for the promotion of new energy
vehicles. From the three perspectives of the supply and demand sides, the supply side,
and government and enterprise, this paper analyzes the strategy combinations of the
game players and seeks equilibrium strategies, so as to provide countermeasures and
suggestions for the market promotion of new energy vehicles.

3 Case Analysis and Propositions

New energy vehicles, as new products of technological innovation, have received
favorable reviews from all walks of life, but their market performance is not good.
How to effectively improve their market performance has become the focus of
attention of all participants. In the process of market promotion of new energy
vehicles, there are three main roles: the supply side, the demand side, and the
government. They hold different positions and seek different benefits in the
development of new energy vehicles.

3.1 Model Assumptions and Definitions


Since there are three main participants in the game of new energy vehicle market
promotion, in order to make a more detailed game analysis, this paper builds models
from three aspects and makes the following basic assumptions.

Hypothesis 1: in the game model of the supply and demand sides, automobile
manufacturers and automobile consumers are the main participants, and government
policy is treated as an influencing factor. In the game model of the supply side,
automobile manufacturers and their competitors are the main participants, and
government policy is again treated as an influencing factor. In the game model of
government and enterprise, the government and automobile manufacturers are the
main participants.

Hypothesis 2: all participants in the market promotion game of new energy
vehicles are rational. Automobile manufacturers aim at maximizing their own
interests, consumers pursue their own purchase demands, and the government takes
the healthy and orderly development of new energy vehicles as its utility
maximization.

Hypothesis 3: the demand for automobile products is limited; the new energy
vehicle market is at its initial stage and its market share is relatively low. Production
follows a fast-response mode with no inventory. Consumers' consumption demand is
elastic: it is driven by price fluctuations and does not necessarily consider
environmental benefits.

Hypothesis 4: all participants in the market promotion game of new energy
vehicles understand each other's characteristics, the sets of action strategies that can
be chosen, and their utilities.

Hypothesis 5: all the games are static, that is, the actions of all participants take
place at the same time; there is no first mover or follower.

3.2 Model Analysis and Recommendations

3.2.1 Game Model Analysis of the Supply and Demand Sides in the New Energy
Vehicle Market
Under the influence of government policies, this paper constructs a game model of
supply and demand for manufacturers and consumers in the new energy vehicle
market, with the following definitions.

The utility of a consumer who buys a car is UC; the profit of a car manufacturer
that promotes a new energy vehicle is Un; the profit of a car manufacturer that
promotes a traditional vehicle is U0; the cost for a car manufacturer of promoting a
new energy vehicle to the market is Cn; the cost of promoting a traditional vehicle to
the market is Co; the policy subsidy for consumers who buy new energy vehicles is
Sc; and the policy subsidy for new energy vehicle manufacturers is Sm.

According to the basic assumptions, under the influence of government policies,
the game model of new energy vehicle manufacturers and consumers is shown in
Table 1.

Table 1. Payoff matrix of vehicle manufacturers and customers

Consumers        Manufacturers
                 Promotion of new energy vehicles   Promotion of traditional vehicles
Consumption      (Sc + UC, Sm + Un)                 (UC, Uo)
No consumption   (0, -Cn)                           (0, -Co)

Based on the consumer’s perspective, consumers have multiple decision choices


when considering automobile products, and. The income obtained is Sc + UC or UC
when they choose to consume. At this time, under the influence of government policy,
the income Sc + UC of buying new energy vehicles is obviously greater than the
benefit UC from the purchase of traditional vehicles. Consumers can also choose not to
spend money on vehicles with earning zero.
Based on manufacturer’s point of view, the automobile manufacturer also has
multiple decision choices in car market. Its utility is Sm + Un or -Cn when manu-
facturer choose to promote the new energy vehicle. The utility is not necessarily
positive. This depends on consumer decision. If the consumer’s dominant strategy is to
consume, the manufacturer’ utility is Sm + Un. If the consumer chooses not to con-
sume, the manufacturer needs to bear a certain cost which is -Cn. Owing to no
inventory factors in previous assumption, so here the cost consists of R & D, tech-
nology, patents and other aspects.
Automobile manufacturers can also choose to promote traditional vehicles, with a
utility of Uo or -Co. In the same way, the utility is not necessarily positive and
depends on the consumer's decision. If the consumer's dominant strategy is
consumption, the utility Uo is obviously greater than -Co. If the consumer chooses
not to consume, the manufacturer bears a certain cost, -Co.
Automobile manufacturers and consumers, who are both rational, are certain to
consider the best decision of the other party when making their best action.
The Innovative Development and Application of New Energy Vehicles Industry 539

To sum up, the utilities are assigned and compared in order to find the optimal
solutions under different circumstances in the new energy vehicle game. Because of
the influence of government policy, Sc + UC is clearly greater than UC or 0, so each
other's best decisions need only be considered when consumers have vehicle
consumption demand.
1. If Sc + UC is greater than UC and Sm + Un is greater than Uo, the equilibrium
   solution is (consumption, promoting new energy vehicles).
2. If Sc + UC is greater than UC and Sm + Un is less than Uo, the equilibrium
   solution is (consumption, promoting traditional vehicles).
Conclusion: when the profits of new energy vehicles are greater than those of
traditional vehicles, the cost factors of new energy vehicles and consumers' demand
for them have little effect on the optimal result. Under these circumstances, vehicle
manufacturers are more inclined to promote new energy vehicles. As new energy
vehicles have certain environmental benefits, consumers will be more willing to buy
them as their environmental awareness grows.
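The comparison above can be checked mechanically. The following Python sketch enumerates the pure-strategy Nash equilibria of the Table 1 game; the numeric payoff values are hypothetical, chosen only so that Sc + UC > UC > 0 and Sm + Un > Uo hold as in case 1 (they are not values from the paper):

```python
# Hypothetical payoff values for illustration only; Sc, UC, Sm, Un, Uo, Cn, Co
# are the symbols defined in the text, not values from the paper.
Sc, UC, Sm, Un, Uo, Cn, Co = 2.0, 5.0, 3.0, 4.0, 6.0, 7.0, 5.0

# Payoff matrix of Table 1: rows = consumer, cols = manufacturer.
# Each cell is (consumer payoff, manufacturer payoff).
payoffs = {
    ("consume", "new"): (Sc + UC, Sm + Un),
    ("consume", "traditional"): (UC, Uo),
    ("abstain", "new"): (0.0, -Cn),
    ("abstain", "traditional"): (0.0, -Co),
}

consumer_moves = ["consume", "abstain"]
manufacturer_moves = ["new", "traditional"]

def pure_nash(payoffs, rows, cols):
    """Enumerate cells where neither player gains by deviating unilaterally."""
    eq = []
    for r in rows:
        for c in cols:
            u_r, u_c = payoffs[(r, c)]
            best_r = all(payoffs[(r2, c)][0] <= u_r for r2 in rows)
            best_c = all(payoffs[(r, c2)][1] <= u_c for c2 in cols)
            if best_r and best_c:
                eq.append((r, c))
    return eq

print(pure_nash(payoffs, consumer_moves, manufacturer_moves))
# [('consume', 'new')] -- the equilibrium of case 1
```

With these numbers Sm + Un = 7 exceeds Uo = 6, so the unique equilibrium is (consumption, promoting new energy vehicles), matching case 1; lowering Sm or Un below that threshold reproduces case 2.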

3.2.2 Game Model Analysis of Supply Side in New Energy Vehicle Market
In the case of government policy intervention, this paper constructs a game model
of the automobile manufacturers and their competitors in the new energy vehicle
market, and makes the following definition: the profit of a vehicle manufacturer
promoting new energy vehicles is Un.
According to the basic assumption, the game model between new energy vehicle
manufacturers and their competitors under the influence of government policies is
shown in Table 2.

Table 2. Payoff matrix of vehicle manufacturers and their competitors

Competitors                         Manufacturers
                                    Promotion of new energy vehicles   Promotion of traditional vehicles
Promotion of new energy vehicles    (Un, Un)                           (Un, 0)
Promotion of traditional vehicles   (0, Un)                            (0, 0)

Based on the competitors’ point of view, the competitors have two options. They
can choose to promote new energy vehicles and obtain the profit Un, or they can
choose to promote the traditional vehicles and obtain the profit 0. The profit 0 here
refers to no enjoyment of government’s policies welfare in promoting traditional cars,
which is no more utility than conventional gains.

Based on manufacturers’ point of view, the manufacturers also have two options
when choosing the product. They can choose to promote new energy vehicles and
obtain the profit Un, or they can choose to promote the traditional vehicles and obtain
the profit zero.
As rational-economic man, the manufacturers and their competitors will consider
the best decision of the other party when making their best action.
In summary, because Un is greater than zero, the equilibrium solution is (promote
new energy vehicles, promote new energy vehicles).
Conclusion: when the profit of new energy vehicles is higher than that of
traditional vehicles, the manufacturer and the competitor will both choose to
promote new energy vehicles. However, as new energy vehicles are new products, they
face great challenges when entering the market. Without knowing the other's action,
a rational manufacturer will choose to promote traditional vehicles in order to
avoid risk.
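Because each cell of Table 2 pays a firm Un if it promotes new energy vehicles and 0 otherwise, regardless of the rival's move, the equilibrium follows from strict dominance. A minimal sketch, with a hypothetical value for Un:

```python
# Illustrative check that "promote new energy vehicles" strictly dominates
# when Un > 0 (Table 2). Un is the profit symbol from the text; its value
# here is an assumption for demonstration only.
Un = 4.0

# Manufacturer's payoff given (own move, rival move), per Table 2.
def payoff(own, rival):
    # In this game the payoff depends only on the firm's own choice.
    return Un if own == "new" else 0.0

moves = ["new", "traditional"]
# "new" strictly dominates "traditional" iff it pays more against every rival move.
dominates = all(payoff("new", r) > payoff("traditional", r) for r in moves)
print(dominates)  # True whenever Un > 0, giving the equilibrium (new, new)
```

Since the same dominance holds for the competitor by symmetry, (promote new energy vehicles, promote new energy vehicles) is the unique equilibrium whenever Un > 0.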

3.2.3 Game Model Analysis Between Government and Enterprise in New Energy Vehicle Market
In the case that the government's revenue from maintaining the order of the
automobile market is greater than the maintenance cost, this paper constructs the
game model between the automobile manufacturer and the government in the new energy
vehicle market, and makes the following definitions:
The profit of the vehicle manufacturer promoting new energy vehicles is Un. The
government gains Ug from maintaining the order of the vehicle market and incurs cost
Cg in doing so. The cost of the vehicle manufacturer promoting new energy vehicles
is Cn. The profit of the vehicle manufacturer promoting traditional vehicles is Uo.
When the government maintains market order, the cost borne by a vehicle manufacturer
promoting traditional vehicles is Co.
According to the basic assumption, the game model between new energy vehicle
manufacturers and the government is shown in Table 3.

Table 3. Payoff matrix of vehicle manufacturers and government

Government             Manufacturers
                       Promotion of new energy vehicles   Promotion of traditional vehicles
Market maintenance     (Ug-Cg, Un-Cn)                     (Uo-Ug, -Co)
No market maintenance  (Un, Un-Cn)                        (0, 0)

From the perspective of the government, the government has two choices in this game.
First, it can choose to maintain the order of the vehicle market to ensure healthy
and orderly development. In this case, the government's revenue is Ug-Cg or Uo-Ug;
the specific revenue is determined by the dominant strategy of the other side in the
game.

The government can also choose not to maintain the order of the vehicle market and
let it develop naturally. In that case, the government's revenue is Un or 0. Un
refers to the benefit brought by the increasing market share of new energy vehicles;
0 means there is no additional revenue, only additional cost, when manufacturers
promote traditional vehicles without the government maintaining order in the vehicle
market.
Based on vehicle manufacturer’s point of view, vehicle manufacturer still has two
choices when choosing the vehicle. It can choose to promote the new energy vehicle,
whose utility is Un-Cn. The utility is not necessarily positive, which needs to be
determined according to the government’s decision. If the government’s dominant
decision is to maintain the vehicle market, the earning of the vehicle manufacturer is
Un-Cn. And if government does not maintain the market, the earning of the vehicle
manufacturer to promote new energy vehicles is still Un-Cn.
The vehicle manufacturer can also choose to promote traditional vehicles, with a
profit of -Co or zero; the specific utility is determined by the government's
dominant decision. If the government chooses to maintain market order, the
manufacturer faces a certain penalty, -Co. If the government does not maintain
market order, the manufacturer promoting traditional vehicles gains zero; zero here
means the manufacturer does not need to bear the payment used to maintain the market
and obtains no additional utility.
As rational economic men, the vehicle manufacturer and the government must consider
the best decision of the other party first when making their best action.
To sum up, we compare the utilities in order to find out the optimal solution for the
new energy vehicle promotion under different circumstances.
3. If Un-Cn is greater than zero and Uo-Ug is less than zero, the equilibrium
   solution is (not to maintain the market, to promote new energy vehicles).
4. If Un-Cn and Uo-Ug are both greater than zero, the equilibrium solution is (not
   to maintain the market, to promote new energy vehicles).
5. If Un-Cn is less than -Co and Ug is greater than Uo, the equilibrium solution is
   (not to maintain the market, to promote traditional vehicles).
6. If Un-Cn is less than -Co and Ug is less than Uo, the equilibrium solution is
   (to maintain the market, to promote traditional vehicles).
7. If -Co is less than Un-Cn, Un-Cn is less than zero, and Ug is greater than Uo,
   the equilibrium solution is (not to maintain the market, to promote traditional
   vehicles).
8. If -Co is less than Un-Cn, Un-Cn is less than zero, and Ug is less than Uo, the
   situation is special. When the vehicle manufacturer's decision is to promote new
   energy vehicles, the government's optimal decision is not to maintain market
   order; when the manufacturer's decision is to promote traditional vehicles, the
   government's best decision is to maintain market order. When the government's
   decision is to maintain market order, the manufacturer's best decision is to
   promote new energy vehicles; when the government's decision is not to maintain
   market order, the manufacturer's best decision is to promote traditional
   vehicles. Since it is a static game, there is neither a pure-strategy Nash
   equilibrium nor an optimal solution.
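The best-response cycle of case 8 can be verified by enumerating the cells of Table 3. In this sketch the parameter values are hypothetical, chosen only so that -Co < Un - Cn < 0, Ug < Uo and Un > Ug - Cg hold; they are not values from the paper:

```python
# Hypothetical numbers for case 8 (a best-response cycle). Symbols follow
# Table 3: -Co < Un - Cn < 0, Uo > Ug, and Un > Ug - Cg.
Un, Cn, Co, Uo, Ug, Cg = 3.0, 4.0, 2.0, 5.0, 4.0, 2.0

# Cells are (government payoff, manufacturer payoff), per Table 3.
payoffs = {
    ("maintain", "new"): (Ug - Cg, Un - Cn),
    ("maintain", "traditional"): (Uo - Ug, -Co),
    ("no_maintain", "new"): (Un, Un - Cn),
    ("no_maintain", "traditional"): (0.0, 0.0),
}

gov_moves = ["maintain", "no_maintain"]
mfr_moves = ["new", "traditional"]

equilibria = [
    (g, m)
    for g in gov_moves
    for m in mfr_moves
    # Nash condition: no player can improve by a unilateral deviation.
    if all(payoffs[(g2, m)][0] <= payoffs[(g, m)][0] for g2 in gov_moves)
    and all(payoffs[(g, m2)][1] <= payoffs[(g, m)][1] for m2 in mfr_moves)
]
print(equilibria)  # [] -- no pure-strategy Nash equilibrium in this case
```

Every cell admits a profitable unilateral deviation, so the best responses cycle exactly as described in case 8.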
Conclusion: when the profits of new energy vehicles are not as high as those of
traditional vehicles, vehicle manufacturers are more willing to focus their
production on the original products. But the government's demands differ from the
manufacturers': it hopes that market consumption can adjust and upgrade itself, with
both manufacturers and customers turning to new energy vehicles. This can be
realized when the profits of new energy vehicles exceed those of traditional
vehicles; in the current market, however, costs must be cut further to make the
profits of new energy vehicles match those of traditional vehicles. Therefore, the
government should maintain order in the vehicle market so as to ensure the smooth
promotion of new energy vehicles.

4 Conclusions

As a whole, new energy vehicles are not mature enough and have not been widely
accepted by consumers. The profit from promoting new energy vehicles may not match
that from promoting traditional vehicles. As a rational economic man, a vehicle
manufacturer will not, for the sake of the environment, take on the social
responsibility of promoting new energy vehicles alone. The market demand for new
energy vehicles is unknown, and no one can bear the losses manufacturers would face
in promoting them.
At this time, government’s intervention is needed to guide the healthy and orderly
development of the new energy vehicle industry. Firstly, we should make macro
control means play a role in the development of new energy vehicle market, and let the
policy be effectively landed. Secondly, government needs to guide consumer demand,
so that vehicle consumers can accept and purchase new energy vehicles.
First, consumers should pay close attention to favourable policies in the new energy
vehicle market and seize the opportunity to enjoy welfare subsidies. At the same
time, whether they value convenience, safety or other attributes, they should
clarify their own demands for the product itself and communicate them effectively to
vehicle manufacturers through reasonable channels, to ensure that the manufactured
products can meet their needs. In addition, consumers should improve their
environmental awareness and not leave these efforts to the vehicle manufacturers
alone. When both supply and demand turn to new energy vehicles, the market can enter
a virtuous cycle and jointly improve the energy consumption structure.
Second, when promoting new energy vehicles, vehicle manufacturers should further
reduce production and channel-management costs. Manufacturers should increase
investment in technology, break through the limitation of poor battery endurance so
that the battery endurance of new energy vehicles is greatly improved, and reduce
production costs through mass production or cooperation with other manufacturers.
Vehicle manufacturers must seize the opportunities of the "Internet Plus" era:
production and distribution should be as flat as possible in order to respond
quickly, save marketing costs and enhance overall competitiveness. Beyond product
quality, service cannot be ignored. As a new product, the new energy vehicle meets a
certain consumer resistance; manufacturers need to provide good after-sales service
and invest in supporting facilities in the areas covered by the market, including
more charging piles and equipment maintenance, in order to eliminate consumer
concerns. They should avoid price wars with competitors, which do not help the
healthy, long-term development of the new energy vehicle industry. Manufacturers
should build their own brands with correct positioning and pay close attention to
consumers' product demands, in order to meet consumers' needs while upgrading and
developing their products.
Third, the government should play different roles at different stages of promoting
new energy vehicles. At the beginning, it should guide consumption through publicity
and consumer education in the field of new energy, and introduce purchase subsidy
policies to reduce consumers' cost of use. The government can also adjust the fuel
tax policy to raise the cost of using traditional vehicles. At the same time,
through technology subsidy policies, automobile manufacturers should be encouraged
to carry out technological innovation, cultivate professional talent, strengthen the
protection of intellectual property rights, and establish a national technical
standard system based on China's national conditions, so as to meet the needs of new
energy vehicle production. Meanwhile, the government should maintain the market
order of new energy vehicles, resolutely prevent unscrupulous manufacturers from
committing subsidy and tax fraud, and not let "bad money drive out good". In the
middle period of market development, the government should focus on building
service infrastructure that large manufacturers cannot accomplish by themselves,
such as public charging piles and maintenance facilities, and should purchase new
energy vehicles for the public transport sector. Through these efforts, the
government assists manufacturers and other forces in promoting the market uptake of
new energy vehicles.

Acknowledgement. This research was financially sponsored by fund of six talents peak in
Jiangsu province (Grant No. JY-001), Project of Philosophy and Social Science Research in
Colleges and Universities in Jiangsu Province (Grant No. 2017ZDIXM004). We would like
to thank the anonymous referees for their insightful comments and suggestions, which
led to significant improvement and better presentation of the paper.

Survey and Planning of High-Payload
Human-Robot Collaboration: Multi-modal
Communication Based on Sensor Fusion

Gabor Sziebig

Department of Production Technology, SINTEF Manufacturing, Trondheim, Norway
[email protected]

Abstract. Human-Robot Collaboration (HRC) has gained increased attention with the
widespread commissioning and use of collaborative robots. However, recent studies
show that fenceless collaborative robots are not as harmless as they look. In
addition, collaborative robots usually have a very limited payload (up to 12 kg),
which is not sufficient for most industrial applications. To use high-payload
industrial robots in HRC, today's safety systems have only one option: limiting the
speed of robot motion execution and adding redundant systems for the supervision of
forces. The reduction of execution speed reduces efficiency, which limits the wider
adoption of automation. To overcome this limitation, in this paper we propose a
novel fusion of different safety-related sensors, combined so that they ensure
safety while the human operator can focus on task execution and communicate with
the system in a natural way. Different (multi-modal) communication channels are
explored and demonstration scenarios are presented.

Keywords: Human-Robot collaboration · Industrial robot · Multi-modal communication · Sensor fusion

1 Introduction

Automation is a tool to increase productivity while decreasing the amount of human
involvement in production. After the introduction of automation, humans are employed
either solely in hard-to-automate processes or in very high-skilled processes. The
interface toward automated industrial equipment (such as an industrial robot) is
usually a screen or a keyboard, where the interaction is regulated and feels
unnatural. The only way we can build trust in and connection with industrial robots
is to think of them as co-workers and communicate with them in a natural way,
without buttons, scripting languages, etc.
In order to increase acceptance among co-workers, the communication channels need
to become fenceless. As soon as a co-worker sees a machine behind fences (which is
typical today for industrial robots), he or she associates danger with that machine,
which results in resistance toward acceptance.
In this paper first the state-of-the-art results (Sect. 2) from the perspective of
Human-Robot Collaboration (HRC), cognition in HRC and safety in HRC will be
© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 545–551, 2020.
https://doi.org/10.1007/978-981-15-2341-0_68
546 G. Sziebig

presented. This is followed by a proposal of different cooperation scenarios
(Sect. 3) and a discussion of these scenarios (Sect. 4). Concluding remarks are
given in Sect. 5.

2 Related Research

One of the main concerns in the realization of Human-Robot Collaboration systems is
the safety of the human. Robots used for automation in production are usually
industrial robotic arms (i.e. rigid) with load capacities ranging from 1 to 1000 kg.
Lightweight robots grant human safety through the use of a power and force limitation
approach. For lightweight robots with a load capacity of about 20 kg, various tactile
sensors have been developed which ensure controlled collision with humans. Tactile
sensors which have been proposed for the development of “Robot Skin” and Robot cell
flooring [1, 2] ensure detection of stationary and moving objects.
Robots with higher load capacity are dangerous for humans if left running at full
speed in their presence [3], since power and force limitation cannot be applied. In
such a context, industrial practice encloses the robot working region with steel
barriers or certified safety sensors. Once these barriers or doors are opened, the
routine comes to a complete halt; the robots are not equipped with perception
systems that would make them aware of their surroundings. Eventually, various
technologies and safety regulations were developed to overcome these limitations.
Technologies ranging from tactile sensors to 3D sensors [4] have been developed, as
well as a set of safety norms for collaborative tasks in robotic cells. Various
modes of collaboration with humans have also been developed, invoking different
levels of safety and security. Once the robot is set to the initial level of
collaboration, where it only shares the workspace with the human, it reduces its
speed as the human comes near and stops completely once the human could collide
with it (speed and separation monitoring) [5]. In general, most of the vision
algorithms and proposed safety functions [6, 7] implement safety procedures with
respect to a specific task, without considering the variation of the shared tasks
required between human and robot.
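Speed and separation monitoring as described above can be sketched as a simple distance-to-speed mapping; the thresholds below are illustrative assumptions, not values from any safety standard:

```python
# Minimal sketch of speed-and-separation monitoring: the robot's speed is
# scaled by the measured human distance. The 0.5 m / 2.0 m thresholds are
# illustrative assumptions only.
def speed_factor(distance_m, stop_at=0.5, full_at=2.0):
    """Return a speed scaling factor in [0, 1] from human distance in metres."""
    if distance_m <= stop_at:      # human could collide with the robot: full stop
        return 0.0
    if distance_m >= full_at:      # human far away: full programmed speed
        return 1.0
    # linear slowdown between the two thresholds
    return (distance_m - stop_at) / (full_at - stop_at)

print(speed_factor(0.3), speed_factor(1.25), speed_factor(3.0))  # 0.0 0.5 1.0
```

A certified implementation would derive the thresholds from the robot's stopping distance and the human's approach speed rather than fixed constants.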
Useful information that a human can convey to a robot includes gestures [8, 9],
verbal commands and physical interaction with digital devices such as buttons and
touch screens. The computer vision community has recently made impressive
developments in the detection and recognition of human actions and behaviour (see
e.g. [10–14]). Speech recognition and the interpretation of commands for devices is
also an up-and-coming development moving toward the consumer market (e.g. Amazon
Alexa and Google Home). Physical interaction is the current standard in interaction
with robotic equipment. Buttons and interfaces such as smartphones, tablets and
other touchscreens are well developed and used abundantly. However, these come with
the drawback of introducing an intrusive device into the working area of the human.
This also holds for current state-of-the-art augmented or virtual reality glasses
(Microsoft HoloLens, Meta 2 AR headset). While most of these devices are intended
for the consumer market and research is mostly demonstrated in non-industrial
environments, slow integration toward industry is expected.
Survey and Planning of High-Payload Human-Robot Collaboration 547

When robots are used in unstructured environments, it is necessary to combine
multiple sensory inputs to satisfy the safety levels required for human-robot
collaboration. In addition, sensor fusion is needed for path planning, force control
and fault diagnostics in these cases. The choice of the appropriate sensor(s)
depends on the task and the environment in which the human-robot collaboration takes
place. In the specific case of unstructured environments, where humans are in the
working space of the robots, range and proximity sensors are the most important; in
addition, touch, vision or sound sensors may be needed. More details can be found
in [15].
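For safety purposes, redundant distance estimates from such sensors are typically fused conservatively, so that any single sensor can trigger a slowdown. A minimal sketch (sensor names and readings are hypothetical):

```python
# Conservative fusion of redundant distance estimates: take the minimum
# valid reading, so the worst-case (closest) estimate governs the safety
# reaction. Sensor names and values are hypothetical.
def fuse_min_distance(readings, max_range=5.0):
    """readings: dict sensor -> distance in metres (None = no detection)."""
    valid = [d for d in readings.values() if d is not None and 0.0 <= d <= max_range]
    return min(valid) if valid else max_range  # no detection: assume max range

readings = {"lidar": 1.8, "proximity": None, "vision": 2.3}
print(fuse_min_distance(readings))  # 1.8 -- the closest valid estimate wins
```

Taking the minimum is a deliberately pessimistic fusion rule; probabilistic alternatives (e.g. Kalman-style filtering) trade some of that conservatism for smoother estimates.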

3 Scenarios

In this section, after a short introduction of the proposed architecture, scenarios
are described which highlight the novelty of sensor-fusion-based Human-Robot
Collaboration.
System Overview
The Intelligent Factory Space (IFS) concept represents a framework for interaction
between a human and an automated system (e.g. industrial robot, mobile robot or CNC
machine). Multiple layers build up the IFS, representing services for humans in a
modular way.
The IFS concept was previously developed by the author [16]; only a very short
overview is given here, to provide an understanding for the following scenarios and
cases.
The IFS architecture is composed of three layers, as illustrated in Fig. 1. The
layers are ordered hierarchically, mirroring the necessary autonomy and requirements
of each layer. The combination of these layers forms the Intelligent Factory Space.
The IFS layers offer specific services in order to increase comfort and
collaboration for the human operators interacting with the automated machines. To
establish the connection between the IFS and humans, a physical interface is
proposed, which can be seen in Fig. 2. This physical interface is called POLE and
is placed in the lowest layer (single element). The available functions and services
are also shown in Fig. 2.
Overall Scenario
The industrial robot loads and unloads goods from a pallet and places the heavy
boxes on a transport conveyor. The operator delivers the pallets using a pallet jack
and also removes the emptied pallets. An overview of the scenario can be seen in
Fig. 3. In the standby case, the operator is outside the theoretical work-zone of
the industrial robot, and the POLE system projects a green circle around the robot's
work-zone, signalling that everything is fine.
Cases
The operator enters the work-zone of the industrial robot in order to carry out his
or her work. As soon as the operator enters the work-zone, the POLE system detects
this and notifies the worker that he/she is recognized by projecting a green circle
around the worker, as seen in Fig. 4.

Fig. 1. General architecture of the IFS

Fig. 2. Functions and services of POLE

The closer the worker comes to the industrial robot, the more the circle's colour
turns toward red, warning the operator that the situation is comfortable neither
for the human nor for the industrial robot, see Fig. 5. The factory management
cloud learns the standard behaviour of a worker for the typical execution of the
tasks happening in the safety-critical zones. If there is a deviation, either on the
worker or on the robot side, the POLE system can warn the operator and take
countermeasures on robot task execution in order to prevent any unwanted event. The
POLE system adapts to the task's natural execution and limits or modifies the
industrial robot's path for maximum safety. As the POLE system is designed to
provide two-way communication and behaviour learning, it can also detect whether a
worker is getting tired and can adapt to the capabilities or mood of the given
worker interacting with the industrial robot.
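The green-to-red circle feedback can be sketched as a linear colour interpolation over the worker's distance; the distance thresholds and RGB ramp below are illustrative assumptions, not parameters of the POLE system:

```python
# Sketch of the projected-circle feedback: interpolate the circle colour
# from green (worker far) to red (worker close). Thresholds and the RGB
# ramp are illustrative assumptions.
def circle_rgb(distance_m, near=0.5, far=2.5):
    """Map worker distance to an (R, G, B) tuple: green when far, red when near."""
    t = (distance_m - near) / (far - near)   # 0 at 'near', 1 at 'far'
    t = max(0.0, min(1.0, t))                # clamp to [0, 1]
    return (int(255 * (1 - t)), int(255 * t), 0)

print(circle_rgb(3.0))   # (0, 255, 0) -- green, worker far away
print(circle_rgb(0.4))   # (255, 0, 0) -- red, worker too close
```

In between the thresholds the colour shifts continuously through yellow and orange, giving the worker graded rather than binary feedback.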

Fig. 3. Scenario overview

Fig. 4. Worker detected in work-zone of industrial robot

If any additional equipment used to carry out the task remains in the work-zone of
the industrial robot for some reason, the equipment is highlighted in a similar way
to a human being, and the operator is warned about the situation.

Fig. 5. The operator is in close proximity to the industrial robot; the red circle
signals the unnecessary proximity to the industrial robot

4 Discussion

State-of-the-art communication between a human and a robot is typically limited to
displaying information on a screen in industrial settings, and to speech in research
settings. Industrial environments are noisy, which limits the possibilities for more
intuitive communication. To overcome this challenge, a multi-modal communication
solution is proposed. Such a solution can help present information to senses other
than the one it was originally assigned to.
For multi-modal communication, sensory data fusion is necessary. It ensures that the
environment in which the robot is placed is monitored continuously and that feedback
from the robot can be communicated back to the human in a natural way. This will be
even more crucial when the human can actually "touch" the robot. The appropriate
control parameters for such cooperation could be selected in agreement with the
human; this would also allow the robot and the human to share responsibility when
sharing the workspace. The Intelligent Factory Space is such an environment, in
which human-robot collaboration combined with multi-modal communication could be
achieved.

5 Conclusion

In this paper, Human-Robot Collaboration scenarios have been introduced. The
scenarios are based on the Intelligent Factory Space concept and describe the use of
the IFS. It can be stated that such scenarios are not possible under today's safety
standards, especially when high-payload industrial robots, rather than so-called
collaborative robots, are to be used. Today's systems are "fool-proof" and need to
change; with the scenarios detailed here, this could be a first step toward fair
responsibility sharing between human co-workers and industrial robots.

Acknowledgements. The work reported in this paper was supported by the centre for
research-based innovation SFI Manufacturing in Norway, and is partially funded by
the Research Council of Norway under contract number 237900.

References
1. Youssefi, S., Denei, S., Mastrogiovanni, F.: A real-time data acquisition and processing
framework for large-scale robot skin. Robot. Auton. Syst. 68, 86–103 (2015)
2. IFF: Tactile sensor system, 15 February 2017. http://www.iff.fraunhofer.de/content/dam/iff/
en/documents/publications/tactile-sensor-systems-fraunhofer-iff.pdf
3. Haddadin, S.: Injury evaluation of human-robot impacts. In: IEEE International Conference
on Robotics and Automation ICRA 2008 (2008)
4. https://www.pilz.com/en-INT/eshop/00106002207042/SafetyEYE-Safe-camera-system
5. Szabo, S., Shackleford, W., Norcross, R., Marvel, J.: A testbed for evaluation of speed and
separation monitoring in a human robot collaborative environment. NIST
Interagency/Internal Report (NISTIR) – 7851 (2012)
6. Saenz, J., Vogel, C., Penzlin, F., Elkmann, N.: Safeguarding collaborative mobile
manipulators - evaluation of the VALERI workspace monitoring system. Procedia Manuf.
11, 47–54 (2017)
7. Baranyi, P., Solvang, B., Hashimoto, H., Korondi, P.: 3D Internet for cognitive info-
communication. In: 10th International Symposium of Hungarian Researchers on Compu-
tational Intelligence and Informatics, CINTI 2009, pp. 229–243 (2009)
8. Gleeson, B., MacLean, K., Haddadi, A., Croft, E., Alcazar, J.: Gestures for industry Intuitive
human-robot communication from human observation. In: 8th ACM/IEEE International
Conference on Human-Robot Interaction (HRI), Tokyo, pp. 349–356 (2013)
9. Liu, H., Wang, L.: Gesture recognition for human-robot collaboration: a review. Int. J. Ind.
Ergon. 68, 355–367 (2017)
10. Cao Z., Simon T., Wei S.E., Sheikh, Y.: Realtime multi-person 2D pose estimation using
part affinity fields. In: CVPR (2017)
11. Simon T., Joo H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using
multiview bootstrapping. In: CVPR (2017)
12. Vincze, D., Kovács, S., Gácsi, M., Korondi, P., Miklósi, A., Baranyi, P.: A novel application
of the 3D VirCA environment: modeling a standard ethological test of dog-human
interactions. Acta Polytech. Hung. 9(1), 107–120 (2012)
13. Herath, S., Harandi, M., Porikli, F.: Going deeper into action recognition: a survey. Image
Vis. Comput. 60, 4–21 (2017)
14. Baranyi, P., Nagy, I., Korondi, B., Hashimoto, H.: General guiding model for mobile robots
and its complexity reduced neuro-fuzzy approximation. In: Ninth IEEE International
Conference on Fuzzy Systems. FUZZ- IEEE 2000 (Cat. No. 00CH37063), San Antonio, TX,
USA, vol. 2, pp. 1029–1032 (2000)
15. Shu, B., Sziebig, G., Pieskä, S.: Human-robot collaboration: task sharing through virtual
reality. In: IECON 2018 - 44th Annual Conference of the IEEE Industrial Electronics
Society, Washington, DC, pp. 6040–6044 (2018)
16. Reimann, J., Sziebig, G.: The intelligent factory space – a concept for observing, learning
and communicating in the digitalized factory. IEEE Access 7, 70891–70900 (2019)
Research on Data Encapsulation Model
for Memory Management

Lixin Lu1, Weixing Zhao1, Guiqin Li1, and Peter Mitrouchev2

1 Shanghai Key Laboratory of Intelligent Manufacturing and Robotics,
Shanghai University, Shanghai 200072, China
[email protected]
2 University Grenoble Alpes, G-SCOP, 38031 Grenoble, France

Abstract. A data encapsulation model for memory management based on
CAN-bus is proposed in this paper. The data encapsulation model includes
information about the function, function attribute code, sampling frequency, time,
sensor number, etc. It divides data packets into fixed-size storage blocks to avoid
the overwriting problem that arises when the function code is the same. Besides,
the model is stored in SRAM to avoid the memory-lifetime problem of
NOR Flash. The memory management consists of a memory pool and a memory
management table. Through memory management, each packet corresponds to a
unique address, and efficient, fast allocation of memory resources is realized.
Finally, a program flow for a general real-time detection system's data storage and
reading is presented. The memory management method is universal, easy to
expand, and has been successfully applied to the function detection device of a
massage chair. The utilization of memory is consistent with the actual situation,
data storage is more reliable, and retrieval is more convenient.

Keywords: CAN-bus · Data storage · Detection

1 Introduction

Detection is one of the important links in the modern industrial manufacturing process.
The detection of an industrial product is an evaluation of the product itself, and the results
can be used as the basis for improving the manufacturing process. The present
problem, however, is that the host computer of a real-time detection system always
keeps a connection with the lower computer in order to get its state and execute tasks
such as database reading and writing, so it cannot analyze the detection data of the
lower computer in time. Both Peng and Zhang have developed real-time detection
systems, but the data of each sensor are not encapsulated [1, 2]. Yang proposed dynamic
Scratch-pad memory management with data pipelining, which can effectively improve
embedded systems' performance [3]. Stilkerich, on the other hand, proposed a
cooperative memory management method, but it is not suitable for the general
industrial production environment [4]. Sun developed a fault detection device based on
a neural network model, whose detection functions are well classified [5].

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 552–559, 2020.
https://doi.org/10.1007/978-981-15-2341-0_69

Detection data is generally stored in Flash memory or RAM [6]. Flash memory can
be erased and programmed without removing the memory chip, which makes it a common
means of data storage, but the number of erasures allowed per memory unit is limited:
a NOR Flash cell endures about 100,000 erase cycles, and if the Flash wears out, the data
written will go wrong. However, data refresh is frequent in a real-time detection system.
To avoid rapid wear of the Flash, one method is to reduce the refresh frequency of each
storage sector, thereby reducing the number of writes per unit and improving the
utilization of the Flash memory capacity [7]. Another method is to store the data in RAM
when the storage requirement is small and there is no need for power-off retention. RAM
is divided into dynamic RAM (DRAM) and static RAM (SRAM): DRAM uses capacitive
storage and must be refreshed periodically, whereas SRAM is accessed statically.
This paper introduces a memory management method that stores the data in SRAM.
To solve the problem, this paper proposes a data storage model for memory
management, which can realize the dynamic storage of data. The rest of this paper is
organized as follows: Sect. 2 illustrates the data encapsulation model. Memory man-
agement method is presented in Sect. 3. Finally, the experiment and conclusions are
given in Sects. 4 and 5.

2 Data Storage Modeling

The detection data must be well packed. For an embedded real-time detection
system based on CAN-bus, the time and frequency of each sensor are variable and the
data is stored in the lower computer in advance. When the data of a certain function is
to be used, the master controller of the lower computer sends an instruction to the slave
controller through the CAN-bus, and the data packets of each sensor are retrieved from
the controller by the function and function attribute code. Therefore, each packet contains
the code of its function, the frequency and time of data acquisition, and so on.
The data encapsulation model for each detection packet is built as follows. The form of
encapsulation is shown in Table 1. The data start bit is 0xF0, and the function and attribute
codes are saved after it. Bytes 4–6 (addresses 3–5 in Table 1) are reserved bits. Byte 7
(address 6) holds the numbers of the detection sensors as a bit mask: if a bit is 1, the
corresponding sensor value is stored; otherwise it is not. The acquisition time and sampling
frequency of the data are stored in bytes 8 and 9 (addresses 7 and 8), respectively.
The sensor values are then stored, and the end mark is 0xF1.
The number of data bytes stored for a function is:

    N_d = A · n_s · f · t + N    (1)

where
    n_s — number of sensors
    f — sampling frequency
    t — acquisition time
    A — number of bytes per sensor value (usually two bytes, i.e. A = 2)
    N — constant (the fixed length of the function code, attribute and so on)
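As a quick check of Eq. (1), the packet size can be computed directly. This is a minimal sketch; the function name, parameter names, and example values are ours, not from the paper:

```python
def packet_bytes(num_sensors: int, freq_hz: int, time_s: int,
                 bytes_per_value: int = 2, fixed_overhead: int = 10) -> int:
    """Eq. (1): N_d = A * n_s * f * t + N."""
    return bytes_per_value * num_sensors * freq_hz * time_s + fixed_overhead

# 4 sensors sampled at 10 Hz for 5 s, 2 bytes per value, 10 fixed bytes
print(packet_bytes(4, 10, 5))  # 2*4*10*5 + 10 = 410
```

With the product term dominating, the block size chosen in Sect. 2 simply needs to bound the largest such N_d.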

Table 1. Encapsulation format of data

Content     Number of bytes   Address
0xF0        1                 0
Function    1                 1
Attribute   1                 2
0           3                 3–5
Sensor      1                 6
Time        1                 7
Frequency   1                 8
Data        N−8               9–N
0xF1        1                 N+1
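The Table 1 layout can be sketched as a small packing routine. This is an illustrative Python fragment, not the authors' firmware; the function name and example field values are ours:

```python
START, END = 0xF0, 0xF1  # start and end marks from Table 1

def encapsulate(function: int, attribute: int, sensor_mask: int,
                time_s: int, freq_hz: int, data: bytes) -> bytes:
    """Pack one detection record in the Table 1 layout:
    start, function, attribute, 3 reserved zero bytes,
    sensor bit mask, time, frequency, data, end."""
    header = bytes([START, function, attribute, 0, 0, 0,
                    sensor_mask, time_s, freq_hz])
    return header + data + bytes([END])

pkt = encapsulate(0x00, 0x02, 0b0001, 5, 10, bytes([0x12, 0x34]))
print(pkt.hex())  # f0000200000001050a1234f1
```

The fixed 9-byte header plus the 1-byte end mark correspond to the constant N of Eq. (1).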

The storage model is given in Fig. 1. The memory is divided into fixed-size blocks
(e.g., 10 KB) according to the size of the packet. The detection data for each
function takes up one block, and the codes of the function and function attribute are
stored at fixed storage locations. Each time a packet is retrieved, it is only necessary
to add 10 KB to the previous position. To improve memory utilization, the storage
block size should be close to the maximum packet size.

Fig. 1. Fixed-size storage model (each 10 KB block begins with 0xF0 and the function
and attribute codes, ends with 0xF1, and leaves the remainder of the block unused)

In addition to the automatic detection mode, the detection system also includes a
manual detection mode. When a function is detected manually, data with the same
function code may be stored again. In this fixed-size block model, the following data
are not affected, even though the stored data lengths differ.

3 Memory Management and Dynamic Storage

The detection data is stored in SRAM, a kind of memory with static access that keeps
its data without a refresh circuit. Memory management is mainly used to manage the
allocation of memory resources while the MCU is running, to achieve rapid allocation,
and to recover memory in due course.
The memory management consists of two parts: the memory pool and the memory
management table. The memory pool is divided into n blocks, and the corresponding
memory management table also has size n. When a function is called to allocate
memory, the number of required memory blocks, m, is first calculated from the
required memory size. If a run of m contiguous blocks is unoccupied (i.e., their values
in the memory management table are 0), the corresponding entries of the memory
management table are set and that segment of memory is marked as occupied (Fig. 2).

Fig. 2. Memory management elements (a memory pool of blocks 1…n, with a parallel
memory management table of entries No. 1…No. n)
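The pool-plus-table scheme above amounts to a first-fit allocator. The following is an illustrative Python model of that idea, not the authors' MCU implementation; the block size, pool size, and names are assumptions:

```python
BLOCK_SIZE = 10 * 1024          # fixed block size, e.g. 10 KB
N_BLOCKS = 8                    # pool of n blocks (kept small for the sketch)

mgmt_table = [0] * N_BLOCKS     # 0 = free; otherwise, length of the run in blocks

def mem_alloc(size: int):
    """First fit: find m contiguous free blocks, mark them occupied,
    and return the index of the first block; None on failure."""
    m = -(-size // BLOCK_SIZE)           # ceiling division: blocks needed
    run = 0
    for i in range(N_BLOCKS):
        run = run + 1 if mgmt_table[i] == 0 else 0
        if run == m:
            start = i - m + 1
            for j in range(start, start + m):
                mgmt_table[j] = m        # mark the whole run as occupied
            return start
    return None

def mem_free(start: int):
    """Release the run that begins at 'start'."""
    m = mgmt_table[start]
    for j in range(start, start + m):
        mgmt_table[j] = 0

a = mem_alloc(25 * 1024)   # needs 3 blocks -> starts at block 0
b = mem_alloc(10 * 1024)   # needs 1 block  -> starts at block 3
mem_free(a)
c = mem_alloc(15 * 1024)   # 2 blocks fit into the freed run -> block 0
print(a, b, c)             # 0 3 0
```

Storing the run length in each occupied entry lets `mem_free` recover the segment size from the first address alone, mirroring "each packet corresponds to a unique address".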

The memory allocation method is shown in Fig. 3. When collecting and storing the data
of a function, the lower computer first searches the function list by function and
attribute code. If the same function is already present, the new data overwrites it;
otherwise, a new entry is added to the array. The required memory space is then
calculated. After the allocation succeeds, the first address of the memory is stored in
the corresponding entry of the function address array, and the function code and
related information are stored in the memory that follows.

Fig. 3. Memory allocation method (each Function/Attribute entry of the function list
is mapped, through memory allocation, to an entry of the function address array
holding the first address of its allocated memory segment)



Based on the storage model and allocation method, the program flow of real-time
detection system data storage and reading is shown in Fig. 4. After initializing the
function list and data address table, the data storage or data reading is judged according
to the instructions received by CAN-bus.

Fig. 4. The program flow of data storage and reading (after initializing the function
list and data address table, the system acts on CAN-bus instructions: the storage
branch matches the instruction, calculates the required data space, allocates memory —
exiting storage and feeding back an error code over CAN if allocation fails — stores
the function information, performs AD conversion and data filtering, and stores the
data in turn, delaying according to the sampling frequency; the reading branch reads
the instruction and the data packet, calculates the numbers of sensors and data points,
splits the packet into CAN data frames, loads them into the CAN mailbox in turn with
a calculated frame interval, and feeds back status information)

When data is stored, the instruction contains information such as the function and
attribute code, sampling frequency, time, sensor number, etc. Searching the function
list, the lower computer navigates to the first unused array entry and calculates the
required storage space. The memory management table is then traversed to allocate the
required memory space, and the function and attribute codes are stored in the function
array. The information related to the function is stored starting from the allocated
memory header address. According to the acquisition frequency, the data is collected
and filtered, and the filtered data is packaged and stored based on the data
encapsulation model. When data is read, the instruction contains only the function and
attribute codes to be read. The lower computer searches the function list until the
function and attribute codes match the instruction, then reads the description
information in the packet and calculates the packet size. The packet is divided into
8-byte CAN data frames, which are loaded into the CAN mailbox in turn. Finally, the
read status is fed back.
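The packet-to-frame step of the reading branch can be sketched as follows. This is an illustrative Python fragment (names and the example packet are ours); a classical CAN data frame carries at most 8 data bytes:

```python
def to_can_frames(packet: bytes, frame_len: int = 8):
    """Split a stored packet into CAN data frames of at most 8 bytes,
    in the order they are loaded into the CAN mailbox."""
    return [packet[i:i + frame_len] for i in range(0, len(packet), frame_len)]

packet = bytes([0xF0]) + bytes(range(16)) + bytes([0xF1])  # an 18-byte packet
frames = to_can_frames(packet)
print([len(f) for f in frames])  # [8, 8, 2]
```

The receiver can reassemble the packet by concatenating the frames and checking the 0xF0/0xF1 marks and the byte count against the packet's description information.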

4 Application Example

The data encapsulation model for memory management has been successfully applied
to the function detection equipment of a massage chair, as shown in Fig. 5. The
pressure applied by the massage chair to the detection dummy is sensed and the
pressure signal is collected; the performance of the massage chair is judged by the
amplitude and frequency of the signal.

Fig. 5. Massage chair detection system

In this device, the original massage chair feedback data received by the main
control board is stored in an array. Figure 6(a) shows an abnormal frame which lacks
the end mark; there are also other kinds of abnormal frames (not listed one by one).
An algorithm for filtering invalid frames is proposed to solve this problem. An
effective feedback frame is shown in Fig. 6(b): the byte length and values of the stored
data are normal, and the number of bytes from F0 to F1 equals the number of bytes in
the packet. The utilization of memory is consistent with the actual situation, data
storage is more reliable, and retrieval is more convenient.

(a) Abnormal frame

(b) Normal frame

Fig. 6. Massage chair detection system

5 Conclusions

In this paper, the storage method for detection data in a real-time detection system is
studied. Reliable storage and retrieval of data are realized by the encapsulation model.
The data packets are stored in SRAM: on the one hand, this solves the storage-lifetime
problem; on the other hand, because the memory is small, it is only suitable for small
and medium-sized detection systems. The memory management makes it convenient to
store and read data, but it also wastes some resources, which needs further study. The
experimental result supports the correctness and practicability of the data storage
model. In a word, the data encapsulation model for memory management can be widely
used in industrial measurement and control and other fields.

References
1. Peng, D., Zhang, H., Weng, J., Li, H., Xia, F.: Research and design of embedded data
acquisition and monitoring system based on PowerPC and CAN bus. In: Proceedings of the
8th World Congress on Intelligent Control and Automation, pp. 4147–4151. IEEE (2010)
2. Zhang, X., Zhang, J.: Design of embedded monitoring system for large-scale grain granary.
In: 11th International Symposium on Computational Intelligence and Design, pp. 145–148.
IEEE (2018)
3. Yang, Y.: Dynamic scratch-pad memory management with data pipelining for embedded
systems. In: International Conference on Computational Science and Engineering, pp. 358–
365. IEEE (2009)
4. Stilkerich, I., Taffner, P., Erhardt, C.: Team up: cooperative memory management in
embedded systems. In: International Conference on Compilers, Architecture and Synthesis for
Embedded Systems. IEEE (2014)
5. Sun, L., Wang, D.: The development of fault detection system based on LabVIEW. In: 5th
International Conference on Electrical and Electronics Engineering, pp. 157–161. IEEE
(2018)
6. Zhang, H., Kang, W.: Design of the data acquisition system based on STM32. Procedia
Comput. Sci. 17, 222–228 (2013)
7. Wei, P., Yue, L., Liu, Z., Xiang, X.: Flash memory management based on predicted data
expiry-time in embedded real-time systems. In: ACM 2008 Symposium on Applied
Computing, pp. 1477–1481 (2008)
Research on Task Scheduling Design
of Multi-task System in Massage Chair
Function Detection

Lixin Lu1, Leibing Lv1, Guiqin Li1, and Peter Mitrouchev2

1 Shanghai Key Laboratory of Intelligent Manufacturing and Robotics,
Shanghai University, Shanghai 200072, China
[email protected]
2 University Grenoble Alpes, G-SCOP, 38031 Grenoble, France
[email protected]

Abstract. The µC/OS-II system has the advantages of interrupt service, nesting
support, multi-tasking, etc., and has been successfully applied to the massage
chair function detecting device. This paper proposes a task scheduling design for
massage chair function detection based on the µC/OS-II real-time kernel, making
rational use of synchronization and communication between tasks according to
the task scheduling principle. It can transfer data and achieve massage chair
function detection control. Semaphores are used to achieve synchronization
between related user tasks, while the message mailbox mechanism and global
variables provided by µC/OS-II are used to realize communication between them.
ECANTOOLS can be used to send and receive data and read the related feedback
reports to verify whether the designed task scheduling logic for massage chair
function detection meets the requirements.

Keywords: µC/OS-II · Task scheduling · Message mailbox mechanism · Task
synchronization · Task communication · Massage chair function detection

1 Introduction

In recent years, with the steady development of the economy, massage chairs have
become more and more popular in the market as a new health care product and daily
necessity. The efficient production and testing of massage chairs are therefore
particularly important.
Hiyamizu et al. [1] proposed a massage chair function detection technology based
on a human sensory sensor, but with certain uncertainty. Nowadays, the detection of
the massage chair is carried out by a liftable humanoid inspection tool. Zoican et al.
[2] proposed the application of task scheduling in embedded systems, Song [3]
proposed a µC/OS-II based real-time operating system design and implementation, and
Quammen et al. [4] described the application of a multitasking system on robots; the
three are closely related. Task scheduling is an important part of the operating
system. For real-time operating systems, task scheduling directly affects its real-time

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 560–566, 2020.
https://doi.org/10.1007/978-981-15-2341-0_70

performance, while in an embedded system, multiple tasks can be processed
simultaneously by using the operating system.
In this paper, a task scheduling design based on the µC/OS-II operating system
combines the three. It controls the liftable humanoid inspection tool and realizes the
detection of the functions of the massage chair.

2 The Principle of Task Scheduling

The massage chair function detection is based on the µC/OS system framework. The
execution of the application depends on the scheduling of the user tasks by the
µC/OS-II real-time kernel. The task scheduling strategy is completely controlled by the
application: if the application needs to perform task A at the next moment, task A must
be made the highest-priority task in the ready list.

3 The Logical Design of Task Scheduling for Massage Chair Function Detection

Each user task of the application is independent, but in order to complete a certain
job, multiple user tasks must maintain certain relationships and form a whole. In
massage chair detection, both synchronization and communication between tasks are
required.
When the completion of one task requires the execution result of another, this kind
of constrained cooperation between tasks is called synchronization. The message
mechanisms provided by µC/OS-II can be used to synchronize tasks; as shown in
Fig. 1, semaphores are used to synchronize an ISR with a task, or one task with
another.

Fig. 1. Achieving task synchronization by semaphores (OSSemPost from an ISR or a
task; OSSemPend in the waiting task)

As shown in Fig. 1, a "key" flag indicates the semaphore; this flag indicates the
occurrence of an event. A semaphore used to synchronize tasks must be initialized to 0,
which does not indicate a mutually exclusive relationship.
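The OSSemPost/OSSemPend pattern of Fig. 1 can be imitated with a counting semaphore initialized to 0. The sketch below uses Python's threading module rather than the µC/OS-II C API, purely as an illustration; the task and "ISR" names are ours:

```python
import threading

event_sem = threading.Semaphore(0)   # initialized to 0: signalling, not mutual exclusion
results = []

def waiting_task():
    event_sem.acquire()              # OSSemPend: block until the event occurs
    results.append("task ran after event")

def signalling_isr():
    results.append("event occurred")
    event_sem.release()              # OSSemPost: signal the waiting task

t = threading.Thread(target=waiting_task)
t.start()
signalling_isr()
t.join()
print(results)  # ['event occurred', 'task ran after event']
```

Because the semaphore starts at 0, the waiting task can only proceed after the post, which is exactly the one-way synchronization the figure depicts.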

The mailbox message mechanism provided by µC/OS-II sets up, for each mailbox, a
list of tasks waiting for a message. A task waiting for a message is suspended while the
mailbox is empty, until the mailbox receives a message or the wait times out, at which
point it enters the ready state. As shown in Fig. 2, communication between an ISR and
a task, or between two tasks, can be achieved through a message mailbox, and the
mailbox wait timeout can be set to infinite according to the design requirements [5].

Fig. 2. Achieving task communication through a message mailbox (OSMboxPost from
an ISR or a task; OSMboxPend in the receiving task)

In the function detection of the massage chair, the message mailbox mechanism
provided by µC/OS-II and global variables are used to realize communication between
related user tasks. Figure 3 shows the scheduling design of the massage chair function
detection logic control based on synchronization and communication between tasks.

Fig. 3. Massage chair function detection control task scheduling logic based on the
synchronization and communication between tasks (numbered steps 1–16 among
Task_Detect_Process, Task_UART_DataVer, Task_CAN1_fb_Handing, the ISRs and
UART data reception, as described below)

(1) The user task Task_Detect_Process gets the CPU usage rights and is in the
running state.
(2) The user task Task_Detect_Process makes the UART send function to send a
frame control instruction to the object under test, enables the UART to receive
the interrupt, waits for the message mailbox Mbox_UART_fb to be empty, and
defines the wait timeout.
(3) The user task Task_Detect_Process is suspended because the mailbox is empty
and enters the blocked state. At this point, a UART receive interrupt is triggered,
causing the CPU to enter the ISR.
(4) The UART interrupt service stores the received data byte by byte and counts the
bytes.
(5) After the CPU completes the ISR, steps (4) and (5) are repeated until the number
of received data bytes reaches the limit. At this time, the UART receive interrupt
is disabled, and the first address of the array UART_Rev_Data, in which the
UART data is stored, is placed in the mailbox Mbox_UART_Rev.
(6) The user task Task_UART_DataVer waits for the mailbox Mbox_UART_Rev
event to occur, the task enters the ready state, and the task immediately enters the
running state because it has the highest priority in the ready list.
(7) The user task Task_UART_DataVer performs double redundancy check pro-
cessing on the data pointed to by the pointer to obtain a frame of valid UART
feedback information, and puts a pointer to the information into the mailbox
Mbox_UART_fb.
(8) The user task Task_Detect_Process, which has been waiting on the message
mailbox, immediately enters the ready state; since it has the highest priority
among the ready tasks, it obtains the CPU usage right.
(9) The user task Task_Detect_Process identifies and compares the UART feedback
data stored at the address pointed to by the pointer.
(10) If the UART feedback data is correct, the user task Task_Detect_Process makes
the CAN1 send function to send a function execution instruction to the corre-
sponding lower-level module of the function.
(11) The function execution instruction is completed, the feedback function executes
the result, triggers the CAN1 reception interrupt, and the CPU enters the ISR of
CAN1.
(12) The CPU executes the ISR of CAN1, receives the CAN message, and places a
pointer to the message into the message mailbox Mbox_CAN1_Rev.
(13) The user task Task_CAN1_fb_Handing enters the ready state by waiting for the
mailbox event to occur, and becomes the ready state task with the highest
priority and enters the running state.
(14) The user task Task_CAN1_fb_Handing processes the CAN1 feedback and
marks the result. Regardless of the outcome, the global variable is assigned.
(15) The user task Task_Detect_Process receives the global variable, ends the wait,
and enters the ready state. Because the priority is highest in the ready list, it
enters the running state.
(16) The user task Task_Detect_Process continues to detect other functions.
The massage chair massages the corresponding airbags on the liftable humanoid
inspection tool, and the airbags feed back a massage function report. One-by-one
detection of the massage chair's functions can be achieved by controlling the humanoid
inspection tool and reading the feedback reports. In theory, this task scheduling logic
can meet the requirements.

4 The Verification of Massage Chair Function Detection Logic Task Scheduling

Following the real-time event channels implemented on the CAN bus proposed by
Kaiser et al. [6], we perform a short-frame test of bus communication, sending a MAC
address request to each node module by means of a CAN analyzer. If the receiver
feeds back its MAC address, the node's short-frame communication on the bus is
normal.

Fig. 4. CAN bus short frame communication test



As shown in Fig. 4, a MAC address request is sent to the 0#–8# node modules, and
the first byte of each feedback frame is the MAC address of the node. After testing, the
contents of the eight feedback messages are correct. After that, the ECANTOOLS
detection tool sends function execution instructions to the corresponding lower
computer module according to the CAN protocol. For example, here we control the
arms of the liftable humanoid inspection tool and the arm-massage function of the
massage chair. As shown in Fig. 5, the command is sent, the corresponding feedback
is obtained and is correct, and the arms retract as shown in Fig. 6. The massage
function feedback reports of the arms are shown in Table 1.

Fig. 5. CAN protocol test

Fig. 6. Arm retraction

Table 1. The feedback report of massage chair arms function

Airbag            Gear        Front position peak   Peak in the arm   Peak after the arm
Right arm airbag  qualified   615.25                736.8             1034
Left arm airbag   qualified   1689.8                1539.2            841.2
566 L. Lu et al.

It is known from the above that the task scheduling design of this detection system
for massage chair function detection meets the requirements.

5 Conclusions

In this paper, the task scheduling design of the detection system for massage chair
detection is studied. The task scheduling logic of the massage chair function detection
control is designed to guarantee the real-time performance of the system as far as
possible under a controllable process and to make massage chair detection more
efficient. This scheme can be widely used in industrial testing and other fields.

References
1. Hiyamizu, K., Fujiwara, Y., Genno, H., et al.: Development of human sensory sensor and
application to massaging chairs. In: Proceedings of 2003 IEEE International Symposium on
Computational Intelligence in Robotics and Automation. Computational Intelligence in
Robotics and Automation for the New Millennium (Cat. No. 03EX694), Kobe, Japan, vol. 1,
pp. 140–144 (2003)
2. Zoican, S., Zoican, R., Galatchi, D.: Improved load balancing and scheduling performance in
embedded systems with task migration. In: International Conference on Telecommunication
in Modern Satellite, Cable and Broadcasting Services, pp. 354–357. IEEE (2015)
3. Song, X., Chen, L.: The design and realization of vehicle real-time operating system based on
UC/OS-II. In: 6th International Conference on Networked Computing, Gyeongju, Korea
(South), pp. 1–4 (2010)
4. Quammen, D.J., Kountouris, V.G., Stephanou, H.E., Tabak, D.: Multitasking system for
robotics source. In: Proceedings of the 1989 American Control Conference, 21–23 June 1989,
pp. 2743–2748 (1989)
5. Labrosse, J.J.: Embedded Real-Time Operating System µC/OS-II. Beijing Aerospace
University Press, Beijing (2003)
6. Kaiser, J., Brudna, C., Mitidieri, C.: Implementing real-time event channels on CAN-bus. In:
Proceedings of IEEE International Workshop on Factory Communication Systems, Vienna,
pp. 247–256 (2004)
A Stochastic Closed-Loop Supply Chain
Network Optimization Problem Considering
Flexible Network Capacity

Hao Yu, Wei Deng Solvang, and Xu Sun

Department of Industrial Engineering, Faculty of Engineering Science
and Technology, UiT The Arctic University of Norway, Narvik, Norway
{hao.yu,wei.d.solvang,xu.sun}@uit.no

Abstract. Nowadays, due to the concern over environmental challenges, global
warming and climate change, companies across the globe have increasingly
focused on the sustainable operations and management of their supply chains.
The closed-loop supply chain (CLSC) is a new concept and practice which
combines the traditional forward supply chain and reverse logistics in order to
simultaneously maximize the utilization of resources and minimize the
generation of waste. In this paper, a stochastic CLSC network optimization
problem with capacity flexibility is investigated. The proposed optimization
model is able to appropriately handle the uncertainties from different sources,
and the network configuration and decisions are adjusted through the capacity
flexibility under different scenarios. The sample average approximation (SAA)
method is used to solve the stochastic optimization problem. The model is
validated by a numerical experiment, and the results reveal that the quality and
consistency of the decision-making can be dramatically improved by modelling
the capacity flexibility.

Keywords: Closed-loop supply chain · Network design · Location problem ·
Stochastic optimization · Sample average approximation

1 Introduction

In today’s global market, the competition is not only between different individual
enterprises but also largely between different supply chains. The effectiveness and
efficiency in handling material flow, information flow and capital flow within a supply
chain will determine the profitability and success of a company. Supply Chain
Management (SCM) aims, through decision-making at both the strategic and
operational levels, at properly managing the different players and flows within a
supply chain in order to maximize the total supply chain surplus or profit [1].
Network design is one of the most essential strategic decisions in SCM, which
formulates the configuration of a supply chain through facility selection and determines
the operational strategies. Traditionally, the design of a supply chain only focuses on
the forward direction from raw material supplier towards end customer. However, due
to the concern of environmental challenges, global warming and climate change from
the whole society, increasing attention has been paid to the value and resource recovery
© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 567–576, 2020.
https://doi.org/10.1007/978-981-15-2341-0_71

through reverse logistics activities [2, 3]. Closed-loop supply chain (CLSC) is a new
concept and practice that combines the traditional forward supply chain and
reverse logistics in order to simultaneously maximize the utilization of resources and
minimize the generation of waste. Compared with traditional supply chain network
design, the planning of a CLSC is more complicated due to the involvement of more
players. Furthermore, reverse logistics involves more uncertainties than the
forward supply chain [3], and these need to be appropriately treated in a CLSC network
design problem.
Due to the aforementioned complexity of the CLSC network design problem,
significant efforts have been devoted to developing advanced optimization models
and algorithms for better decision-making. Yi et al. [4] developed a mixed integer
linear program for minimizing the total cost of a retailer-oriented CLSC for the
recovery of construction machinery. The model was solved by an enhanced genetic
algorithm and was validated through a case study in China. Özceylan et al. [5] proposed a linear program for maximizing the total profit of an automotive
CLSC. The model was solved by the CPLEX solver and was validated by a real-world case
study in Turkey. Taking the recovery options into account, Amin et al. [6] investigated a tire manufacturing CLSC network optimization problem, which was validated
by a real-world case study in Canada.
In addition to the economic incentives of incorporating reverse logistics activities, some research works have considered the overall environmental performance of a
CLSC. Hasanov et al. [7] formulated a mathematical model for the optimization of a
CLSC network design problem considering remanufacturing options. The model aims
at minimizing the total cost and the emission cost of greenhouse gases (GHG) through
optimal decision-making on both production planning and inventory management.
Taleizadeh et al. [8] investigated a bi-objective optimization model for CLSC network
design considering the balance between total cost and total CO2 emission. A fuzzy
Torabi-Hassini (TH) method was used to solve the multi-objective optimization
problem.
Due to the complexity of the proposed mathematical models, significant compu-
tational efforts may be required to solve the optimization problems of CLSC network
design. Therefore, several research works have been conducted to develop improved
algorithms. Soleimani and Kannan [9] developed a hybrid genetic algorithm (GA) and
particle swarm optimization (PSO) approach for improving the computational efficiency of a
multi-period and multi-level CLSC network optimization model. Chen et al. [10]
investigated a location-allocation problem for the CLSC network design of cartridge
recycling, which was solved by an enhanced two-stage GA. Hajipour et al. [11] formulated a non-linear mixed integer program for maximizing profit generation in
CLSC network design. Two metaheuristics, PSO and the greedy randomized adaptive
search procedure (GRASP), were employed to solve the proposed mathematical model.
The treatment of uncertainty within the life cycle of a CLSC is another focus of the
recent modeling efforts. Zhen et al. [12] proposed a two-stage stochastic optimization
model for optimizing the decision-making of facility location and capacity allocation in
a CLSC, and an enhanced Tabu search algorithm was developed to solve the model.
Jeihoonian et al. [13] formulated a two-stage stochastic model for CLSC network
design considering uncertain quality. Mohammed et al. [14] proposed a stochastic
A Stochastic Closed-Loop Supply Chain Network Optimization Problem 569

optimization model for minimizing the total cost of CLSC network design. The model
was further incorporated with different carbon policies in order to test their effective-
ness in carbon reduction.
In this paper, we develop a new two-stage stochastic mixed integer program for
CLSC network optimization. Compared with the existing models, the main difference
is that capacity flexibility is taken into account in order to improve the stability and
consistency of the objective values under different scenarios. In addition, the sample
average approximation (SAA) method is used to test the performance of the proposed
mathematical model.

2 Mathematical Model

In this paper, we consider a network optimization problem of a single-product multi-echelon CLSC. As shown in Fig. 1, the forward supply chain consists of the manufacturer,
wholesaler/distributor and customer. The reverse logistics activities are performed at the
collection center, disposal center and recycling center. The material flows between the
different facilities are given in Fig. 1.

Fig. 1. Network structure of a CLSC.

In this paper, flexible network capacity is taken into account and is formulated
in the mathematical model. The capacity limitation in a traditional facility location
model may lead to unstable objective values and sub-optimal decisions under a
stochastic environment [3, 15]. For example, because of the rigid capacity constraint,
one more facility may have to be opened to deal with a small increase in the customer
demand in some scenarios, which results in unreasonable decisions and inefficient use
of the opened capacity. For this reason, the flexible network capacity is formulated as a
penalty in the objective function in order to overcome this problem and generate reasonable
decisions. In practice, the inclusion of flexible network capacity is a more realistic
representation of the decision-making problem of CLSC network design, as it allows
different interpretations under different conditions, e.g., an increase of facility
capacity, outsourcing options, the hiring of temporary or seasonal workers, or even loss of
sales. In addition, the uncertainty related to the customer demand in the forward supply
chain and to the rate of waste generation and the quality level in the reverse logistics is
taken into consideration and is formulated as stochastic parameters.

$$\begin{aligned}
\min\ \mathrm{Cost} ={}& \sum_{m=1}^{M} F_m i_m + \sum_{w=1}^{W} F_w i_w + \sum_{c=1}^{C} F_c i_c + \sum_{r=1}^{R} F_r i_r \\
&+ \sum_{s=1}^{S} U_s \Bigg[ \sum_{m=1}^{M} P_m \Big( Ap_{sm} + \sum_{r=1}^{R} A_{srm} \Big) + \sum_{w=1}^{W} P_w \sum_{m=1}^{M} A_{smw} \\
&\qquad + \sum_{c=1}^{C} P_c \sum_{v=1}^{V} A_{svc} + \sum_{r=1}^{R} P_r \sum_{c=1}^{C} A_{scr} \\
&\qquad + \sum_{m=1}^{M} \sum_{w=1}^{W} C_{mw} A_{smw} + \sum_{w=1}^{W} \sum_{v=1}^{V} C_{wv} A_{swv} + \sum_{v=1}^{V} \sum_{c=1}^{C} C_{vc} A_{svc} \\
&\qquad + \sum_{c=1}^{C} \sum_{r=1}^{R} C_{cr} A_{scr} + \sum_{c=1}^{C} \sum_{d=1}^{D} C_{cd} A_{scd} + \sum_{r=1}^{R} \sum_{m=1}^{M} C_{rm} A_{srm} \\
&\qquad + \sum_{m=1}^{M} Pu_m\, Ap_{sm} + \sum_{v=1}^{V} O_v A_{sv} + \sum_{v=1}^{V} Or_v\, Ar_{vs} + \sum_{d=1}^{D} P_d \sum_{c=1}^{C} A_{scd} \Bigg] \qquad (1)
\end{aligned}$$

Subject to:

$$D_{sv} \le \sum_{w=1}^{W} A_{swv} + A_{sv}, \quad \forall s, v \qquad (2)$$

$$\vartheta_s D_{sv} \le \sum_{c=1}^{C} A_{svc} + Ar_{vs}, \quad \forall s, v \qquad (3)$$

$$Ap_{sm} + \sum_{r=1}^{R} A_{srm} = \alpha \sum_{w=1}^{W} A_{smw}, \quad \forall s, m \qquad (4)$$

$$\sum_{m=1}^{M} A_{smw} = \sum_{v=1}^{V} A_{swv}, \quad \forall s, w \qquad (5)$$

$$\sum_{v=1}^{V} A_{svc} = \sum_{r=1}^{R} A_{scr} + \sum_{d=1}^{D} A_{scd}, \quad \forall s, c \qquad (6)$$

$$q_s \beta \sum_{v=1}^{V} A_{svc} = \sum_{r=1}^{R} A_{scr}, \quad \forall s, c \qquad (7)$$

$$\gamma \sum_{c=1}^{C} A_{scr} = \sum_{m=1}^{M} A_{srm}, \quad \forall s, r \qquad (8)$$

$$Ap_{sm} + \sum_{r=1}^{R} A_{srm} \le Cap_m i_m, \quad \forall s, m \qquad (9)$$

$$\sum_{m=1}^{M} A_{smw} \le Cap_w i_w, \quad \forall s, w \qquad (10)$$

$$\sum_{v=1}^{V} A_{svc} \le Cap_c i_c, \quad \forall s, c \qquad (11)$$

$$\sum_{c=1}^{C} A_{scr} \le Cap_r i_r, \quad \forall s, r \qquad (12)$$

$$\sum_{c=1}^{C} A_{scd} \le Cap_d, \quad \forall s, d \qquad (13)$$

$$\sum_{v=1}^{V} A_{sv} \le Uo, \quad \forall s \qquad (14)$$

$$\sum_{v=1}^{V} Ar_{vs} \le Uro, \quad \forall s \qquad (15)$$

Objective function (1) minimizes the total cost, which is comprised of the fixed facility
cost, processing cost, transportation cost, purchasing cost, flexible network capacity
cost and disposal cost. Besides, the model includes 14 constraints. Constraints (2) and
(3) require that the CLSC system be capable of dealing with the customer demands in
both the forward and reverse directions. Constraints (4) and (5) specify the relationship
between the input and output amounts in the forward channels. Constraints (6)–(8)
balance the material flows in the reverse logistics. Constraints (9)–(13) are the capacity
requirements of the respective facilities. Constraints (14) and (15) give the upper limits of
the flexible network capacity in the forward and reverse logistics. Besides, the
decision variables fulfill their respective binary and non-negativity requirements.
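The intuition behind the flexible-capacity penalty can be illustrated with a toy single-facility sketch (all numbers below are hypothetical and not taken from the model): under a rigid capacity constraint, any scenario whose demand exceeds capacity forces an extra fixed opening cost, whereas the flexible formulation simply pays a per-unit penalty for the excess.

```python
import random

random.seed(7)

FIXED_COST = 1000.0   # hypothetical fixed cost of opening one extra facility
CAPACITY = 100.0      # capacity of the already-open facility
PENALTY = 4.0         # hypothetical unit cost of flexible capacity (e.g. outsourcing)

# Random demand scenarios: mostly below capacity, occasionally slightly above.
scenarios = [random.uniform(80.0, 110.0) for _ in range(1000)]

def cost_rigid(demand):
    """Rigid capacity: any excess demand forces an extra facility to open."""
    return FIXED_COST if demand > CAPACITY else 0.0

def cost_flexible(demand):
    """Flexible capacity: excess demand is covered at a per-unit penalty."""
    return PENALTY * max(0.0, demand - CAPACITY)

rigid = sum(cost_rigid(d) for d in scenarios) / len(scenarios)
flexible = sum(cost_flexible(d) for d in scenarios) / len(scenarios)

print(f"expected extra cost, rigid capacity:    {rigid:.1f}")
print(f"expected extra cost, flexible capacity: {flexible:.1f}")
```

With these numbers the rigid design pays the full opening cost in roughly a third of the scenarios, while the flexible design absorbs the small excess at a fraction of that cost, which is why the penalty term stabilizes the first-stage decisions.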

3 Algorithm

Equation (16) defines a generic form of a two-stage stochastic optimization problem,
which has the same structure as a CLSC network optimization problem. The first-stage
decisions should be robust enough to withstand changes in the external environment under
which the system operates, whereas the second-stage decisions should be flexible enough to
adapt to those changes and be easily altered in order to maximize the system performance. Solving a stochastic programming model is a complex optimization problem
that may require large computational efforts. In this paper, the sample average approximation (SAA) method is employed in order to obtain the optimal objective value of a
large stochastic optimization problem with a great number of scenarios.

Fig. 2. Algorithmic procedures of the SAA.

$$\min_{x, y \in \Theta}\ f(x, y) := C^T x + \mathbb{E}_P[U(x, \xi(y))] \qquad (16)$$

$$\min_{x, y \in \Theta}\ \tilde{f}_Q(x, y) := C^T x + \frac{1}{Q} \sum_{q=1}^{Q} U(x, \xi(y^q)) \qquad (17)$$

With the SAA, the optimal objective value is approximated by repeatedly solving a set of
randomly generated smaller problems instead of solving the original problem
directly, as shown in Eq. (17). In such a way, the computational effort required is
manageable. Figure 2 illustrates the algorithmic procedure of the SAA method. For
more details of the solution method, readers are referred to Verweij et al. [16] and
Kleywegt et al. [17].
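As a minimal illustration of the SAA idea in Eq. (17), the sketch below replaces the expectation with an average over Q random draws and repeats the approximation with independent samples. It uses a toy single-variable problem with a shortage-penalty recourse, not the CLSC model itself; all cost figures are hypothetical.

```python
import random

random.seed(42)

C_BUILD = 1.0     # first-stage unit cost (hypothetical)
C_SHORT = 3.0     # second-stage shortage penalty (hypothetical)

def recourse(x, xi):
    """Second-stage cost U(x, xi): penalty for unmet demand xi - x."""
    return C_SHORT * max(0.0, xi - x)

def saa_solve(num_samples):
    """Solve one SAA replication: minimize the sample-average objective
    over a discrete grid of candidate first-stage decisions."""
    samples = [random.gauss(100.0, 20.0) for _ in range(num_samples)]
    best_x, best_val = None, float("inf")
    for x in range(0, 201):  # candidate capacities 0..200
        val = C_BUILD * x + sum(recourse(x, xi) for xi in samples) / num_samples
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Repeat the approximation with independent samples; the spread of the
# replication optima shrinks as the sample size Q grows.
for q in (10, 50, 200):
    vals = [saa_solve(q)[1] for _ in range(5)]
    print(f"Q={q:3d}: objective estimates {[round(v, 1) for v in vals]}")
```

Averaging the replication optima and comparing them with a large reference sample gives the optimality-gap estimate used in Sect. 4.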

4 Experiment and Discussion

In order to illustrate the application of the proposed model for CLSC network opti-
mization, this section presents a computational experiment based on a set of randomly
generated parameters. The stochastic parameters are generated from uniform distributions over the respective parameter intervals. Besides, we investigated the performance of
three different sample sizes: 10, 30 and 50. All the optimizations were
performed with the Lingo 18.0 solver. The results are presented in Figs. 3 and 4.

Fig. 3. CV of the total cost, facility operating cost, transportation cost and flexible network
capacity cost.

First, the in-sample stability is tested with the coefficient of variation (CV), which is
obtained as CV = σ/μ. Figure 3 illustrates the CVs of the total cost as well as of the
different cost components. With the increase of the sample size, the CVs of all relevant cost
components decrease, which reveals an improvement in the in-sample stability. When
the sample size increases from 30 to 50, the improvement in the in-sample stability of
the total cost is negligible. In addition, compared with the CVs of the other cost components, the CV of the flexible network capacity cost is extremely high. This can be explained
by the fact that the flexible network capacity can be used as an adjustment factor for mitigating the
negative impact of uncertainty on the first-stage network decisions and the objective
values. In such a way, the unsatisfied demand in some scenarios can be fulfilled by
the flexible network capacity, e.g., outsourcing, instead of opening new facilities, which
would otherwise dramatically reduce the in-sample stability and result in a low capacity utilization.
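The stability test above boils down to comparing CV = σ/μ across sample sizes. A minimal sketch with made-up cost figures (not the paper's data) shows the computation:

```python
import statistics

def coefficient_of_variation(values):
    """CV = sigma / mu: ratio of (population) standard deviation to mean."""
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)
    return sigma / mu

# Hypothetical total-cost outcomes from two sample sizes: the larger
# sample produces less scattered estimates, hence a lower CV.
costs_small_sample = [98.0, 112.0, 91.0, 120.0, 104.0]
costs_large_sample = [103.0, 106.0, 101.0, 105.0, 104.0]

print(coefficient_of_variation(costs_small_sample))
print(coefficient_of_variation(costs_large_sample))
```

A lower CV for the larger sample is exactly the in-sample stability improvement reported in Fig. 3.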

Fig. 4. Percentage of the optimality gap and standard deviation.

Then, the quality of the SAA solutions is tested with the reference example. As
shown in Fig. 4, the optimality gap reduces significantly with the increase of the sample
size. Compared with 10 scenarios, the optimality gap is decreased by 90.6% when 50
scenarios are used. However, in this case, the combined standard deviation
increases by 2.51%. Thus, the solution quality of the stochastic optimization problem
can be improved drastically by increasing the sample size. It is noteworthy that
the selection of the sample size should be based upon a trade-off between the solution
quality and the computational effort required.

5 Conclusions

In this paper, a novel two-stage stochastic mixed integer programming model is formulated for
the network optimization of a single-product multi-echelon CLSC. The model aims at
minimizing the total cost of opening and operating the CLSC through optimal
decision-making on both facility locations and transportation strategies. Compared
with the existing optimization models, this model takes flexible network capacity
into account and formulates it as a penalty in the objective function. In order to solve
the proposed model, the SAA method is used. The results of the computational
experiment have shown that the inclusion of flexible network capacity can significantly improve the in-sample stability, and that an increase in sample size will improve the
solution quality of a large stochastic optimization problem.

For further improvement of the current research, two suggestions are given. First,
environmental impacts and policies, e.g., different carbon policies or strategies
[3, 14], may be formulated in the CLSC network optimization problem under an
uncertain environment. Second, different alternatives may be tested in order to increase
the network flexibility [18].

Notations

Sets and Parameters

m: index of manufacturers, m = 1, …, M
w: index of wholesalers, w = 1, …, W
v: index of customers, v = 1, …, V
c: index of collection centers, c = 1, …, C
d: index of disposal centers, d = 1, …, D
r: index of recycling centers, r = 1, …, R
s: index of scenarios, s = 1, …, S
F_m, F_w, F_c, F_r: fixed opening costs of the respective plants
P_m, P_w, P_c, P_d, P_r: unit processing costs at the respective plants
U_s: probability of occurrence of scenario s
Pu_m: purchasing cost of materials
O_v, Or_v: flexible network capacity costs in the forward and reverse logistics
D_sv: customer demand at the respective locations
ϑ_s: conversion rate to used products
α: materials required for assembling one product
q_s: quality level
β, γ: conversion fractions at the respective plants
Cap_m, Cap_w, Cap_c, Cap_d, Cap_r: capacities of the respective plants
Uo, Uro: upper limits on the flexible network capacity in the forward and reverse logistics

Variables

i_m, i_w, i_c, i_r: binary decision variables for the location decision on the respective candidates
A_smw, A_swv, A_svc, A_scr, A_scd, A_srm: amounts of products transported on the respective links
Ap_sm: amount of materials purchased
A_sv, Ar_vs: amounts of flexible network capacity used in the forward and reverse logistics

References
1. Chopra, S., Meindl, P.: Supply chain management: strategy, planning, and operation (2016)
2. Yu, H., Solvang, W.D.: A general reverse logistics network design model for product reuse
and recycling with environmental considerations. Int. J. Adv. Manuf. Technol. 87(9–12),
2693–2711 (2016)
3. Yu, H., Solvang, W.D.: A carbon-constrained stochastic optimization model with augmented
multi-criteria scenario-based risk-averse solution for reverse logistics network design under
uncertainty. J. Clean. Prod. 164, 1248–1267 (2017)
4. Yi, P., Huang, M., Guo, L., Shi, T.: A retailer oriented closed-loop supply chain network
design for end of life construction machinery remanufacturing. J. Clean. Prod. 124, 191–203
(2016)
5. Özceylan, E., Demirel, N., Çetinkaya, C., Demirel, E.: A closed-loop supply chain network
design for automotive industry in Turkey. Comput. Ind. Eng. 113, 727–745 (2017)
6. Amin, S.H., Zhang, G., Akhtar, P.: Effects of uncertainty on a tire closed-loop supply chain
network. Expert Syst. Appl. 73, 82–91 (2017)
7. Hasanov, P., Jaber, M., Tahirov, N.: Four-level closed loop supply chain with remanufac-
turing. Appl. Math. Model. 66, 141–155 (2019)
8. Taleizadeh, A.A., Haghighi, F., Niaki, S.T.A.: Modeling and solving a sustainable closed
loop supply chain problem with pricing decisions and discounts on returned products.
J. Clean. Prod. 207, 163–181 (2019)
9. Soleimani, H., Kannan, G.: A hybrid particle swarm optimization and genetic algorithm for
closed-loop supply chain network design in large-scale networks. Appl. Math. Model.
39(14), 3990–4012 (2015)
10. Chen, Y., Chan, F., Chung, S.: An integrated closed-loop supply chain model with location
allocation problem and product recycling decisions. Int. J. Prod. Res. 53(10), 3120–3140
(2015)
11. Hajipour, V., Tavana, M., Di Caprio, D., Akhgar, M., Jabbari, Y.: An optimization model for
traceable closed-loop supply chain networks. Appl. Math. Model. 71, 673–699 (2019)
12. Zhen, L., Sun, Q., Wang, K., Zhang, X.: Facility location and scale optimisation in closed-
loop supply chain. Int. J. Prod. Res. 57, 7567–7585 (2019)
13. Jeihoonian, M., Zanjani, M.K., Gendreau, M.: Closed-loop supply chain network design
under uncertain quality status: case of durable products. Int. J. Prod. Econ. 183, 470–486
(2017)
14. Mohammed, F., Selim, S.Z., Hassan, A., Syed, M.N.: Multi-period planning of closed-loop
supply chain with carbon policies under uncertainty. Transp. Res. Part D: Transp. Environ.
51, 146–172 (2017)
15. King, A.J., Wallace, S.W.: Modeling with Stochastic Programming. Springer, New York
(2012). https://doi.org/10.1007/978-0-387-87817-1
16. Verweij, B., Ahmed, S., Kleywegt, A.J., Nemhauser, G., Shapiro, A.: The sample average
approximation method applied to stochastic routing problems: a computational study.
Comput. Optim. Appl. 24(2–3), 289–333 (2003)
17. Kleywegt, A.J., Shapiro, A., Homem-de-Mello, T.: The sample average approximation
method for stochastic discrete optimization. SIAM J. Optim. 12(2), 479–502 (2002)
18. Yu, H., Solvang, W.D.: Incorporating flexible capacity in the planning of a multi-product
multi-echelon sustainable reverse logistics network under uncertainty. J. Clean. Prod. 198,
285–303 (2018)
Solving the Location Problem of Printers
in a University Campus Using p-Median
Location Model and AnyLogic Simulation

Xu Sun, Hao Yu(&), and Wei Deng Solvang

Department of Industrial Engineering, Faculty of Engineering Science and Technology,
UiT The Arctic University of Norway, Narvik, Norway
{xu.sun,hao.yu,wei.d.solvang}@uit.no

Abstract. The location decision on service facilities is of significant importance
in determining the accessibility of the service provided. For this reason, it has
received extensive attention from both researchers and practitioners over the past
decades. This paper investigates a novel two-phase hybrid method combining both
an optimization model and agent-based simulation in order to solve the location
problem of printers at a building of UiT The Arctic University of Norway, Narvik
campus. In the first phase, the p-median location problem is employed to select
the optimal locations of printers from a number of pre-determined candidate
points so that the total travel distance of both employees and students is
minimized. In the second phase, both the original and the optimal location plans
of the printers are tested, validated and visualized with the help of the AnyLogic
simulation package. The result of the case study shows, however, that the mathematically
optimized solution may not yield better performance under a realistic environment, due to the simplifications made and the incapability to deal with
randomness. This reveals that AnyLogic simulation can be used as a powerful
tool to validate and visualize the result obtained from an optimization model and
to make suggestions for improvement.

Keywords: Location problem · Service facility · Optimization · p-median model · Simulation · AnyLogic

1 Introduction

The location problem of printers in a university campus is to select the optimal locations
from a set of pre-determined candidates so that the accessibility and satisfaction of users
(students and employees in this case) can be improved. Considering its nature, this is a
service location and network design problem, which has been extensively investigated
by both researchers and practitioners for more than half a century. In management
science, the basic idea of this problem is to locate a number of facilities and, meanwhile,
to allocate customer demand to the different facilities [1]. Over the years, several
methods, e.g., mathematical optimization models, the analytic hierarchy process (AHP)
and geographical information systems (GIS), have been developed and used to support
the decision-making of the location problem and network design of service facilities.

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 577–584, 2020.
https://doi.org/10.1007/978-981-15-2341-0_72
578 X. Sun et al.

Based on previous research by Yu et al. [2], this paper presents an
improved hybrid method for the location problem of printers in a university campus in
Norway. The rest of the paper is structured as follows. Section 2 presents the
methodological development. Section 3 illustrates the application of the proposed
method with a real world case study at UiT The Arctic University of Norway, Narvik
campus. The result and discussion are given in Sect. 4. Finally, Sect. 5 concludes the
paper.

2 Method

In order to support decision-making on service facility location problems, a novel
two-phase hybrid method combining both an optimization model and agent-based simulation is developed. Figure 1 illustrates a general framework of the method. First, based
on the problem identified, a mathematical optimization model is formulated to make the
optimal decisions on both facility location and demand allocation. After that, the result
is validated and visualized in a realistic simulation environment. If the result fulfills
the performance required, it will be suggested to the decision-makers and visualized.
Otherwise, all the previous steps need to be re-visited in order to identify the problems
of the optimization.

Fig. 1. Framework of the method.

In this paper, the p-median model is employed to make the optimal decisions of the
locations of printers. Then, AnyLogic is applied to create a realistic simulation envi-
ronment and to validate the optimal result obtained.
Solving the Location Problem of Printers in a University Campus 579

2.1 p-median Location Problem


The p-median location problem was first put forward by Hakimi [3]. Since then, it has
been extensively investigated and the research focus has been given to both the
development of efficient computational algorithms and the application in real-world
network design problems. Wheeler [4] investigated a location problem of patrol areas
with the help of p-median model in order to improve both traveling distance to calls
and workload equality. Combining a p-median problem with a novel part assignment
procedure, Won and Logendran [5] studied the balanced cell formulation in a cellular
production process. Incorporating with the environmental evaluation, Pamučar et al. [6]
formulated a green p-median model for the optimization of city logistics terminals.
Taleshian and Fathali [7] proposed a fuzzy p-median location problem in order to
properly manage the uncertainty in decision-making. Adler et al. [8] investigated a p-
hub median problem for the network and hub design of air transport in order to deal
with the demand expansion in African aviation market. Yu and Solvang [9] employed
both maximal covering model and p-median model to improve the post office relo-
cation decisions in a city in Norway.
The purpose of the p-median model is, through the optimal location-allocation
decisions, to minimize the total travel distance in the service network. A mathematical
formulation of the p-median location problem is given in Eqs. (1–5) [9]. Herein, I and
J are the sets of customers and candidate locations for service facility, respectively. The
demand from customer i is represented by qi , and dij is the distance between i and
j. Variable xj determines if a facility is opened or not, and variable uij determines if the
demand from i is served by j. Finally, p specifies the number of service facilities in the
system.
$$\min \sum_{i \in I} \sum_{j \in J} q_i d_{ij} u_{ij} \qquad (1)$$

s.t.

$$\sum_{j \in J} u_{ij} = 1, \quad \forall i \in I \qquad (2)$$

$$u_{ij} \le x_j, \quad \forall i \in I, j \in J \qquad (3)$$

$$\sum_{j \in J} x_j = p \qquad (4)$$

$$u_{ij}, x_j \in \{0, 1\}, \quad \forall i \in I, j \in J \qquad (5)$$

Objective function (1) minimizes the total travel distance to satisfy all the customer
demands. Constraint (2) assigns each customer to one service facility. Constraint (3)
requires that a customer can only be allocated to an opened facility. Constraint (4) sets
the required number of service facilities. Constraint (5) is the binary requirement on the
decision variables.
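For small instances such as the printer case, the model in Eqs. (1)–(5) can be solved by brute force: enumerate every p-subset of candidate sites and assign each customer to its nearest open site. The sketch below uses a tiny hypothetical instance (the real study used many more demand points):

```python
from itertools import combinations

def p_median(demands, dist, p):
    """Exhaustive p-median: choose p facility sites minimizing total
    weighted travel distance; each customer uses its nearest open site.

    demands: {customer: demand q_i}
    dist:    {(customer, site): distance d_ij}
    """
    sites = {j for (_, j) in dist}
    best_sites, best_cost = None, float("inf")
    for opened in combinations(sorted(sites), p):
        cost = sum(q * min(dist[i, j] for j in opened)
                   for i, q in demands.items())
        if cost < best_cost:
            best_sites, best_cost = set(opened), cost
    return best_sites, best_cost

# Tiny hypothetical instance: 3 customers, 3 candidate sites, p = 1.
demands = {"A": 10, "B": 5, "C": 1}
dist = {("A", 1): 1, ("A", 2): 4, ("A", 3): 9,
        ("B", 1): 3, ("B", 2): 1, ("B", 3): 6,
        ("C", 1): 8, ("C", 2): 5, ("C", 3): 1}

print(p_median(demands, dist, 1))   # site 1 wins: 10*1 + 5*3 + 1*8 = 33
```

Enumeration grows combinatorially in the number of sites, which is why larger instances require solvers or the metaheuristics cited above.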

2.2 AnyLogic Simulation


AnyLogic is a professional simulation software package with a graphical interface,
which can be used to create a realistic virtual environment for large and complex
systems with different types of behavior (discrete, continuous and hybrid) [10, 11].
AnyLogic is a powerful and flexible tool that supports the three main modeling
paradigms in a simulation, namely discrete event, agent-based and system dynamics,
which can be combined in order to create a more accurate representation of a
complex process or system in the real world.
AnyLogic is equipped with a wide range of built-in modules and databases that can
be used to easily and quickly create simulations of complex systems in a great number of
industries and service sectors, e.g., manufacturing, logistics and supply chains, networks,
dynamic systems, business processes, healthcare, customer behavior and transportation. Furthermore, for obtaining analysis and implications from the simulation,
AnyLogic has a set of analytical and optimization tools that can be accessed directly from the
modeling environment [12].
Apart from the standard resources, AnyLogic also enables users to build highly
customized simulations based on the features of the systems modeled. In this regard,
Yang et al. [10] investigated the passenger flow at the entrance of a subway station with
the agent-based pedestrian library in AnyLogic in order to optimize the number of ticket
windows opened in peak and off-peak periods. In order to understand the influence of
the adoption of electric vehicles on pedestrian traffic safety, Karaaslan et al. [13] built
and studied an agent-based simulation of a real intersection with AnyLogic. Kim
et al. [14] used AnyLogic to optimize the location-allocation problem of a biomass
supply chain.

3 Case Study

Combining the p-median location model with agent-based simulation in AnyLogic, we present a case study of the location problem of printers on the third floor of
the main building at UiT The Arctic University of Norway, Narvik campus. The
objective is to locate five printers in order to minimize the total travel distance. The
optimization process and result with the p-median model have been given by Yu et al. [2].
In order to simplify the problem, several assumptions are made.
1. Each room is considered as a unique customer demand point and a set of candidate
locations is pre-determined.
2. The users are divided into three groups: academic employees, laboratory employees
   and students. The demand for printing services from the different types of users is by
   no means identical.
3. The demand is aggregated at the center point of each room.
4. The demand is associated with three influencing factors: type of user, printing
   frequency and number of users. Besides, it is also adjusted by the sensitivity to
   distance of the different types of users.
5. The distances between the customer locations and the candidate locations of printers
are approximated by the Manhattan distance.

Figure 2 illustrates the layout of the studied area. The original location plan (red
squares) and the optimal location plan (green squares) are also given in the figure.

Fig. 2. The building layout and the printer locations in the original plan and the optimized plan.

In this paper, both the original and the optimal location plans of the printers are
simulated in AnyLogic in order to validate the optimization and visualize the result
under a realistic environment. In this modeling and simulation environment, there are
many independent objects/individuals (students and employees), so an agent-based
approach is used. Compared with the original optimization procedure, several realistic
assumptions are made as follows in order to have a better representation of the real
problem and generate a more reliable analysis.
1. In order to maintain the consistency, the customer demand estimated by Yu et al. [2]
is used in the simulation for determining the generation of agents (number and
frequency).
2. Instead of aggregating all the customers in the center point of each room, the
customer demand can be generated at a random location within the room in the
simulation.
3. Instead of using the Manhattan distance to calculate the distances between the
customer locations and the candidate locations of printers, the real routes are
defined in the simulation.
4. Comparing the optimization environment of Yu et al. [2] with the current layout
   of the building, significant changes in the layout of the area served by the leftmost
   printer in Fig. 2 are observed. Thus, this part is not taken into account in the
   simulation and only the four-printer scenario is tested.

Fig. 3. Agent-based mobile process diagram in AnyLogic.

The flow chart of agent movement is illustrated in Fig. 3. The sources (students and
employees) are randomly generated from respective classrooms or offices. They will go
to the printing area via the real routes defined, use the printers and then move to the
exit. In this case, the goal is to calculate the movement distance of all the agents within
the studied period. In order to simplify the calculation and reduce the simulation time,
we only considered the movement distance in one direction: from the offices or
classrooms to the printers. The distance in the reverse direction (back to the rooms) is
assumed to be the same as that in the forward direction, so it will not influence the
result of the comparison between different location plans. In addition, the movement
speed of all the agents is set to 1 m/s, so the movement distance is directly proportional
to the time consumed in the movement.

4 Result and Discussion

Table 1 illustrates the comparison between the performance evaluations conducted by
the optimization method and by the AnyLogic simulation. It is interesting to observe that
contradictory results may be obtained with the different approaches. In this case, the result
of the optimization by the p-median model, which suggests that the total travel distance may be
reduced by 10% through relocating the printers, is not supported by the
simulation result, which shows that the original location plan has the better performance.

Table 1. Performance evaluation of the optimal location plan and the original location plan in
both optimization and simulation environments

Performance evaluation             | Optimization | Simulation (1 month) | Simulation (3 months) | Simulation (5 months)
Reduction of total travel distance | 10%          | −8.06%               | −7.08%                | −7.36%

Following the procedure given in Fig. 1, the mathematical modeling, assumptions and
solution code were carefully re-visited in order to identify the problems of the optimization method. In this case study, the main problems of the use of the p-median location
model are the two assumptions made in order to simplify the optimization.
1. First, the aggregation of customer demand at the center point neglects the randomness of the real demand generation, which may have a critical impact on the
   objective value and decision-making, especially in a small-scale problem.
2. The most critical problem is the Manhattan distance used in the optimization. The
   distance between two points is one of the most important influencing factors in the
   p-median location problem. However, as shown in Fig. 4, the Manhattan distance
   cannot always give a realistic representation of the actual distance traveled and may
   thus lead to an improper result.

Fig. 4. Illustration of the difference between the Manhattan distance (blue) and the actual
distance (green).

In order to solve the aforementioned problems, a stochastic p-median model may be
formulated so that the randomness of customer generation can be accounted for. Moreover,
a more realistic distance calculation should be used.
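The gap between the two distance metrics can be made concrete with a small sketch (the coordinates below are hypothetical): the Manhattan estimate assumes an unobstructed axis-parallel path, while the walked route must follow the corridors.

```python
def manhattan(a, b):
    """Manhattan (L1) distance between two points (x, y)."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def route_length(waypoints):
    """Length of a walking route given as a polyline of (x, y) corners,
    moving axis-parallel between consecutive waypoints."""
    return sum(manhattan(p, q) for p, q in zip(waypoints, waypoints[1:]))

# Hypothetical corridor layout: the straight-through L1 estimate ignores
# that a wall forces a detour through the corridor junction.
office = (0.0, 0.0)
printer = (10.0, 4.0)
detour = [office, (0.0, 8.0), (10.0, 8.0), printer]  # forced walking route

print(manhattan(office, printer))   # 14.0 (optimistic estimate)
print(route_length(detour))         # 22.0 (actual walked distance)
```

Whenever the layout forces such detours, the p-median objective systematically underestimates travel distance, which is consistent with the discrepancy in Table 1.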

5 Conclusions

In this paper, the location problem of printers in a university campus in Norway is
investigated using a two-phase hybrid method combining an optimization method
and AnyLogic simulation. First, the p-median location model was used to optimize the
location plan of the printers, and the result was then tested in an AnyLogic simulation. The
simulation result revealed the problems related to the assumptions and simplifi-
cations of the original optimization. The research has demonstrated the effectiveness of
AnyLogic simulation in the validation and visualization of optimization results.
Future research may be conducted to address the problems identified in the
case study. Furthermore, with the help of AnyLogic simulation, not only can the movement
of agents be estimated, but comprehensive analysis can also be conducted on
the overall performance of the printers, e.g., usage of different printers,
queueing time, etc.

References
1. Abareshi, M., Zaferanieh, M.: A bi-level capacitated p-median facility location problem with
the most likely allocation solution. Transp. Res. Part B Methodol. 123, 1–20 (2019)
2. Yu, H., Solvang, W.D., Yang, J.G.: Improving accessibility and efficiency of service facility
through location-based approach: a case study at Narvik University College. Adv. Mater.
Res. 1039, 593–602 (2014)
3. Hakimi, S.L.: Optimum distribution of switching centers in a communication network and
some related graph theoretic problems. Oper. Res. 13(3), 462–475 (1965)
4. Wheeler, A.P.: Creating optimal patrol areas using the p-median model. Polic.: Int. J. 42(3),
318–333 (2019)
5. Won, Y., Logendran, R.: Effective two-phase p-median approach for the balanced cell
formation in the design of cellular manufacturing system. Int. J. Prod. Res. 53(9), 2730–2750
(2015)
6. Pamučar, D., Vasin, L., Atanasković, P., Miličić, M.: Planning the city logistics terminal
location by applying the green-median model and type-2 neurofuzzy network. Comput. Intell.
Neurosci. 2016 (2016). http://downloads.hindawi.com/journals/cin/2016/6972818.pdf. Article ID: 6792818
7. Taleshian, F., Fathali, J.: A Mathematical model for fuzzy-median problem with fuzzy
weights and variables. Adv. Oper. Res. 2016 (2016). Article ID: 7590492
8. Adler, N., Njoya, E.T., Volta, N.: The multi-airline p-hub median problem applied to the
African aviation market. Transp. Res. Part A Policy Pract. 107, 187–202 (2018)
9. Yu, H., Solvang, W.D.: A comparison of two location models in optimizing the decision-
making on the relocation problem of post offices at Narvik, Norway. In: Proceeding of IEEE
International Conference on Industrial Engineering and Engineering Management, Thailand,
pp. 814–818 (2018)
10. Yang, Y., Li, J., Zhao, Q.: Study on passenger flow simulation in urban subway station
based on anylogic. J. Softw. 9(1), 140–146 (2014)
11. Borshchev, A., Karpov, Y., Kharitonov, V.: Distributed simulation of hybrid systems with
AnyLogic and HLA. Future Gener. Comput. Syst. 18(6), 829–839 (2002)
12. Karpov, Y.G., Ivanovski, R.I., Voropai, N.I., Popov, D.B.: Hierarchical modeling of electric
power system expansion by anylogic simulation software. In: Proceeding of IEEE Power
Tech Conference, Russia, pp. 1–5 (2015)
13. Karaaslan, E., Noori, M., Lee, J., Wang, L., Tatari, O., Abdel-Aty, M.: Modeling the effect
of electric vehicle adoption on pedestrian traffic safety: an agent-based approach.
Transp. Res. Part C Emerg. Technol. 93, 198–210 (2018)
14. Kim, S., Kim, S., Kiniry, J.R.: Two-phase simulation-based location-allocation optimization
of biomass storage distribution. Simul. Model. Pract. Theory 86, 155–168 (2018)
Intelligent Workshop Digital Twin Virtual
Reality Fusion and Application

Qiang Miao1(&), Wei Zou2, Lilan Liu1, Xiang Wan1, and Pengfei Wu1

1 Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, China
[email protected]
2 Aerospace Systems Engineering Shanghai, Shanghai, China
[email protected]

Abstract. With the intelligent development of enterprise workshops, the
amount of monitoring data generated by workshop equipment is multiplying, and
industrial data exhibit characteristics such as high velocity, multi-source
heterogeneity and variability, making it difficult to realize real-time monitoring
and health management of the workshop in a dynamic and changing environment.
To address these problems, this paper applies key digital twin technologies to the
equipment of an intelligent workshop, developing multi-source data acquisition
and data fusion modeling for the workshop production line, so that the real state
of the physical entities is presented in the digital world on the basis of multiple
feedback data sources. With this faithful digital reflection built on multi-source
data fusion, the various operating parameters and indicators of the products can
be comprehensively supervised, and system health management of the intelligent
workshop can be realized.

Keywords: Digital twin · Virtual-real fusion · Data collection · Intelligent workshop

1 Introduction

With the rapid development of industrial technology and the new generation of
information technology, equipment in fields such as intelligent workshops and
industrial manufacturing has been upgraded, including industrial robots, 3D printers,
and machining centers, and the integration and intelligence of such typical equipment
have been continuously improved [1]. Along with the formation of the information space, a
large number of sensors are needed to collect the various kinds of information produced by
complex equipment, gathering in real time the information required for monitoring, con-
nection, and interaction, so as to realize an intelligent workshop [2]. However, at present,
the physical world and the information world of the workshop are isolated from each other,
and data cannot be transmitted or integrated between them, which leads to the inability to realize interaction and

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 585–592, 2020.
https://doi.org/10.1007/978-981-15-2341-0_73
integration in the virtual and real space, so that the health and quality management of the
entire workshop cannot be realized [3].
The digital twin is regarded as the best way to achieve smart factory upgrades. Workshop
manufacturing data exhibits big-data characteristics: it is large-scale, multi-source and
heterogeneous, multi-temporal-scale, and multi-dimensional. Through the deep integration of digital
twinning with the workshop management system, the multi-source data of the whole
workshop can be obtained and correlated through data fusion modeling,
realizing the digitization of the workshop. More importantly, the introduction of the digital
twin makes traditional workshop health management more open and
extensible. At present, research on and application of data fusion modeling based
on digital twinning in China is still immature and lacks implementation experience
[4, 5].
This work focuses on health monitoring across the whole life cycle of the work-
shop. It mainly applies multi-source data acquisition and data fusion modeling
to monitor the health status of the workshop in actual
operation, rather than applying the full set of key digital twin technologies.
Through intelligent data mapping, integration and fusion modeling of
the heterogeneous equipment data in the workshop, the role of digital
twin technology in workshop health monitoring is realized, and a basis
is provided for the subsequent in-depth digitalization of the workshop.

2 System Framework

The digital twin system of the intelligent workshop introduced in this paper consists
mainly of the following modules: multi-source data acquisition, data denoising mod-
eling, data fusion modeling, and data analysis. The overall framework of the
system is shown in Fig. 1.

Fig. 1. Overall design of the system framework



The main functions of the digital twin modules in the intelligent workshop are as
follows.
(1) Multi-source data acquisition module: the lowest-level information module of the
intelligent workshop and the most basic building block of the digital twin. To collect
data from the multi-source heterogeneous equipment in the workshop, the system
adopts PLCs, wireless APs and various sensor converters. Through this hardware
network of the underlying equipment, real-time information such as robot motion,
the processing status of the machining equipment, and the state of the logistics
equipment is obtained. At the same time, the remaining sensor devices are used to
obtain state information such as the temperature and speed of the actual processing
equipment. Since the data collected by different sensors often appear in different
formats, the workshop data is multi-source and heterogeneous.
(2) Data pre-processing module: the massive data collected by the multi-source
acquisition module will often contain false information caused by sudden events in
the workshop or by data collection errors, resulting in inaccurate data. Before the
information is used, a Web-Service is applied to cluster the massive workshop data,
and erroneous and inaccurate information is denoised and filtered out to obtain
representative workshop data.
(3) Data fusion processing module: after the false information has been denoised and
filtered out of the multi-source data collected in the workshop, an improved BP
neural network fusion algorithm is adopted to integrate redundant and comple-
mentary information according to certain rules and to resolve conflicting data,
yielding an accurate judgment of the target.

3 System Structure Function Design and Implementation

3.1 Multi-source Data Collection Based on the Intelligent Workshop

The data collection for the heterogeneous devices mainly relies on a centralized
network architecture. Through wireless intelligent gateway (AP) technology, the
various types of devices are brought into network communication, establishing the
basic conditions for the interconnection and exchange of data between heterogeneous
devices. During workshop equipment processing, the OPC general protocol is used to
obtain real-time data such as field device operation and status through the underlying
integrated heterogeneous equipment network; the data of each PLC is classified and
monitored through multi-channel configuration, so that the real-time information of the
equipment during operation can be acquired and located accurately and intuitively,
keeping abreast of the real-time operation and processing status of each device.
For the entire workshop acquisition system, as shown in Fig. 2, a number of
channels are opened for the automated workshop equipment. Each channel is provided
with a main device, and the operating parameters, dynamic data, switch information and
logistics information of the specific equipment are collected; the real-time data of the
first part of the line are finally obtained through the design and integration of the PLC
network technology. The other channels are arranged in the same way, completing the
orderly management of the corresponding data, so that the real-time data of the
workshop is effectively managed and monitored.

Fig. 2. Multi-source data acquisition diagram

3.2 Intelligent Workshop Data Fusion Model Based on Digital Twins

3.2.1 Intelligent Shop Floor Data Filtering Based on Digital Twins
In an actual production workshop, the data of each production device, workpiece in
process, material inventory, etc. changes over time, and the production requirements of
the smart workshop demand that this information be grasped accurately in real time,
which inevitably produces massive data. Moreover, owing to the influence of the
external environment and errors in the data collection methods, the massive workshop
data is usually mixed with erroneous, redundant and uncertain data. The data filtering
method based on Web-Service is an optimization method for filtering massive data: it
filters and selectively stores massive data according to requirements, avoids data
redundancy, makes data access more efficient, and realizes data sharing between
different application systems.
In terms of business logic encapsulation, a Web service in the REST (Representational
State Transfer) distributed architectural style encapsulates the business process interface of
the data. A set of friendly API functions is established, endpoints are described through
URLs, and CRUD operations (create, read, update, delete) on resources are implemented
through common HTTP methods. This series of APIs can adapt to different platforms and
has the advantages of being cross-platform and cross-language. In terms of data format,
the lightweight JSON (JavaScript Object Notation) format is selected: JSON uses a
key-value text structure that is easy for people to read and write and easy for machines to
parse and generate, making it an ideal data exchange language.

The data collected by the intelligent workshop is mainly processed in the Web
system in two steps, grouping and clustering, through which the data filtering and
cleaning work is completed:
(1) Data grouping
According to the category of equipment in the process, the data of the product during
processing is grouped into categories; during grouping, the data is converted into
JSON format and stored and queried in the form of key-value pairs. Grouping takes
considerable effort to transform the specific data, and compared with an algorithmic
approach it is less simple. However, for the cleaning and filtering of the underlying
data, processing per specific device greatly improves the accuracy and timeliness of
the device data, and no redundant device data is grouped.
(2) Data clustering
After grouping is completed, the data is already in JSON format and stored in the
database, but it cannot yet be filtered, clustered, queried or utilized. The role of data
clustering is to leave a filtering channel in each grouped data set, that is, an API
interface with a reserved matching mechanism. When the upper-level computer calls
the interface, the filtered, matching data is passed through the reserved interface
channel.
The workshop equipment data is thus processed by grouping and clustering: the data
has been filtered and cleaned into groups according to equipment category, and the
reserved interfaces make querying convenient. Part of the query results are shown
in Fig. 3.

Fig. 3. Denoising query processing result
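The group-then-query flow described above can be sketched as follows. This is a minimal illustration, not the paper's Web-Service implementation: the device names, categories and field names are hypothetical, and the "reserved interface channel" is reduced to a simple lookup function.

```python
import json
from collections import defaultdict

# Hypothetical readings from heterogeneous devices; field names are
# illustrative, not taken from the actual workshop system.
readings = [
    {"device": "robot-01", "category": "robot", "metric": "joint_speed", "value": 1.2},
    {"device": "cnc-03", "category": "machine_tool", "metric": "spindle_temp", "value": 41.5},
    {"device": "robot-01", "category": "robot", "metric": "joint_speed", "value": 1.3},
]

def group_by_category(rows):
    """Grouping step: bucket readings by equipment category and serialize
    the result as a JSON key-value document."""
    groups = defaultdict(list)
    for row in rows:
        groups[row["category"]].append(
            {"device": row["device"], row["metric"]: row["value"]}
        )
    return json.dumps(groups, sort_keys=True)

def query_group(doc, category):
    """Clustering/query step: the reserved 'interface channel' returns only
    the matching group from the stored JSON document."""
    return json.loads(doc).get(category, [])

doc = group_by_category(readings)
print(query_group(doc, "robot"))
```

A caller on the upper layer would invoke `query_group` with a category name and receive only the filtered, matching records, mirroring the reserved-interface mechanism described above.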

3.2.2 Intelligent Workshop Data Fusion Modeling Based on Digital Twins
The quantity of workshop equipment information, product information and material
information generated in the production process is huge; after collection and filtering it
is distributed across many nodes and cannot be classified and grouped autonomously.
In order to improve the efficiency of data collection in the intelligent workshop and to
realize autonomous data grouping by product, an intelligent data fusion algorithm,
GAPSOBP (BP neural network data fusion optimized by a genetic algorithm and
particle swarm optimization), is proposed. The GAPSOBP algorithm maps the nodes of
the wireless sensor network to neurons in a BP neural network, extracts
the sensory data collected by the wireless sensor network through the neural network,
and combines the collected sensor data with the clustering route.
When applying the GAPSOBP algorithm, the structure of the BP neural network
must first be determined according to the topology of the wireless transmission
network. The wireless sensor network forms clusters according to the LEACH
algorithm, and each cluster is regarded as one BP neural network. The number of nodes
in a cluster is the number of input-layer neurons, and the number of output-layer
neurons, i.e. the number of cluster heads, is 1. The number of hidden-layer neurons is
first estimated according to formula (1) and then refined by trial and error.
L = √(m + n) + a,  0 < a < 10  (1)

where n is the number of input-layer neurons and m is the number of output-layer neurons.
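Formula (1) only narrows the hidden-layer size to a small candidate range; the final choice is made by trial and error. A minimal sketch of the candidate generation (the example cluster size is hypothetical):

```python
import math

def hidden_layer_candidates(n_inputs, n_outputs, a_max=10):
    """Candidate hidden-node counts from L = sqrt(n + m) + a with 0 < a < 10,
    to be narrowed down by trial and error as described above."""
    base = math.sqrt(n_inputs + n_outputs)
    return [round(base + a) for a in range(1, a_max)]

# e.g. a hypothetical cluster with 8 sensor nodes (inputs) and 1 cluster head (output)
print(hidden_layer_candidates(8, 1))
```

Each candidate would then be evaluated by training the network and keeping the size with the lowest validation error.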
The workflow of the GAPSOBP algorithm mainly consists of three phases:
(1) Particle initialization: first, the BP neural network structure is determined. The
initial weights and thresholds of the BP neural network are then passed to the
particle swarm algorithm as the initial population; the particle swarm algorithm
encodes the network weights and thresholds and initializes the candidate solutions
and particle velocities.
(2) Solving for the optimal BP neural network parameters: the particle swarm opti-
mization algorithm calculates the individual and global extrema according to the
error fitness function between the actual output of the BP neural network and the
expected output, and updates the particle velocities and positions. The crossover
and mutation operations of the genetic algorithm are added to the particle swarm
optimization process, after which the fitness value is recalculated to determine
whether the termination condition is satisfied. If so, the optimization result is
passed to the BP neural network for training; otherwise the particle velocities and
positions continue to be updated iteratively until the algorithm reaches the
termination condition.
(3) Training the BP neural network: the BP neural network uses the optimization
results of the genetic particle swarm algorithm to train the network, updating the
weights and thresholds until the network parameters are determined, after which
the data fusion processing of the wireless sensor network can be performed.
Through the above three stages, the grouped data is treated as nodes; specific rules
and methods are used to fuse the data of each specific object node, the representative
information of each specific device object is obtained, and the processing of the
algorithm is completed.
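The core of phases (1) and (2) can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the network size, training data, fitness function and all parameter values are hypothetical, only the GA mutation operator is injected (crossover and the final BP training phase are omitted for brevity), and the "fusion target" is simply the mean of the inputs.

```python
import math
import random

random.seed(0)  # deterministic for the illustration

# Tiny BP-style network: N_IN inputs -> N_HID hidden (tanh) -> 1 output.
N_IN, N_HID = 3, 4
DIM = N_IN * N_HID + N_HID  # flattened weight vector (biases omitted)

def forward(w, x):
    hidden = [math.tanh(sum(w[j * N_IN + i] * x[i] for i in range(N_IN)))
              for j in range(N_HID)]
    return sum(w[N_IN * N_HID + j] * hidden[j] for j in range(N_HID))

# Hypothetical fusion target: the fused value is the mean of the inputs.
DATA = []
for _ in range(20):
    x = [random.uniform(0.0, 1.0) for _ in range(N_IN)]
    DATA.append((x, sum(x) / N_IN))

def fitness(w):
    """Error fitness: mean squared error between network and expected output."""
    return sum((forward(w, x) - y) ** 2 for x, y in DATA) / len(DATA)

def gapso(iters=60, swarm=20, inertia=0.7, c1=1.5, c2=1.5, pm=0.1):
    # Phase 1: particle initialization (each particle encodes all weights).
    pos = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(swarm)]
    vel = [[0.0] * DIM for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    # Phase 2: PSO velocity/position updates with a GA-style mutation injected.
    for _ in range(iters):
        for k in range(swarm):
            for d in range(DIM):
                vel[k][d] = (inertia * vel[k][d]
                             + c1 * random.random() * (pbest[k][d] - pos[k][d])
                             + c2 * random.random() * (gbest[d] - pos[k][d]))
                pos[k][d] += vel[k][d]
            if random.random() < pm:  # mutation of one randomly chosen weight
                pos[k][random.randrange(DIM)] += random.gauss(0.0, 0.3)
            if fitness(pos[k]) < fitness(pbest[k]):
                pbest[k] = pos[k][:]
        gbest = min(pbest + [gbest], key=fitness)[:]
    return gbest

best_w = gapso()
print("optimized MSE:", fitness(best_w))
```

In the full GAPSOBP workflow the weights returned here would seed phase (3), conventional BP gradient training, rather than being used directly.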

4 Application Examples and Effect Analysis

The digital twin system designed in this paper has been successfully applied to the
production workshop of the Shanghai Key Laboratory of Intelligent Manufacturing
and Robotics, where it satisfies the need for unified collection, management and
operation of workshop information, solves the unified, centralized data management of
the heterogeneous equipment in the workshop, and provides a digital data foundation
for intelligent management of the shop floor. The system obtains the representative
information of the equipment through data collection, filtering and fusion, and displays
the running data and real-time status of the entire workshop production line, as shown
in Fig. 4; that is, the information collected on the workshop production line is displayed
in fused form. Real-time information such as robot motion data and working status is
collected, and through filtering, denoising and fusion of the collected multi-source data,
accurate representative information of the robot arm is obtained; the working state, the
grasped object, and the six-joint rotation angles shown in the figure are displayed
together.

Fig. 4. Fusion display of robotic arms

As shown in Fig. 5, real-time information such as the motion data and working
status of the machine tools on the line is collected. Through filtering, denoising and
fusion of the collected multi-source data, accurate representative information of the
machine tool is obtained, and the industrial information and temperature information
shown in the figure are displayed together.

Fig. 5. Fusion display of a machine tool



5 Conclusions

This paper builds on the co-authors' earlier work, "Research on the Virtual Reality
Synchronization of Workshop Digital Twin" [6], and carries out in-depth research on
data fusion. Based on health monitoring over the whole life cycle of the workshop,
multi-source data acquisition and data fusion modeling are applied to monitor the
health status of the workshop in actual operation, without applying the full set of key
digital twin technologies. Through intelligent data mapping, integration and fusion
modeling of the heterogeneous equipment data in the workshop, the role of digital twin
technology in workshop health monitoring is realized, and a basis is provided for the
subsequent in-depth digitalization of the workshop.

Acknowledgements. The authors would like to express their appreciation to mentors at
Shanghai University for their valuable comments and other help. This work was supported by
the pillar program of the Shanghai Economic and Information Committee of China
(No. 2018-GYHLW-02020).

References
1. Liu, D., Guo, K., Wang, B., Peng, Y.: Summary and prospect of digital twin technology.
Chin. J. Sci. Instrum. 39(11), 1–10 (2018)
2. Guo, D., Bao, J., Shi, G., Zhang, Q., Sun, X., Weng, H.: Modeling of aerospace structural
parts manufacturing workshop based on digital twinning. J. Donghua Univ. (Nat. Sci. Ed.) 44
(04), 578–585 + 607 (2018)
3. Chen, Z., Ding, X., Tang, J., Liu, Y.: Exploration of production control model of aircraft
assembly workshop based on digital twin. Aeronaut. Manufact. Technol. 61(12), 46–
50 + 58 (2018)
4. Tao, F., Liu, W., Liu, J., Liu, X., Liu, Q., Qu, T., Hu, T., Zhang, Z., Xiang, F., Xu, W., Wang,
J., Zhang, Y., Liu, Z., Li, H., Cheng, J., Qi, Q., Zhang, M., Zhang, H., Yan, F., He, L., Yi, W.,
Min, C.H.: Digital twin and its application exploration. Comput. Integr. Manufact. Syst.
24(01), 1–18 (2018)
5. Jiakai, G.: Digital twins: the best bond to connect the manufacturing physics world and digital
virtual world. Softw. Integr. Circuits 09, 4 (2018)
6. Wu, P., Qi, M., Gao, L., Zou, W., Miao, Q., Liu, L.: Research on the virtual reality
synchronization of workshop digital twin. In: 2019 IEEE 8th Joint International Information
Technology and Artificial Intelligence Conference (ITAIC), Chongqing, China, pp. 875–879
(2019)
Harvesting Path Planning of Selective
Harvesting Robot for White Asparagus

Ping Zhang1, Jin Yuan2(&), Xuemei Liu3, and Yang Li3

1 College of Information Science and Engineering, Shandong Agricultural University, Tai’an, China
2 College of Mechanical and Electronic Engineering, Shandong Agricultural University, Tai’an, China
[email protected]
3 Shandong Provincial Key Laboratory of Horticultural Machinery and Equipment, Tai’an, China

Abstract. In order to optimize the harvesting path of a harvesting robot for white
asparagus, a global path planning algorithm based on a multi-fork tree is designed
according to the distribution of the harvesting points and the collecting box points,
and the optimal harvesting path with the shortest harvesting distance is obtained.
On this basis, a path planning algorithm based on sequential harvesting of white
asparagus is proposed, which effectively increases the speed of path planning while
its harvesting path distance differs little from the optimal path. The simulation
results show that both algorithms can effectively improve the harvesting efficiency
of white asparagus. With the global path planning algorithm, the moving distance
of the end-effector is reduced by 42.83% on average across different numbers of
white asparagus. With the path planning algorithm based on sequential harvesting,
the average moving distance of the end-effector is reduced by 37.2%, with very
good real-time performance. Path planning of the white asparagus harvesting
process thus has a great impact on harvesting efficiency.

Keywords: Harvesting path · Multi-fork tree · Global path · Sequential harvesting · End-effector

1 Introduction

At present, China is the country with the largest production and export volume of white
asparagus, but its complicated harvesting process is the bottleneck of the development
of the white asparagus industry. Manual harvesting is widely used at home and abroad,
and no domestic white asparagus harvesting machinery has been reported [1, 2]. Based
on the visual positioning and harvesting system of the laboratory's selective harvesting
robot for white asparagus, this paper studies the harvesting path planning of the
end-effector, which is of great significance for improving the harvesting efficiency of
white asparagus [3].
Firstly, an image of the harvesting area is acquired by the machine vision system,
and the white asparagus in the current area is identified and located by image
© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 593–599, 2020.
https://doi.org/10.1007/978-981-15-2341-0_74
processing technology to obtain the position coordinates of each asparagus tip.
According to these coordinates, the end-effector moves to the harvest point along the
X-axis and Y-axis screws [4–6]. After the white asparagus is clamped and cut, the
end-effector moves upwards along the Z-axis screw to bring it out of the ground and
then moves to the collecting box point to place it; one harvest is thus completed. This
process is repeated until the last white asparagus in the current harvesting area has been
harvested, and the robot then enters the next area [7–9]. Because white asparagus is
usually irregularly distributed on the ridge surface, harvest path planning is necessary
in order to improve harvesting efficiency; the optimal path is the shortest path taken by
the end-effector from the starting point of harvesting to the completion of all white
asparagus in the current region.
In this paper, a global path planning algorithm yielding the shortest distance and a
path planning algorithm based on sequential harvesting are proposed, and simulation
analysis shows that the designed algorithms can effectively improve the harvesting
efficiency of white asparagus.

2 Global Path Planning Algorithm

2.1 Harvesting Path Planning

A schematic diagram of the harvesting path is shown in Fig. 1. There are collecting
box points on both sides of the harvesting robot, shown as the yellow areas in Fig. 1;
to ensure reliable and damage-free placement, the harvested asparagus must be placed
in the middle of a collecting box. Since the harvesting sequence and the placement
locations offer many choices, there are N! · 2^N paths for the end-effector to choose
from. The global feasible path planning algorithm is as follows:
(1) The end-effector randomly selects the first target from the initial point O, say
white asparagus A; the specific placement location at the collecting box after
harvesting depends on the location of the next asparagus to be collected.
(2) From the remaining target points (excluding the harvested A), randomly select
one, say target point B. Find the mirror positions of the collecting box points on
both sides of B, denoted B′ and B′′; the intersections of the lines connecting A
with B′ and B′′ and the centerlines of the collecting boxes are A1 and A2
respectively, and the end-effector moves to position A1 or A2 to place asparagus A.
(3) Following step (2), harvest the other target points until the last target point in the
current area is finished.
(4) After harvesting the last target point, it is placed horizontally: assuming the last
white asparagus is C, it is placed at C1 or C2.
(5) Record the path and calculate the moving distance of the end-effector.
The red line in Fig. 1 shows one feasible path, but it is not necessarily the optimal
path. In order to find the shortest path, the distances of all feasible paths must be
calculated.
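The exhaustive search over harvesting orders and placement sides can be sketched as follows. This is a simplified illustration, not the paper's geometry: the coordinates are hypothetical, and the placement point is reduced to a point on the chosen box centerline level with the harvest point, rather than the mirror-intersection construction of step (2).

```python
import math
from itertools import permutations, product

# Hypothetical layout: harvest points on the ridge and the x-coordinates of
# the two collecting-box centerlines.
ORIGIN = (0.0, 0.0)
targets = {"A": (30.0, 40.0), "B": (45.0, 58.0), "C": (60.0, 77.0)}
BOX_X = {"left": -5.0, "right": 85.0}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def path_length(order, sides):
    """Length of O -> harvest -> place -> harvest -> ... -> place."""
    pos, total = ORIGIN, 0.0
    for name, side in zip(order, sides):
        harvest = targets[name]
        place = (BOX_X[side], harvest[1])  # simplified placement point
        total += dist(pos, harvest) + dist(harvest, place)
        pos = place
    return total

# Enumerate all N! * 2^N candidate paths (48 for three spears) and keep the shortest.
best = min(
    (path_length(order, sides), order, sides)
    for order in permutations(targets)
    for sides in product(("left", "right"), repeat=len(targets))
)
print(best)
```

The factorial-exponential growth of this enumeration is exactly what motivates the sequential algorithm of Sect. 3 for larger numbers of spears.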

Fig. 1. Schematic diagram of white asparagus harvesting path by global planning algorithm

2.2 Global Optimal Path Planning Algorithm

Taking the three white asparagus A, B and C in the harvesting area as an example, a
path planning decision tree as shown in Fig. 2 is constructed [10]. By traversing the 48
paths, the one with the smallest distance is selected as the optimal harvesting path.

Fig. 2. Path planning decision tree

The coordinates of each point are shown in Fig. 1; the boundary point of the har-
vesting area is M(x_max, y_max), and the vertical distance between the target points
and the collecting box points is d. In Fig. 2, the harvesting path marked with the red
line in the decision tree is O → A → A1 → B → B1 → C → C1; the distance the
end-effector moves along this path is then calculated.
3 Path Planning Algorithm Based on Sequential Harvesting

Although traversing all feasible paths via the path planning decision tree described
above can find the shortest path, as the number of white asparagus in the harvesting
area increases it takes a long time to determine the optimal harvesting path. Moreover,
before the current target point is placed, the specific location of the collecting box point
must be calculated according to the location of the next target point, so the time the
end-effector waits exceeds the harvesting time itself; in addition, the placement process
requires the X-axis and Y-axis screws to move at the same time, complicating the
control of the end-effector. In view of these characteristics, when the number of white
asparagus is relatively large, harvesting in sequence may be adopted. The harvest path
planning algorithm of the end-effector is as follows:
(1) After obtaining the position coordinates of each target point in the harvesting area,
all the target points are sorted according to the order of the value of the ordinate y
from small to large.
(2) The end-effector starts from the initial point O and selects the target point with the
smallest y coordinate after sorting (assuming target point A) for collection.
(3) After harvesting, select the collecting box point close to the target point to place.
(4) Find the target point of the remaining (except for the target point A that has been
harvested), calculate the distance between point A and other target points, and
select the one closest to point A, if the distance between the target point and A is
less than L set previously, the target point is preferentially harvested, otherwise
the target points sorted in step (1) are sequentially collected.
(5) Repeat steps (3) and (4) until the target points of the current area are all harvested.
(6) Record the moving path and calculate the distance the end-effector moves.
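The steps above can be sketched in Python (an illustrative sketch only: the tuple representation of points and the use of straight-line Euclidean distance are assumptions, and the collecting-box placement of steps (3) and (6) is omitted):

```python
import math

def sequential_harvest_order(targets, L):
    """Order target points by y, then locally prefer a nearer
    neighbour when it lies within distance L (steps 1-5 above)."""
    remaining = sorted(targets, key=lambda p: p[1])  # step (1): sort by y
    path = []
    while remaining:
        if not path:
            current = remaining.pop(0)        # step (2): smallest y first
        else:
            # step (4): harvest the nearest remaining point if closer than L
            nearest = min(remaining, key=lambda p: math.dist(path[-1], p))
            if math.dist(path[-1], nearest) < L:
                remaining.remove(nearest)
                current = nearest
            else:
                current = remaining.pop(0)    # otherwise keep the y-order
        path.append(current)
    return path

# Example with the three asparagus coordinates used in the paper
print(sequential_harvest_order([(30, 40), (45, 58), (60, 77)], L=20))
```

With the paper's coordinates the y-order is never overridden, since no remaining point lies within L = 20 cm of the last harvested one.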

[Figure: layout of initial point O, end point M, targets A(xa, ya), B(xb, yb), C(xc, yc), collecting box points A1/A2, B1/B2, C1/C2 and distances d1–d6 in the x–y plane]

Fig. 3. Schematic diagram of white asparagus harvesting path based on sequential algorithm

Take the three white asparagus A, B, and C in the harvesting area as an example; the harvest path is shown in Fig. 3. First, all the target points are sorted in ascending order of the ordinate y. Suppose the initially planned path after sorting is A → C → B. Calculate the distances from A to B and from A to C, d1 and d2 respectively; if d1 < L < d2, the harvesting path is adjusted to A → B → C, where the size of L is set according to the actual situation. Then calculate the distance the end effector moves.
Harvesting Path Planning of Selective Harvesting Robot 597

4 Simulation Analysis of Path Planning Algorithm


4.1 Simulation Analysis of Global Optimal Path Planning Algorithm
According to Fig. 1, the vertical distance between a target point and the position of its collecting box point is d = 5 cm. The coordinates of the three white asparagus A, B, and C are (30, 40), (45, 58), and (60, 77) respectively, and the coordinates of O and M are (0, 0) and (80, 100). Calculating the distance the end effector moves by the above algorithm, the shortest distance among the 48 harvesting paths is 258 cm.
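The exhaustive search behind these numbers can be sketched as follows. The placement of the two candidate collecting box points per target is a hypothetical stand-in (one point d = 5 cm to each side of the target), since the exact box geometry of Fig. 1 is not reproduced here:

```python
from itertools import permutations, product
import math

def enumerate_paths(origin, targets, box_points_for):
    """Exhaustively enumerate every harvesting order and every choice of
    collecting box point per target, returning the shortest and longest
    total travel distances and the number of candidate paths.
    box_points_for(t) yields the candidate box points of target t
    (two per target in the paper's setup)."""
    best, worst, count = math.inf, 0.0, 0
    for order in permutations(targets):
        for chosen_boxes in product(*(box_points_for(t) for t in order)):
            pos, total = origin, 0.0
            for target, box in zip(order, chosen_boxes):
                total += math.dist(pos, target)   # move to the asparagus
                total += math.dist(target, box)   # place it in the box
                pos = box
            best, worst = min(best, total), max(worst, total)
            count += 1
    return best, worst, count

# Hypothetical box placement: one point 5 cm to each side of the target
boxes = lambda t: [(t[0] - 5, t[1]), (t[0] + 5, t[1])]
best, worst, n_paths = enumerate_paths((0, 0), [(30, 40), (45, 58), (60, 77)], boxes)
print(n_paths)  # 3! * 2**3 = 48 candidate paths, as in the paper
```

The enumeration reproduces the 48 candidate paths for three asparagus; the actual shortest distance depends on the true box-point geometry.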
The number of white asparagus in a harvesting area is generally no more than five. For each case with a different number of asparagus in the harvesting area, photos of five randomly selected groups of harvesting areas are used to obtain position coordinates for path optimization; the results are shown in Table 1.

Table 1. Simulation data of global optimal path planning algorithm

Number of white  Shortest path  Longest path   Maximum efficiency  Average efficiency
asparagus        distance (cm)  distance (cm)  improvement (%)     improvement (%)
2                172            335            48.66               36.90
3                212            471            54.99               47.89
4                286            634            54.89               46.53
5                372            777            52.12               39.98
Average value    –              –              52.67               42.83

Analyzing the data in Table 1: for harvesting areas containing 2, 3, 4 and 5 white asparagus respectively, each case uses five different groups of harvesting areas, and the group with the highest efficiency is selected among the five. After adopting the path optimization algorithm, the shortest path distance saves 48.66%, 54.99%, 54.89% and 52.12% respectively compared with the longest path distance, with an average efficiency improvement of 42.83%. The path optimization algorithm can thus effectively improve the harvesting efficiency of the end effector.

4.2 Simulation Analysis of Path Planning Algorithm Based on Sequential Harvesting
In the process of path optimization, in order to verify the effectiveness of the algorithm, the same five groups of harvesting areas as in the above simulation experiment are selected for each number of asparagus in the harvesting area. According to Fig. 3, the initial point coordinate is O(0, 0), the vertical distance between a target point and the position of its collecting box point is d = 5 cm, and the distance L is 20 cm; that is, on the basis of sequential harvesting, the harvesting path is slightly adjusted according to the distance L. The coordinates of the three white asparagus A, B, and C are (30, 40), (45, 58), and (60, 77) respectively. According to the path optimization algorithm, the moving distance of the end effector is 267 cm.

Table 2. Simulation data of path optimization algorithm based on sequential harvesting

Number of white  Shortest path  Longest path   Maximum efficiency  Average efficiency
asparagus        distance (cm)  distance (cm)  improvement (%)     improvement (%)
2                180            229            46.27               31.94
3                228            303            51.59               41.85
4                321            390            46.85               40.27
5                402            501            44.78               34.75
Average value    –              –              47.37               37.20

The results are shown in Table 2. Taking three white asparagus as an example, the minimum and maximum moving distances of the end effector across the five different harvesting areas are 228 cm and 303 cm respectively; this spread arises because the positions of white asparagus in the ridge are unevenly distributed and the five groups of harvesting areas were randomly selected. Compared with the longest path distance of the global optimal path planning algorithm, the maximum efficiency improvement for this case is 51.59%, and the average improvement is 41.85%.
In the four cases, the maximum efficiency improvements are 46.27%, 51.59%, 46.85% and 44.78% respectively, with an average of 47.37%. Considering the five different harvesting areas, the average efficiency improvements in the four cases are 31.94%, 41.85%, 40.27% and 34.75% respectively, for an overall average of 37.20%.
Therefore, the path planning algorithm based on sequential harvesting can effectively
improve the harvesting efficiency of end-effector.

4.3 Comparison Between Global Optimal Path Planning Algorithm and Path Planning Algorithm Based on Sequential Harvesting
Comparing the average of the maximum improvement efficiencies in the four different cases, and the average improvement efficiency over the five different harvesting areas in the four cases, the path planning algorithm based on sequential harvesting is about 5% lower than the global optimal path planning algorithm.
When the number of white asparagus in a harvesting area is more than 5, the path planning algorithm based on sequential harvesting is clearly superior to the global optimal path planning algorithm. In practical tests, when the number of asparagus is 6, the global optimal path planning algorithm must traverse a total of 46,080 paths, and determining the optimal harvesting path takes more than 180 s. When the number of asparagus is 7, a total of 645,120 paths must be traversed, and the simulation runs out of memory on an ordinary computer. When the number of white asparagus is 6, 7, 8 or 9, the simulation time for determining the harvesting sequence with the sequential harvesting path planning algorithm is well under 1 s.
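The path counts quoted above are consistent with n! harvesting orders times two collecting-box choices per target, i.e. n!·2^n candidate paths (an interpretation that reproduces the reported 48, 46,080 and 645,120):

```python
from math import factorial

def candidate_paths(n):
    """Number of candidate harvesting paths for n asparagus:
    n! orderings times 2 collecting-box choices per target."""
    return factorial(n) * 2 ** n

print([candidate_paths(n) for n in (3, 6, 7)])  # [48, 46080, 645120]
```

The factorial growth explains why exhaustive traversal becomes infeasible beyond five or six asparagus, while the sequential algorithm stays fast.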

5 Conclusion

In this paper, the harvesting path of a white asparagus harvesting robot is planned, and a global optimal path planning algorithm and a path planning algorithm based on sequential harvesting are proposed. The global optimal path planning algorithm yields the path with the smallest running distance of the end-effector. With the path planning algorithm based on sequential harvesting, the harvesting sequence of target points can be obtained faster, and the operation of the end-effector is simpler when the white asparagus is placed at the collecting box point. Simulation results show that both algorithms are effective: both can improve harvesting efficiency, and especially when the number of asparagus is large, the path planning algorithm based on sequential harvesting has good real-time performance and better meets actual needs.

Acknowledgements. This work is supported by the National Natural Science Foundation of China (51675317), the Key R&D Plan of Shandong Province (2017GNC12110) and the National Key R&D Program of China (2017YFD0701103-3).

References
1. Lu, B.: Development status and development trend of asparagus industry in China. Shanghai
Vegetables 12(4), 3–4 (2018)
2. Chen, D., Zhang, Q., Wang, S., et al.: Current status and future solutions for asparagus
mechanical harvesting. J. China Agric. Univ. 21(4), 113–120 (2016)
3. Dong, F., Heinemann, W., Kasper, R.: Development of a row guidance system for an
autonomous robot for white asparagus harvesting. Comput. Electron. Agric. 79(2), 216–225
(2011)
4. Li, Q., Hu, T., Wu, C., et al.: Review of end-effectors in fruit and vegetable harvesting robot.
Trans. Chin. Soc. Agric. Mach. 39(3), 175–179 (2008)
5. Yuan, J., Du, S., Liu, X.: A clip-cut white asparagus harvesting device and harvesting
method. ZL201610887545.8
6. Zhang, S., Ai, Y., Zhang, B., Sun, X., Zhang, M., Zhang, T., Song, G.: Path recognition and
control of cigarette warehouse robot based on machine vision. Shandong Agric. Sci. 51(03),
128–134 (2019)
7. Wang, X.: Studies on information acquisition and path planning of greenhouse tomato
harvesting robot with selective harvesting operation. JiangSu University (2012)
8. Chen, J., Wang, H., Jiang, H., et al.: Design of end-effector for kiwifruit harvesting robot.
Trans. Chin. Soc. Agric. Mach. 43(10), 151–154 (2012)
9. Liu, X., Du, S., Yuan, J., Li, Y., Zou, L.: Analysis and experiment on the operation of the end actuator of the white asparagus selective harvester. Trans. Chin. Soc. Agric. Mach. 49(4), 110–120 (2018)
10. Yuan, Y., Zhang, X., Hu, X.: Algorithm for optimization of apple harvesting path and
simulation. Trans. CSAE 25(4), 141–144 (2009)
Optimization of Technological Processes
at Production Sites Based on Digital Modeling

Pavel Drobintsev, Nikita Voinov, Lina Kotlyarova, Ivan Selin, and Olga Aleksandrova

Institute of Computer Science and Technology, Peter the Great St. Petersburg
Polytechnic University, St. Petersburg, Russia
{drob,voinov}@ics2.ecd.spbstu.ru

Abstract. Optimized technological routes at production sites provide a more balanced and effective organization of the whole manufacturing process. The approach described in this paper is based on digital modeling and makes it possible to automatically obtain correct and verified technological routes. It includes formalization based on modular technology, optimization, simulation of a product implementation and analysis of simulation results.

Keywords: Optimization of technological routes · Digital modeling · Automation of technological processes · Simulation

1 Introduction

The main value of modern production is information, the amounts of which have
become too large for a human to process effectively. Technologies are changing faster
than enterprises manage to integrate them; the level of automation is constantly
growing. However, it is not enough only to provide modern equipment for a factory; it is also necessary to ensure the efficiency of its work. This can be achieved by an adequate
analysis of the incoming information and its subsequent processing. At the modern
mechanical engineering site, the main work on the manufacturing of products is carried
out on equipment with computer numerical control, therefore the optimization of the
technological process often comes down to the optimization of the program code for
these machines. At the same time, the work on the analysis and processing of information is not always fully automated, due to the need to adapt quickly to the production environment, especially at small enterprises with small-scale production.
In this area, it is necessary to quickly create production plans that can change
depending on the state of the process equipment and manufactured products, and the
implementation of plans should be effectively automated.
Usually, the tasks of operational planning and automated production management
are carried out by the manufacturing execution systems (MES) [1]. They occupy an
intermediate place in the hierarchy of enterprise management systems between the level
of information collection from equipment in workshops done by supervisory control

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 600–607, 2020.
https://doi.org/10.1007/978-981-15-2341-0_75
Optimization of Technological Processes at Production Sites 601

and data acquisition (SCADA) systems [2] and the level of operations over a large
amount of administrative, financial and accounting information done by enterprise
resource planning (ERP) [3] systems. Nowadays there are three popular solutions on the Russian market: the PHOBOS system [4], the YSB.Enterprise.Mes system and the PolyPlan system [5]. PHOBOS is traditionally used in large and medium-sized
mechanical engineering enterprises. YSB.Enterprise.Mes originated from the wood-
working industry and focuses on the sector of medium and small enterprises. The
PolyPlan system has a smaller set of MES functions, but is positioned as an operational
scheduling system for automated and flexible manufacturing in engineering.
The solution developed in this work is designed to solve a narrower class of problems: simplifying the technological preparation of production for a small-scale mechanical engineering site, based on the introduction of operative digital modeling and analysis of the technological process of the production site in order to optimize and generate programs for managing and monitoring the production process.

2 Representation of an Initial Technological Route as an MSC Diagram

Formalization of a detail uses modular technology, which implies an effective adaptation of the technological process to the product. The choice of surface and compound
modules (SMs and CMs) of a detail as objects of classification allows resolving the
contradiction between continuous change of products and the desire for consistency in
technological equipment. Since the detail is represented by a set of SMs and CMs, the
technological processes of details manufacturing are built by assembling them from the
modules of technological processes. In this case, the task of the technologist is to
provide each SM and CM with standard modules of technological equipment.
The formalization is carried out by the technologist on the basis of the detail's drawing. He should highlight the modules to be processed in the drawing and provide
the description of each with a variety of design and technological parameters, such as
geometry, dimensional accuracy, surface hardness, processing method, necessary
equipment, cutting tools, cutting modes etc.
For these purposes, the automated workplace of the technologist (AWT) is used,
which is shown in Fig. 1.
602 P. Drobintsev et al.

Fig. 1. Using AWT to select a cutting tool and its parameters

The technologist obtains the necessary parameters from the drawing, reference
catalogues or other documentation. For a number of parameters, ranges of possible
values are specified.
In addition to the surface modules, the modules for the technological process of
manufactured detail, the modules for equipment and gear, the modules for instrumental
adjustment and the modules for measuring instruments are described in a similar way.
Using modular formalization in the AWT, the construction of technological blocks (whose modules are processed with the same tool) and technological groups (which divide processing blocks into phases) is automated. As a result, technological routes (TRs) for the manufacturing of a detail are formed from technological groups.
All this information is recorded in a specialized database. The record for each technological route contains a list of surface modules with specified parameter values, and the record for each surface module contains a detailed description of the manufacturing operations necessary for its processing, with symbolic parameters. By querying the database, a route with symbolic parameters is automatically formed, on the basis of which a specific detailed route will be created.
In the approach presented here we use the MSC language [6] for the encoding of
the technological route. MSC is a standardized language for describing behaviors using
message exchange diagrams between parallel-functioning objects (CNC, robots). The
diagram example is shown in Fig. 2.

Fig. 2. Example MSC diagram of a technological route

The following messages are used in the diagram:


1. Messages about the preparation of the next detail for processing and verification of
its suitability to the requirements of the route.
2. Messages checking the requirements for the necessary machinery and the avail-
ability of the machine.
3. Requests about processing tools and their working modes.
4. Requests about mounting fixtures.
5. Messages requesting a set of cutting tools from the storage to the CNC.
6. Messages about their retrieval on a pallet from the storage.

3 Optimization of a Technological Route

A technological route with symbolic parameters can be converted to a specific one by replacing the symbolic variables with their values. When a range is specified for a value, its boundaries must be checked for out-of-range errors, which is
implemented using a symbolic verifier [7]. In the process of proving the correctness of
a route, it is possible to check various constraints on the sequence of surface modules
within the route formulated by means of the first order logic. The contradictions found
in the process of proof can be corrected by imposing additional restrictions on the
stated sequences in the route or on the ranges of the parameter values.
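The substitution and range check can be illustrated with a minimal sketch (the parameter names and ranges below are hypothetical; in the approach itself this checking is performed by the symbolic verifier [7]):

```python
def instantiate_route(route, values, ranges):
    """Substitute symbolic parameters with concrete values and check
    each value against its allowed range, raising on an out-of-range
    error (a toy stand-in for the symbolic verification step)."""
    concrete = {}
    for param in route:
        v = values[param]
        lo, hi = ranges[param]
        if not lo <= v <= hi:
            raise ValueError(f"{param}={v} outside [{lo}, {hi}]")
        concrete[param] = v
    return concrete

route = ["feed_mm_per_rev", "rpm"]
print(instantiate_route(route,
                        {"feed_mm_per_rev": 0.25, "rpm": 900},
                        {"feed_mm_per_rev": (0.1, 0.4), "rpm": (500, 2000)}))
# {'feed_mm_per_rev': 0.25, 'rpm': 900}
```

A contradiction found during verification corresponds here to a raised error, which would be resolved by tightening the route or the parameter ranges.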
For a correct technological route, with the help of the formulas stored in the database, the technologist can estimate the time and cost of its processing. A fragment of the set of such formulas is shown in Fig. 3.

Fig. 3. Formulas for turning time calculations

The relative estimate of the route shown in Fig. 4 can be obtained as the sum of the
estimates of each operation on each individual surface module that make up the route,
which is sufficient for ranking alternative solutions on the choice of parameters of the
route. To obtain absolute values, it suffices to use the multiplicative and additive
correction factors obtained on the basis of statistical estimates of the technological
processes of a particular production.
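As an illustration of such a summed estimate, the sketch below uses the textbook turning-time formula (machining time = travel length / (spindle speed × feed)); the actual formulas of Fig. 3 and the cost rates are stored in the database and may differ, so all numbers here are illustrative:

```python
def turning_time_min(length_mm, rpm, feed_mm_per_rev):
    """Machining time for one turning pass: travel length divided by
    the feed rate (rpm * feed, in mm per minute)."""
    return length_mm / (rpm * feed_mm_per_rev)

def route_estimate(operations):
    """Relative route estimate: sum of per-operation time, and cost as
    time multiplied by a machine-rate factor, over the surface modules
    making up the route."""
    total_time = sum(op["time_min"] for op in operations)
    total_cost = sum(op["time_min"] * op["rate_per_min"] for op in operations)
    return total_time, total_cost

ops = [
    {"time_min": turning_time_min(120, 800, 0.25), "rate_per_min": 4.0},
    {"time_min": turning_time_min(60, 1200, 0.2), "rate_per_min": 4.0},
]
time_min, cost = route_estimate(ops)
```

Multiplicative and additive correction factors, as the text notes, would then convert these relative values into absolute ones.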

Fig. 4. A technological route consisting of 4 operations performed on 4 machines

The correct route can be optimized. By changing the parameters of the route within
the allowable ranges and re-calculating the indicators of processing time and cost, the
technologist can get a solution that meets the limitations of the management on a
particular job or get the Pareto-optimal solution [8]. However, it should be noted that
the mentioned optimization is valid provided that the production by the route is carried
out without taking into account the current state and restrictions on the resources of the
production site. Obtaining more realistic estimates is possible with the help of simu-
lation modeling of the distribution of resources for the routes simultaneously performed
at the production site. Theoretical basis for the modeling approach including formal-
ization of the structure of a technological process and technological matrix is described
in details in previous work of the authors [9]. Shown below in Sect. 4 is a software
module used for graphical representation.

4 Digital Modeling and Simulation

The digital model of the production site simulates the implementation of the production
of different batches of details by different specified technological routes. The site model
is built on the basis of information on the resources of the production site (CNC

machines, transport robots, warehouses, staff etc.) which include amounts of time for
their usages. The size of the batch of details is also associated with the route.
The model uses the method of dynamic priorities to simulate the workload of
resources of the production site and determine the duration of the realization of the
technological process for orders.
The result of simulation modeling is a schedule for the implementation of the
technological process shown in Fig. 5, which provides an estimate of the time to
manufacture a batch of details in accordance with a specific route along with an
estimate of the lead time for all routes shown in Fig. 6. A set of estimates of the time of
execution of the route can be analyzed for the fulfillment of certain criteria and
restrictions characterizing the conditions of the order.
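A toy version of such a simulation can be sketched as a greedy list scheduler; the priority rule used below (routes with more remaining work first) and the resource model (machines only) are simplifications of the paper's dynamic-priority site model, which also covers robots, storage and staff:

```python
import heapq

def simulate_schedule(routes, machines):
    """Greedy list scheduling: operations of each route run in order,
    each grabbing the earliest-free machine; routes are prioritised by
    total remaining work (a stand-in for dynamic priorities)."""
    free_at = [(0.0, m) for m in machines]   # (time machine is free, id)
    heapq.heapify(free_at)
    finish = {}
    pending = sorted(routes.items(), key=lambda kv: -sum(kv[1]))
    for route_id, op_durations in pending:
        t = 0.0
        for d in op_durations:
            free, m = heapq.heappop(free_at)
            start = max(free, t)             # wait for machine and route
            t = start + d
            heapq.heappush(free_at, (t, m))
        finish[route_id] = t
    return finish

print(simulate_schedule({"R1": [2.0, 3.0], "R2": [4.0]}, ["M1", "M2"]))
# {'R1': 5.0, 'R2': 6.0}
```

The returned finish times play the role of the lead-time estimates that the schedule chart of Fig. 5 visualises.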

Fig. 5. Example of the production schedule chart

Fig. 6. Time and cost estimations example of two technological routes



In this regard, the following tasks can be solved:


1. Estimation of the minimal amount of additional resources that need to be allocated
so that the total time for the implementation of the route is not more than the
specified value T0.
2. Redistribution of processing tools between individual operations in order to mini-
mize the total implementation time of the route (optimal transfer of resources from
non-critical operations to critical ones).
3. The use of time reserves T0 - T arising when the calculated time T of the imple-
mentation of the route is less than the specified value T0 in order to further improve
the process.
In the process of implementing a specific work schedule (in a certain sense, opti-
mal), various unforeseen failures are possible: machine breakage, shortage of com-
ponents, unforeseen delays in performing individual operations, etc. Therefore, the
management system should continuously monitor the entire process and should have a
mode for operative changing of the schedule for the implementation of the remaining
work in the new environment in order to optimize it. Thus, it turns out that it is
necessary to correct the process of implementing the set of necessary operations in real
time taking into account the set requirements for optimization and the formulated
criteria of optimality.
In addition, when forming the structure of the management system, it is necessary
to take into account the possibility of multi-criteria formulation of optimization
problems, when several particular indicators of the quality of the production site are set
[8]. In this case, the task of ensuring the work of the production site in some Pareto-
optimal mode can be set. Usually it is advisable here to use some physically justified
form of the convolution of the vector optimality criterion and proceed to optimization
by the corresponding generalized criterion.
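A common convolution of this kind is a weighted sum of the partial criteria, sketched below (the weights and the variant scores are illustrative, not taken from the paper; other convolutions, e.g. Chebyshev, are also possible):

```python
def convolve_criteria(metrics, weights):
    """Scalar convolution of a vector optimality criterion:
    a weighted sum of the partial criteria."""
    assert len(metrics) == len(weights)
    return sum(w * m for w, m in zip(weights, metrics))

# Candidate route variants scored on (time, cost), both to be minimised
variants = {"A": (9.0, 120.0), "B": (12.0, 90.0)}
weights = (0.6, 0.05)  # illustrative trade-off weights
best = min(variants, key=lambda k: convolve_criteria(variants[k], weights))
print(best)  # A
```

Optimising the single generalized criterion then selects one point from the Pareto set, with the weights encoding the physically justified trade-off.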
When solving problems of managing the work of a production site with a hierar-
chical structure, it is necessary to take into account the organization of interactions of
processes at different hierarchical levels, both among themselves and with the main
control center. For this reason, it is advisable to refer to the principles of network-
centric management and methods of coordination in hierarchical systems. It is also
advisable to use the methods of hierarchical construction of Pareto sets at various
technological levels.
The analysis of the results of modeling a set of technological routes consists in
solving a multi-criteria task of selecting implementation options for a technological
process of the IoT system. It is assumed that the direct solution of the original multi-
dimensional problem with a set of difficultly computable criteria is either impossible or
impractical because of the limitations determined by the requirement of execution time
and consumed resources balance. The main difficulties are connected with the high
dimension of the vector of tunable (selectable) parameters of the IoT system and with a
large number of partial optimality criteria. The approach used is based on the appli-
cation of well-known system analysis procedures to the specific subject area under
consideration [10].

5 Conclusions

The final result of the work is the creation of a software system for the automation of
the preparation and control of the technological processes of the mechanical engi-
neering production site. Currently, a working prototype of the system has been
implemented, on which the following properties have been tested:
1. Ability to quickly adapt to specific production conditions: equipment, resources and
orders.
2. Optimization of the characteristics of specific production processes in accordance
with the selected set of criteria for its success, carried out on-line.
3. Efficiency assessment of the execution time and cost of the order, which is very
important for the small-scale production manager with the flow of orders for small
batches of different products that require different technological routes to be performed.
The platform provides a significant increase in the productivity of the technologist
at the technological preparation phase of production. Total preparation time decreases
to approximately 1 day per order.

Acknowledgements. The work was financially supported by the Ministry of Science and
Higher Education of the Russian Federation in the framework of the Federal Targeted Program
for Research and Development in Priority Areas of Advancement of the Russian Scientific and
Technological Complex for 2014–2020 (14.584.21.0022, ID RFMEFI58417X0022).

References
1. Miklosey, B.: The basics of MES. Assembly 62(3) (2019)
2. Ford, D.: SCADA is dead: Rethink your approach to automation. In: 91st Annual Water
Environment Federation Technical Exhibition and Conference, WEFTEC 2018, pp. 2781–
2785 (2019)
3. Potts, B.: ERP implementation: define what ‘best practice’ means. Plant Eng. 73(3), 10 (2019)
4. “PHOBOS” MES-system. http://www.fobos-mes.ru/fobos-system/fobos-MES-system.html.
Accessed 12 Aug 2019
5. “PolyPlan” MES-system. http://polyplan.ru/index.htm. Accessed 12 Aug 2019
6. Recommendation ITUT Z. 120. Message Sequence Chart (MSC), 11/2000
7. Drobintsev, P., Kotlyarov, V., Letichevsky, A., Selin, I.A.: Industrial software verification
and testing technology. In: CEUR Workshop Proceedings, vol. 1989, pp. 221–229 (2017)
8. Voinov, N., Chernorutsky, I., Drobintsev, P., Kotlyarov, V.: An approach to net-centric
control automation of technological processes within industrial IoT systems. Adv. Manufact.
5(4), 388–393 (2017)
9. Kotlyarov, V., Chernorutsky, I., Drobintsev, P., Voinov, N., Tolstoles, A.: Structural
modelling and automation of technological processes within net-centric industrial workshop
based on network methods of planning. In: Wang, K., Wang, Y., Strandhagen, J., Yu, T.
(eds.) Advanced Manufacturing and Automation VIII. IWAMA 2018. Lecture Notes in
Electrical Engineering, vol. 484, pp. 475–488 (2019)
10. Chernorutsky, I., Drobintsev, P., Kotlyarov, V., Voinov, N.: A new approach to generation
and analysis of gradient methods based on relaxation function. In: 19th IEEE UKSim-AMSS
International Conference on Modelling and Simulation, UKSim 2017, pp. 83–88 (2018)
Smart Maintenance in Asset Management – Application with Deep Learning

Harald Rødseth (1), Ragnhild J. Eleftheriadis (2), Zhe Li (3), and Jingyue Li (3)

(1) Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology (NTNU), 7491 Trondheim, Norway
[email protected]
(2) Product and Production Development, SINTEF Manufacturing AS, Vestre Toten, Norway
[email protected]
(3) Department of Computer Science, Norwegian University of Science and Technology (NTNU), 7491 Trondheim, Norway
{zhel,jingyue.li}@ntnu.no

Abstract. With the onset of digitalization and Industry 4.0, the maintenance function and asset management in a company are evolving towards Smart Maintenance. An essential application of smart maintenance is improving the maintenance planning function with better criticality assessment. With the aid of artificial intelligence, it is expected that maintenance planning will provide better and faster decision making in maintenance management. The aim of this article is to develop smart maintenance planning based on principles from both asset management and machine learning. The result demonstrates a use case of criticality assessment for maintenance planning, comprising computation of an anomaly degree (AD) as well as calculation of a profit loss indicator (PLI). The risk matrix in the criticality assessment is constructed from both AD and PLI and aids the maintenance planner in better and faster decision making. It is concluded that more industrial use cases should be conducted, representing different industry branches.

Keywords: Smart maintenance · Anomaly detection · Asset management

1 Introduction

The mission of asset management can be understood as the ability to operate the physical assets of a company through their whole life cycle, ensuring a suitable return on investment and meeting the defined service and security standards [1]. Further, ISO 55000 states that asset management realizes value from the assets of an organization, where an asset is a thing or item that has potential or actual value for the company [2]. The role of the maintenance function in asset management is further detailed in the EN 16646 standard for physical asset management, which considers the relationship between operating and maintaining the asset [3]. In particular it is

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 608–615, 2020.
https://doi.org/10.1007/978-981-15-2341-0_76
Smart Maintenance in Asset Management 609

recommended in this standard that dedicated key performance indicator (KPI) can be
applied in physical asset management. A proposed KPI that improves the integrated
planning process between the maintenance and the production function in asset man-
agement is denoted as profit loss indicator (PLI). This indicator evaluates the different
types of losses in production from an economic point of view. Also, PLI has been
tested in different industry branches such as the petroleum industry [4] and manufac-
turing industry [5].
The onset of digitalization in industry, enabled by breakthrough innovations from Industry 4.0, changes the maintenance capability of the company. The shift is from an "off-line" maintenance function, where data is collected and analyzed manually, towards digital maintenance [6], often denoted as smart maintenance [7, 8]. Artificial intelligence (AI) and machine learning, a central part of smart maintenance, are considered a fundamental way to process data intelligently. Yet, there is a difference between traditional machine learning and data-driven artificial intelligence [9]; the difference lies in how feature extraction is performed. In this article, anomaly detection for smart maintenance will be investigated in more detail.
Application of AI is also relevant for improving plant uptime. Anomalies in mechanical systems usually cause equipment breakdowns with serious safety and economic impact. For this reason, computer-based anomaly detection systems with high efficiency are imperative to improve the accuracy and reliability of anomaly detection and prevent unanticipated accidents [10].
From a smart maintenance perspective, the result of a more digitalized asset
management should also include maintenance being planned with insight from the individual equipment in combination with the system perspective of the asset [6]. This need is further supported by empirical studies that point out the necessity of criticality assessment when increasing productivity through smart maintenance planning [8]. In fact, maintenance planning is unlikely to reach an optimum without a sound criticality assessment of the physical assets, such as the machines [8].
Smart maintenance has also been denoted by other terms, such as deep digital maintenance (DDM) [11], where the application of PLI is relevant. For DDM, it remains to investigate appropriate scenarios for the planning capabilities of smart maintenance that include anomaly detection and criticality assessment.
The aim of this article is to develop an approach for decision support in smart maintenance planning, based on principles from asset management together with machine learning-based anomaly detection and criticality assessment.
The remainder of this article is structured as follows: Sect. 2 presents relevant
literature on smart maintenance, whereas Sect. 3 demonstrates an essential application
in smart maintenance planning where criticality assessment is conducted based on
anomaly detection and PLI calculation. Finally, Sect. 4 discusses the results with
concluding remarks.
610 H. Rødseth et al.

2 Smart Maintenance in Asset Management


2.1 The Trend Towards Smart Maintenance
To succeed with an asset management strategy, it is considered vital to include PLI as
an output of the strategy [12]. This has also been included in maintenance planning in
the concept deep digital maintenance (DDM) [11], where maintenance planning has been
demonstrated for one component; it remains to evaluate several work orders in DDM
maintenance planning. In asset management, machine learning methods such as deep
learning have gained popularity [13], applied for example to diagnostics of the health
states of power transformers [14]. For smart asset management, a three-step approach
is proposed [13]:
1. Data gathering from observational data to evaluate the component condition and
defining threshold rules.
2. Analysis of historical data to identify patterns that support in predictions of future
failures.
3. Combine the component condition with the consequence of the failure. This step also
evaluates the economic perspective in the analysis.
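As a rough illustration, the three steps above could be sketched as follows. The threshold rule, the trend-based prediction, and the cost weighting (and every number used) are invented stand-ins for illustration, not the method of [13]:

```python
# Hypothetical sketch of the three-step approach; every threshold, history,
# and cost figure here is invented for illustration only.

def step1_condition(observations, threshold=0.8):
    """Step 1: evaluate the latest component condition against a threshold rule."""
    latest = observations[-1]
    return {"condition": latest, "alarm": latest > threshold}

def step2_predict(history, window=3):
    """Step 2: naive linear trend over historical data as a stand-in for
    pattern-based prediction of future failures."""
    recent = history[-window:]
    trend = (recent[-1] - recent[0]) / (window - 1)
    return recent[-1] + trend  # predicted condition at the next interval

def step3_prioritize(predicted_condition, failure_cost, threshold=0.8):
    """Step 3: weigh the predicted condition by the economic consequence
    of the failure to obtain an expected exposure."""
    excess = max(0.0, predicted_condition - threshold)
    return excess * failure_cost

history = [0.55, 0.62, 0.71, 0.78]        # degradation indicator per interval
print(step1_condition(history)["alarm"])  # False: 0.78 is still below 0.8
predicted = step2_predict(history)        # 0.78 plus the linear trend
print(round(step3_prioritize(predicted, 100_000)))
```

The point of the sketch is the division of labour: step 1 is a pure rule, step 2 adds history, and step 3 attaches money to the prediction so that assets can be ranked.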
Smart maintenance is also outlined as a key element in the Industry 4.0 roadmap
for Germany [15]. In this strategic roadmap, smart maintenance is considered to
improve the competitive advantage of the maintenance function in the company and is
itself an “enabler” of successful Industry 4.0 implementation, where maintenance data
is shared between manufacturer, operator, and industry service. Furthermore, smart
maintenance has other important characteristics:
• A common “language” of maintenance processes defined in EN 17007 [16].
• Maintenance technology support with e.g., artificial intelligence (AI) [7].
In smart maintenance this has been addressed through the need for artificial intelli-
gence (AI) [7]. Although maintenance work supported by AI still has barriers to
overcome, it is considered an effort worth taking. With support from deep
learning, knowledge can be created from extracted features in an end-to-end process [9].
For instance, neural networks use smart data to predict what will happen, so that
proactive actions can be taken based on the recognized patterns.
To succeed with smart maintenance in asset management, emphasis on specific
maintenance plans over a long time span is expected to ensure the greatest value of
equipment over its life cycle [7].
In smart maintenance it is also stressed that an established criticality assessment in
maintenance planning is vital [6, 8]. In particular, it is concluded that data-driven
machine criticality assessment is essential for achieving smart maintenance planning.

2.2 Smart Maintenance Framework


To ensure value creation in smart maintenance it is necessary to devise a sound
framework. Figure 1 illustrates our proposed framework, inspired by [17] and [18].
The starting point in this framework is the data sources, which include external data
such as inventory data of spare parts from suppliers, product data from the equipment
such as condition monitoring data, as well as enterprise data from the computerized
maintenance management system (CMMS). All the raw data sources are then aggregated in
multiple formats in a data cloud.

Fig. 1. Value creation with data in the smart maintenance framework, inspired by [17] and [18].

The raw data is further applied for smart data analytics, including both predictive
and prescriptive analytics. In the maintenance field, predictive analytics
comprises e.g. forecasts of the technical condition of the machine. To ensure value
creation from the physical asset it is also important to include prescriptive analytics
that supports recommended actions in maintenance planning. This includes anomaly
detection to evaluate the probability of future machine breakdowns. In addition, it is
also necessary to evaluate the consequences of machine failure; in DDM, the PLI
seems promising for this purpose [11]. To assess the criticality of the machine in
maintenance planning, the probability and the consequence can be combined in a
risk matrix.
The result of smart data is deeper insight into the business, where e.g. plant
capacity is increased, as well as deeper insight into the partners, where e.g. spare-part
supply is improved.

3 Smart Maintenance Planning with Criticality Assessment

As shown in Fig. 1 and explained above, criticality assessment [8] can be an essential
element of the prescriptive analytics in our smart maintenance framework. Overall, a
criticality assessment should evaluate both the probability and the consequences of each
equipment failure. We hereby use a demonstration use case to explain our
proposal for criticality assessment as a structured three-step approach: (1) PLI esti-
mation; (2) anomaly degree calculation; (3) criticality assessment. Since few
companies have so far collected data that can be used for both PLI estimation and
anomaly detection, we could not obtain both kinds of data from the same machine. The
data we present in the case study come from two different machines. However, for
demonstration purposes and for explaining our ideas, we consider it acceptable to merge
the data by assuming that they come from the same machine.
Step 1: PLI estimation
The calculation of the profit loss indicator is based on earlier case studies from both
[11] and [5]. The case study considers the malfunction of an oil cooler in a machine
center. The malfunction was first observed when the machine center produced
scrap. A quality audit meeting evaluated the economic loss of this scrap. In
addition, maintenance personnel inspected the machine center and found
that the cause of this situation was a malfunction of the oil cooler. The oil cooler
was replaced, and the machine center had in total 6 days of downtime. In addition to
the scrap, it was also evaluated that the machine had lost revenue due to the downtime.
Table 1 summarizes the different types of losses that occurred due to this
malfunction of the machine center.

Table 1. Expected PLI of malfunction of a machine center based on both [11] and [5].

  Situation                          Type of loss        PLI value [NOK]
  Damaged part (scrappage)           Quality loss                120 000
  Quality audit meeting              Quality loss                  3 500
  Maintenance labor costs            Availability loss            21 570
  New oil cooler                     Availability loss            47 480
  Loss of internal machine revenue   Availability loss           129 600
  Sum                                                            322 150
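The PLI total is simply the sum of the individual losses in Table 1; a small sketch reproducing it (the entries are transcribed from the table):

```python
# Reproducing the PLI total of Table 1 by summing the individual losses (NOK).
losses = {
    "Damaged part (scrappage)": ("Quality loss", 120_000),
    "Quality audit meeting": ("Quality loss", 3_500),
    "Maintenance labor costs": ("Availability loss", 21_570),
    "New oil cooler": ("Availability loss", 47_480),
    "Loss of internal machine revenue": ("Availability loss", 129_600),
}

total_pli = sum(value for _, value in losses.values())
print(total_pli)  # 322150, matching the table's sum

# Subtotals per loss category (quality vs. availability):
by_type = {}
for loss_type, value in losses.values():
    by_type[loss_type] = by_type.get(loss_type, 0) + value
print(by_type)  # {'Quality loss': 123500, 'Availability loss': 198650}
```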

When the consequences of the failure have been estimated, the next step is to
calculate the anomaly degree (AD) for the physical asset and the industrial equipment.
Step 2: Anomaly degree (AD) calculation
Figure 2 shows the obtained anomaly degree (AD) of one machine. An increasing AD
indicates an increasing probability of equipment failure. When maintenance plan-
ning is conducted, updated information about the anomaly degree of each piece of
equipment should be collected and analyzed.
As with the calculation of PLI, the data used for calculating AD come from actual
industrial equipment. However, the data are not from a machine center and rep-
resent another industry branch. The primary datasets include failure records and
measurement data from the monitoring system. The target is to obtain the anomaly
degree of the equipment using machine learning-based analysis approaches. In the
experiment, we labelled both failure and normal records. Thus, the obtained anomaly
degree can describe the difference between the target observation and normal samples.

(Figure: anomaly degree (AD), ranging 0–0.3, plotted against time in measuring points, 0–100.)

Fig. 2. Change of anomaly degree (AD)

During the experiment to calculate AD, we adjusted the measurement data, collected
on different scales, to a common scale. Then we applied standard normalization to pre-
process the raw data. The applied machine learning model is a fully connected deep
neural network with seven hidden layers. SoftMax is used as the activation function of
the final output layer, and Leaky ReLU as the activation function of the hidden layers.
The numbers of nodes in the hidden layers of the constructed deep neural network are
64, 32, 32, 16, 16, 8, and 2, respectively, chosen to train the neural network smoothly.
We selected Adam as the optimizer and categorical cross-entropy as the loss function
during training. The results in Fig. 2 demonstrate the anomaly degree of the machine
obtained from the deep neural network analysis, which represents the degradation of
the machine's health state over time.
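A minimal numpy sketch of the forward pass of such a network follows. The architecture (hidden-layer widths, Leaky ReLU, softmax output) mirrors the description above, while the input dimension, the random weights, and the data are placeholders; training with Adam and categorical cross-entropy is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def standardize(X):
    # Pre-processing step described in the text: bring differently scaled
    # measurements to a common scale via standard normalization.
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

# Hidden-layer widths as reported (64, 32, 32, 16, 16, 8, 2), followed by
# a 2-unit softmax output separating normal vs. failure records.
# The 10 input features are an assumption for illustration.
sizes = [10, 64, 32, 32, 16, 16, 8, 2, 2]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def anomaly_degree(X):
    h = standardize(X)
    for W, b in zip(weights[:-1], biases[:-1]):   # seven hidden layers
        h = leaky_relu(h @ W + b)
    p = softmax(h @ weights[-1] + biases[-1])     # 2-class softmax output
    return p[:, 1]  # probability of the 'failure' class, read as the AD

X = rng.normal(size=(100, 10))  # placeholder measurement data
ad = anomaly_degree(X)
print(ad.shape)  # (100,) -- one AD value per measuring point
```

Reading the failure-class probability as AD is our interpretation of how the labelled normal/failure records yield a degree in [0, 1]; the paper does not spell out this mapping.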
Step 3: Criticality assessment
When both the probability and the consequences of a future machine malfunction have
been evaluated, the criticality assessment can be performed in a risk matrix.
Figure 3 illustrates a proposed risk matrix in smart maintenance that supports the
planning of preventive work orders. In the consequence category, the PLI is established
for the physical asset and classified (e.g., as "medium" or "high"). The probability
category is evaluated with AD. By trending AD in the risk matrix it is possible to
evaluate when a preventive maintenance work order should be executed, and at which
possible costs and consequences.
The color code follows a traffic-light logic: if the equipment is located in the green
zone, no further actions are necessary. If the equipment is in a yellow zone, this is an
early warning and maintenance actions should be planned. If the equipment is in the
red zone, this is an alarm and immediate maintenance actions should be executed.
In addition to the color-code system, each field in the matrix is marked with a
priority number. The criticality of the machine has a yellow code at the start but will
reach a red color code if no maintenance actions are performed. When the maintenance
planner has several machines under criticality assessment, it becomes possible to
prioritize which machine should be maintained first.
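A toy version of such a traffic-light risk matrix might look as follows; the bin boundaries, priority numbers, and zone assignments are invented for illustration and are not the matrix of Fig. 3:

```python
# Hypothetical risk matrix combining anomaly degree (AD) and PLI-based
# consequence; all boundaries and priority numbers below are invented.

AD_BINS = [0.1, 0.2]            # low / medium / high probability of failure
PLI_BINS = [50_000, 150_000]    # low / medium / high consequence (NOK)

# PRIORITY[probability_row][consequence_col]: 1 = most urgent.
PRIORITY = [
    [9, 8, 6],   # low AD
    [7, 5, 3],   # medium AD
    [4, 2, 1],   # high AD
]

def bin_index(value, bins):
    """Index of the bin a value falls into (0 = below the first boundary)."""
    return sum(value >= b for b in bins)

def assess(ad, pli):
    """Map an (AD, PLI) pair to a priority number and traffic-light colour."""
    p = PRIORITY[bin_index(ad, AD_BINS)][bin_index(pli, PLI_BINS)]
    colour = "red" if p <= 3 else "yellow" if p <= 6 else "green"
    return p, colour

# Trending AD for a machine with the PLI of Table 1 (322 150 NOK) moves it
# from the yellow zone into the red zone, as described in the text:
for ad in (0.05, 0.15, 0.25):
    print(ad, assess(ad, 322_150))
```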

Fig. 3. Risk matrix as criticality assessment for equipment

4 Discussion and Concluding Remarks

This article has demonstrated an application of criticality assessment, which can be an
essential element of our proposed smart maintenance framework, using both PLI and
anomaly degree calculation. The benefit of this system is that the maintenance planner
gets a "digital advisor" for evaluating anomalies, which can enable a faster and better
decision-making process in maintenance planning. It is expected that the deep learning
method with deep neural networks will be further investigated and developed due to its
promising results for AD.
Also, with the aid of PLI calculations, it is possible to improve the evaluation of
the consequences of e.g. machine breakdowns. In a risk matrix it is then possible to
establish a work priority system where some equipment with anomalies is prioritized
before others. For example, if there are future work orders in both the yellow and the
red sector, it would be recommended to prioritize the work in the red sector.
There are also some challenges with the criticality assessment that should be
addressed in future research as contributions to the theory of criticality assessment.
First, it is important to improve the accuracy of both the anomaly degree calculation
and the calculation of PLI. Second, it will be important to evaluate sound criteria for
each category in the risk matrix; yet, this also seems to be a challenge in existing risk
matrices. A more practical aspect that needs to be investigated is how the digital
approach to the risk matrix will interact with existing criticality assessment without
reducing the performance of the physical asset.
Although use cases with data from different industry branches have been applied to
demonstrate the criticality assessment, further research will also require a coherent
demonstration in several industry sectors, including both the manufacturing industry
and the process industry.

Acknowledgements. The authors wish to thank the research project CPS-plant (grant number:
267750) and the research project CIRCit – Circular Economy Integration in the Nordic Industry
for Enhanced Sustainability and Competitiveness (grant number: 83144) for valuable input.

References
1. Schneider, J., et al.: Asset management techniques. Int. J. Electr. Power Energy Syst. 28(9),
643–654 (2006)
2. ISO: ISO 55000 Asset management - Overview principles and terminology. Switzerland
(2014)
3. CEN, EN 16646: Maintenance - Maintenance within physical asset management (2014)
4. Rødseth, H., et al.: Increased profit and technical condition through new KPIs in
maintenance management. In: Koskinen, K.T., et al. (Eds.) Proceedings of the 10th World
Congress on Engineering Asset Management (WCEAM 2015), pp. 505–511. Springer,
Cham (2016)
5. Rødseth, H., Schjølberg, P.: Data-driven predictive maintenance for green manufacturing. In:
Advanced Manufacturing and Automation VI, pp. 36–41. Atlantis Press (2016)
6. Bokrantz, J., et al.: Maintenance in digitalised manufacturing: Delphi-based scenarios for
2030. Int. J. Prod. Econ. 191, 154–169 (2017)
7. Yokoyama, A.: Innovative changes for maintenance of railway by using ICT-to achieve
“smart maintenance”. Procedia CIRP 8, 24–29 (2015)
8. Gopalakrishnan, M., et al.: Machine criticality assessment for productivity improvement:
smart maintenance decision support. Int. J. Prod. Perform. Manage. 68(5), 858–878 (2019)
9. Wang, J., et al.: Deep learning for smart manufacturing: methods and applications.
J. Manufact. Syst. 48, 144–156 (2018)
10. Li, Z., et al.: A deep learning approach for anomaly detection based on SAE and LSTM in
mechanical equipment. Int. J. Adv. Manufact. Technol. 103(1), 499–510 (2019)
11. Rødseth, H., Schjølberg, P., Marhaug, A.: Deep digital maintenance. Adv. Manufact. 5(4),
299–310 (2017)
12. Rødseth, H., Eleftheradis, R.: Successful asset management strategy implementation of
cyber-physical systems. In: WCEAM 2018 (2019)
13. Khuntia, S.R., Rueda, J., Meijden, M.: Smart Asset Management for Electric Utilities: Big
Data and Future (2017)
14. Tamilselvan, P., Wang, P.: Failure diagnosis using deep belief learning based health state
classification. Reliab. Eng. Syst. Saf. 115, 124–135 (2013)
15. DIN, German Standardization Roadmap - Industry 4.0, in Version 3, Berlin (2018)
16. CEN, EN 17007: Maintenance process and associated indicators (2017)
17. Porter, M.E., Heppelmann, J.E.: How smart, connected products are transforming
companies. Harvard Bus. Rev. 93(10), 96–114 (2015)
18. Schlegel, P., Briele, K., Schmitt, R.H.: Autonomous data-driven quality control in self-
learning production systems. In: Proceedings of the 8th Congress of the German Academic
Association for Production Technology (WGP), Aachen, 19–20 November 2018, pp. 679–
689 (2019)
Maintenance Advisor Using Secondary-Uncertainty-Varying Type-2 Fuzzy Logic System for Offshore Power Systems

Haitao Sang

College of Information Engineering, Lingnan Normal University, Zhanjiang, People's Republic of China
[email protected]

Abstract. Condition-based maintenance has recently become a popular method for
minimizing the cost of maintenance failures in power systems. In order to effec-
tively handle the uncertainty of operational variables and information in
offshore substations, a Type-2 fuzzy logic approach is proposed in this paper.
The maintenance advisor optimizes the maintenance schedules with a multi-
objective evolutionary algorithm, considering only major system variables.
During operation, the offshore substation will experience continuing ageing and
shifts in control, weather and load factors, measurement, and all other equip-
ment subject to uncertainties. More importantly, the advisor estimates the changes in
reliability indices by Type-2 fuzzy logic and sends the changes back to the
maintenance optimizer. At the same time, the maintenance advisor will also
report to the maintenance optimizer any drastic deterioration of load-point
reliability within each substation. The data analysis results show that this approach
avoids a complex inference process and significantly reduces the computational
complexity and rule base compared with conventional Type-1 fuzzy logic.

Keywords: Adaptive maintenance advisor · System maintenance optimizer · Offshore substation · Multi-objective evolutionary algorithm · Type-2 fuzzy logic

1 Introduction

Proper condition-based maintenance schedules are very desirable for extending component
lifetime in energy systems. However, some uncertainties are associated with component
reliability in power systems due to a lack of updated data [1, 2]. Offshore systems
are often remotely located and the acquisition of data is more difficult than for onshore
systems. Hence more powerful tools are needed to deal with these uncertainties for
continuous monitoring [3].
© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 616–624, 2020.
https://doi.org/10.1007/978-981-15-2341-0_77

Fuzzy set theory was proposed by Zadeh [4] to resemble human reasoning under
uncertainty by using approximate information [5] to generate proper decisions. Some
attempts using type-1 fuzzy logic have also been made to handle uncertainties
related to component reliability in power-system maintenance problems [6]. A fuzzy
Markov model was employed to describe transition rates [7]. Zadeh further proposed
the alternative type-2 fuzzy logic in order to handle the uncertainties in type-1
membership functions [8]. In power-system applications, planned and unplanned
operational variations occur continually, which can degrade the quality of maintenance
scheduling of power systems. Such degradations can be even more pronounced for
offshore power systems. The unplanned operational variations occurring in offshore
substations are represented by an independent set of fuzzy memberships to ensure
the quality of maintenance scheduling [9]. In this paper, a type-2 fuzzy logic learning
and analysis system is linked to a hidden Markov model, and type-2 fuzzy hidden
Markov model analysis is proposed to analyze the reliability indices of the offshore
power system.
The remaining part of this paper is organized as follows. Section 2 presents the
reliability models in the system maintenance optimizer. Section 3 describes the
secondary-uncertainty-varying type-2 fuzzy logic system for modeling operational
variations and uncertainties of key components. Section 4 presents the relative impacts
of various operational variations and uncertainties on the system reliability and
maintenance schedules; this section also illustrates the advantages of type-2 over
type-1 fuzzy logic. Section 5 concludes the paper.

2 Reliability Models in System Maintenance Optimizer

Figure 1 outlines the type-2 fuzzy hidden Markov model for an individual piece of
offshore power equipment. A regular Markov model has been used to provide a
quantitative connection between maintenance and reliability [10]. The states are Di,
i = 1, 2, …, N: D1 denotes the "as good as new" state; D2, D3, …, Dn are states with
different levels of deterioration; and Df is the failed state. The transition rates among
the different states form the matrix Λ.

Fig. 1. Type-2 fuzzy hidden Markov model

ΔΛ(t) = f_T2(C(t))    (1)

where C(t) represents the different operational variations and uncertainties for an indi-
vidual component in the time interval t, and f_T2 represents the mapping function of the
fuzzy logic system from the input C(t) to the output ΔΛ(t). Given ΔΛ(t), the reliability
indices, such as MTTF and failure probability, obtained directly from the Markov
model can be updated according to the day-to-day operating conditions.
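As a concrete illustration of how a reliability index follows from the Markov model, the MTTF can be computed from the generator matrix restricted to the transient (non-failed) states; the rates below are invented for the sketch:

```python
import numpy as np

# Minimal illustration (rates invented): states D1 "as good as new",
# D2 deteriorated, Df failed.  Deterioration D1 -> D2 at 0.5/yr and
# failure D2 -> Df at 0.25/yr.  Generator restricted to the transient
# states {D1, D2}; the Df column is absorbed into the row sums.
Q_t = np.array([
    [-0.5,  0.5],
    [ 0.0, -0.25],
])

# Expected time to absorption (failure) from each transient state
# solves -Q_t * t = 1:
t = np.linalg.solve(-Q_t, np.ones(2))
mttf_from_new = t[0]
print(mttf_from_new)  # 6.0 years, i.e. 1/0.5 + 1/0.25
```

Updating Λ with the ΔΛ(t) of Eq. (1) and re-solving gives the condition-adjusted MTTF the advisor works with.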

The configuration of a power system directly affects the load-point reliability.
The minimum cut set method is believed to be particularly well suited to the reliability
analysis of power systems [10]. By definition, a minimum cut set is a unique and
necessary combination of component failures that causes system failure; the method is
systematic and hence easily implementable on a computer. From a reliability point of
view, all the component failures in a minimum cut set can be viewed as connected in
parallel, while all the minimum cut sets associated with one event can be viewed as
connected in series. Therefore, a system can be converted into a reliability block
diagram based on its minimum cut sets and then evaluated easily following the rules
used for simple configurations (series or parallel).
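A minimal sketch of this evaluation rule, with invented component failure probabilities and cut sets, assuming independence between components and between cut sets:

```python
# Load-point unreliability from minimal cut sets (all figures invented):
# components inside a cut set act in parallel (all must fail for the cut
# set to fail), and cut sets act in series (any failed cut set fails the
# system).  Independence is assumed throughout.
from math import prod

failure_prob = {"T1": 0.02, "T2": 0.02, "B1": 0.01, "L1": 0.05}
min_cut_sets = [{"T1", "T2"}, {"B1"}, {"L1", "T2"}]

def cut_set_prob(cut_set):
    """Parallel block: the cut set fails only if every component fails."""
    return prod(failure_prob[c] for c in cut_set)

def system_unreliability(cut_sets):
    """Series of cut-set blocks: the system survives only if no cut set fails."""
    reliability = prod(1 - cut_set_prob(cs) for cs in cut_sets)
    return 1 - reliability

print(round(system_unreliability(min_cut_sets), 6))  # ~0.011386
```

Note the single-component cut set {"B1"} dominates the result, which is why criticality ranking tends to focus on such components first.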

3 Intelligent Maintenance Advisor with Type-2 Fuzzy Logic System

In a fuzzy logic system, the overall effects of uncertainties on reliability are captured by
developing a rule-based expert system from the available data. The rules are chained
together by a reasoning process known as the inference engine. The methods to propagate
the uncertainties among the rules are essential for an inference engine, and are
provided by experts who are well acquainted with the characteristics of
power-system operation. The inputs and outputs of a fuzzy logic system are
combined through "IF-THEN" rules given by experts, using the fuzzy inference engine
to obtain the fuzzified output.
In this work, a simpler way to implement type-2 fuzzy logic is proposed,
namely the secondary-uncertainty-varying type-2 fuzzy logic system. The secondary
uncertainty is captured by initializing a group of primary membership functions. As
shown in Fig. 2, at a specific value x = x′ (x ∈ X), the membership functions take on
the values wherever the vertical line intersects them. As a result, there is a range of
membership values at x = x′, each of which is given by one specific membership
function.

(Figure: left panel, primary membership functions A1–A5 intersected by a vertical line at x = x′, yielding membership values between Ulow(x′) and Uupp(x′); right panel, output membership functions B1–B5 over the interval [Ulow, Uupp].)

Fig. 2. Secondary-uncertainty-varying membership functions



Figure 3 is a schematic diagram of the secondary-uncertainty-varying type-2 fuzzy
logic system used in the intelligent maintenance advisor. As can be seen in Fig. 3, the
implementation of this fuzzy logic system has two steps: (i) choice of the primary mem-
bership functions, and (ii) mapping of the primary input to the output. For example, if the
input (primary variable) is a "short" time from the previous maintenance, and the previous
maintenance was "minor maintenance", the primary membership function is
chosen first and then sent to fuzzifier 1. After that, inference 1 maps this
primary input to an output based on rules 1. Finally, a defuzzified output is gen-
erated. If there are more operational variations to be considered, their influences can be
incorporated through more fuzzy inputs and modification of the fuzzy inference engine.

Fig. 3. Schematic diagram of secondary-uncertainty-varying Type-2 fuzzy logic system
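A toy sketch of the two-step scheme described above: the secondary uncertainty only selects which primary membership-function family is used, after which an ordinary type-1 fuzzify–infer–defuzzify pass runs. All shapes, labels, and rule consequents here are invented:

```python
# Hypothetical sketch of the secondary-uncertainty-varying scheme.
# Step (i): pick one primary MF family based on the secondary uncertainty
# (here, the type of the previous maintenance).  Step (ii): run a plain
# type-1 fuzzify -> infer -> defuzzify pass with the chosen family.

def tri(a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# One MF family per previous-maintenance type (shapes invented).
FAMILIES = {
    "minor": {"short": tri(0, 0, 6), "long": tri(3, 12, 12)},
    "major": {"short": tri(0, 0, 9), "long": tri(6, 12, 12)},
}
# Rule consequents: crisp increase of the failure transition rate (invented).
RULES = {"short": 0.1, "long": 0.6}

def delta_rate(months_since_maintenance, previous_maintenance):
    mfs = FAMILIES[previous_maintenance]          # step (i): choose family
    num = den = 0.0
    for label, mu in mfs.items():                 # step (ii)
        w = mu(months_since_maintenance)          # fuzzify
        num += w * RULES[label]                   # infer
        den += w
    return num / den if den else 0.0              # weighted-average defuzzify

print(delta_rate(4, "minor"))   # larger rate increase
print(delta_rate(4, "major"))   # recent major maintenance -> smaller increase
```

The design point illustrated is that the uncertainty never enters the rule base itself; it only switches between MF families, which is what keeps the rule count down.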

4 Results and Discussion


4.1 Case Study and Parameters
The adaptive maintenance advisor first obtains the initial maintenance plan from the
system maintenance optimizer. The method is presented in detail through application to
a ring-bus configuration and is used to evaluate the effects of offshore stations in an
analysis of the IEEE reliability test system.
The basic failure data are those of the transformer without any maintenance in the 1st
maintenance interval. Different priorities are assigned to each load point to reflect the
importance of the load it transfers. In this work, load point 2 has priority 1 because it
transfers the load back to the medium-size system it connects to, while load point 1 has
the lower priority because it provides the load to individual customers.

4.2 Advantage of Secondary-Uncertainty-Varying Type-2 Fuzzy Logic in Reducing Computational Complexity
Take the fuzzy logic system designed for the transformer as an example to show the
ease of implementing the proposed type-2 fuzzy logic system. In the type-2 fuzzy logic
system, age, load, and time from previous maintenance are the inputs for the transformer,
and each input has three membership functions. In addition, three additional uncertainties
are superimposed on the three inputs respectively, represented by three fuzzy
sets. If a type-1 fuzzy system were used to express this uncertainty, it would need
729 rules, while the type-2 fuzzy system needs 87 rules. Therefore, in dealing with
uncertainty, type-2 fuzzy logic is superior to type-1 fuzzy logic in computational
complexity.
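The type-1 figure can be checked with a quick count: folding the three uncertainty sets into each input multiplies its fuzzy sets, and the rule base then enumerates all combinations. The 87-rule figure for the staged type-2 design is the paper's own count and its decomposition is not reproduced here:

```python
# Rule-count check for three inputs with three membership functions each,
# plus three superimposed uncertainty sets per input (figures from the text).
n_inputs = 3
n_mfs_per_input = 3
n_uncertainty_sets = 3

# Type-1: the uncertainty must be folded into the inputs, so each input
# carries 3 x 3 = 9 effective fuzzy sets, and the rule base enumerates
# every combination across the three inputs.
type1_rules = (n_mfs_per_input * n_uncertainty_sets) ** n_inputs
print(type1_rules)  # 729, matching the figure quoted in the text
```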

4.3 Impacts of Operational Variations and Uncertainties on Optimization of Maintenance Schedules
Simulation studies have been conducted to cover different aspects of the opera-
tional variations and uncertainties. We assume that components of the same type
experience the same operational conditions. The impact of each operational variation
on the optimization of the maintenance schedule, as well as on system reliability, is
investigated individually by assuming that the remaining operational variations stay static.
(a) Impacts of age: the age of the transformer is modeled as the operational variation,
and the component conditions at every age are the uncertainty, as shown in Fig. 4.
The Pareto front in Fig. 5 gives a holistic view of the optimal solutions considering
different operational conditions. The maintenance schedule S provides an ENS of
1.36 × 10^4 MWh/y and a failure cost of $2.5 × 10^5, with an operational cost of
$2.13 × 10^5. However, when considering the continuing aging, the same relia-
bility provided by solution S can only be guaranteed by another solution A1 with a
higher operational cost of $3.39 × 10^5, as shown in Fig. 6. Furthermore,
including the uncertainty of age requires A2 to be chosen in order to provide the same
reliability with a lower operational cost than A1.
The variations of ENS at both load points are plotted in Fig. 6(a) and (b), illustrating
the effects of the different maintenance schedules. As stated before, all three
maintenance schedules S, A1, and A2 provide the same reliability. With the initial oper-
ational conditions, maintenance schedule S continuously reduces the ENS. The ENS is
more significantly reduced by maintenance schedules A1 and A2 from the beginning, in
order to counteract the increase of ENS caused by the aging of components from the
12th interval. However, schedule A2 is less effective in reducing the ENS than A1.
This is because the good component conditions shown in Fig. 4 slow down
the increase of ENS caused by component aging, and require less effort from the
maintenance to provide the satisfactory reliability. The different Pareto fronts and
variations of ENS indicate that the type-2 fuzzy expert system successfully captures the
impacts of component aging and uncertainty. In addition, comparing Fig. 6(a) with (b),
it can be seen that load point 2 suffers less ENS than load point 1. This is because
operations and maintenance are planned to first ensure the reliability of load
point 2, which is more critical and assigned the higher priority.

(Figure: top panel, transformer age in years (20–40) over maintenance intervals 0–20; bottom panel, component conditions (good/normal/bad) over the same intervals.)

Fig. 4. Age and uncertainty of transformer

(Figure: two Pareto-front panels plotting operational cost ($10^3) against energy not served (MWh/y, ×10^4) and against failure cost ($10^3), each comparing the initial operational condition, continuing aging using type-1, and continuing aging with uncertainty using type-2; solutions S, A1, and A2 are marked.)

Fig. 5. Pareto Fronts with the impacts of aging process and its uncertainty

(Figure: energy not served (MWh/y) over maintenance intervals 0–20 for schedules S, A1, and A2; panel (a) load point 1, panel (b) load point 2.)

Fig. 6. Variation of ENS at two load points with different maintenance schedules

(b) Impacts of load: the load factor is modeled as an operational variation (Fig. 7),
and there is no uncertainty associated with the load. The Pareto fronts before and
after considering the varying load are shown in Fig. 8, showing the impact of load
on the optimization of the maintenance schedules. It can be seen that the fuzzy
expert system correctly relates a higher load factor to a higher ENS.

(Figure: load factor (%) of the transformers over maintenance intervals 0–20.)

Fig. 7. Load factor of transformers

(c) Impacts of maintenance information: the time from previous maintenance for
the transformers is given in Fig. 9. The Pareto fronts are plotted in Fig. 10 to show
the impacts of the different operational conditions on the scheduling of maintenance.
As can be seen in Fig. 10, the longer time from previous maintenance in
intervals 4, 8, and 19 makes schedule C1 be chosen rather than S to provide
the ENS of 1.36 × 10^4 MWh/y.
Figure 11 shows the variations of ENS due to performing schedules S and C1.
As expected, the fuzzy system correctly relates a higher ENS to a longer time lapse
from previous maintenance, and a lower ENS to a shorter lapse.

(Figure: two Pareto-front panels plotting operational cost ($10^3) against energy not served (MWh/y, ×10^4) and against failure cost ($10^3), comparing the initial operational conditions with the varying load using type-1 or type-2; solutions S and B1 are marked.)

Fig. 8. Pareto Fronts with the impacts of load factor

(Figure: time from previous maintenance (months, 0–20) of the transformer over maintenance intervals 0–20.)

Fig. 9. Time from previous maintenance of transformer



(Figure: two Pareto-front panels plotting operational cost ($10^3) against energy not served (MWh/y, ×10^4) and against failure cost ($10^3), comparing the initial operational conditions with different time from previous maintenance using type-1 or type-2; solutions S and C1 are marked.)

Fig. 10. Pareto Fronts including the impacts of different time from previous maintenance &
uncertainty

(Figure: energy not served (MWh/y) over maintenance intervals 0–20 for schedules S and C1; panel (a) load point 1, panel (b) load point 2.)

Fig. 11. Variation of ENS of two load points with different maintenance schedules

5 Conclusions

This paper proposes an approach for implementing a system-optimized maintenance
plan for each offshore substation, and for estimating the change of load-point reliability
due to operational variations and uncertainties of its key components. The maintenance
advisor reports any drastic deterioration of load-point reliability within the sub-
station and requires the maintenance optimizer to re-optimize the substation's main-
tenance activities to meet its desired reliability during operation. Type-2 fuzzy
logic is demonstrated to be superior to type-1 fuzzy logic for modeling operational
variations and uncertainties arising from aging, load factor, and time from previous
maintenance. The operational variations and uncertainties for the transformer are
shown to have a significant impact on the maintenance scheduling as well as on
load-point reliability.

Acknowledgements. This work is supported by the Competitive Allocation of Special Funds for Science and Technology Innovation Strategy in Guangdong Province of China (No. 2018A06001).
624 H. Sang

References
1. Wang, K., Wang, Y.: How AI affects the future predictive maintenance: a primer of deep
learning. In: IWAMA 2017. Lecture Notes in Electrical Engineering, vol. 451 (2018)
2. Mo, H., Sansavini, G., Xie, M.: Performance-based maintenance of gas turbines for reliable
control of degraded power systems. Mech. Syst. Sig. Process. 103, 398–412 (2018)
3. Dai, Z., Zhang, T., Liu, X., et al.: Research on smart substation protection system reliability
for condition-based maintenance. Power Syst. Prot. Control 44(16), 14–21 (2016)
4. Zadeh, L.A.: Fuzzy sets. Inf. Control 8, 338–353 (1965)
5. Endrenyi, J.: Reliability Modeling in Electric Power Systems. Wiley, New York (1978)
6. Mohanta, D.K., Sadhu, P.K., Chakrabarti, R.: Fuzzy Markov model for determination of
fuzzy state probabilities of generating units including the effect of maintenance scheduling.
IEEE Trans. Power Syst. 20(4), 2117–2124 (2005)
7. Tanrioven, M., et al.: A new approach to real-time reliability analysis of transmission system
using fuzzy Markov model. Int. J. Electr. Power Energy Syst. 26(10), 821–832 (2004)
8. Zadeh, L.A.: The concept of a linguistic variable and its application to approximate
reasoning I. Inf. Sci. 8(3), 199–249 (1975)
9. Pająk, M.: Modelling of the operation and maintenance tasks of a complex power industry
system in the fuzzy technical states space. In: International Scientific Conference on Electric
Power Engineering. IEEE (2017)
10. Yang, F., Kwan, C.M., Chang, C.S.: Multi-objective evolutionary optimization of substation
maintenance using decision-varying Markov model. IEEE Trans. Power Syst. 23(3), 1328–
1335 (2008)
Determine Reducing Sugar Content in Potatoes
Using Hyperspectral Combined
with VISSA Algorithm

Wei Jiang, Ming Li, and Yao Liu(&)

School of Information Engineering, Lingnan Normal University,


Zhanjiang, China
[email protected]

Abstract. In order to explore nondestructive and rapid detection of reducing sugar in potatoes, hyperspectral imaging technology was applied to quantitatively analyze reducing sugar in potatoes. A quantitative analysis model of reducing sugar in potatoes was constructed by the partial least squares method.
Savitzky-Golay (SG) smoothing, standard normal variable transformation (SNV), first derivative (FD), multivariate scattering correction (MSC) and other pretreatments were used to optimize the model. The variable iterative space shrinkage approach (VISSA) is proposed for feature wavelength selection and compared with the competitive adaptive reweighted sampling algorithm (CARS). A total of 229 samples were prepared, and the SPXY method was used to divide them: 181 samples were selected as the calibration set and the remaining 48 as the validation set. The results showed that the model of reducing sugar content in
potato spectrum pretreated by SG + SNV was the best, and the partial least
squares regression model (VISSA-PLS) based on VISSA algorithm to select
characteristic variables had good prediction ability. The determination coeffi-
cient of model validation set was 0.8144, and the root mean square error of
validation set was 0.0238. It was concluded that the model has good predictive
performance after optimization and achieves rapid and nondestructive detection
of reducing sugar in potatoes.

Keywords: Hyperspectral · Potato · Reducing sugar · Wavelength selection

1 Introduction

Reducing sugar content in potatoes is an important factor to determine the processing


quality of potato chips [1]. Because reducing sugar reacts with alpha-amino acids of
nitrogen compounds in the frying process, the surface color of potato chips becomes
brown and unpopular with consumers [2]. In order to further improve potato breeding
and deep processing technology, accurate and rapid determination of reducing sugar
content in potatoes is of great significance. Although the traditional method for detecting
reducing sugar content in potatoes has high accuracy, it is difficult to popularize in the
analysis and detection of a large number of samples because of its complicated oper-
ation, strong destructiveness and high cost. Rapid and non-destructive intelligent
detection of reducing sugar content in potatoes has important application prospects [3].

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 625–632, 2020.
https://doi.org/10.1007/978-981-15-2341-0_78

Hyperspectral imaging technology is a new non-destructive detection technology.


Because of its fast and nondestructive advantages, hyperspectral imaging technology
has been widely studied in the field of food quality detection. At present, many scholars
at home and abroad have established hyperspectral models for potato quality detection,
including potassium [4], water [5], dry matter [6, 7], starch [8], protein [9, 10] and
sugar [11, 12]. Angel and others [13–16] have studied the hyperspectral qualitative
detection methods for potato internal and external damage. These studies laid the
groundwork for the hyperspectral detection of potato quality. However, the spectrum contains a large amount of redundant information, which seriously affects the speed and accuracy of modeling, and the bands are strongly correlated, so not all wavelengths provide useful information. Previous studies have shown that optimizing the wavelength variables of hyperspectral detection, eliminating irrelevant or non-linear variables and reducing redundant information in the spectrum can simplify the model and improve its prediction ability and robustness [17].
In order to further explore the optimal selection of hyperspectral characteristic variables of potatoes, an optimized prediction model of reducing sugar content was established. In this paper, the VISSA algorithm is proposed for selecting characteristic variables and compared with the CARS algorithm. A partial least squares (PLS) model is established and tested on the validation set. The results of the two selection methods in predicting reducing sugar content in potatoes are compared comprehensively, and the optimal variable selection method for quantitative analysis of reducing sugar content in potatoes is obtained, which provides a theoretical basis for the development of a portable hyperspectral intelligent detection device for potato quality.

2 Materials and Methods

2.1 Sample Preparation and Determination


The fresh potatoes (Keshan 885 cultivar) used in the experiment were purchased from the agricultural market of Xiangfang District, Harbin. Before the experiment, the potato surface was cleaned and obvious surface defects were removed. A total of 229 samples were used for image acquisition. The content of reducing sugar in each potato was then determined by 3,5-dinitrosalicylic acid colorimetry. The SPXY method was used to divide the samples at a ratio of about 3:1: 181 samples were selected as the calibration set and 48
samples were selected as validation sample set. The calibration set sample is used to
build the model, and the validation set sample is used to test the prediction performance
of the model. Table 1 shows the distribution statistics of reducing sugar content in
potatoes from the calibration set and the validation set samples. The values of reducing
sugar content in 181 calibration sets ranged from 0.09 to 1.32, which basically covered
the distribution range of reducing sugar value of potatoes and was representative.

2.2 Hyperspectral Acquisition System


The hyperspectral reflectance image acquisition system used in the experiment was produced by the HeadWall company of the United States. The system consists of an image acquisition unit, a light source and a sample conveying platform. The image acquisition unit includes an imaging spectrometer, a CCD camera and a lens, and the light source is a 150 W
adjustable power halogen lamp. The spectral resolution is 1.29 nm and the spatial
resolution is 0.15 mm. The schematic diagram of the potato reflectance hyperspectral
detection system is shown in Fig. 1.

Table 1. Statistics of reducing sugar content distribution in potato


Sample set Number Minimum Maximum Mean
Calibration set 181 0.09 1.32 0.816
Validation set 48 0.12 1.28 0.936

1. Hyperspectral camera 2. Bracket 3. Light source 4. Loading stage 5. Light box 6. Collector 7. Computer

Fig. 1. Schematic diagram of potato reflectance hyperspectral detection system

In the experiment, in order to reduce the interference with the image caused by temperature-induced drift of the light source, full-white and full-black calibration images were collected once for every 10 sample images. The corrected hyperspectral images were then obtained according to formula (1) [12, 17].

I = (Is − Id) / (Iw − Id)    (1)

In the formula: I is the corrected image; Is is the original image; Iw is the whiteboard image; Id is the black image.
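As a minimal sketch, the per-pixel correction of formula (1) can be implemented directly (pure Python; the variable names are illustrative):

```python
def correct_reflectance(raw, white, dark):
    """Flat-field correction per formula (1): I = (Is - Id) / (Iw - Id)."""
    return [(s - d) / (w - d) for s, w, d in zip(raw, white, dark)]

# One pixel's spectrum at three bands (illustrative values)
raw = [120.0, 150.0, 180.0]     # Is: original image
white = [220.0, 250.0, 260.0]   # Iw: full-white calibration image
dark = [20.0, 30.0, 40.0]       # Id: full-black calibration image
print(correct_reflectance(raw, white, dark))  # relative reflectance in [0, 1]
```

In practice the same band-by-band division is applied to every pixel of the hypercube.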

2.3 Variable Iterative Space Shrinkage Algorithms


The VISSA algorithm is a variable selection optimization strategy based on the model population analysis (MPA) framework proposed by Deng et al. The basic idea of the algorithm is to make full use of the statistical information obtained by MPA to select the combination of variables with the best prediction ability, using cross-validation as the objective function of model selection [18]. The flow chart of the VISSA algorithm is shown in Fig. 2. In this study, the VISSA algorithm is used to select characteristic wavelengths; X is the extracted 181 × 203 spectral matrix and Y is the corresponding 181 × 1 concentration matrix.

Fig. 2. Flow chart of VISSA algorithm
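To make the weighted binary matrix sampling loop concrete, the following pure-Python sketch runs the strategy on synthetic data. For brevity it scores each variable subset with an ordinary-least-squares hold-out RMSE instead of the PLS cross-validation used in the paper, so it illustrates the strategy rather than reproducing the authors' implementation:

```python
import random

def holdout_rmse(X, y, cols):
    """Score a variable subset: fit OLS (normal equations, Gaussian
    elimination) on the first half, return RMSE on the second half.
    A stand-in for the PLS cross-validation used in the paper."""
    half = len(X) // 2
    A = [[X[i][j] for j in cols] + [1.0] for i in range(half)]  # intercept
    m = len(cols) + 1
    ata = [[sum(r[a] * r[b] for r in A) for b in range(m)] for a in range(m)]
    aty = [sum(A[i][a] * y[i] for i in range(half)) for a in range(m)]
    for c in range(m):                       # elimination with pivoting
        p = max(range(c, m), key=lambda r: abs(ata[r][c]))
        ata[c], ata[p] = ata[p], ata[c]
        aty[c], aty[p] = aty[p], aty[c]
        for r in range(c + 1, m):
            f = ata[r][c] / ata[c][c]
            ata[r] = [v - f * w for v, w in zip(ata[r], ata[c])]
            aty[r] -= f * aty[c]
    b = [0.0] * m
    for r in range(m - 1, -1, -1):
        b[r] = (aty[r] - sum(ata[r][k] * b[k] for k in range(r + 1, m))) / ata[r][r]
    err = [sum(X[i][j] * b[k] for k, j in enumerate(cols)) + b[-1] - y[i]
           for i in range(half, len(X))]
    return (sum(e * e for e in err) / len(err)) ** 0.5

def vissa(X, y, n_iter=15, n_models=200, top_frac=0.1, seed=1):
    """VISSA sketch: each variable's sampling weight is updated to its
    frequency among the best-scoring random submodels."""
    rng = random.Random(seed)
    p = len(X[0])
    w = [0.5] * p
    for _ in range(n_iter):
        scored = []
        for _ in range(n_models):
            cols = [j for j in range(p) if rng.random() < w[j]]
            if cols:
                scored.append((holdout_rmse(X, y, cols), cols))
        scored.sort(key=lambda t: t[0])
        best = scored[: max(1, int(top_frac * len(scored)))]
        w = [sum(j in cols for _, cols in best) / len(best) for j in range(p)]
    return [j for j in range(p) if w[j] > 0.5]

# Synthetic demo: y depends only on variables 0 and 2 of five
rng = random.Random(0)
X = [[rng.gauss(0, 1) for _ in range(5)] for _ in range(60)]
y = [3 * r[0] - 2 * r[2] + rng.gauss(0, 0.3) for r in X]
print(vissa(X, y))  # should include the informative variables 0 and 2
```

The key design point is the soft shrinkage of the variable space: weights move gradually toward 0 or 1 as evidence accumulates across submodels, rather than variables being discarded in a single hard step.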

2.4 Running Environment and Software Code


The hardware used in the experiment is as follows: Intel Core i7-8550 CPU at 1.99 GHz, 8 GB memory, 500 GB hard disk. Software: Windows 7 is used as the operating system, and The Unscrambler X 10.3 and MATLAB 2013 are used for spectral preprocessing, wavelength selection and modeling.

3 Results and Analysis


3.1 Spectral Analysis and Pretreatment
The original reflectance spectra of potato samples in the region of interest (ROI) range
of 400–1000 nm were collected by diffuse reflectance method. As shown in Fig. 3(a),
the sampling interval was 3 nm and each spectrum contained 203 bands. Reducing sugars are sugars containing reductive groups (e.g. aldehyde or ketone groups) in their molecular structure, and their constituent elements are C, H and O. In Fig. 3, there is an obvious absorption peak near 960 nm (dotted line position), mainly the triple-frequency absorption of the O-H group, because the internal components of potatoes contain
water [19]. Due to the rough skin of potato and stray light in the environment, there are
large scattering and baseline drift in the spectral region. Therefore, pretreatment is
needed before further spectral analysis. In this paper, Savitzky-Golay (SG) smoothing,
standardization, maximum normalization, multivariate scattering correction (MSC),
first derivative (FD), standard normal variable transformation (SNV) and SG + MSC,
SG + FD, SG + SNV are used to pretreat the spectra respectively, and PLS models are
established. The original spectra and pretreatments are compared in turn. According to
the principle of maximum determination coefficient and minimum root mean square
error, the SG + SNV method is determined to have the best prediction effect, which can
improve the prediction ability of the calibration model. Figure 3(b) is a spectral image
preprocessed by SG + SNV. The trend of the curve is similar to that of the original
spectrum.

Fig. 3. Spectra curves of potatoes
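As a minimal illustration of the chosen SG + SNV pretreatment (pure Python; the paper itself used The Unscrambler and MATLAB), SG smoothing with a 5-point, order-2 kernel followed by SNV scaling can be sketched as:

```python
import statistics

# Savitzky-Golay smoothing coefficients (window 5, polynomial order 2)
SG5 = [-3 / 35, 12 / 35, 17 / 35, 12 / 35, -3 / 35]

def sg_smooth(spectrum):
    """SG smoothing; the two edge points on each side are left unchanged."""
    out = list(spectrum)
    for i in range(2, len(spectrum) - 2):
        out[i] = sum(c * spectrum[i + k] for k, c in zip(range(-2, 3), SG5))
    return out

def snv(spectrum):
    """Standard normal variate: centre and scale each spectrum."""
    mu = statistics.mean(spectrum)
    sd = statistics.pstdev(spectrum) or 1.0
    return [(v - mu) / sd for v in spectrum]

pretreated = snv(sg_smooth([0.21, 0.23, 0.22, 0.25, 0.27, 0.26]))
```

SNV removes the per-spectrum offset and scale caused by scattering and baseline drift, which is why it pairs well with SG smoothing here.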

3.2 Selection of Characteristic Wavelength


In order to simplify the model, after eliminating spectral noise by SG + SNV pretreatment, the variable iterative space shrinkage approach (VISSA) was used to select the spectral bands of potatoes and compared with the CARS algorithm.
According to five-fold cross validation, the maximum number of latent variables
was set to ten, and the variables were selected. The position information of the
wavelength selected by the two methods on the potato spectral data set is shown in

Fig. 4(a)–(b). The VISSA method screened out 32 optimal variables related to reducing sugar, and CARS selected 30 characteristic variables; compared with the full spectrum, the number of variables decreased by 84% and 85%, respectively. The wavelength points near 430–440 nm, 520–530 nm, 750 nm and 970 nm are chosen by both methods. The fourth and fifth frequency absorption peaks of the O-H and C-H bonds near 750 nm, corresponding to the chemical structure of reducing sugar, are also important wavelength points commonly used in the literature for establishing quantitative models of sugar content. This confirms that CARS and VISSA are effective feature wavelength selection algorithms for the potato sample system. On closer inspection, however, the CARS algorithm also chooses some additional wavelength points, such as those near 810 and 830 nm, which may be why its prediction result is slightly worse than that of VISSA.


Fig. 4. The wavelength selected by different methods on the potato. (a) VISSA; (b) CARS.

3.3 Establishment of Prediction Model


The characteristic variables selected by the VISSA and CARS algorithms were used as input variables of the PLS model, and the reducing sugar content of potato was used as the dependent variable to establish a PLSR regression model. In order to better analyze the selection effect of the feature variables, the full-band data were also used for modeling and comparison. The results are shown in Table 2. The R²c values of the PLS models based on CARS and VISSA are higher than those of the full spectrum, and their RMSECV values are lower, which shows that combining these two variable selection methods with the PLS model has a good calibration effect. The prediction accuracy of the VISSA-PLS model is the highest, with the highest R²p value of 0.8144 and the smallest RMSEP value of 0.0238, and the 32 variables selected by VISSA are far fewer than the 203 variables of the full band. The results show that the VISSA-PLS model can predict the reducing sugar content of potato quickly. Figure 5 shows the prediction result of VISSA-PLS. It can be seen that the samples are evenly distributed around the regression line (y = x), which indicates that the

significance level is high and the model established has good predictability. Therefore,
VISSA-PLS was selected as the prediction model of reducing sugar content in potatoes.

Table 2. Prediction results of different selection methods

Methods    N    nLVs  Calibration set     Validation set
                      RMSECV    R²c       RMSEP    R²p
Full-PLS   203  9     0.0461    0.7858    0.0275   0.7729
VISSA-PLS  32   4     0.0392    0.8255    0.0238   0.8144
CARS-PLS   30   8     0.0424    0.8027    0.0257   0.7992

Note: Calibration samples n = 181, Validation samples n = 48. Partial least squares (PLS), Root mean square error (RMSE), N: number of variables, nLVs: number of latent variables.
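The figures of merit reported in Table 2 are the root mean square error and the coefficient of determination, which can be computed as follows (a generic sketch, not tied to the paper's data):

```python
def rmse(y_true, y_pred):
    """Root mean square error."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Perfect predictions give RMSE 0 and R² 1
print(rmse([0.8, 0.9], [0.8, 0.9]), r2([0.8, 0.9], [0.8, 0.9]))
```

RMSECV and RMSEP in the table are this same RMSE evaluated under cross-validation and on the validation set, respectively.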

Fig. 5. Reducing sugar content predicted results of VISSA-PLS model

4 Conclusions

In this paper, hyperspectral imaging technology was used to detect the reducing sugar
content of potatoes rapidly and nondestructively. The average spectral data of samples
(400–1000 nm) were obtained. The VISSA algorithm was used to extract characteristic wavelengths representing effective spectral information. The 32 wavelength points extracted by VISSA were mostly concentrated between 420–440 nm and 520–530 nm, which accorded with the characteristics of the spectral curves. The PLSR regression model of reducing sugar content was established with the 32 characteristic wavelengths as input variables. The results were better than those of the full band: the RMSECV was 0.0392 and the RMSEP was 0.0238. The results showed that the VISSA-PLS calibration model based on hyperspectral images had high prediction accuracy.

Acknowledgements. The work is supported by Science and Technology Innovation Strategy


fund Project of Guangdong Province (Grant no. 2018A03017), Zhanjiang Science and Tech-
nology Project (Grant no. 2017B01143) and Special Innovation Projects of Universities in
Guangdong Province (Grant no. 2018KTSCX130).

References
1. Cui, H., Shi, G., An, J.: Comparison study on the testing method of the content of reduced
sugar in potato. J. Anhui Agri. Sci. 34(19), 4821–4823 (2006)
2. Horvat, Š., Roščić, M., Horvat, J.: Synthesis of hexose-related imidazolidinones: novel
glycation products in the Maillard reaction. Glycoconjugate J. 16(8), 391–398 (1999)
3. Yang, B., Zhang, X., Zhao, F., et al.: Suitability evaluation of different potato cultivars for
processing products. Trans. Chin. Soc. Agric. Eng. 31(20), 301–308 (2015)
4. Liu, C., Gao, H., Li, A., et al.: Near-infrared model establishment for testing potato tubers
potassium content. Chin. Potato J. 2, 65–68 (2011)
5. Song, J., Wu, C.: Simultaneous detection of quality nutrients of potatoes based on
hyperspectral imaging technique. J. Henan Univ. Technol. 37(1), 60–67 (2016)
6. Helgerud, T., Wold, J.P., Pedersen, M.B., et al.: Towards on-line prediction of dry matter
content in whole unpeeled potatoes using near-infrared spectroscopy. Talanta 143, 138–144
(2015)
7. Chen, Z., Feng, H., Yin, S., et al.: Assessment of potato dry matter concentration using VIS-
SWIR spectroscopy. J. Heilongjiang Bayi Agric. Univ. 30(2), 47–51 (2018)
8. Jiang, W., Fang, J., Wang, S., et al.: Detection of starch content in potato based on
hyperspectral imaging technique. Int. J. Sig. Process. Image Process. Pattern Recogn. 8(12),
49–58 (2015)
9. Chen, M., Chen, Y., Zhang, Y., et al.: Determination of soluble protein in potato by
attenvated total reflection mid-infrared spectroscopy. J. Chin. Cereals Oils Assoc. 33(12),
118–125 (2018)
10. López, A., Arazuri, S., Jarén, C., et al.: Crude protein content determination of potatoes by
NIRS technology. Procedia Technol. 8, 488–492 (2013)
11. Ahmed, R., Daniel, G., Lu, R.: Evaluation of sugar content of potatoes using hyperspectral
imaging. Food Bioprocess Technol. 8(5), 995–1010 (2015)
12. Jiang, W., Fang, J., Wang, S., et al.: Hyperspectral determination of reducing sugar in
potatoes based on CARS. Int. J. Hybrid Inf. Technol. 9(9), 35–44 (2016)
13. Dacal-Nieto, A., Formella, A., Carrión, P., Vazquez-Fernandez, E., et al.: Common scab
detection on potatoes using an infrared hyperspectral imaging system. Image Anal. Process.
6979, 303–312 (2011)
14. Ainara, L., Janos, C.K., Mohammad, G., et al.: Non-destructive detection of blackspot in potatoes by Vis-NIR and SWIR hyperspectral imaging. Food Control 70, 229–241 (2016)
15. Huang, T., Li, X., Jin, R., et al.: Non-destructive detection research for hollow Heart of
potato based on semi-transmission hyperspectral imaging and SVM. Spectrosc. Spectral
Anal. 35(1), 198–202 (2015)
16. Wang, C., Li, X., Wu, Z., et al.: Machine vision detecting potato mechanical damage based
on manifold learning algorithm. Trans. Chin. Soc. Agric. Eng. 30(1), 245–252 (2014)
17. Zheng, J., Zhou, Z., Zhong, M., et al.: Chestnut browning detected with near-infrared
spectroscopy and a random-frog algorithm. J. Zhejiang A & F Univ. 33(2), 322–329 (2016)
18. Deng, B., Yun, Y., Liang, Y., et al.: A novel variable selection approach that iteratively
optimizes variable space using weighted binary matrix sampling. Analyst 139, 4836–4845
(2014)
19. Xu, Y., Wang, X., Yin, X., et al.: Visualization spatial assessment of potato dry matter.
J. Agric. Mach. 49(2), 339 (2018)
Game Theory in the Fashion Industry:
How Can H&M Use Game Theory
to Determine Their Marketing Strategy?

Chloe Luo and Yi Wang(&)

Plymouth Business School, University of Plymouth,


Drake Circus, Plymouth, Devon PL4 8AA, UK
[email protected],
[email protected]

Abstract. The fashion industry is very competitive, and it is hard to stay unique and stand out. Ensuring a large consumer base and a high profit margin is difficult, and each company is differentiated by its brand and marketing strategy. Many retailers like H&M are struggling to stand out and maintain a unique brand whilst satisfying their consumer base. Game theory allows a company to determine the best strategy in situations where it faces competing strategies. This paper demonstrates how game theory is applied in the interest of all participating businesses to conform rather than compete.

Keywords: Game theory · Retail management · Fashion industry · Critical analysis

1 Introduction

Game theory is a framework for analysing social situations among competing players. It is, in effect, the science of strategy applied to real-life situations. The key pioneers are the mathematicians John von Neumann and John Nash [1]. The ‘prisoner’s dilemma’ is a prime example of game theory: each of two criminals can either betray the other to seek freedom while increasing the other’s prison sentence, or remain silent; if both remain silent, both sentences are reduced, while if they betray each other, both receive the standard sentence [2]. Game theory is often used in business and is most commonly associated with oligopoly, where companies either settle on a similar pricing structure agreed on by the majority of the businesses, or offer a lower price in competition with the others.
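The prisoner's dilemma can be checked mechanically. With illustrative payoffs (negated prison years, so larger is better), mutual betrayal is the unique pure-strategy Nash equilibrium even though mutual silence would leave both players better off; the payoff numbers below are hypothetical:

```python
# Strategies: 0 = stay silent, 1 = betray
payoff_a = [[-1, -10], [0, -5]]  # A's payoff for (A's choice, B's choice)
payoff_b = [[-1, 0], [-10, -5]]  # B's payoff (the game is symmetric)

def pure_nash(pa, pb):
    """All pure-strategy Nash equilibria of a 2x2 game: cells where
    neither player can gain by unilaterally switching strategy."""
    eqs = []
    for a in range(2):
        for b in range(2):
            a_best = pa[a][b] >= max(pa[x][b] for x in range(2))
            b_best = pb[a][b] >= max(pb[a][y] for y in range(2))
            if a_best and b_best:
                eqs.append((a, b))
    return eqs

print(pure_nash(payoff_a, payoff_b))  # [(1, 1)]: both betray
```

The same best-response check applies to the pricing game described above, with ‘conform’ and ‘undercut’ in place of ‘stay silent’ and ‘betray’.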
On the other hand, once a company decides not to conform and takes a competitive advantage, all the other companies typically either suffer a loss or follow suit [3]. H&M is a company right in the middle of this, as its business has multiple competitors regarding product, price and structure. At the moment H&M is struggling to find the best marketing strategy to help it meet its goals, as there are too many competing businesses in the same industry. H&M [4] identifies itself as a sustainable fashion source, which helps it stand out and influences consumers to purchase its products; however, more and more fashion

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 633–638, 2020.
https://doi.org/10.1007/978-981-15-2341-0_79

companies are going down the sustainable route, so what else can H&M do to maintain a large consumer base as well as make a profit? A strong marketing strategy should help them achieve both a large consumer base and high profits.
The structure of this paper is as follows: Sect. 2 presents an overview of the game
theory concept. Section 3 presents the application of game theory in the industry,
Sect. 4 identifies the limitation of game theory as an application and, lastly, Sect. 5
concludes the paper.

2 Literature Review

Game theory started out as an attempt to find solutions for duopoly, oligopoly and bilateral monopoly problems. In all these situations, a solution was difficult to reach because the interests and strategies of the organisations or individuals conflicted. Hence, game theory was used in attempts to arrive at various equilibrium solutions based on the rational behaviour of the participants involved.
Companies are now increasingly utilising game theory to assist them in making
high risk/high reward decisions in highly competitive markets. Game theory has been
around for a long time and proven an ability to provide ideal strategic choices in
multiple different situations, companies and industries. This theory is a very useful tool
for predicting what may happen between a group of firms interacting, where the actions
of a single firm can directly affect the payoff of other firms. Each participating player is
a decision maker in the game of business [5]. So, when making a choice or choosing a strategy, all players, also known as firms, must take into consideration the potential choices and payoffs of other firms. This understanding, quantified through payoff calculations, allows a company to form its best strategy. A properly
formed game can assist many businesses by reducing business risk. This can be done
by yielding valuable insights into competitors and improving internal alignments
around decision making which maximises strategic utility [6].
Not only is game theory used to gain valuable insight into competition but it can
assist majorly when trying to make strategic decisions in relation to any business
function. For example, game theory is excellent for situations like auctions, product
decisions, bargaining and supply chain decisions etc. By applying game theory to all
these functions that are used in the day to day running of a business it can help the
company make the most strategic choices as you will be able to see all the outcomes.
Peleckis [7] discusses how using game theory is effective in determining equilibrium within a market. It is used during risk analysis to determine the optimal price strategy, number of customers, market share, etc., to reduce business risk.

3 Game Theory in the Fashion Industry

In relation to marketing in the fashion industry, game theory can allow companies to see what type of marketing will benefit them the most [8]. Researchers have debated whether it is possible to apply this theory to solve problems regarding marketing, especially predicting competitive behaviour [9]. The discussion on whether
Game Theory in the Fashion Industry 635

game theory can be used in marketing then extended to other possibilities. Nash equilibrium is also a very important concept in game theory: it refers to a stable state in which no player can gain an advantage by unilaterally changing their strategy, though the concept is most often applied in economics.
The fashion industry already uses game theory but they use it to determine how
customers will buy their clothes. The fashion industry is hard for both buyers and
sellers. Buyers are constantly trying to find good deals on clothes that suit them. Sellers
on the other hand want to move as much inventory as possible at the best price
possible. The solution for both sides is sales. Sales move high inventory for sellers, and buyers get reasonably priced clothing [10]. This can be seen as using game theory, as a best solution has been found for both players. If sellers kept prices high all the time, buyers wouldn’t buy, meaning inventory movement would be low. If sellers kept prices low, inventory movement would increase, but sellers wouldn’t be making the most of their products. So, by having sales every now and then, buyers remain happy, as do sellers.
Game theory is also used in this industry to protect designs and explain fashion trends.
For example, every season or phase there is always a trend that every retailer sells until the
next trend passes by. How this works is that the fashion industry is in an oligopolistic
competition with each other. Each firm’s product is typically unique to their own brand
but they all want to maximise profit so they compete by creating products on trend until
equilibrium is achieved. Game theory shows the outcomes and how people will benefit if
the designs are copied. If copying the exact design gives the copier a high incentive to
copy then the designers are more likely to legally protect their designs [11].
If the fashion industry already uses game theory to determine how customers will buy its clothes, then why not see if game theory can determine its marketing? A good marketing strategy plays a very important role in a market where competitors are targeting the same consumers with identical or very similar products. Companies competing in such a market have to choose between two main marketing strategies: product discounting or advertising expenditure. Expenditure on advertising can help the brand differentiate the product, amplify consumers’ perception of the product and make sure consumers are aware of its advantages. Product discounting, by contrast, means the product is sold at a cheaper rate, helping the brand increase its customer base. Both strategies have benefits and limitations, and game theory should assist the firm in finding a balance between the two for optimum payoffs. H&M [4] has difficulty finding a suitable strategy, as its products are reasonably priced and it already spends heavily on advertising, which is why game theory should be able to help it decide which route is best.
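The ‘balance between the two strategies’ can be made precise as a mixed strategy: in a 2x2 game with no dominant strategy, each player's equilibrium mix is the one that makes the opponent indifferent between its options. The payoff matrix below for H&M (rows: discount, advertise; columns: the competitor's choice) is purely hypothetical:

```python
def indifference_mix(payoff):
    """Probability with which the OPPONENT plays its first strategy so that
    the player holding this 2x2 payoff matrix is indifferent between rows."""
    (a, b), (c, d) = payoff
    return (d - b) / (a - b - c + d)

hm = [[2, 6],   # H&M discounts: payoff vs competitor discounting / advertising
      [3, 4]]   # H&M advertises
q = indifference_mix(hm)
print(q)  # competitor discounts with probability 2/3 at the mixed equilibrium
```

With these numbers neither strategy dominates (advertising is better if the competitor discounts, discounting is better if it advertises), so the equilibrium ‘balance’ is a probability mix over the two strategies rather than a single pure choice.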

4 Critical Analysis of Game Theory

There are many limitations to using game theory when making marketing decisions. Many marketers are against it because they believe it is not useful: being highly formalised, it does not take into consideration managerial insights regarding competition and co-operative decision-making

behaviour [12]. There are many criticisms of the usefulness of game theory in making marketing decisions; for example, in marketing the best choice is not determined simply by how good the price is, but by irrational motives such as the emotional connection the consumer may have [13]. In relation to H&M, a marketing decision that seems logical might not be the best suited, because H&M’s consumers may not relate to or understand the marketing strategy.
While game theory is useful in indicating the outcomes of many different strategic choices, it is unlikely to yield precise solutions to marketing issues. The theory requires choosing the appropriate set of variables or available options, the set of potential outcomes and the objectives assumed by the firms involved. The uncertainty present in all of these measures limits the precision of the choice [14].
One of the main issues with using game theory as a marketing tool is that it analyses the behaviour of rational participants, with predictable decisions and, if any, easily explained deviations. Marketing’s main reason for existence is to control consumer behaviour, which is typically irrational and can be affected by multiple, usually unidentifiable factors, like feelings and desires, which cannot be predicted. Game theory also doesn’t consider the marketing department’s role in ensuring that the brand’s image is created and protected. Given the uncertainty of public opinion, a decision that seems the most rational or logical could be the worst approach in terms of publicity [13]. A marketing idea that may seem ideal may thus fail because it does not fit H&M’s branding, which can cause many issues with consumers.
The hypotheses on which game theory is founded can be seen as far from the realities of this world, which is why game theory may be considered useless in the complex world of marketing. The criticism that keeps recurring regarding the application of game theory in marketing is that it analyses rational players’ behaviour. In marketing, the relation between the price and quality of goods is not the main reason for a consumer’s purchase; irrational and intangible factors can sometimes come before physical and price factors. H&M could produce expensive products, but if the products are sustainable or related to consumers’ desires, consumers won’t worry about how much an item costs; in the same way, if a product is very cheap but harms a lot of animals in the process or isn’t good for consumers, they won’t purchase it because they don’t believe in the product.
Game Theory Benefits
Although many marketing practitioners are against using game theory in marketing, some are in favour of it. Many other models do not handle competition well: earlier marketing models were mostly optimising and asymmetric, because they took the view of 'a single active decision maker' [15]. Competitors are often assumed to be non-reactive, when in reality the opposite is true. Game theory, however, is the ideal model for the interdependence and interaction effects that exist between competing firms, as it does not assume the competition will not react but addresses the competition directly and makes it an essential part of the marketing decision [16]. Non-cooperative competition of this kind links well with the Nash equilibrium, as it takes into consideration the ways competitors may move against you, which allows the business to prepare for the worst and turn it to its advantage. This means that when deciding on marketing, all potential outcomes can be seen [17]. H&M is thus able to observe everything that may happen with regard to its competition and, considering all the competition, find the strategy best suited to it.
Bacharach [18] claims that there are many merits in using game theory to determine a marketing strategy. He believes that game theory provides a well-defined set of possibilities for all players, which allows each player to consider every possible result and then choose the best path.
Big data makes game-theoretic attribution feasible in this day and age, because in theory the system becomes more precise each time data is collected on a consumer's buying journey. It may not be the ultimate solution, since game theory's main incompatibility with marketing is its assumption of rational decision makers, but it has come a long way [19]. Instead of using typical models such as last-click attribution, game theory can share credit for sales across multiple points of a customer's purchasing journey. This gives marketers a clearer picture of which actions to repeat and where money can be saved.
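The passage does not name a specific attribution model; a common game-theoretic choice for sharing conversion credit across touchpoints is the Shapley value. A minimal sketch, where the channel names and conversion rates are invented for illustration only:

```python
from itertools import combinations
from math import factorial

# Hypothetical conversion rate observed for each subset (coalition) of
# marketing touchpoints; these numbers are invented for illustration.
value = {
    frozenset(): 0.00,
    frozenset({"email"}): 0.02,
    frozenset({"social"}): 0.03,
    frozenset({"search"}): 0.05,
    frozenset({"email", "social"}): 0.06,
    frozenset({"email", "search"}): 0.08,
    frozenset({"social", "search"}): 0.09,
    frozenset({"email", "social", "search"}): 0.12,
}

def shapley(players, value):
    """Share total value among players by average marginal contribution."""
    n = len(players)
    shares = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(n):
            for coalition in combinations(others, r):
                s = frozenset(coalition)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (value[s | {p}] - value[s])
        shares[p] = total
    return shares

credit = shapley(["email", "social", "search"], value)
# The shares always sum to the grand-coalition value (0.12 here), so every
# touchpoint on the journey receives a defensible slice of the credit.
```

The design choice is exactly the one the text describes: credit is not given to the last click but split according to each channel's average marginal contribution over all orderings.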
The main reason why people do not use game theory as a marketing tool is that consumers do not make choices by weighing costs and benefits, but by choosing according to how they feel emotionally about the product. This defies game theory because it defies logic. However, as long as the situation can be rationalised, game theory is actually very helpful. Game theory has not been used much in marketing not because it is impossible, but because it is a challenge [13]. Whether the outcomes are worth the effort will depend on the individual, but marketing is a notoriously competitive world, and using game theory may be just the competitive edge the fashion industry needs.
Lastly, the Nash equilibrium focuses on non-cooperative competition, which takes into account the ways in which competitors may stray and move against you. The Nash equilibrium is typically what companies want to consider when making strategic decisions in a stable market, since no particular benefit can be gained from drastic changes. This is helpful to H&M in case its competitors are not in a cooperating environment, as it accounts for what other retailers may do to work against you. In H&M's case there are many competitors, and it is hard to ensure that all of them will cooperate so that everyone benefits, because the fashion industry is a hugely competitive market.
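A pure-strategy Nash equilibrium of a small pricing game can be found by checking every strategy pair for profitable unilateral deviations. A minimal sketch with invented payoffs for two hypothetical retailers (the numbers do not come from the paper):

```python
# Pure-strategy Nash equilibria of a 2x2 pricing game between two retailers.
# Rows are retailer A's strategies, columns retailer B's; each cell holds
# (payoff_A, payoff_B). All payoff values are invented for illustration.
strategies = ["low price", "high price"]
payoffs = {
    ("low price", "low price"): (2, 2),
    ("low price", "high price"): (5, 1),
    ("high price", "low price"): (1, 5),
    ("high price", "high price"): (4, 4),
}

def nash_equilibria(strategies, payoffs):
    """Return cells where neither player gains by deviating unilaterally."""
    eq = []
    for a in strategies:
        for b in strategies:
            ua, ub = payoffs[(a, b)]
            best_a = all(payoffs[(a2, b)][0] <= ua for a2 in strategies)
            best_b = all(payoffs[(a, b2)][1] <= ub for b2 in strategies)
            if best_a and best_b:
                eq.append((a, b))
    return eq

print(nash_equilibria(strategies, payoffs))  # [('low price', 'low price')]
```

With these payoffs the game is a prisoner's dilemma: mutual high pricing would pay more, but each retailer is individually tempted to undercut, so only mutual low pricing is stable, which is precisely the non-cooperative outcome discussed above.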

5 Conclusion

Ultimately, game theory has been used in many fields, and using it to decide marketing strategies is both a risk and a challenge, since participants can be irrational. However, by using game theory, H&M can see all possible outcomes and decide which strategy is best suited to it. Not only that, it can also see the potential actions its competitors may pursue, so on balance H&M is much better off using game theory to decide which route is best in terms of marketing strategy. Game theory may work best for H&M because it can mimic many real-life situations well and can be applied to almost any field.
638 C. Luo and Y. Wang

References
1. Montet, C., Serra, D.: Game Theory and Economics. Palgrave Macmillan, Houndmills
(2003)
2. Ott, U.: International business research and game theory: looking beyond the prisoner’s
dilemma. Int. Bus. Rev. 22(2), 480–491 (2013)
3. Mohammadi, A., et al.: The combination of system dynamics and game theory in analyzing
oligopoly markets. Manage. Sci. Lett. 19(1), 265–274 (2016)
4. H&M.: H&M Group, Vision & Strategy. About.hm.com (2019). https://about.hm.com/en/
sustainability/vision-and-strategy.html. Accessed 29 Apr 2019
5. Azar, O.: The influence of psychological game theory. J. Econ. Behav. Organ. 20(2), 234–
240 (2018)
6. Smith, C., Neumann, J., Morgenstern, O.: Theory of games and economic behaviour. Math.
Gaz. 29(285), 131 (1945)
7. Peleckis, K.: The use of game theory for making rational decisions in business negations: a
conceptual model. Entrepreneurial Bus. Econ. Rev. 3(4), 105–121 (2015)
8. Dufwenberg, M.: Game theory. Wiley Interdisc. Rev. Cogn. Sci. 2(2), 167–173 (2010)
9. Herbig, P.: Game theory in marketing: applications, uses and limits. J. Mark. Manage. 7(3),
285–298 (1991)
10. Mediavilla, M., Bernardos, C., Martínez, S.: Game theory and purchasing management: an
empirical study of auctioning in the automotive sector. In: Umeda, S., (eds.) Advances in
Production Management Systems: Innovative Production Management Towards Sustainable
Growth, pp. 199–206. Springer, Cham (2015)
11. Wong, T.: To copy or not to copy, that is the question: the game theory approach to
protecting fashion designs. Univ. PA Law Rev. 160(04), 1139–1193 (2012)
12. Rivett, P., Wagner, H.: Principles of operations research. Oper. Res. Q. (1970–1977) 21(4),
484 (1975)
13. Dominici, G.: Game theory as a marketing tool: uses and limitations. Elixir Mark. 36, 3524–
3528 (2011)
14. Moorthy, K.: Using game theory to model competition. J. Mark. Res. 22(3), 262 (1985)
15. Chatterjee, K., Lilien, G.: Game theory in marketing science uses and limitations. Int. J. Res.
Mark. 3(2), 79–93 (1986)
16. Özer, O.: Determining the best sales time period for dried figs: a game theory application.
J. Int. Food Agribusiness Mark. 27(2), 91–99 (2015)
17. Possajennikov, A.: Imitation dynamic and nash equilibrium in Cournot oligopoly with
capacities. Int. Game Theory Rev. 05(03), 291–305 (2003)
18. Bacharach, M.: Economics and the Theory of Games. Westview Press, Boulder (1977)
19. Zheng, Z., et al.: Game theory for big data processing: multileader multifollower game-based
ADMM. IEEE Trans. Signal Process. 66(15), 3933–3945 (2018)
Multidimensional Analysis Between
High-Energy-Physics Theory Citation
Network and Twitter

Lapo Chirici1, Yi Wang2(&), and Kesheng Wang3

1 Department of Computer Science, University of Pisa, Pisa, Italy
2 The School of Business, Plymouth University, Plymouth, UK
[email protected]
3 Department of Mechanical and Industrial Engineering, NTNU, Trondheim, Norway

Abstract. The knowledge of information propagation has always been the subject of multiple studies. Recent research has shown that networks with a certain concentration of nodes often act as attractors to others, generating faster and more relevant connections. In this experiment a high-energy-physics theory citation network was explored and compared with the influence of a Twitter network. Although the impact of a scientific publication is not straightforward to capture and measure, a citation network is a fitting example of a generative process leading to innovation. The investigation was carried out with network analysis tools, in order to examine common patterns in valuable metrics arising from both multidimensional graphs. Beyond a substantial difference in the usability of the two channels, the results highlight how the information propagation coefficients rest on similar principles for some metrics but differ for others, such as the closeness of nodes.

Keywords: Network discovery · Multidimensional analysis · Shortest path · Network forecasting · Information flow

1 Introduction

This project examines, in a comparative manner, two citation networks coming from different areas, with the aim of highlighting common features and similar parameters.
The research was directed towards the choice of networks with comparable dimensions in terms of structure and behavior. On Twitter, quotations of another user's tweets, or of the user, are called retweets and mentions, respectively. In a scientific essay, references to parts of text produced by other writers are called citations. The choice therefore fell on the following directed and unweighted datasets:
1. Twitter mentions/retweets network
2. Citation network of a body of scientific texts: high-energy physics [1]

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 639–645, 2020.
https://doi.org/10.1007/978-981-15-2341-0_80
The first network is a subnet of Twitter mentions and retweets for a given period (continuously updated), consisting of 3,657 nodes and 188,712 arcs. Each node represents a user (@user_name). If user @A mentions user @B or retweets one of @B's tweets, a directed arc is created from node @A to node @B, and not necessarily an arc from @B to @A.
The second network covers the e-prints of the HEP-Th section of the arXiv archive, from January 1993 to April 2003. It contains 27,770 nodes and 352,807 arcs. Each node represents a writer. If writer A cites writer B, there is a directed arc from node A to node B, and not necessarily an arc from B to A.
The citation network of theoretical physics has a giant connected component [2] of 27,400 nodes, equivalent to 98.7% of the total network. A few other components disconnected from the central one have been detected. In the Twitter network of mentions and retweets, on the other hand, the giant connected component comprises 3,656 nodes, practically 100% of the total network. Although the numbers of nodes of the two giant components differ, the percentage ratio confirms that they can be compared. Network analysis produced the following results:

Table 1. Network dimensions and valuable metrics

Dimensions/Metrics          HEP-Th            Twitter
Clustering coefficient      0.157             0.174
Connected components        2                 1
Network diameter            37                12
Network radius              1                 1
Shortest paths              224589720 (29%)   12737676 (95%)
Characteristic path length  8.460             3.764
Avg. number of neighbours   25.372            84.673
Number of nodes             27770             3657
Network density             0.00              0.00
Isolated nodes              1                 0
Number of self-loops        39                2903
Multi-edge node pairs       483               30985
Analysis time               11965.33          10213.95

Degree Distribution of Networks


The degree of a node in a network corresponds to the number of its neighbors. If a network is directed, the arc connecting a node to its neighbor points in a precise direction, so the nodes have two different degrees: the in-degree, which corresponds to the number of incoming arcs, and the out-degree, which corresponds to the number of outgoing arcs [3, 4].
Analyzing the degree distributions of the citation network and plotting the graphs on a logarithmic scale (Fig. 1), both distributions are seen to be very regular. A high number of nodes with low out-degree and in-degree is reported in both graphs, and the number of nodes decreases as the degree increases. In particular, in the in-degree distribution, a notable accumulation of nodes with degree between 100 and 500 is observed, probably due to the presence of texts that are cited more than others.
The red line in the figure identifies the power law [5], which matches both trends, although the in-degree distribution follows it most closely.
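The in- and out-degree statistics described above can be computed directly from an edge list; a minimal sketch on an invented toy citation graph (not the actual HEP-Th or Twitter data):

```python
from collections import Counter

# Toy directed citation graph; an arc (u, v) means "u cites v".
# These edges are invented stand-ins for the real datasets.
edges = [
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("D", "C"), ("D", "B"), ("E", "C"),
]

# In-degree = number of incoming arcs (times cited);
# out-degree = number of outgoing arcs (citations made).
in_deg = Counter(v for _, v in edges)
out_deg = Counter(u for u, _ in edges)

nodes = {n for e in edges for n in e}
# Degree distribution: how many nodes have each in-degree value.
in_dist = Counter(in_deg.get(n, 0) for n in nodes)

print(in_deg["C"], in_dist[0])  # C is cited 4 times; A, D, E are never cited
```

Plotting `in_dist` on log-log axes gives exactly the kind of distribution shown in Fig. 1, where a straight line indicates power-law behavior.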

Fig. 1. Logarithmic scale of In-degree and Out-degree of HEP-Th network

Analyzing the degree distributions of the Twitter network (Fig. 2), a more evident uniformity in the outgoing arcs is observable. This implies a regular trend with respect to the power law, highlighted by the greatest number of nodes having only one outgoing arc; the effect is underlined by the decrease in the number of nodes as the degree increases. In the in-degree distribution, on the other hand, the trend is less regular: the greatest number of nodes has only one incoming path, but there is a peak corresponding to a group of nodes with between 100 and 500 incoming paths, as in the citation network. Since this network contains both mentions and retweets, the peak is probably due to two factors: a massive presence of users who are often cited or retweeted (such as "influencers"), or tweets related to trending topics. In the second case, the popularity of a trend (marked with a hashtag #) can act as a "boost" that quickly increases the mentions of a specific user whose tweet was marked as particularly relevant [6].

Fig. 2. Logarithmic scale of In-degree and Out-degree of Twitter network



Both graphs contain a small number of nodes with a very high degree: these are called hubs and denote the presence of a scale-free network [7].

2 Shortest Path Length

In a Twitter network, the distance between a randomly selected pair of mentions/retweets indicates the minimum number of interactions between them. For example, if user @A mentions user @B, who in turn mentions user @C, the distance between mention @A and mention @C is 2 (had user @A quoted user @C directly, the distance would be 1). The same holds for the citation network, in which nodes A, B and C represent writers. A shorter average distance (sum of minimum paths / total possible paths) corresponds to a higher speed of transmissible information. Analyzing the two networks, the average shortest path is 8.46 in the citation network and 3.76 in the Twitter network (Table 1). If at first glance these two values seem rather distant, it is essential to compare them with the sizes of the respective networks: the average distance varies with the number of nodes in the network.
Although the average distance between two citations (8.46) exceeds the value of the "six degrees of separation" [7], the huge number of nodes in the network (27,770) must be considered. In this perspective, the roughly 8 steps separating a randomly chosen pair of nodes are a good index of the small-world effect [8]. Furthermore, the shortest path distribution of this network (Fig. 3) shows an initial peak indicating that more than half of the possible paths (59%) have a geodesic distance below 8 (the remaining 41% of paths therefore have a distance between 9 and 37) [9].
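The characteristic path length reported in Table 1 is the average geodesic distance over reachable ordered pairs; it can be sketched with breadth-first search on an invented toy mention graph (the edges are illustrative, not the real data):

```python
from collections import deque

# Toy directed mention graph: ("A", "B") means "@A mentions @B".
edges = [("A", "B"), ("B", "C"), ("A", "D"), ("D", "C"), ("C", "E")]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, [])

def bfs_distances(src):
    """Geodesic distance from src to every reachable node."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Characteristic path length: average over all ordered reachable pairs.
lengths = [d for s in adj for t, d in bfs_distances(s).items() if t != s]
avg = sum(lengths) / len(lengths)
print(avg)
```

On the real networks this all-pairs BFS is what dominates the analysis time in Table 1, since it runs one traversal per node.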

Fig. 3. High-energy physics theory citation network: shortest path length
Fig. 4. Twitter mentions and retweets network: shortest path length

The average distance obtained in the mentions/retweets network (3.76) is instead considerably lower than the aforementioned theoretical value (Fig. 4). The network is therefore even less separated than a common real network. This feature is typical of social networks and is called the "ultra-small world" [10]. Twitter, even compared to other social networks, is by nature an extremely dynamic dialogue platform, and many of the mentions made over a certain period concern a limited number of themes (and users) popular at that time (trending topics).
Therefore, assuming that user @C is popular, the probability that both user @A and user @B mention @C increases, contributing to a decrease in the average distance.

3 Clustering Coefficients Comparison

The third metric analyzed is the clustering coefficient, which estimates how many of the nodes adjacent to a given node are also connected to each other. This index is useful for analyzing the potential dissemination of information between nodes: the greater the coefficient, the lower the network's efficiency in disseminating information, as it manifests a greater closure of the network itself. In social networks, where arcs represent interactions, the clustering coefficient estimates how closed the group, or community, is with respect to other nodes in the network. Before scanning the data, a considerable gap between the two coefficients was intuitively expected, with the coefficient of textual citations far higher (and therefore far more closed) than that of Twitter. On the contrary, scanning the two datasets reveals only a slight discrepancy of a few hundredths, with the social network's coefficient even slightly higher: 0.157 for citations against 0.174 for Twitter.
However, before drawing definitive conclusions, a further observation based on the different sizes of the networks proved necessary. To this end, the average number of neighboring nodes was analyzed, since it is the first index responsible for determining the clustering coefficient. The values obtained are 84.673 for Twitter against 25.372 for HEP-Th citations. Although the average number of neighbors in the social network is more than three times that of the citations, the clustering coefficients are almost equal; for Twitter this is a very low result, considering the different numerical sizes of the two networks. A direct interpretation of this comparison is precisely the greater fluidity in finding, joining and sharing the flow of information on social networks compared to other types of networks. In the case of Twitter this flow can be triggered by retweeting or by searching via hashtag. Nor should it be overlooked that interaction on Twitter can happen very easily even among non-neighbors on the same "network route": information can be found not only through a tweet of a followed account, but also (and very often) through "threads of discussion" generated by the trending topics of that particular moment.
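The local clustering coefficient used in this comparison (the fraction of a node's neighbor pairs that are themselves linked) can be sketched as follows, on an invented toy undirected graph:

```python
# Local clustering coefficient on a toy undirected graph (edges invented):
# fraction of pairs of a node's neighbors that are themselves connected.
edges = {("A", "B"), ("A", "C"), ("B", "C"), ("A", "D"), ("D", "E")}

adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def clustering(node):
    neigh = list(adj[node])
    k = len(neigh)
    if k < 2:
        return 0.0  # coefficient taken as zero with fewer than two neighbors
    links = sum(
        1
        for i in range(k)
        for j in range(i + 1, k)
        if neigh[j] in adj[neigh[i]]
    )
    return 2 * links / (k * (k - 1))

print(clustering("A"))  # neighbors B, C, D; only the pair (B, C) is linked
```

Averaging this value over all nodes gives the network-level coefficient compared in Table 1 (0.157 vs. 0.174).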
The shape of the two distributions also confirms what has been said. The Twitter graph (Fig. 5) shows a flow of nodes that does not follow a regular line of cohesion, despite the presence of areas characterized by hubs with a greater density of relationships. The graph of the physics citation network (Fig. 6), by contrast, shows an apparent cleanliness in the distribution of relational flows, developed with curvilinear proportionality from left to right (the peak of the number of neighboring nodes lies between 0.1 and 0.01). This index shows how the coefficient decreases as the number of neighbors increases, and therefore there is more openness in the dissemination of information.

Fig. 5. High-energy physics theory citation network: clustering coefficient
Fig. 6. Twitter mentions and retweets network: clustering coefficient

The relational dimension of Twitter, on the other hand, besides being very jagged, does not follow the theoretical logic of the clustering coefficient (more close nodes implying less closure), since most of the aggregated data show that the coefficient increases along with the number of close nodes. It follows that the relational dynamics of retweets and mentions are not mainly based on the principle of "closeness" [11], but privilege other parameters such as the "richness" and "popularity" of the contents of the tweets.

4 Conclusion

In conclusion, it can be inferred that the analysis of the in/out-degree, average shortest path length and clustering coefficient fully confirms Milgram's studies on the "six degrees of separation", from which the small-world effect is extrapolated. According to this theory, each node is connected to only a few other nodes, but it can reach any other node in the network thanks to the presence of hubs.
Directly connected to these phenomena, and readily developable in other environments (always with the same networks), could be an experiment on the information cascade, according to which the behavior of a node in managing the incoming communication flow is influenced by the degree of similarity of neighboring nodes in making choices of this type.
Ultimately, the major difference between the behaviors of the two datasets lies in the regularity, or otherwise, of managing information flows. Where the textual citations correctly match the forecasts of the expected values, those of Twitter presumably suffer the influence of external factors that cannot be directly monitored.

References
1. Leskovec, J., Kleinberg, J., Faloutsos, C.: Graphs over time: densification laws, shrinking
diameters and possible explanations. In: ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining (KDD), pp. 149–151 (2005)
2. Klimkova, E., Senkerik, R., Zelinka, I., Sysala, T.: Visualization of giant connected
component in directed network - preliminary study, Mendel, pp. 412–416 (2011)
3. Zhang, J., Luo, Y.: Degree centrality, betweenness centrality, and closeness centrality in
social network. Adv. Intell. Syst. Res. 132, 300–303 (2017)
4. Wang, Y., Ma, H.-S., Yang, J.-H., Wang, K.-S.: Industry 4.0: a way from mass
customization to mass personalization production. Adv. Manuf. 5(4), 311–320 (2017)
5. Barabasi, A.-L., Jeong, H., Neda, Z., Ravasz, E., Schubert, A., Vicsek, T.: Evolution of the
social network of scientific collaborations. Physica 311, 590 (2002)
6. Newman, M.E.J.: Scientific collaboration networks. I. Network construction and funda-
mental results. Phys. Rev. E 64, 016131 (2001)
7. Barabási, A.: Linked: How Everything is Connected to Everything Else and What It Means
for Business, Science, and Everyday Life. Plume, New York (2003)
8. Marvel, S.A., Martin, T., Doering, C.R., Lusseau, D., Newman, M.E.J.: The small-world
effect is a modern phenomenon (2013)
9. Shamai, G., Kimmel, R.: Geodesic distance descriptors, pp. 3624–3632 (2017)
10. Sampaio, C., Moreira, A., Andrade, R., Herrmann, H.J.: Mandala networks: ultra-small-
world and highly sparse graphs. Sci. Rep. 13, 9082 (2015)
11. Okamoto, K., Chen, W., Li, X.-Y.: Ranking of closeness centrality for large-scale social
networks. In: Preparata, F.P., Wu, X., Yin, J. (eds.) Frontiers in Algorithmics, pp. 186–195.
Springer, Heidelberg (2008)
Application of Variable Step Size Beetle
Antennae Search Optimization Algorithm
in the Study of Spatial Cylindrical Errors

Chen Wang1,2, Yi Wang3, and Kesheng Wang2(&)

1 College of Mechanical Engineering, Hubei University of Automotive Technology, Shiyan, China
[email protected]
2 Department of Mechanical and Industrial Engineering, NTNU, Trondheim, Norway
[email protected]
3 The School of Business, Plymouth University, Plymouth, UK
[email protected]

Abstract. After establishing a mathematical model for spatial cylindrical error evaluation, this article solves the objective function of the minimum-zone cylindrical error with the beetle antennae search (BAS) algorithm. However, the accuracy of the standard BAS algorithm is not high, and it easily falls into local optima; a variable-step beetle antennae search algorithm is therefore designed to improve calculation accuracy. Using actual data collected by a coordinate measuring machine, the results of this algorithm are compared with those of other methods, verifying its feasibility and superiority.

Keywords: Spatial cylindrical error · Variable step · Beetle antennae search algorithm

1 Introduction

In the field of modern manufacturing, the required manufacturing accuracy of parts is increasing, and precision measurement technology is developing constantly. Intelligent precision measurement of parts is an indispensable part of modern manufacturing. Spatial cylindrical error is an important form-and-position error standard for shaft and tube parts, and the accuracy of its evaluation affects the evaluation accuracy of the whole part [1]. The minimum zone method, traditional methods and intelligent optimization algorithms are still the main approaches for evaluating spatial cylindrical error. Intelligent optimization algorithms, such as GA, PSO and DE, are widely used in this field. However, the results of these algorithms depend strongly on the choice of algorithm parameters, and their complexity, accuracy and robustness still need improvement.
He Changyun, Wang Pei et al. [2] used the least squares method to roughly position the measured cylinder, and simplified the evaluation model of cylinder error by a coordinate transformation based on least squares. Finally, the minimum value
© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 646–653, 2020.
https://doi.org/10.1007/978-981-15-2341-0_81
of the model is solved by a sequential quadratic programming algorithm. This method reduces the computational complexity, improves the accuracy and speed of calculation, and avoids the problem that, in much software, the least squares method does not satisfy the minimum zone principle: the algorithm uses least squares only for rough positioning, while the final quadratic programming step conforms to the minimum zone method. Xue Xiaoqiang and Feng Yong [3] proposed an iteratively weighted least squares method that improves the constraints on measurement points. By modifying the weighting coefficients, the least squares solution is continuously driven towards the minimum zone, and the minimum cylindrical error is obtained. This method ultimately depends on the accuracy of the measured data, so it has a certain significance in practical application, but the results will not converge when the measurement points cannot meet the small-error conditions.
Based on the principle of the minimum containment zone, an unconstrained cylindrical error model was established by Che Linxian and Yi Jian [4] and solved with an improved adaptive chaotic differential evolution algorithm; compared with the genetic algorithm and the bee colony algorithm, the experimental results show that this algorithm performs better. The cylindricity error evaluation method proposed by Lu Yuming and Wang Yanchao [5] is a biogeography-based optimization algorithm with a double mechanism. The algorithm combines the local search mechanism of the differential evolution algorithm with the earlier steps of the biogeography-based algorithm, and improves the mutation and migration operators, thus improving the convergence and optimization ability of biogeography-based optimization. Finally, the objective function of the cylindrical error model is solved according to the minimum zone criterion, and the algorithm achieves a better solution. Zhao Yibing, Wen Xiulan et al. [6] used quasi-particle swarm optimization to solve spatial cylindrical error and achieved good evaluation accuracy. Considering the complexity of the genetic algorithm and the immune algorithm, their many parameters and slow convergence, Ning Huifeng and Ma Guanglong [7] used a dichotomy iteration method to realize accurate online evaluation of cylindrical error. Rossi et al. [8] put forward a heuristic sampling strategy that can calculate the center coordinates for the minimum zone method with good evaluation accuracy, but the calculation process is complex and its efficiency is low. Zhao Yibing et al. [9] established a mathematical model of the minimum-zone cylindrical error based on coordinate measuring machine detection, and proposed a minimum-zone cylindrical error method based on quasi-particle swarm optimization. Wen et al. [10] proposed a cylindrical error evaluation method based on Monte Carlo and GUM methods together with a quasi-particle swarm optimization algorithm, and calculated the uncertainty of the measurement results. The above methods have achieved good results in spatial cylindrical error evaluation, but problems such as low accuracy and mediocre efficiency remain. The BAS algorithm, by contrast, has few parameters, fast solving speed and high precision, and has been applied in many practical fields of engineering.
The above research shows that most current evaluation methods for spatial cylindrical error adopt intelligent optimization algorithms to solve the mathematical model of the error. However, there is still room for improvement, since these algorithms have many parameters and their optimization results are strongly sensitive to them. In this paper, a variable step size beetle antennae search algorithm (VSBAS) is designed to further improve the accuracy and speed of solving the mathematical model of spatial cylindrical error.

2 Establishment of the Mathematical Model of Spatial Cylindrical Error

In fact, evaluating the spatial cylindrical error with the minimum zone method means measuring the deviation of a cylinder relative to an ideal cylinder, that is, determining the two best coaxial cylinders containing the measuring points. The measured cylinder has many spatial measurement points, which can be contained by different pairs of coaxial cylinders. According to the minimum zone principle, there is an optimal pair of coaxial cylinders containing all measuring points [16]. The spatial cylindrical error is the radius difference between the two coaxial cylindrical surfaces containing the measured points; here it is expressed as the difference between the maximum and minimum distances between the measured points and the ideal axis. A schematic diagram of the spatial cylindrical error is shown in Fig. 1.

Fig. 1. Schematic diagram of spatial cylindrical error

As shown in Fig. 1, the axis direction of the cylinder is assumed to be (a, b, c), and a plane through the coordinate origin perpendicular to (a, b, c) is constructed. The intersection point between this plane and the axis of the measured cylinder is denoted (x0, y0, z0). The ideal axis of the measured cylinder is then expressed as

(x − x0) / a = (y − y0) / b = (z − z0) / c    (1)

When a group of measuring points Pi(xi, yi, zi) (i = 1, 2, …, k0) of the cylinder to be measured is obtained with a coordinate measuring machine, the distance ri from any measuring point Pi to the ideal axis of the cylinder to be measured is given by the distance formula.
Application of Variable Step Size Beetle Antennae Search Optimization Algorithm 649

ri = sqrt((A² + B² + C²)/(a² + b² + c²))    (2)

Finally, the objective function of spatial cylindrical error can be expressed as follows:

f = min(max(ri) − min(ri))    (3)

According to the minimum zone method, solving the cylindrical error is the problem of optimizing the objective function (3), which is a non-linear function of the variables (x0, y0, z0).
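Under these definitions, the objective function (3) can be evaluated directly: the distance from a measuring point to the candidate axis is a point-to-line distance, computable with a cross product. The following is a minimal Python sketch, not the authors' implementation; it assumes that A, B, C in Eq. (2) are the components of the cross product between the axis direction and the vector from the axis point to the measuring point (the standard point-to-line distance form), and the function and variable names are illustrative.

```python
import numpy as np

def cylindricity(params, points):
    """Objective f = max(r_i) - min(r_i) for the minimum zone method.

    params = (x0, y0, z0, a, b, c): a point on the candidate axis and the
    axis direction; points is an (N, 3) array of measured coordinates.
    """
    p0 = np.asarray(params[:3], dtype=float)   # point on the axis
    d = np.asarray(params[3:], dtype=float)    # axis direction (a, b, c)
    d = d / np.linalg.norm(d)                  # normalize to a unit vector
    v = np.asarray(points, dtype=float) - p0   # vectors from axis point to each P_i
    # Distance from P_i to the axis: |v x d| / |d|, with |d| = 1 here
    r = np.linalg.norm(np.cross(v, d), axis=1)
    return r.max() - r.min()
```

An intelligent optimization algorithm then searches over `params` to minimize this value; for points lying on a perfect cylinder around the true axis, the objective is zero.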

3 Proposed Algorithm

3.1 Algorithm Flow


The BAS algorithm is a new intelligent optimization algorithm. The original BAS algorithm models the beetle as a point with an antenna on each side; the moving length of the point is proportional to the distance between the two antenna tips, and the original BAS algorithm uses a standard fixed step length. When the point moves by the fixed step distance, the directions of the two antennae rotate randomly, which simulates the rotation of the beetle's head and ensures that the algorithm has good search ability.
The BAS algorithm is similar to the PSO algorithm but needs fewer parameters. The beetle antennae can search the solution space well even without gradient information, which greatly reduces the computation of the BAS algorithm and achieves effective optimization. However, because the step length of the standard BAS algorithm is fixed, its search speed and efficiency in both global search and local search are mediocre.
In this paper, the fixed step size of the BAS algorithm is replaced by a variable step size. By varying the step size parameter, a large step length at the beginning of the iteration shortens the time needed to approach the optimal solution, while a small step length at the end of the iteration locates the optimal solution accurately. The solution process of VSBAS is as follows:
1. First, the measured data are substituted into the objective function (3), and the algorithm is initialized. The initialization data of the variable step size beetle antennae algorithm include: the distance dx between the two antennae, the step length step, the variable step parameter Alpha, the dimension D of the problem, the initial solution x, and the total number of iterations n.
2. The initial solution x obtained in step 1 is introduced into the algorithm and the iteration begins. First, the algorithm calculates the coordinates of the two antennae.
The left beetle antenna:

XL = x + dx · rand/2    (4)

The right beetle antenna:

XR = x − dx · rand/2    (5)

Here rand is a random direction vector of the same dimension D as the solution.


3. Calculate the function fitness (odor intensity) at the two antenna positions:

FL = f(XL)    (6)

FR = f(XR)    (7)

4. The next solution is calculated with the variable step size method:

x = x + Alpha · step · rand · (XL − XR),  if FL < FR
x = x − Alpha · step · rand · (XL − XR),  if FL > FR    (8)

5. If the number of iterations has not reached the maximum, return to step 2 and iterate again; when the maximum number of iterations is reached, all calculations stop. The result of the iteration is the spatial cylindrical error value. The algorithm flow chart is shown in Fig. 2.
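The five steps above can be sketched as a short routine. This is a hedged Python sketch, not the authors' MATLAB program: the random antenna direction is taken as a normalized random vector, the update uses the standard BAS sign-based form (playing the same role as the rand · (XL − XR) term in Eq. (8)), and the step length is assumed to decay by the factor Alpha each iteration; names such as `vsbas` and `best_f` are illustrative.

```python
import numpy as np

def vsbas(f, x0, n_iter=300, dx=1.0, step=1.0, alpha=0.95, seed=0):
    """Variable step size beetle antennae search (minimal sketch).

    f: objective to minimize; x0: initial solution; dx: antenna separation;
    step: initial step length; alpha: variable step parameter.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for _ in range(n_iter):
        b = rng.standard_normal(x.size)
        b /= np.linalg.norm(b)          # random unit antenna direction
        xl = x + dx * b / 2             # left antenna, Eq. (4)
        xr = x - dx * b / 2             # right antenna, Eq. (5)
        fl, fr = f(xl), f(xr)           # odor intensities, Eqs. (6)-(7)
        # Move toward the antenna with the better (lower) fitness, cf. Eq. (8)
        x = x - step * b * np.sign(fl - fr)
        fx = f(x)
        if fx < best_f:                 # keep the best solution seen so far
            best_x, best_f = x.copy(), fx
        step *= alpha                   # variable step: shrink each iteration
    return best_x, best_f
```

With `f` set to the cylindricity objective (3), `best_f` after the final iteration is the evaluated spatial cylindrical error.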

3.2 Algorithm Test


The DTLZ suite is commonly used as a source of test functions. DTLZ1–DTLZ4 were selected as test functions to evaluate the performance of the VSBAS algorithm.
At present, many experts have proposed a variety of MOEA performance indicators. They can be divided into two categories: (1) indicators that evaluate the actual convergence to the Pareto optimal solution; (2) indicators that evaluate the distribution of the solutions. Classical quality indicators include the error ratio (ER), hypervolume (HV), generational distance (GD), and inverted generational distance (IGD). In further studies of MOEA performance indicators, GD and IGD have been widely used to test the convergence and diversity of algorithms, and have achieved good results in multi-objective problem (MOP) tests. GD reflects the convergence of the algorithm; IGD reflects both the convergence and the diversity of the solutions at the same time.
1. Generational Distance (GD) indicator
Let P be the set of final non-dominated points obtained in the objective space, and let P* be a set of points uniformly spread over the true Pareto front (PF). GD indicates only the convergence of an algorithm, and a smaller value indicates better quality. GD is computed as:

Fig. 2. Algorithm flow chart: establishment of the objective function and acquisition of measurement point data → initialization of the algorithmic parameters of BAS → calculating the coordinates of the left and right beetle antennae → calculating the required odor intensity (function fitness value) → calculating the next step of the beetle (variable step method) → check whether the iteration limit is reached; if not ("No"), loop back, otherwise ("Yes") the calculation is completed and the spatial cylindrical error value and the solution of the objective function are obtained.

GD(P, P*) = sqrt(Σu∈P d(u, P*)²)/|P|    (13)

2. Inverted Generational Distance (IGD) indicator
Let P be the set of final non-dominated points obtained in the objective space and let P* be a set of points uniformly spread over the true PF. IGD indicates both convergence and diversity, and a smaller value indicates better quality. IGD is computed as:
IGD(P*, P) = (Σv∈P* d(v, P))/|P*|    (14)
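The two indicators above can be implemented in a few lines. This is a sketch under two common assumptions (not stated explicitly in the text): d(·, ·) is the Euclidean distance to the nearest point of the other set, and the distances under the square root in Eq. (13) are squared, as in the usual GD definition.

```python
import numpy as np

def gd(P, P_star):
    """Generational distance: convergence of the obtained set P to the true front P*."""
    P, P_star = np.atleast_2d(P), np.atleast_2d(P_star)
    # d(u, P*): distance from each u in P to its nearest point in P*
    d = np.min(np.linalg.norm(P[:, None, :] - P_star[None, :, :], axis=2), axis=1)
    return np.sqrt(np.sum(d ** 2)) / len(P)        # Eq. (13)

def igd(P, P_star):
    """Inverted generational distance: convergence and diversity of P."""
    P, P_star = np.atleast_2d(P), np.atleast_2d(P_star)
    # d(v, P): distance from each v in P* to its nearest point in P
    d = np.min(np.linalg.norm(P_star[:, None, :] - P[None, :, :], axis=2), axis=1)
    return np.sum(d) / len(P_star)                 # Eq. (14)
```

Note the asymmetry: GD averages over the obtained set P, so it cannot detect gaps in coverage, while IGD averages over the reference set P*, so uncovered regions of the true front increase its value.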

4 Experimental Results and Discussion

The CMM measurement data in this paper come from reference. See Table 1 for the original data. The variable step size BAS algorithm is then verified by experiments. The parameters of VSBAS are as follows: population size NP = 100, variable step parameter Alpha = 0.95. The algorithm is programmed in MATLAB 2018a. The computer is configured with 16 GB memory, a 3.20 GHz dual-core CPU and the Windows 10 Professional operating system.

Table 1. Cylindrical measurement point coordinates


No. X Y Z No. X Y Z
1 11.0943 0.4522 65.2328 11 −7.0459 −9.8731 75.0528
2 5.0940 10.8450 65.0765 12 4.95447 −9.8720 75.2204
3 −6.9063 10.8439 65.0089 13 10.8150 0.5918 85.2304
4 −12.9065 0.4498 65.0879 14 4.8148 10.9846 85.0740
5 −6.9063 −9.9429 65.0540 15 −7.1855 −9.8033 85.0516
6 5.0940 −9.9418 65.2216 16 −13.1858 0.5849 84.8952
7 10.9546 0.5220 75.2316 17 −7.1855 −9.8033 85.0516
8 4.9544 10.9148 75.0752 18 4.8149 −9.8022 85.2171
9 −7.0459 10.9137 75.0770 19 10.6754 0.6616 95.2291
10 −13.0461 0.5196 74.8964 20 4.6752 11.0544 95.0940

Table 2 shows the final results obtained with the secondary annealing teaching–learning-based optimization algorithm (2ATLBO), particle swarm optimization (PSO), and the variable step size beetle antennae search algorithm (VSBAS) designed in this paper, based on the data measured in Table 1. According to Table 2, the VSBAS algorithm has the highest accuracy and better convergence speed than the other algorithms. This shows that the VSBAS algorithm gives better results in solving the spatial cylindrical error evaluation problem.
Figure 3 shows the iteration curves of the three algorithms mentioned above. As shown in Fig. 3, the VSBAS algorithm has better accuracy and convergence speed than the 2ATLBO and PSO algorithms.

Fig. 3. Iterative curve of the algorithms


Application of Variable Step Size Beetle Antennae Search Optimization Algorithm 653

Table 2. Calculation results


Method Iteration Cylindrical error Improvement rate
PSO 300 0.00213 /
2ATLBO 300 0.00192 9.8%
VSBAS 300 0.00175 17.8%

5 Conclusions

In this paper, the BAS algorithm is improved and a variable step BAS algorithm (VSBAS) is designed. The VSBAS algorithm is applied to solve the mathematical model of spatial cylindrical error. The variable step size method helps the BAS algorithm avoid falling into local optima. The VSBAS algorithm is tested on the DTLZ test functions using the GD and IGD indicators. The test results show that the algorithm has good convergence and convergence speed. The VSBAS algorithm is found to be superior to the other algorithms (2ATLBO, PSO) in solution accuracy and convergence speed. At the same time, compared with other algorithms, the VSBAS algorithm has fewer parameters and is more convenient to program. When the data measured by the coordinate measuring machine are fed into the algorithm for the evaluation of spatial cylindrical error, better results can be obtained.

Acknowledgements. The work is supported by the MonitorX project, which is granted by the Research Council of Norway (grant no. 245317).

A Categorization Matrix and Corresponding
Success Factors for Involving External
Designers in Contract Product Development

Aleksander Wermers Nilsen1 and Erlend Alfnes2

1 Inventas AS, Innherredsveien 7, 7014 Trondheim, Norway
[email protected]
2 NTNU, Trondheim, Norway
[email protected]

Abstract. This article addresses the involvement of external designers in contract development projects. A matrix is proposed for how buyers can structure the role and involvement of design suppliers in a joint development team. The degree of involvement depends on the buyer's need for capacity and competence from external designers, and on the development risk in the project. Four main roles are proposed for the design supplier: purchased design capacity; module design specialist; design team member; systems architect. A set of success factors for team involvement and information sharing is proposed for each role.

Keywords: Supplier involvement · Contract development · Product development · Success factors

1 Introduction

Contract product development is common in engineer-to-order projects. The main difference from conventional market-driven product development is that the development process is based on a contract with a buyer [1]. The sale takes place first, and the main part of the development is done after the buyer has committed to the purchase. The engineer-to-order business is characterized by a high degree of volatility and uncertainty. The type of products and the amount of design workload can change dramatically from one year to another. Product development times are often short, and end-users require a solution in a hurry. Relying only on internal design capacity and competences can be risky, so many companies buy external design services to ensure end-user satisfaction. The services can range from the design of a minor component to responsibility for the entire system design. Not only does the need for design capacity vary, the buyer's needs may also vary regarding the level of competence, as some projects require specialists to be involved. For these reasons, design suppliers are prevalent in contract product development.
This paper addresses the buyer's involvement of design suppliers. The objective is to investigate the role of the design supplier in contract development and the corresponding success factors for involving design suppliers.

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 654–661, 2020.
https://doi.org/10.1007/978-981-15-2341-0_82

The structure of the paper is as follows: first, a brief theoretical background on supplier involvement and established success factors in the field is given. Then the results of a survey are presented; the survey was designed to study Norwegian companies and learn how they involve design suppliers in their product development. The paper concludes with a discussion that provides a matrix for categorizing the role of a design supplier, together with success factors for involving design suppliers in contract product development corresponding to each of the roles in the matrix.

2 Supplier Involvement

Historically, developing new products was mainly considered an in-house activity. Involving suppliers, and the supporting academic research into this field, has been studied since the 1980s, see [2]. In the 1980s the leading research, driven by the automotive industry, focused on the performance gap between US and Japanese manufacturers. This was of interest at the time because US car companies usually did not involve suppliers in development, while Japanese car companies increasingly started to include suppliers. The findings, according to [3], showed that the Japanese companies that involved suppliers in product development had a reduced time to market, resulting in more economic production, technologically superior products and a sustained competitive advantage.

2.1 Relevant Supplier Categories for Contract Product Development


According to [4], when a supplier is involved early in the development process it is critical that the supplier not only has the technical abilities needed but also the right culture. Not all supplier involvement is equal: [5] identifies four scenarios with different communication setups, or categories. The model in [5] considers two axes: the y axis is the degree of supplier responsibility, from low to high, and the x axis is the development risk, from low to high. This article considers the categories that have a high degree of responsibility. The categories with a low degree of supplier responsibility are characterized by little to no supplier involvement; thus, the focus is on the buyer's needs and less on the buyer–supplier relationship. The degree of development risk is divided into two categories: arm's length development and strategic development. Previous research by [5] finds that projects in the arm's length category require less information sharing than those in the strategic development category. In an arm's length development project, the supplier requires little direct communication; usually the only communication centers on the status of the project and the time to completion, and it is usually initiated by the supplier. The strategic development group requires close collaboration between supplier and buyer, resulting in technical information sharing in face-to-face meetings and working groups with many individuals across several fields in the two companies.
Research such as [6] argues that it is difficult to gain a positive effect from involving suppliers, as increased communication and coordination in projects lead to longer development times and increased costs. Furthermore, [6] postulates that two elements must be in place for sufficient supplier involvement: contingency factors on the organizational level, and management of supplier involvement on the project level. A categorization is proposed in [6] to reduce excess coordination by defining two types of projects: know-how projects and capacity projects. Know-how projects are those where the supplier has the in-depth knowledge and technical understanding needed to carry out the project. Capacity projects are those where the buyer needs more resources to complete the project. The capacity-project buyer has the goal of overcoming the shortages of their own organization; often the supplier takes on less important responsibilities so that the buyer can focus on the critical elements of the project. The know-how-project buyer realizes that they do not possess the knowledge required to perform a task, and the supplier is therefore given responsibility for that component or part. The component or part may or may not be critical as far as the overall project is concerned.

2.2 Success Factors


To successfully involve suppliers in the development process, several reports have assessed which factors need to be in place for such an endeavor to be deemed successful. According to [2], three main factors influence the success of supplier involvement: supplier selection; supplier relationship development and adaptation; and internal customer capabilities. Supplier selection concerns which suppliers to use in the development process and which to involve early. The second factor is of interest for the purpose of this article, namely supplier relationship development and adaptation, which [2] documents to be frequently overlooked by managers. Supplier relationship development and adaptation is achieved through mutual trust, commitment, and a mutual understanding of performance targets. One way to build supplier relationships is by including suppliers' employees on the development team, as discussed by [4]. The third factor found in [2] concerns the buyer's internal capabilities, such as the ability to manage internal cross-discipline teams.
The research in [7] finds that the three factors from [2] are among the factors that promote successful supplier involvement. The article goes into greater depth, finding that supplier membership on the buyer's development team is the greatest success factor. They found that open and direct intra-company communication most often resulted in a rapid fix of most problems. Co-location was found to be more relevant in technologically complex projects, as well as in long-term development projects. Furthermore, factors such as formal trust, customer requirements sharing, technology information sharing, and shared physical assets also contribute to successful supplier integration. The article [7] groups the factors that lead to successful supplier integration into two groups: relationship structuring factors and asset allocation factors. They find that the asset allocation factors directly influence new product development, while the relationship structuring factors are facilitating factors, by which they mean that the relationship structuring factors facilitate the sharing of assets.
A Categorization Matrix and Corresponding Success Factors 657

3 Methodology

A list of success factors for supplier team involvement and information sharing was derived from the literature. The success factors found in research pertain to off-line product development [2, 4] and [7]. The impact of these factors on the success of design supplier involvement in contract development projects was investigated through a survey.
Data was collected via a survey from three groups. Group A consists of companies which recently had one or more projects that included design supplier integration. Group B consists of companies that do not integrate design suppliers. Group C consists of consultants, project managers and professionals that are not employed by the design supplier or the buyer; these professionals coordinate development projects that involve design suppliers. The companies selected are involved with the author's company, so some prior knowledge about their business is known. This article focuses on technology companies based in Norway that do product development. The companies in focus have products with low-volume, single-batch production, i.e. they only make one or a few batches of a product before they change the design. The goal of collecting this data is to investigate whether it is possible to create a model for categorizing the role of the design supplier in contract product development, along with corresponding success factors.

4 A Categorization Approach/Matrix

In the literature section, we described two important design supplier categorizations. As mentioned earlier, while [6] focuses on the types of projects, [5] looks at two overarching supplier dimensions: capacity and knowledge. The surveyed companies indicated that communication is an important factor when choosing to involve a design supplier in development. Companies that have not involved design suppliers (group B) indicated that communication would be critical if they did decide to involve design suppliers in the future. They also indicated that limitations on their own internal capacity would be critical if they were to involve design suppliers. Companies currently involving design suppliers (group A) consider the experience of the design supplier as well as cost to be important when choosing which design supplier to use.
Furthermore, the companies currently not involving design suppliers (group B) indicated that they would, on average, consider capacity to be a more important motive for design supplier integration. Conversely, the companies involving design suppliers (group A) considered performance quality, reduced cost and the experience of the design supplier to be their motivation. Thus, a parallel to [6] can be drawn: two project types for design supplier involvement, know-how projects and capacity projects, which relate directly to the buyer's needs. The buyer's needs can range from requiring a highly specialized and skilled competence to no real qualification, the latter implying that the buyer just needs more manpower. This article uses this distinction between project types going forward. The distinction between degrees of development risk involved, as found in [7], is between arm's length and strategic involvement. The development risk concerns the complexity of the project, and thereby the degree of involvement. High risk implies a long development time and a high degree of supplier involvement, so the buyer's company needs to choose its collaboration partners strategically. Low-risk involvement puts less critical decisions on the supplier, utilizing them more as support.
In this study, we combine the two categorization approaches found in [5] and [6] for supplier involvement and project type respectively. Using the survey data, this article proposes combining the type of project and the degree of risk into a matrix, as shown in Fig. 1. The figure shows a categorization of the roles of the design suppliers in contract development, split into four groups: firstly, a split of the buyer's needs between capacity and know-how projects; secondly, a split between the degrees of development risk (low and high), which correspond to arm's-length involvement and strategic involvement of the supplier.

                     Supplier's involvement
                     Arm's length                Strategic
Buyer's   Capacity   Purchased design capacity   Design team member
needs     Know-How   Module design specialist    Systems architect

Fig. 1. Design supplier roles based on buyer's needs and supplier involvement.

The first role, "purchased design capacity", pertains to projects where the design suppliers have responsibility for less critical parts or components. The project is firmly inside the design supplier's core working area and they are considered competent in their field. The buyer typically asks for a solution to their problem, and the design supplier delivers the part or component with minimal interaction after the first inquiry. While the part or component in question is designed by the supplier, it may be a standard solution; from the buyer's side, however, this is a component developed by the supplier.
The second role, "module design specialist", is somewhat similar in that the supplier–buyer interaction is limited. However, the "module design specialist" role requires more information sharing, as the design supplier provides a custom product that meets the specifications of the buyer. This may be a customized made-to-order development, where the design supplier customizes one of their standard products to meet the buyer's needs. The design supplier is considered an expert in the field and has been selected by the buyer for this reason.
The "design team member" role requires a significant amount of information sharing, as the design supplier is involved in the development of a complex system. The design supplier's responsibility does not lie with the critical components, but the complex nature of the development requires coordination of information such as product specifications, interfaces and other non-trivial information. The design supplier is considered part of the development team but does not take the lead role in specifying the entire system. The buyer requires the capacity of the design supplier in order to complete the project.
The last role is "systems architect". Here the design supplier has special knowledge of the critical subsystems of the project and is included in the development team. Information sharing is critical to the success of the product. Product specification, interfaces, production methods and most of the key decisions concerning the development are made in coordination with the buyer. Often the design supplier and buyer will co-locate to maximize coordination and allow for informal information sharing. The design supplier is considered an expert in the field and will design critical parts or components and make technical decisions concerning the development.
From the survey, the results for companies not currently involving design suppliers (group B) show that they would consider involving design suppliers in the "purchased design capacity" or "design team member" roles. The respondents generally want to supplement their current activities by outsourcing some of the development of less complex tasks; this is done to free up internal capacity or to compensate for a general lack of project engineers. Costs and quality are important. While they could have involved design suppliers, they have chosen to do the development in-house.
The survey results for companies involving design suppliers (group A) correlate with know-how projects. They want to reduce cost and increase quality by having a specialist perform the development. The reasoning may be that a specialist can perform the task in less time than it would take to develop the expertise in-house. It is not evident how the respondents are distributed between the "module design specialist" and "systems architect" roles.
The survey data from the professionals (group C) show no clear signs of belonging to either the capacity or the know-how projects, which is natural as they have worked in a wide variety of projects, leading to no clearly defined position. However, this article assumes that they were involved in the roles with a high degree of development risk, "design team member" or "systems architect", because the professionals are hired to lead projects with a high degree of technical complexity.

5 Success Factors

To determine the success factors that correlate with each role, this article leans on the research done by [2, 4] and [7]. In [7], success factors are grouped into two: relationship structuring factors and asset allocation factors. The asset allocation factors are directly linked to the successful involvement of suppliers on the development team, while the relationship structuring factors improve the effect of the asset allocation factors. Using the success factors found in the literature and the survey data, this article suggests a structuring of the factors so that they are associated with the roles presented in Fig. 1. The success factors are shown in Fig. 2 for each categorization of design supplier roles.
Factors are organized so that the arm's-length roles ("purchased design capacity" and "module design specialist") have less long-term focus, and the relationship structuring factors are more prevalent in the strategic involvement roles ("design team member" and "systems architect"). Each

Fig. 2. Success factors for the design supplier roles in contracted development (cumulative from the bottom level up):
- Purchased design capacity: specify functions and performance; coordinating development activities; formulated communication and sharing guidelines.
- Module design specialist: joint agreement on module performance.
- Design team member: common and linked information systems; supplier is trusted partner on development team; joint agreement on module function and performance; technology sharing; buyer confidence in suppliers' capabilities.
- Systems architect: co-location; joint agreement on system functions and performance; buyer and supplier management commitment; shared end user requirements.

level in Fig. 2 includes the factors of the levels below it, so the systems architect level includes factors from all four groups. For all four roles, the success factors "specify functions and performance", "coordinating development activities with suppliers" and "formulate communication and information sharing guidelines" are included. The last factor is of special interest, as establishing and formulating information sharing guidelines is directly connected to the quality of the relationship between buyer and supplier. These guidelines should be introduced at the start of a project or collaboration. This ensures that the buyer and supplier have agreed on the reporting methods, and on how information is shared within the development team. Note that in Fig. 2 the factors "joint agreement on module/system functions and/or performance" replace each other in the different supplier roles.
The goal of the model (Figs. 1 and 2) is to increase trust, commitment, information sharing and cooperation so that the design supplier can maximize development performance. The insight into the role of the design supplier, based on the buyer's needs (project type) and the risk the design supplier takes on, gives decision makers the ability to assess their allocation of resources. For example, if the project is a capacity project, the design supplier need not allocate their own internal expert. The categorization can also limit ambiguity regarding who should make the critical decisions. The two axes represent the two sides of the relationship: the project type describes the buyer's needs, while the supplier's development risk reflects the willingness to take an active role in the development. Pairing the role of the supplier with the corresponding success factors also allows the buyer and supplier to effectively set the parameters of the project, for example co-locating only if the role of the supplier indicates that it is prudent.
A Categorization Matrix and Corresponding Success Factors 661

6 Conclusion

This article proposes a new model for considering the role of a design supplier in a development team. The degree of development risk and the buyer's needs identify the role of the design supplier. This article applies the theory of supplier involvement in product development to the field of contract product development.
By using the identified roles, a buyer can facilitate the type of relationship needed for an effective collaboration with a design supplier. A reflected approach to why a certain design supplier is selected, and what kind of role the design supplier has in the project, can help define which success factors need to be in place in order to successfully involve the design supplier in contract product development. The model presented helps decision-makers facilitate effective cooperation between a buyer and a supplier, thereby increasing the likelihood of a successfully completed project.

Acknowledgements. We would like to acknowledge the Norwegian Research Council and Inventas AS for their support.

Informed consent was obtained from all individual participants included in the study.

References
1. Alderman, N., Thwaites, A., Maffin, D.: Project-level influences on the management and
organisation of product development in engineering. Int. J. Innov. Manage. 05(04), 517–542
(2001)
2. Johnsen, T.: Supplier involvement in new product development and innovation: taking stock
and looking to the future. J. Purchasing Supply Manage. 15(3), 187–197 (2009)
3. Clark, K.B.: Project scope and project performance: the effect of parts strategy and supplier
involvement on product development. Manage. Sci. 35(10), 1247–1263 (1989)
4. Ragatz, G.L., Handfield, R.B., Petersen, K.J.: Benefits associated with supplier integration
into new product development under conditions of technology uncertainty. J. Bus. Res. 55(5),
389–400 (2002)
5. Wynstra, F., Ten Pierick, E.: Managing supplier involvement in new product development: a
portfolio approach. Eur. J. Purchasing Supply Manage. 6(1), 49–57 (2000)
6. Wagner, S.M., Hoegl, M.: Involving suppliers in product development: Insights from R&D
directors and project managers. Ind. Mark. Manage. 35(8), 936–943 (2006)
7. Ragatz, G.L., Handfield, R.B., Scannell, T.V.: Success factors for integrating suppliers into
new product development. J. Product Innov. Manage. 14(3), 190–202 (1997)
Engineering Changes in the Engineer-to-Order
Industry: Challenges of Implementation

Luis F. Hinojos A.(&), Natalia Iakymenko, and Erlend Alfnes

Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, Trondheim, Norway
[email protected]

Abstract. Market success in the Engineer-to-Order (ETO) industry implies, among other things, competitive prices, on-time delivery and high customization. Engineering Changes (EC) have a large impact on delivery time, cost, and allocated resources, but they are also desirable because they enhance the product. ETO companies exhibit characteristics that differentiate their products and processes from those in mass production, and they experience frequent changes to product specifications. Hence, ETO requires a different, and more challenging, approach to Engineering Change Management (ECM). The purpose of ECM is to implement modifications to products in a controlled manner. Efficient EC implementation is necessary to guarantee customer satisfaction and to remain successful. The objective of this research is to analyze the factors that influence the implementation of EC in the ETO industry.

Keywords: Engineer-to-order · Engineering changes · Engineering change management

1 Introduction

The ETO production environment has a range of characteristics that differentiate it from other types of environments. ETO products are typically complex with deep product structures, are highly customized, and are often one-of-a-kind. Production is done in low volumes with no stock of finalized goods, and product design does not commence until an order is placed. Each customer order requires engineering to create or adapt a product [1]. The high level of customization makes the repetitiveness of the production processes low [2]. The ETO production environment is characterized by a complicated information flow. Moreover, activities in ETO (e.g. engineering, production, and procurement) in some cases overlap during the project rather than occurring consecutively as in mass production [3].
ETO manufacturing companies experience frequent changes to product specifica-
tions [4]. The implementation of EC generates time delays, increases project costs and
uses up a considerable amount of engineering capacity and other resources. Poorly
managed EC lead to affected product customization and market opportunities, obsolete
inventories, materials shortages, and decreased quality [5–8]. Despite the negative
effects, EC are desirable because they improve the product [9]. They enable customer

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 662–670, 2020.
https://doi.org/10.1007/978-981-15-2341-0_83

satisfaction, especially in ETO. Efficient EC implementation can also achieve higher end-product quality [10]. The purpose of ECM is to implement the modifications to products in a controlled manner [6], and efficient ECM can reduce the negative effects [10].
There is a need to study ECM practices in the ETO context. Manufacturers in the ETO environment cannot implement EC in the same way as those in mass production. The customers have direct contact with manufacturers and demand products that satisfy unique needs. EC are an approach to implement the input coming from the customer along the product life cycle [10]. To remain competitive in the market, these companies need to be flexible and able to incorporate customer requirements into their products, at an attractive price and delivery time [11]. The implementation of EC cannot be postponed, and the modifications have to be introduced when requested, since the one-off product made is linked to a customer order [12]. In addition, most of the available ECM research does not differentiate between production environments. Even though recent research papers study the intersection of ECM and ETO, Iakymenko et al. [12] reveal the need for further research on ECM from the ETO perspective.
The objective of this research is to explore what influences the implementation of EC in the ETO production environment. The structure of the paper is as follows: a brief theoretical background for ECM is given, after which the factors influencing EC implementation are listed. A Norwegian company that operates in the maritime industry and provides customized propulsion systems for advanced vessels is presented as a case study. The factors are then analyzed in the case company by studying six EC. The paper continues with the results of the focus group. Finally, a discussion provides an understanding of the areas of opportunity to improve EC implementation.

2 Methodology

This research has been performed as a single in-depth case study in an ETO manufacturing company, investigating the practices for EC implementation in the ETO context. Within the case, six EC were studied as multiple embedded units of research. Efficient EC implementation is affected by factors that contribute to negative effects; these factors are any circumstance, fact or influence that contributes to ECM performance. A range of factors that potentially influence EC implementation was identified through a literature study. The data collection methods included semi-structured interviews with four project managers, field observations and documentation review. The final step in the research process was a focus group performed to validate the list of factors. The participants all hold leadership positions in engineering and project management, with between six and over 20 years of relevant work experience. The participants were asked to rate the influence each factor has on the negative effects of EC implementation, on a scale from one to five (strongly disagree to strongly agree).
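The scoring step described above reduces to averaging Likert-scale ratings per factor. A minimal sketch follows; the rating lists are hypothetical placeholders, not the study's raw data (the study's resulting averages appear later in Table 2):

```python
# Sketch of the focus-group rating step: each participant rates every factor
# on a 1-5 scale (strongly disagree to strongly agree) and the ratings are
# averaged per factor. All rating values here are hypothetical.
from statistics import mean

ratings = {
    "EC propagation":     [5, 5, 4, 5, 5],
    "EC timing":          [4, 5, 4, 4, 5],
    "Product complexity": [5, 4, 4, 5, 4],
}

# Average score per factor, rounded to one decimal.
scores = {factor: round(mean(r), 1) for factor, r in ratings.items()}

# Rank factors from most to least influential.
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```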

3 Engineering Change Management

EC refers to modifications to the released structure (fits, forms and dimensions, surfaces, materials, etc.), behavior (stability, strength, corrosion, etc.), function (speed, performance, efficiency, etc.), or the relations between functions and behavior (design principles), or behavior and structure (physical laws) of a technical artefact [9]. EC are categorized as either emergent or initiated [13]. From a simplified point of view, emergent EC are requested to remove errors from a product, while initiated EC enhance it in some way [6]. Furthermore, EC can be distinguished as early, mid-production and late EC in the product development process [14]. This distinction is relevant because the degree of negative impact varies according to the point in the project at which the EC is requested. As established, EC lead to negative effects such as increased delivery time, reduced profit margins, production schedule disturbances and increased resource allocation [1, 6].
ECM refers to the organization and control of the process of making alterations to products [6]. ECM involves planning, controlling, monitoring and recording the EC within many departments of a manufacturing company, and systematic means of communication are required. Several researchers have studied the process for ECM [5, 6, 14, 15]. The following six steps are referred to as the generic ECM process in this paper:
– Identify and request engineering change
– Identify possible solutions to change request
– Perform an impact evaluation of possible solutions
– Select and approve a solution
– Release and implement engineering change
– Perform post-implementation review
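The six steps above form an ordered workflow, which can be sketched as follows. The step names mirror the list; the class itself is illustrative, not an implementation taken from the paper:

```python
# A minimal sketch of the generic ECM process as an ordered workflow.
# Each engineering change request moves through the six stages in order.
from enum import IntEnum

class ECMStep(IntEnum):
    IDENTIFY_AND_REQUEST = 1        # identify and request engineering change
    IDENTIFY_SOLUTIONS = 2          # identify possible solutions
    EVALUATE_IMPACT = 3             # impact evaluation of possible solutions
    SELECT_AND_APPROVE = 4          # select and approve a solution
    RELEASE_AND_IMPLEMENT = 5       # release and implement the change
    POST_IMPLEMENTATION_REVIEW = 6  # post-implementation review

def advance(step):
    """Move a change request to the next ECM stage; None when the process is done."""
    return ECMStep(step + 1) if step < ECMStep.POST_IMPLEMENTATION_REVIEW else None
```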
The following factors are identified to influence EC implementation:
1. Product complexity. Refers to the number of components, the number of levels in
the product structure and the number of design interdependencies as defined in the
Bill of Materials [6, 7, 15, 16].
2. Product customization. Refers to the level to which a product accommodates customer-specific and individual requirements [17].
3. Product innovation. Refers to the introduction of new technology in the product
[14, 16].
4. EC timing. Refers to the project stage at which the EC is requested [14, 16, 18].
5. EC propagation. Refers to the phenomenon by which one single modification to a
component initiates a series of other changes to parts and systems the component
interacts with [13, 16].
6. Intra-organizational integration. This refers to adequate information sharing and
interaction between internal disciplines involved in EC implementation [8, 15, 18].
7. Cross-organizational integration. This refers to adequate information sharing and
interaction across organizations involved in EC implementation [1, 10].
8. Established ECM process. Refers to whether the company has adopted and follows
the activities involved in the ECM generic process [5, 6, 14, 15].

4 Results

The case company provides customized propulsion and positioning systems for advanced vessels. First, it was found that the company does not have dedicated tools for ECM. Although the ERP system and the IBM planning system aid the ECM process, they are only used in their respective departments. The business systems do not have dedicated ECM capabilities and provide neither a streamlined workflow nor decision-making support for EC implementation. Second, project managers are responsible for the ECM process. Even though they utilize aids, such as a sales configurator and the ERP system, the EC impact evaluation was found to be very dependent on the experience of the project manager. Third, EC approval from the customer is done through EC forms exchanged by email. Lastly, it is difficult to assess the impact evaluation accuracy and implementation performance, since the company does not perform a post-implementation review of EC.
The study selected six EC as multiple units of research. The primary selection criterion was disruptive EC with significant negative effects. In addition, the six EC differed in timing (i.e. early, mid-production, and late) and origin (i.e. internal or external). They were selected from different projects and customers to obtain a representative sample. The projects varied in terms of duration and number of engineering hours. In every case, the amount of equipment provided was diverse, and so was the degree of product customization. All six EC resulted in delivery delays, increased costs or other unwanted consequences. Table 1 summarizes the characteristics of the EC mapped during the semi-structured interviews.

Table 1. Summary of engineering changes

| EC # | Engineering change description | Reason for change | Project stage | Initiated by | Project duration | Engineering hours |
|------|--------------------------------|-------------------|---------------|--------------|------------------|-------------------|
| EC 1 | Modification of gravity-based tanks to pressurized tanks for the tunnel thrusters | Error correction | Terminated | Customer, ship-owner | 4 years (terminated) | 500 |
| EC 2 | Introduction of a big two-piece shim ring to replace multiple shim rings in the bolt connections | Improve quality and reliability | Engineering | Customer, shipyard | 3 years (terminated) | 500 |
| EC 3 | Change of position for stiffeners, change of tunnel shape and paint specification | Error correction | Production | Customer, shipyard | 2 years (estimated) | 682 |
| EC 4 | Change the shape of the tunnel for the thruster | Error correction | Engineering | Customer, shipyard | 8 months (estimated) | 387 |
| EC 5 | Change to a multi-operational control panel | Technological evolution | Tendering | Internal, sales dep. | 15 months (estimated) | 265 |
| EC 6 | Change to an electrical motor with a higher IP grade | Error correction | Delivery | Customer, shipyard | 6 months (estimated) | 84 |

Next, we describe the EC in which each factor was identified as influencing EC implementation.
1. Product Complexity.
– EC2: The manufacturing complexity of the ring was underestimated and
increased the effort and time required in its manufacturing processes.
2. Product innovation.
– EC2: The adoption of the new solution led to unforeseen complications in the
project and subsequent projects.
– EC5: Since it was the first time this solution was offered, the cost and the hours needed for development were uncertain.
3. Product Customization.
– EC1, EC2, EC3: The high number of engineering hours indicates a high level of product customization.
– EC5: The solution required the development of customized software in addition
to the customized propulsion equipment. There was no access to the number of
engineering hours used in customizing the control panel. Early in the project, the
specifications were unknown for the engines and the power management system.
4. EC timing.
– EC1: The timing of the change had the greatest influence on the negative effects
since the ship was already built, and the thrusters and the tanks were already
installed. Hence, the development of pressurized tanks consisted of a more
complex solution and had to be implemented by the ship-owner on site.
– EC4: In this case a positive circumstance, the change was identified early before
the project was issued for production and avoided large increments in cost.
– EC6: The mistake was identified once the equipment was produced and ready
for delivery. This increased the negative effects considerably.
5. EC propagation.
– EC1: The EC led to other changes in the ship, the tanks required a pressurized
air source which had to be made available where the tanks were placed.
– EC2, EC6: The implementation of the ring caused additional changes in the
motor frame. Also, technical documentation became outdated.
6. Intra and cross-organizational integration.
– EC3: The lack of efficient information flow internally and externally led to
misunderstandings.
– EC4: Poor communication with the customer led to mistakes in the specifications.

7. Established ECM process.
– EC3 and EC4: The lack of formal processes for ECM made it impossible to perform a post-implementation control to verify whether the costs were covered by the charge made to the customer.
8. Different objectives between Sales and Project Management.
– EC6 led to identifying a factor that had not been discussed earlier. Sales and project management have different objectives in terms of good performance, and each department had its own objectives to fulfill. Sales wanted to retain customer satisfaction by offering the lowest price possible, ignoring internal costs. On the other hand, project management wanted to retain the profit margin and stick to ECM procedures.
The most recurrent and relevant factors influencing EC implementation in the ETO context were identified through the preceding factor analysis and the focus group. Table 2 presents the average scores of the factors and recapitulates the EC in which they were present. In general, the highest-rated factors were related to increased costs and delayed delivery.
EC propagation received the top rating and EC timing the second-highest. Both EC-related factors were considered extremely influential according to the scores, and they were the most frequently encountered in the case study. The respondents agreed that late EC (post equipment delivery) are the hardest to implement because they must normally be implemented on-site (at the shipyard). In addition, it was identified that late EC lead to resources being reallocated from other projects.
The factor Different objectives between sales and project management was acknowledged as both highly influential and challenging to address. The challenge originates from cultural aspects in the industry, where the customer has high bargaining power when it comes to EC implementation.
Cross-organizational integration and intra-organizational integration were affirmed by most respondents to be notably influential in EC implementation. The opportunity for improvement regarding internal collaboration was highlighted during the discussion; manual methods of communication were associated with increased time spent on ECM.
Product complexity was linked to change propagation. Product innovation drew contradictory reactions: some respondents saw it as a cause of further changes, while others noted that it had no influence on negative effects and only a positive one (i.e. benefits for later projects).

Table 2. Most relevant factors

| Most relevant factors | Present in EC | Score |
|-----------------------|---------------|-------|
| EC propagation | EC1, EC2, EC6 | 4.8 |
| Different objectives between sales and project management | EC6 | 4.8 |
| EC timing | EC1, EC4, EC6 | 4.4 |
| Product complexity | EC2 | 4.4 |
| Intra-organizational integration | EC3, EC4 | 4.2 |
| Cross-organizational integration | EC3, EC4 | 4.2 |
| Product customization | EC1, EC2, EC3, EC5 | 4 |
| Established ECM process | EC3, EC4 | 3.8 |
| Product innovation | EC2 | 3 |

5 Discussion

The case study allowed the identification of areas of opportunity to improve EC implementation. The factors EC propagation and product complexity show the importance of predicting change propagation to improve the evaluation of the scale and cost of an EC. The uncertainty arising from change propagation is difficult to assess systematically. Change propagation identification helps to identify the consequences of an EC as early as possible by identifying dependencies in a system.
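One way to support such dependency-based identification is to traverse a directed graph of design dependencies from the initially changed component. A hedged sketch follows; the component graph is a made-up example, not the case company's product structure:

```python
# Sketch of change propagation identification: edges mean "a change to X
# may force a change to Y". A breadth-first traversal from the changed
# component lists everything the change may propagate to.
# The dependency graph below is hypothetical.
from collections import deque

dependencies = {
    "tunnel_thruster": ["gravity_tank", "motor_frame"],
    "gravity_tank": ["air_supply"],
    "motor_frame": ["technical_docs"],
    "air_supply": [],
    "technical_docs": [],
}

def propagation_scope(graph, changed):
    """Return all components potentially affected by a change to `changed`."""
    affected, queue = set(), deque([changed])
    while queue:
        part = queue.popleft()
        for dep in graph.get(part, []):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected
```

Running `propagation_scope(dependencies, "tunnel_thruster")` surfaces the indirect consequences (air supply, documentation) along with the direct ones, which is the early-visibility benefit described above.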
The factors EC timing and established ECM process show the importance of having streamlined ECM procedures in place. Late EC are the most damaging to project planning and require quick implementation. Moreover, in the case company the impact evaluation depended on the experience of the project managers. The lack of dedicated impact assessment tools leaves project managers prone to calculating inaccurate implementation times and costs.
The factors cross-organizational and intra-organizational integration show the importance of encouraging collaboration and enhancing the information flow among actors dispersed across different physical locations, including customers. Collaboration can be challenging in ETO manufacturing companies due to the number and location of the actors involved. In the case company, collaboration between departments is in all cases done manually through meetings, email exchange, and phone calls. The process is not aided by automatic notifications or prompts to facilitate the approval workflow.
The factors product innovation and product customization show the importance of early identification of customer requirements. Product innovations are likely to increase the risk associated with their adoption, resulting in EC propagating through the product structure. Adequate requirement identification, however, can potentially avoid EC: it can prevent mistakes in product development by correctly translating customer requirements into product specifications, and it can reduce the risk of further changes by ensuring the product being built is the product needed by the market.
Tools and practices to support ECM were also studied as part of this research. A classification is proposed according to the area of EC implementation they aim to improve; however, it is outside the scope of this paper.

6 Conclusion

This research provides an understanding of the production environment in which EC occur and identifies areas of improvement for EC implementation in industry. The basic assumption of the factor analysis is that there are underlying causes that can explain the negative effects caused by EC. The identified factors provide managers with a set of potentially important considerations that should help them mitigate the negative effects generated by disruptive EC. This study has limitations: environmental, organizational or technological factors beyond those considered here may also affect the implementation of EC, and the research was conducted in the context of a single case company. Further case studies should be done to deepen the knowledge and the generalizability of the findings. It would be interesting to repeat the factor analysis in a manufacturing company at the end of the value chain. Also, companies that experience a larger number of EC requests per project should be considered.
Informed consent was obtained from all individual participants included in the study.

Acknowledgements. This work was supported by The Research Council of Norway.

References
1. Mello, M.H.: Coordinating an engineer-to-order supply chain: a study of shipbuilding
projects. Norwegian University of Science and Technology, Faculty of Science and
Technology, Department of Production and Quality Engineering, Trondheim (2015)
2. Adrodegari, F., et al.: Engineer-to-order (ETO) production planning and control: an
empirical framework for machinery-building companies. J. Prod. Plann. Control 26, 910–
932 (2015)
3. Semini, M., et al.: Strategies for customized shipbuilding with different customer order
decoupling points. J. Eng. Marit. Environ. 228(4), 362–372 (2014)
4. Stavrulaki, E., Davis, M.: Aligning products with supply chain processes and strategy. Int.
J. Logistics Manage. 21(1), 127–151 (2010)
5. Terwiesch, C., Loch, C.H.: Managing the process of engineering change orders: the case of the climate control system in automobile development. J. Prod. Innov. Manage. 16(2), 160–172 (1999)
6. Jarratt, T., Clarkson, J., Eckert, C.: Engineering change. In: Clarkson, J., Eckert, C. (eds.) Design Process Improvement, pp. 262–285. Springer, London (2005)
7. Wänström, C., Jonsson, P.: The impact of engineering changes on materials planning.
J. Manuf. Technol. Manage. 17(5), 561–584 (2006)
8. Lin, Y., Zhou, L.: The impacts of product design changes on supply chain risk: a case study.
Int. J. Phys. Distrib. Logistics Manage. 41(2), 162–186 (2011)
9. Hamraz, B., Caldwell, N.H.M., Clarkson, P.J.: A holistic categorization framework for
literature on engineering change management. J. Syst. Eng. 16(4), 473–505 (2013)

10. Wasmer, A., Staub, G., Vroom, R.W.: An industry approach to shared, cross-organisational
engineering change handling-the road towards standards for product data processing.
J. Comput. Aided Des. 43(5), 533–545 (2011)
11. Zennaro, I., et al.: Big size highly customised product manufacturing systems: a literature
review and future research agenda. Int. J. Prod. Res. 57, 1–24 (2019)
12. Iakymenko, N., et al.: Managing engineering changes in the engineer-to-order environment:
challenges and research needs. IFAC-PapersOnLine 51(11), 144–151 (2018)
13. Eckert, C., Clarkson, P., Zanker, W.: Change and customisation in complex engineering
domains. J. Res. Eng. Des. 15(1), 1–21 (2004)
14. Reidelbach, M.A.: Engineering change management for long-lead-time production. Prod.
Inventory Manage. J. 32(2), 84 (1991)
15. Tavčar, J., Duhovnik, J.: Engineering change management in individual and mass
production. J. Robot. Comput. Integr. Manuf. 21(3), 205–215 (2005)
16. Jarratt, T., et al.: Engineering change: an overview and perspective on the literature. J. Res.
Eng. Des. 22(2), 103–124 (2011)
17. Hicks, C., McGovern, T., Earl, C.F.: Supply chain management: a strategic issue in engineer
to order manufacturing. Int. J. Logistics 65(2), 179–190 (2000)
18. Fricke, E., et al.: Coping with changes: causes, findings, and strategies. J. Syst. Eng. 3(4),
169–179 (2000)
Impact of Carbon Price on Renewable Energy
Using Power Market System

Xiangping Hu1(&), Xiaomei Cheng2(&), and Xinlu Qiu3

1 Industrial Ecology Programme, Department of Energy and Process Engineering, Norwegian University of Science and Technology, Trondheim, Norway
[email protected]
2 Department of Electric Power Engineering, Norwegian University of Science and Technology, Trondheim, Norway
[email protected]
3 Department of Industrial Economics and Technology, Norwegian University of Science and Technology, Trondheim, Norway
[email protected]

Abstract. Reducing anthropogenic greenhouse gas emissions is a critical element in keeping global warming below 2 °C. According to the IEA report, the largest source of emissions in 2016, approaching 42% of the global total, was electricity and heat generation. This indicates that reducing emissions in the power sector can play a crucial role in limiting global warming. Large shares of low-carbon generators, such as renewables and power plants with carbon capture and storage, and the implementation of a sustainable environmental tax or carbon price are possible approaches to reduce emissions from the power sector. This paper investigates how carbon prices affect the Northern European power system. The power system model is a net transfer capacity-based model that minimizes economic costs, such as operational and environmental costs, subject to the common power system constraints and large expansions of sustainable energy, i.e., solar and wind energy. The carbon prices are based on scenarios of the Shared Socioeconomic Pathways (SSPs) that aim to limit global warming to below 2 °C with a probability greater than 66%. Four scenarios are constructed based on SSP carbon prices. Results show that the carbon prices have a great impact on the economic performance of the power system: the higher the carbon price, the higher the power prices. Increasing carbon prices decrease coal production, including hard coal and lignite, but increase gas production, owing to the fuels' different carbon costs. Furthermore, renewable energy production, such as wind, continues to increase. This implies a positive relationship between renewable energy and carbon prices: the higher the carbon prices, the higher the renewable energy production.

Keywords: Carbon price · Sustainable energy development · Power market model

© Springer Nature Singapore Pte Ltd. 2020


Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 671–677, 2020.
https://doi.org/10.1007/978-981-15-2341-0_84

1 Introduction

The ambition of the Paris Agreement on climate change is to keep the global temperature rise this century below 2 °C compared to pre-industrial levels, which changes the path of energy sector development. In 2016, 42% of global CO2 emissions came from electricity and heat generation [1]. This implies that the power sector has great potential for limiting global warming to no more than 2 °C. Many approaches, such as increasing the share of renewable energy and introducing carbon prices in the power sector, have been developed to reduce emissions from power sectors. Introducing a carbon price into the energy sector is one of the important approaches to alleviate carbon emissions and hence achieve the decarbonization goal [2, 3]. To achieve low-emission scenarios, an implicit carbon shadow price is usually assumed, and this price can also be used as a policy instrument [3]. The carbon prices vary with different scenarios for climate change mitigation and adaptation, especially in the long run [3–6].
The Shared Socioeconomic Pathways (SSPs) are established by the scientific community and are part of a new scenario framework [7]. These pathways describe five different future development trends by considering different scenarios for climate change projections, challenges for mitigation and adaptation to climate change, socioeconomic conditions and policies [7–10]. They provide a harmonized framework for integrated, interdisciplinary analysis of climate impact, and their aim is to investigate future changes in different sectors or countries [7, 11].
In this paper, we investigate the impact of carbon prices on the environment and sustainable energy development based on the Northern European power system. The power system model used in this paper is the net transfer capacity-based (NTC) model. The carbon prices are based on different pathways in the SSPs framework. Only scenarios with the possibility of achieving the 2 °C target are considered; therefore, four scenarios, i.e., SSP1, SSP2, SSP4, and SSP5, with different carbon prices are used. The rest of the paper is organized as follows. Section 2 introduces the data and methodology, followed by results and discussions in Sect. 3. Section 4 concludes the paper.

2 Data and Methodology


2.1 Power System Modeling
The power system model is used to simulate the Northern European power grid for six countries: Norway, Sweden, Finland, Denmark, Germany, and the Netherlands. The objective of the model is to minimize the operating cost and environmental cost while satisfying the hourly energy balance, transfer capacity limits and operational security standards [12]. Fundamental input parameters for power plants, i.e., fixed cost, marginal cost, start-up cost, transmission capacity limitations and so on, are obtained from [13].
The production capacity, demand and transmission constraints in the state of 2010 are used as the initial conditions for the simulation in our model. The last point of the
previous year’s hydro reservoir level is used as the initial condition for the next year’s
hydro reservoir level. Environmental costs for thermal units are equal to the environ-
mental tax multiplied by the total amount of emissions within the planning period.
The environmental variables, i.e., the emission factors, energy efficiencies, and energy
conversion factors, originate from the International Energy Agency (IEA) [14]. The
main outputs of the model are the spot power prices, the production mix, and the amount
of carbon emissions. A detailed description of the numerical model can be found in
[2], and the optimization is conducted using GAMS [15].
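The dispatch logic behind such a model can be sketched as a one-hour linear program: minimize operating plus environmental cost subject to an energy balance and per-plant capacity limits. The plant data, carbon price, and demand below are invented for illustration and are not parameters of the actual NTC model; a minimal sketch using scipy could look like:

```python
# One-hour economic dispatch sketch: minimize operating + environmental cost
# subject to an energy balance and per-plant capacity limits.
# All numbers are illustrative assumptions, not data from the NTC model.
from scipy.optimize import linprog

plants = {  # name: (marginal cost Euro/MWh, emission factor tCO2/MWh, capacity MW)
    "gas": (45.0, 0.35, 800.0),
    "hardcoal": (30.0, 0.80, 600.0),
    "wind": (0.0, 0.00, 400.0),
}
carbon_price = 50.0  # Euro/tCO2, standing in for an SSP carbon price
demand = 1500.0      # MW to be balanced in this hour

# Effective cost per MWh = fuel cost + carbon price * emission factor.
cost = [mc + carbon_price * ef for mc, ef, _ in plants.values()]
bounds = [(0.0, cap) for _, _, cap in plants.values()]

# Energy balance: total generation must equal demand.
res = linprog(c=cost, A_eq=[[1.0] * len(plants)], b_eq=[demand], bounds=bounds)
dispatch = dict(zip(plants, res.x))
print(dispatch)  # wind and gas run at full capacity; coal covers the rest
```

With the carbon price at 50 Euro/tCO2 the effective cost of coal (30 + 50·0.8 = 70) exceeds that of gas (45 + 50·0.35 = 62.5), so the dispatch shifts from coal to gas, mirroring the substitution discussed in Sect. 3.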

2.2 Carbon Price and Scenarios Design


The environmental tax mentioned in the previous section is the amount that must be
paid for the right to emit one ton of carbon dioxide into the atmosphere. The main
target of these carbon prices is to reduce the amount of carbon emissions and, further,
to reflect the carbon price’s influence on the energy system. The carbon prices used in
this work are extracted from the Shared Socioeconomic Pathways (SSPs) framework
[3, 7, 16]. Beyond reducing emissions in the energy system, the carbon prices under
the SSPs framework also capture the broader socioeconomic impact of carbon pricing,
not only its impact on the energy system. There are five different carbon price
trajectories within the SSPs framework [7, 16–21]. However, only four of them are
examined, i.e., SSP1, SSP2, SSP4, and SSP5, since it is not possible to achieve the
2 °C target under SSP3 [3]. The carbon prices of the four scenarios until the year 2050
are shown in Fig. 1.

Fig. 1. Carbon prices under SSP1, SSP2, SSP4, and SSP5 for achieving the 2 °C target (US$2005/t CO2, 2015–2055)
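Concretely, the environmental cost of a thermal unit scales linearly with both its emissions and the carbon price; a minimal sketch of this relationship, with invented numbers, could be:

```python
# Environmental cost = carbon price * emissions, where emissions are
# production times the emission factor. All numbers are illustrative.
def environmental_cost(production_mwh, emission_factor, carbon_price):
    """Carbon cost (US$2005) of one unit for one planning period."""
    emissions_t = production_mwh * emission_factor  # tCO2 emitted
    return emissions_t * carbon_price

# 100 GWh from a plant emitting 1.0 tCO2/MWh at 100 US$/tCO2:
print(environmental_cost(100_000, 1.0, 100.0))  # 10000000.0 US$
```

Because the cost is linear in the carbon price, the SSP5 trajectory in Fig. 1, whose price runs several times the SSP1 price by mid-century, penalizes hard coal and lignite correspondingly harder.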
674 X. Hu et al.

3 Results and Discussions

In this section, we investigate the four scenarios and analyze the results from
economic and environmental perspectives. Figure 2 shows the annual power prices,
obtained by averaging all countries’ prices over each year, together with the carbon
prices for each scenario. It can be observed that power prices increase with rising
carbon prices. This reflects that one key role of carbon prices in the economic per-
formance of the power system is to regulate the power price.

Fig. 2. Annual power prices and carbon prices for each scenario: the annual power prices
(Euro/MWh) are given as solid lines with values on the left y-axis, and the carbon prices
(US$2005/t CO2) are shown as dashed lines with values on the right y-axis.
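The annual power price plotted here is a plain average of the simulated country prices within each year; a minimal sketch of that aggregation, with invented prices (using pandas, which is our choice here and not something the paper specifies), might be:

```python
# Average the per-country prices within each year to one annual power price.
# The prices below are invented for illustration.
import pandas as pd

prices = pd.DataFrame({
    "year":    [2020, 2020, 2020, 2020, 2025, 2025, 2025, 2025],
    "country": ["NO", "SE", "DE", "NL", "NO", "SE", "DE", "NL"],
    "price":   [35.0, 36.0, 42.0, 41.0, 44.0, 45.0, 55.0, 52.0],  # Euro/MWh
})

# Annual power price = mean over all countries within the year.
annual = prices.groupby("year")["price"].mean()
print(annual.to_dict())  # {2020: 38.5, 2025: 49.0}
```

In the full model the same reduction would run over hourly spot prices for all six countries rather than one value per country.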

Figure 3 illustrates the energy mix for each scenario. We can observe that the
variations occur primarily in gas and coal. Gas production increases with rising carbon
prices, while coal production, including hard coal and lignite, decreases. The reason is
that, as carbon prices increase over time, gas power, with its lower carbon cost,
replaces production with higher carbon costs, such as hard coal and lignite. Wind
power continues to increase due to its cheaper prices. Hydropower production is stable
because the reservoir level is assumed to be the same each year; the reservoir level
assumptions for future years could be an interesting topic for future study.

Fig. 3. The energy mix (TWh) for each SSP scenario in 2020–2050; sources: gas, oil, oil-gas, hard coal, lignite, hydro, wind, solar, nuclear, and biofuel

The environmental impact, i.e., the total amount of carbon emissions, is shown in
Fig. 4. From this figure, we can see that before the year 2030, the highest carbon price
scenario, i.e., SSP5, has the lowest total carbon emissions, in contrast to the lowest
carbon price scenario, i.e., SSP1. This implies that carbon prices before 2030 have a
positive impact on reducing the total amount of carbon emissions. However, the total
emissions in all scenarios later converge to roughly the same amount, around
220 Mton. This indicates that carbon prices have limited impacts on the total amount
of carbon emissions in the long term, and that further increases in carbon prices might
have no impact, or only a small one, on carbon emissions from a long-term perspective.

Fig. 4. Total amount of carbon emissions (Mton) for each SSP scenario

4 Conclusions and Discussion

In this paper, the environmental impact of carbon prices based on SSP scenarios is
investigated for the Northern European power system. Four scenarios are examined.
Carbon prices are shown to play an important role in power prices: the higher the
carbon price, the higher the power price. As carbon prices increase, coal production,
including hard coal and lignite, decreases, while gas production increases. This can be
explained by the lower carbon cost of gas compared with coal. In addition, renewable
production, for instance, wind power, continues to increase as carbon prices rise.
Furthermore, our simulation results also illustrate that further increasing carbon prices
might have little influence on carbon emissions in the long term. A potential limitation
of our simulation is that the reservoir level is assumed to be the same each year, which
leads to stable hydropower production. This could be an interesting topic for further
study.

References
1. IEA, Birol, F. (ed.): CO2 Emissions from Fuel Combustion Highlights. International Energy
Agency, France (2016)
2. Cheng, X., Korpås, M., Farahmand, H.: The impact of electrification on power system in
Northern Europe. In: 2017 14th International Conference on the European Energy Market
(EEM). IEEE (2017)
3. Guivarch, C., Rogelj, J.: Carbon price variations in 2 °C scenarios explored (2017)
4. Creti, A., Jouvet, P.-A., Mignon, V.: Carbon price drivers: Phase I versus Phase II
equilibrium? Energy Econ. 34(1), 327–334 (2012)
5. Feng, Z.-H., Zou, L.-L., Wei, Y.-M.: Carbon price volatility: evidence from EU ETS. Appl.
Energy 88(3), 590–598 (2011)
6. Chevallier, J.: A model of carbon price interactions with macroeconomic and energy
dynamics. Energy Econ. 33(6), 1295–1312 (2011)
7. Riahi, K., et al.: The Shared Socioeconomic Pathways and their energy, land use, and
greenhouse gas emissions implications: an overview. Glob. Environ. Change Hum. Policy
Dimensions 42, 153–168 (2017)
8. O’Neill, B.C., et al.: A new scenario framework for climate change research: the concept of
shared socioeconomic pathways. Clim. Change 122(3), 387–400 (2014)
9. Ebi, K.L., et al.: A new scenario framework for climate change research: background,
process, and future directions. Clim. Change 122(3), 363–372 (2014)
10. Van Vuuren, D.P., et al.: A new scenario framework for climate change research: scenario
matrix architecture. Clim. Change 122(3), 373–386 (2014)
11. Hu, X.P., Iordan, C.M., Cherubini, F.: Estimating future wood outtakes in the Norwegian
forestry sector under the shared socioeconomic pathways. Glob. Environ. Change Hum.
Policy Dimensions 50, 15–24 (2018)

12. Farahmand, H.: Integrated power system balancing in Northern Europe-models and case
studies, p. 150 (2012)
13. Farahmand, H., et al.: Possibilities of Nordic hydro power generation flexibility and
transmission capacity expansion to support the integration of Northern European wind power
production: 2020 and 2030 case studies. SINTEF Energy Research (2013)
14. IEA: World Energy Outlook 2011 (2011). www.worldenergyoutlook.org/weo2011. Accessed
12 Feb 2019
15. GAMS: General Algebraic Modeling System (GAMS), Washington, DC, USA (2017)
16. Bauer, N., et al.: Shared socio-economic pathways of the energy sector - quantifying the
narratives. Glob. Environ. Change Hum. Policy Dimensions 42, 316–330 (2017)
17. O’Neill, B.C., et al.: The roads ahead: narratives for shared socioeconomic pathways
describing world futures in the 21st century. Glob. Environ. Change Hum. Policy
Dimensions 42, 169–180 (2017)
18. Kriegler, E., et al.: Fossil-fueled development (SSP5): an energy and resource intensive
scenario for the 21st century. Glob. Environ. Change Hum. Policy Dimensions 42, 297–315
(2017)
19. Fujimori, S., et al.: SSP3: AIM implementation of Shared Socioeconomic Pathways. Glob.
Environ. Change Hum. Policy Dimensions 42, 268–283 (2017)
20. Fricko, O., et al.: The marker quantification of the Shared Socioeconomic Pathway 2: a
middle-of-the-road scenario for the 21st century. Glob. Environ. Change Hum. Policy
Dimensions 42, 251–267 (2017)
21. Calvin, K., et al.: The SSP4: a world of deepening inequality. Glob. Environ. Change Hum.
Policy Dimensions 42, 284–296 (2017)
Author Index

A
Aleksandrova, Olga, 600
Alfnes, Erlend, 654, 662
Alonso-Ramos, Victor, 402
Antequera-Garcia, Gema, 402
Aschehoug, Silje, 358
Aukrust, Trond, 98
Azarian, Mohammad, 258

B
Ban, Shuhao, 517
Barnard, Taylor, 396
Batu, Temesgen, 106
Berg, Olav Åsebø, 98
Bernhardsen, Thor Inge, 317

C
Cao, Jiejie, 126
Chelishchev, Petr, 434
Chen, Bo, 283, 379, 410
Chen, Shifeng, 283, 379
Cheng, Bin, 480
Cheng, Xiaomei, 671
Chirici, Lapo, 639

D
Dacal-Nieto, Angel, 402
Deng, Jiaming, 517
Deng, Xuechao, 234
Dong, Jinzhong, 151
Dong, Mengyao, 267
Dou, Yan, 44, 309, 523
Du, Zhipeng, 219

E
Eleftheriadis, Ragnhild J., 373, 608

F
Feng, Bowen, 342
Feng, Xiangcai, 480
Feng, Xiong, 451
Fernandez-Gonzalez, Carmen, 402
Fordal, Jon Martin, 317

G
Gamme, Inger, 358
Gao, Lingyan, 242, 275
Gao, Xiue, 283, 379
Gao, Zenggui, 267
Ge, Yang, 37, 59, 176, 309
Ghosh, Tamal, 283, 379
Guan, Xin, 292
Gui-qin, Li, 203
Guo, Lanzhong, 44, 52, 142, 169, 176, 309

H
Hinojos A., Luis F., 662
Hong, Zhenyu, 185
Hovig, Even Wilberg, 98, 466
Hu, Chaobin, 151, 160
Hu, Xiangping, 3, 11, 134, 418, 427, 671
Huang, Junjie, 67
Huang, Qi, 242, 342, 457

I
Iakymenko, Natalia, 662

© Springer Nature Singapore Pte Ltd. 2020
Y. Wang et al. (Eds.): IWAMA 2019, LNEE 634, pp. 679–681, 2020.
https://doi.org/10.1007/978-981-15-2341-0

J
Jian, Wu, 52
Jiang, Wei, 625
Jiang, Xiaomei, 59, 142, 151, 160
Jiao, Peijun, 52, 169
Jie, Cao, 52
Jin, Yujie, 211
Johannessen, Espen, 250

K
Kang, Yuchi, 134
Karlsen, Øyvind, 116
Kotlyarova, Lina, 600

L
Lan, Jian, 219, 234
Lemu, Hirpa G., 106, 116
Leng, Xuemei, 480
Li, Guiqin, 20, 29, 211, 219, 227, 234, 442, 451, 552, 560
Li, Jinguang, 495
Li, Jingyue, 195, 608
Li, Ming, 625
Li, Wenmeng, 44, 309
Li, Xiaolong, 242
Li, Xinyong, 37, 52, 169
Li, Yang, 20, 593
Li, Zhe, 195, 608
Li, Zhengqian, 451
Li, Zhiqiang, 366
Liao, Taohong, 3, 427
Lin, Shengyi, 227
Liu, Changzheng, 388
Liu, Fuyu, 418
Liu, Gong, 72
Liu, Hongbin, 511
Liu, Hongmei, 517
Liu, Junjun, 176
Liu, Lilan, 242, 267, 275, 333, 342, 457, 473, 585
Liu, Lin, 517
Liu, Meihong, 134, 427
Liu, Shouzheng, 473
Liu, Xiaoyu, 366, 511
Liu, Xinghua, 89
Liu, Xuedong, 517
Liu, Xuemei, 81, 89, 349, 495, 593
Liu, Yao, 625
Liu, Yunfei, 488
Li-xin, Lu, 203
Lodgaard, Eirin, 358
Lu, Jianfeng, 169
Lu, Lixin, 29, 211, 442, 552, 560
Lu, Yang, 219
Luo, Chloe, 633
Luo, Yinhua, 480
Lv, Leibing, 560
Lv, Yana, 410

M
Ma, Haishu, 504
Ma, Haoyu, 81
Ma, Hongliang, 388
Ma, Jiaxin, 44, 176, 523
Ma, Junwei, 535
Ma, Zhanrong, 176
Ma, Zongzheng, 504
Martinsen, Kristian, 283, 379
Miao, Qiang, 585
Mitrouchev, Peter, 20, 29, 203, 211, 219, 227, 234, 442, 552, 560
Myklebust, Odd, 373

N
Namokel, Michael, 151, 160
Nie, Jianjun, 504
Nilsen, Aleksander Wermers, 654
Niu, Shuguang, 67
Niu, Ziru, 72
Noureddine, Rami, 250

P
Pan, Chengsheng, 410
Pedersen, Tom I., 299

Q
Qiu, Xinlu, 671

R
Ren, Xiaolei, 185
Ríos, Cristian, 402
Rødseth, Harald, 317, 608

S
Sang, Haitao, 616
Schjølberg, Per, 299, 317
Selin, Ivan, 600
Shi, Jinfeng, 20
Shu, Beibei, 258
Solhaug, Harald, 98
Solvang, Wei Deng, 250, 258, 567, 577
Song, Laiqi, 89
Song, Pengyun, 11
Song, Xiaolei, 134
Sørby, Knut, 434, 466
Sun, Junfeng, 3, 427
Sun, Xu, 567, 577

Sun, Xuejian, 11
Sziebig, Gabor, 545

T
Tan, Chao, 480
Tang, Hehui, 442
Tang, Xiuying, 480
Tian, Ran, 151, 160
Tian, Yonglin, 410

V
Voinov, Nikita, 600

W
Wan, Xiang, 242, 275, 333, 342, 457, 473, 585
Wang, Chao, 37
Wang, Chen, 646
Wang, Hanlin, 227
Wang, Jianhua, 535
Wang, Kesheng, 283, 379, 457, 473, 639, 646
Wang, Sen, 333, 511
Wang, Weicong, 29
Wang, Yi, 283, 333, 379, 396, 633, 639, 646
Wu, Fang, 275, 473
Wu, Jian, 59, 126, 169, 176
Wu, Pengfei, 585

X
Xi, Zhang, 325
Xia, Hong, 234
Xie, Wenxue, 379
Xin, Wang, 325
Xin, Zhenbo, 72
Xing, Zhiwei, 185
Xu, Jingjing, 488
Xu, Weigang, 517
Xu, Yuanzhi, 325
Xu, Zhen, 3, 427

Y
Yan, Xiupeng, 504
Yang, Ge, 52
Ye, Gu, 203
Ye, Zhiwen, 67
Yin, Ranguang, 349
Yu, Hao, 250, 258, 567, 577
Yu, Haoshui, 418
Yu, Tao, 227
Yuan, Jin, 72, 81, 349, 379, 495, 593

Z
Zhang, Baodi, 134
Zhang, Baosheng, 418
Zhang, Guichang, 185
Zhang, Guowei, 366
Zhang, Haishu, 89
Zhang, Haohan, 366, 511
Zhang, Li, 511
Zhang, Ping, 593
Zhang, Ronghua, 388
Zhang, Shijin, 20
Zhang, Tianshu, 283
Zhang, Xiangyu, 457
Zhang, Xuedong, 3
Zhao, Weixing, 552
Zhao, Zhiping, 37
Zhou, Faqi, 517
Zhou, Maoheng, 219, 234
Zhu, Qiuyu, 219, 234
Zou, Liangliang, 81, 495
Zou, Wei, 585
